On Wed, Feb 3, 2021 at 8:23 AM Joao Martins <joao.m.martins@oracle.com> wrote:
> On 1/25/21 7:47 PM, Pavel Tatashin wrote:
> > When pages are isolated in check_and_migrate_movable_pages() we skip
> > a compound number of pages at a time. However, as Jason noted, it is
> > not necessarily correct that pages[i] corresponds to the pages that we
> > skipped. This is because it is possible that the addresses in this
> > range had split_huge_pmd()/split_huge_pud() called on them, and these
> > functions do not update the compound page metadata.
> >
> > The problem can be reproduced if something like this occurs:
> >
> > - User faulted huge pages.
> > - split_huge_pmd() was called for some reason.
> > - User has unmapped some sub-pages in the range.
> > - User tries to longterm pin the addresses.
> >
> > The resulting pages[i] might end up containing pages which are not
> > aligned on compound page boundaries.
> >
> > Fixes: aa712399c1e8 ("mm/gup: speed up check_and_migrate_cma_pages() on huge page")
> > Reported-by: Jason Gunthorpe <jgg@nvidia.com>
> > Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
> > Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
[...]
> >  		/*
> >  		 * If we get a page from the CMA zone, since we are going to
> >  		 * be pinning these entries, we might as well move them out
> > @@ -1599,8 +1596,6 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
> >  				}
> >  			}
> >  		}
> > -
> > -		i += step;
> >  	}
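
To make the failure mode described in the quoted commit message concrete,
here is a small self-contained userspace model (only an illustrative
sketch: struct page, compound_head() and compound_nr() below are
simplified stand-ins for the kernel definitions, and the pinned range is
made up):

#include <stdio.h>

#define THP_NR 512			/* 4K subpages in a 2M THP */

struct page {
	struct page *head;		/* compound head of this page  */
	long nr;			/* subpages, valid on the head */
};

static struct page *compound_head(struct page *p) { return p->head; }
static long compound_nr(struct page *head)        { return head->nr; }

int main(void)
{
	static struct page thp[THP_NR];	/* a THP that has been pmd-split */
	static struct page other;	/* some unrelated single page    */
	long i, nr_pages, checked = 0;

	/* split_huge_pmd() does not rewrite the compound metadata ... */
	for (i = 0; i < THP_NR; i++)
		thp[i].head = &thp[0];
	thp[0].nr = THP_NR;
	other.head = &other;
	other.nr = 1;

	/*
	 * ... so after the user unmaps most sub-pages and then longterm
	 * pins a small range, pages[] can mix a few leftover sub-pages
	 * with entirely different pages:
	 */
	struct page *pages[] = { &thp[0], &thp[1], &thp[2], &thp[3], &other };
	nr_pages = 5;

	/* loop modeled on the old skip-by-step logic */
	for (i = 0; i < nr_pages;) {
		struct page *head = compound_head(pages[i]);
		long step = compound_nr(head) - (pages[i] - head);

		checked++;
		printf("checked pages[%ld], stepping by %ld\n", i, step);
		i += step;	/* steps by 512: &other is never checked */
	}
	printf("checked %ld of %ld entries\n", checked, nr_pages);
	return 0;
}

With the step removed, as in the patch, the loop visits all five entries
and the unrelated page is inspected as well.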
Hi Joao,
> With this, longterm gup will 'regress' for hugetlbfs, e.g. from ~6k -> 32k
> usecs when pinning a 16G hugetlb file.
Estimate or you actually measured?
> Splitting can only occur on THP, right? If so, perhaps we could retain
> the @step increment for compound pages but when
> !is_transparent_hugepage(head) or just PageHuge(head), like:

Yes, I do not think we can split hugetlb pages, only THP.

>	if (!is_transparent_hugepage(head) && PageCompound(page))
>		i += (compound_nr(head) - (pages[i] - head));
>
> Or, making it specific to hugetlbfs:
>
>	if (PageHuge(head))
>		i += (compound_nr(head) - (pages[i] - head));
Yes, this is a reasonable optimization. I will submit a follow-up patch
against linux-next.
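
For what it's worth, a rough kernel-style sketch of how such a
hugetlb-only skip might slot back into the per-page loop; this is only an
illustration built on the quoted suggestion and on the assumption that
hugetlb pages never go through split_huge_pmd(), not the actual follow-up
patch:

	unsigned long i, step;

	for (i = 0; i < nr_pages; i += step) {
		struct page *head = compound_head(pages[i]);

		/*
		 * hugetlb pages cannot be pmd/pud-split, so their compound
		 * metadata can be trusted and the rest of the compound page
		 * may still be skipped; THP may have been split, so it is
		 * checked one subpage at a time.
		 */
		if (PageHuge(head))
			step = compound_nr(head) - (pages[i] - head);
		else
			step = 1;

		/* ... existing CMA / movable zone handling of @head ... */
	}

Keeping the skip behind PageHuge() preserves the per-hugepage behaviour
Joao measured, while split THP still gets the per-page check introduced by
this patch.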
Thank you,
Pasha