On Wed, Feb 03, 2021 at 01:22:18PM +0000, Joao Martins wrote:
> With this, longterm gup will 'regress' for hugetlbfs e.g. from ~6k -> 32k usecs when pinning a 16G hugetlb file.
Yes, but correctness demands it.
The solution is to track these pages as we discover them, so we know when a PMD/PUD points at them and can directly skip the duplicated work.
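A minimal sketch of that idea, walking the pages[] array returned by GUP (the function name and the per-head hook are placeholders, not actual mm/gup.c code):

#include <linux/mm.h>

/*
 * Sketch only: remember the last compound head seen while walking the
 * GUP result array, so the per-head work (e.g. the CMA/migration check)
 * runs once per PMD/PUD-sized page instead of once per subpage.
 */
static void scan_longterm_pages(struct page **pages, unsigned long nr_pages)
{
	struct page *prev_head = NULL;
	unsigned long i;

	for (i = 0; i < nr_pages; i++) {
		struct page *head = compound_head(pages[i]);

		if (head == prev_head)
			continue;	/* same compound page, skip the duplicated work */
		prev_head = head;

		/* per-head work goes here, e.g. is_migrate_cma_page(head) */
	}
}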
> Splitting can only occur on THP, right? If so, perhaps we could retain the @step increment for compound pages, but only when !is_transparent_hugepage(head), or just PageHuge(head), like:
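(Roughly, that kind of check might look like the sketch below; the loop shape and the walk_pinned_pages() name are assumed for illustration, not an actual diff:)

#include <linux/mm.h>

/*
 * Sketch only: keep the @step skip for hugetlb heads, which cannot be
 * split under us, and walk THP/order-0 pages one entry at a time.
 */
static void walk_pinned_pages(struct page **pages, unsigned long nr_pages)
{
	unsigned long i, step;

	for (i = 0; i < nr_pages; i += step) {
		struct page *head = compound_head(pages[i]);

		if (PageHuge(head))
			/* skip the remaining subpages of this hugetlb page */
			step = compound_nr(head) - (pages[i] - head);
		else
			step = 1;

		/* per-page / per-head checks would go here */
	}
}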
Honestly, I'd rather see it fixed properly, which will give even bigger performance gains - avoiding the entire rescan of the page list will be a win.
Jason