On Wed, Jul 01, 2020 at 03:53:47PM -0700, Ralph Campbell wrote:
The goal for this series is to introduce the hmm_pfn_to_map_order() function. This allows a device driver to know that a given 4K PFN is actually mapped by the CPU using a larger CPU page table entry, and therefore the device driver can safely map system memory using larger device MMU PTEs.

The series is based on 5.8.0-rc3 and is intended for Jason Gunthorpe's hmm tree. These were originally part of a larger series:
https://lore.kernel.org/linux-mm/20200619215649.32297-1-rcampbell@nvidia.com...
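To make the idea concrete, here is a minimal sketch (not part of the series itself) of how a driver might consume the map order after hmm_range_fault(); struct my_device and my_map_device_pte() are hypothetical placeholder names, and the alignment handling is only illustrative:

#include <linux/hmm.h>
#include <linux/mm.h>

struct my_device;

/* Hypothetical driver hook: programs one device PTE covering PAGE_SIZE << order. */
void my_map_device_pte(struct my_device *mydev, unsigned long addr,
		       struct page *page, unsigned int order, bool writable);

static int my_map_range(struct my_device *mydev, struct hmm_range *range)
{
	unsigned long addr;

	for (addr = range->start; addr < range->end;) {
		unsigned long entry =
			range->hmm_pfns[(addr - range->start) >> PAGE_SHIFT];
		unsigned int order;

		if (!(entry & HMM_PFN_VALID))
			return -EFAULT;

		/*
		 * hmm_pfn_to_map_order() reports the size of the CPU page
		 * table entry backing this PFN (0 = 4K PTE, 9 = 2MB PMD on
		 * x86_64). Only use a larger device PTE when the virtual
		 * address is aligned to that size and the large mapping
		 * stays within the faulted range.
		 */
		order = hmm_pfn_to_map_order(entry);
		if (!IS_ALIGNED(addr, PAGE_SIZE << order) ||
		    addr + (PAGE_SIZE << order) > range->end)
			order = 0;

		my_map_device_pte(mydev, addr, hmm_pfn_to_page(entry),
				  order, entry & HMM_PFN_WRITE);

		addr += PAGE_SIZE << order;
	}
	return 0;
}

The point is the last two lines: one call covers 1 << order entries of the PFN array, so a 2MB THP ends up as a single large device PTE instead of 512 small ones.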
Changes in v3:
Replaced the HMM_PFN_P[MU]D flags with hmm_pfn_to_map_order() to indicate the size of the CPU mapping.
Changes in v2:
Make the hmm_range_fault() API changes into a separate series and add two output flags for PMD/PUD instead of a single compound page flag, as suggested by Jason Gunthorpe.
Make the nouveau page table changes a separate patch, as suggested by Ben Skeggs.
Only add support for 2MB nouveau mappings initially, since changing the 1:1 CPU/GPU page table size assumptions requires a bigger set of changes.
Rebase to 5.8.0-rc3.
Ralph Campbell (5):
  nouveau/hmm: fault one page at a time
  mm/hmm: add hmm_mapping order
  nouveau: fix mapping 2MB sysmem pages
  nouveau/hmm: support mapping large sysmem pages
  hmm: add tests for HMM_PFN_PMD flag
Applied to hmm.git.
I edited the comment for hmm_pfn_to_map_order() and added a function to compute the map-order field.
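For readers following along without the tree handy, the field in question is the CPU map order packed into the high bits of each hmm_pfn output value. A rough sketch of that packing; the shift value and names here are my own assumptions, not quotes from the applied patch:

#include <linux/bits.h>

/* Assumed layout: a few high bits of each hmm_pfn hold the CPU map order. */
#define MY_HMM_PFN_ORDER_SHIFT	(BITS_PER_LONG - 8)	/* hypothetical value */

/* Pack the order when filling the output array. */
static inline unsigned long my_hmm_pfn_order_field(unsigned int order)
{
	return (unsigned long)order << MY_HMM_PFN_ORDER_SHIFT;
}

/* Recover the order on the driver side; mirrors hmm_pfn_to_map_order(). */
static inline unsigned int my_hmm_pfn_to_map_order(unsigned long hmm_pfn)
{
	return (hmm_pfn >> MY_HMM_PFN_ORDER_SHIFT) & 0x1F;
}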
Thanks, Jason