On the arm64 platform with the 4K base page config, SECTION_SIZE_BITS is set to 27, making one section 128M. The struct page area that vmemmap maps for one section is then 2M. Commit c1cc1552616d ("arm64: MMU initialisation") optimized vmemmap to populate at the PMD section level, which was suitable initially since the hotplug granule was always one section (128M). However, commit ba72b4c8cf60 ("mm/sparsemem: support sub-section hotplug") introduced a 2M (SUBSECTION_SIZE) hotplug granule, which invalidated the existing arm64 assumption.
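For reference, the arithmetic behind those numbers (a sketch only; it assumes the usual 64-byte struct page, and both values are config-dependent):

#include <stdio.h>

/* Sketch only: mirrors the 4K-page defconfig values quoted above. */
#define SECTION_SIZE_BITS	27
#define SECTION_SIZE		(1UL << SECTION_SIZE_BITS)	/* 128M */
#define PAGE_SIZE_4K		4096UL
#define STRUCT_PAGE_SIZE	64UL	/* assumed sizeof(struct page) */

int main(void)
{
	unsigned long pages_per_section = SECTION_SIZE / PAGE_SIZE_4K;	/* 32768 */
	unsigned long vmemmap_per_section = pages_per_section * STRUCT_PAGE_SIZE;

	printf("section: %luM, vmemmap per section: %luM\n",
	       SECTION_SIZE >> 20, vmemmap_per_section >> 20);	/* 128M, 2M */
	return 0;
}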
Consider the vmemmap_free -> unmap_hotplug_pmd_range path: when pmd_sect() is true, the entire PMD section is cleared, even if other subsections within it are still in use. For example, suppose page_struct_map1 and page_struct_map2 are part of a single PMD entry and are hot-added sequentially. When page_struct_map1 is then removed, vmemmap_free() clears the entire PMD entry, freeing the struct page map for the whole section, even though page_struct_map2 is still active. A similar problem exists in the linear mapping: with 16K base pages (PMD_SIZE = 32M) or 64K base pages (PMD_SIZE = 512M), block mappings exceed SUBSECTION_SIZE, so tearing down the entire PMD mapping likewise leaves other subsections unmapped.
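To make the overlap concrete, a small userspace sketch (illustrative constants, not kernel code; assumes 4K base pages and a 64-byte struct page):

#include <stdio.h>

#define SZ_2M			(2UL << 20)
#define SUBSECTION_SIZE		SZ_2M	/* hotplug granule since ba72b4c8cf60 */
#define PMD_SIZE_4K		SZ_2M	/* PMD block size with 4K pages */
#define STRUCT_PAGE_SIZE	64UL	/* assumed sizeof(struct page) */

int main(void)
{
	/* vmemmap bytes backing one hot-plugged subsection */
	unsigned long per_subsection = SUBSECTION_SIZE / 4096 * STRUCT_PAGE_SIZE;	/* 32K */

	/* subsections whose struct pages share one PMD vmemmap block */
	printf("%lu subsections per PMD block\n", PMD_SIZE_4K / per_subsection);	/* 64 */
	return 0;
}

Clearing that single PMD entry therefore discards the memmap of up to 64 subsections, not just the one being removed.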
To address the issue, prevent PMD/PUD/CONT mappings, for both the linear map and vmemmap, for non-boot sections whenever the corresponding mapping size for the given base page config exceeds 2M (SUBSECTION_SIZE). Only the 2M PMD block linear mapping is still permitted in the 4K page size config, since its PMD_SIZE matches SUBSECTION_SIZE.
Cc: stable@vger.kernel.org # v5.4+
Fixes: ba72b4c8cf60 ("mm/sparsemem: support sub-section hotplug")
Signed-off-by: Zhenhua Huang <quic_zhenhuah@quicinc.com>
---
 arch/arm64/mm/mmu.c | 43 +++++++++++++++++++++++++++++++++++++------
 1 file changed, 37 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index e2739b69e11b..5e0f514de870 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -42,9 +42,11 @@
 #include <asm/pgalloc.h>
 #include <asm/kfence.h>
 
-#define NO_BLOCK_MAPPINGS	BIT(0)
-#define NO_CONT_MAPPINGS	BIT(1)
-#define NO_EXEC_MAPPINGS	BIT(2)	/* assumes FEAT_HPDS is not used */
+#define NO_PMD_BLOCK_MAPPINGS	BIT(0)
+#define NO_PUD_BLOCK_MAPPINGS	BIT(1)	/* Hotplug case: do not want block mapping for PUD */
+#define NO_BLOCK_MAPPINGS (NO_PMD_BLOCK_MAPPINGS | NO_PUD_BLOCK_MAPPINGS)
+#define NO_CONT_MAPPINGS	BIT(2)
+#define NO_EXEC_MAPPINGS	BIT(3)	/* assumes FEAT_HPDS is not used */
 
 u64 kimage_voffset __ro_after_init;
 EXPORT_SYMBOL(kimage_voffset);
@@ -254,7 +256,7 @@ static void init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
 
 		/* try section mapping first */
 		if (((addr | next | phys) & ~PMD_MASK) == 0 &&
-		    (flags & NO_BLOCK_MAPPINGS) == 0) {
+		    (flags & NO_PMD_BLOCK_MAPPINGS) == 0) {
 			pmd_set_huge(pmdp, phys, prot);
 
 			/*
@@ -356,10 +358,11 @@ static void alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
 
 		/*
 		 * For 4K granule only, attempt to put down a 1GB block
+		 * Hotplug case: do not attempt 1GB block
 		 */
 		if (pud_sect_supported() &&
		   ((addr | next | phys) & ~PUD_MASK) == 0 &&
-		    (flags & NO_BLOCK_MAPPINGS) == 0) {
+		    (flags & NO_PUD_BLOCK_MAPPINGS) == 0) {
 			pud_set_huge(pudp, phys, prot);
 
 			/*
@@ -1175,9 +1178,21 @@ int __meminit vmemmap_check_pmd(pmd_t *pmdp, int node,
 int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 		struct vmem_altmap *altmap)
 {
+	unsigned long start_pfn;
+	struct mem_section *ms;
+
 	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
 
-	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
+	start_pfn = page_to_pfn((struct page *)start);
+	ms = __pfn_to_section(start_pfn);
+
+	/*
+	 * Hotplugged section does not support hugepages as
+	 * PMD_SIZE (hence PUD_SIZE) section mapping covers
+	 * struct page range that exceeds a SUBSECTION_SIZE
+	 * i.e 2MB - for all available base page sizes.
+	 */
+	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) || !early_section(ms))
 		return vmemmap_populate_basepages(start, end, node, altmap);
 	else
 		return vmemmap_populate_hugepages(start, end, node, altmap);
@@ -1339,9 +1354,25 @@ int arch_add_memory(int nid, u64 start, u64 size,
 		    struct mhp_params *params)
 {
 	int ret, flags = NO_EXEC_MAPPINGS;
+	unsigned long start_pfn = page_to_pfn((struct page *)start);
+	struct mem_section *ms = __pfn_to_section(start_pfn);
 
 	VM_BUG_ON(!mhp_range_allowed(start, size, true));
 
+	/* should not be invoked by early section */
+	WARN_ON(early_section(ms));
+
+	/*
+	 * 4K base page's PMD_SIZE matches SUBSECTION_SIZE i.e 2MB. Hence
+	 * PMD section mapping can be allowed, but only for 4K base pages.
+	 * Where as PMD_SIZE (hence PUD_SIZE) for other page sizes exceed
+	 * SUBSECTION_SIZE.
+	 */
+	if (IS_ENABLED(CONFIG_ARM64_4K_PAGES))
+		flags |= NO_PUD_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
+	else
+		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
+
 	if (can_set_direct_map())
 		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
On Tue, Jan 07, 2025 at 03:42:52PM +0800, Zhenhua Huang wrote:
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index e2739b69e11b..5e0f514de870 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -42,9 +42,11 @@
 #include <asm/pgalloc.h>
 #include <asm/kfence.h>
 
-#define NO_BLOCK_MAPPINGS	BIT(0)
-#define NO_CONT_MAPPINGS	BIT(1)
-#define NO_EXEC_MAPPINGS	BIT(2)	/* assumes FEAT_HPDS is not used */
+#define NO_PMD_BLOCK_MAPPINGS	BIT(0)
+#define NO_PUD_BLOCK_MAPPINGS	BIT(1)	/* Hotplug case: do not want block mapping for PUD */
+#define NO_BLOCK_MAPPINGS (NO_PMD_BLOCK_MAPPINGS | NO_PUD_BLOCK_MAPPINGS)
Nit: please use a tab instead of space before (NO_PMD_...)
+#define NO_CONT_MAPPINGS	BIT(2)
+#define NO_EXEC_MAPPINGS	BIT(3)	/* assumes FEAT_HPDS is not used */
 
 u64 kimage_voffset __ro_after_init;
 EXPORT_SYMBOL(kimage_voffset);
@@ -254,7 +256,7 @@ static void init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
 
 		/* try section mapping first */
 		if (((addr | next | phys) & ~PMD_MASK) == 0 &&
-		    (flags & NO_BLOCK_MAPPINGS) == 0) {
+		    (flags & NO_PMD_BLOCK_MAPPINGS) == 0) {
 			pmd_set_huge(pmdp, phys, prot);
 			/*
@@ -356,10 +358,11 @@ static void alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
 
 		/*
 		 * For 4K granule only, attempt to put down a 1GB block
+		 * Hotplug case: do not attempt 1GB block
 		 */
I don't think we need this comment added here. The hotplug case is a decision of the caller, so better to have the comment there.
 		if (pud_sect_supported() &&
		   ((addr | next | phys) & ~PUD_MASK) == 0 &&
-		    (flags & NO_BLOCK_MAPPINGS) == 0) {
+		    (flags & NO_PUD_BLOCK_MAPPINGS) == 0) {
 			pud_set_huge(pudp, phys, prot);
Nit: something wrong with the alignment here. I think the unmodified line after the 'if' one above was misaligned before your patch.
 			/*
@@ -1175,9 +1178,21 @@ int __meminit vmemmap_check_pmd(pmd_t *pmdp, int node,
 int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 		struct vmem_altmap *altmap)
 {
+	unsigned long start_pfn;
+	struct mem_section *ms;
+
 	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
 
-	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
+	start_pfn = page_to_pfn((struct page *)start);
+	ms = __pfn_to_section(start_pfn);
Hmm, it would have been better if the core code provided the start pfn as it does for vmemmap_populate_compound_pages(), but I'm fine with deducing it from 'start'.
+	/*
+	 * Hotplugged section does not support hugepages as
+	 * PMD_SIZE (hence PUD_SIZE) section mapping covers
+	 * struct page range that exceeds a SUBSECTION_SIZE
+	 * i.e 2MB - for all available base page sizes.
+	 */
+	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) || !early_section(ms))
 		return vmemmap_populate_basepages(start, end, node, altmap);
 	else
 		return vmemmap_populate_hugepages(start, end, node, altmap);
@@ -1339,9 +1354,25 @@ int arch_add_memory(int nid, u64 start, u64 size,
 		    struct mhp_params *params)
 {
 	int ret, flags = NO_EXEC_MAPPINGS;
+	unsigned long start_pfn = page_to_pfn((struct page *)start);
+	struct mem_section *ms = __pfn_to_section(start_pfn);
This looks wrong. 'start' here is a physical address, you want PFN_DOWN() instead.
VM_BUG_ON(!mhp_range_allowed(start, size, true));
+	/* should not be invoked by early section */
+	WARN_ON(early_section(ms));
+	/*
+	 * 4K base page's PMD_SIZE matches SUBSECTION_SIZE i.e 2MB. Hence
+	 * PMD section mapping can be allowed, but only for 4K base pages.
+	 * Where as PMD_SIZE (hence PUD_SIZE) for other page sizes exceed
+	 * SUBSECTION_SIZE.
+	 */
+	if (IS_ENABLED(CONFIG_ARM64_4K_PAGES))
+		flags |= NO_PUD_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
In theory we can allow contiguous PTE mappings but not PMD. You could probably do the same as a NO_BLOCK_MAPPINGS and split it into multiple components - NO_PTE_CONT_MAPPINGS and so on.
+	else
+		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
Similarly with 16K/64K pages we can allow contiguous PTEs as they all go up to 2MB blocks.
I think we should write the flags setup in a more readable way than trying to do mental maths on the possible combinations, something like:
	flags = NO_PUD_BLOCK_MAPPINGS | NO_PMD_CONT_MAPPINGS;
	if (SUBSECTION_SHIFT < PMD_SHIFT)
		flags |= NO_PMD_BLOCK_MAPPINGS;
	if (SUBSECTION_SHIFT < CONT_PTE_SHIFT)
		flags |= NO_PTE_CONT_MAPPINGS;
This way we don't care about the page size and should cover any changes to SUBSECTION_SHIFT making it smaller than 2MB.
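Plugging in the usual defconfig shift values (assumed here, worth double-checking), the suggested checks would evaluate as:

/*
 * SUBSECTION_SHIFT == 21 (2M):
 *
 *              PMD_SHIFT   CONT_PTE_SHIFT   PMD blocks   contiguous PTEs
 *   4K pages      21            16          allowed      allowed
 *   16K pages     25            21          denied       allowed (2M span)
 *   64K pages     29            21          denied       allowed (2M span)
 */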
Hi Catalin,
On 2025/1/8 3:22, Catalin Marinas wrote:
On Tue, Jan 07, 2025 at 03:42:52PM +0800, Zhenhua Huang wrote:
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index e2739b69e11b..5e0f514de870 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -42,9 +42,11 @@
 #include <asm/pgalloc.h>
 #include <asm/kfence.h>
 
-#define NO_BLOCK_MAPPINGS	BIT(0)
-#define NO_CONT_MAPPINGS	BIT(1)
-#define NO_EXEC_MAPPINGS	BIT(2)	/* assumes FEAT_HPDS is not used */
+#define NO_PMD_BLOCK_MAPPINGS	BIT(0)
+#define NO_PUD_BLOCK_MAPPINGS	BIT(1)	/* Hotplug case: do not want block mapping for PUD */
+#define NO_BLOCK_MAPPINGS (NO_PMD_BLOCK_MAPPINGS | NO_PUD_BLOCK_MAPPINGS)
Nit: please use a tab instead of space before (NO_PMD_...)
+#define NO_CONT_MAPPINGS	BIT(2)
+#define NO_EXEC_MAPPINGS	BIT(3)	/* assumes FEAT_HPDS is not used */
 
 u64 kimage_voffset __ro_after_init;
 EXPORT_SYMBOL(kimage_voffset);
@@ -254,7 +256,7 @@ static void init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
 
 		/* try section mapping first */
 		if (((addr | next | phys) & ~PMD_MASK) == 0 &&
-		    (flags & NO_BLOCK_MAPPINGS) == 0) {
+		    (flags & NO_PMD_BLOCK_MAPPINGS) == 0) {
 			pmd_set_huge(pmdp, phys, prot);
 			/*
@@ -356,10 +358,11 @@ static void alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
 
 		/*
 		 * For 4K granule only, attempt to put down a 1GB block
+		 * Hotplug case: do not attempt 1GB block
 		 */
I don't think we need this comment added here. The hotplug case is a decision of the caller, so better to have the comment there.
Yeah, will remove.
 		if (pud_sect_supported() &&
		   ((addr | next | phys) & ~PUD_MASK) == 0 &&
-		    (flags & NO_BLOCK_MAPPINGS) == 0) {
+		    (flags & NO_PUD_BLOCK_MAPPINGS) == 0) {
 			pud_set_huge(pudp, phys, prot);
Nit: something wrong with the alignment here. I think the unmodified line after the 'if' one above was misaligned before your patch.
Noted and will correct in next patch.
 			/*
@@ -1175,9 +1178,21 @@ int __meminit vmemmap_check_pmd(pmd_t *pmdp, int node,
 int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 		struct vmem_altmap *altmap)
 {
+	unsigned long start_pfn;
+	struct mem_section *ms;
+
 	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
 
-	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
+	start_pfn = page_to_pfn((struct page *)start);
+	ms = __pfn_to_section(start_pfn);
Hmm, it would have been better if the core code provided the start pfn as it does for vmemmap_populate_compound_pages(), but I'm fine with deducing it from 'start'.
I found another bug: even for an early section, SECTION_IS_EARLY is not yet set when vmemmap_populate() is called, so early_section() always returns false.
Since vmemmap_populate() occurs during section initialization, it may be hard to call it a bug... However, should we instead use SECTION_MARKED_PRESENT for the check? It tested well in my setup.
Hot plug flow:
1. section_activate -> vmemmap_populate
2. mark PRESENT
In contrast, the early flow:
1. memblocks_present -> mark PRESENT
2. __populate_section_memmap -> vmemmap_populate
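A minimal sketch of what that check could look like (hypothetical, not posted code; 'ms' derived from 'start' as in the patch, with present_section() standing in for the SECTION_MARKED_PRESENT test):

	/*
	 * The hotplug path reaches here before the section is marked
	 * present, so hotplug falls through to base pages.
	 */
	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) || !present_section(ms))
		return vmemmap_populate_basepages(start, end, node, altmap);
	else
		return vmemmap_populate_hugepages(start, end, node, altmap);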
+	/*
+	 * Hotplugged section does not support hugepages as
+	 * PMD_SIZE (hence PUD_SIZE) section mapping covers
+	 * struct page range that exceeds a SUBSECTION_SIZE
+	 * i.e 2MB - for all available base page sizes.
+	 */
+	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) || !early_section(ms))
 		return vmemmap_populate_basepages(start, end, node, altmap);
 	else
 		return vmemmap_populate_hugepages(start, end, node, altmap);
@@ -1339,9 +1354,25 @@ int arch_add_memory(int nid, u64 start, u64 size,
 		    struct mhp_params *params)
 {
 	int ret, flags = NO_EXEC_MAPPINGS;
+	unsigned long start_pfn = page_to_pfn((struct page *)start);
+	struct mem_section *ms = __pfn_to_section(start_pfn);
This looks wrong. 'start' here is a physical address, you want PFN_DOWN() instead.
Sorry, my mistake. Thanks for catching it.
VM_BUG_ON(!mhp_range_allowed(start, size, true));
+	/* should not be invoked by early section */
+	WARN_ON(early_section(ms));
+	/*
+	 * 4K base page's PMD_SIZE matches SUBSECTION_SIZE i.e 2MB. Hence
+	 * PMD section mapping can be allowed, but only for 4K base pages.
+	 * Where as PMD_SIZE (hence PUD_SIZE) for other page sizes exceed
+	 * SUBSECTION_SIZE.
+	 */
+	if (IS_ENABLED(CONFIG_ARM64_4K_PAGES))
+		flags |= NO_PUD_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
In theory we can allow contiguous PTE mappings but not PMD. You could probably do the same as a NO_BLOCK_MAPPINGS and split it into multiple components - NO_PTE_CONT_MAPPINGS and so on.
+	else
+		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
Similarly with 16K/64K pages we can allow contiguous PTEs as they all go up to 2MB blocks.
Yes!
I think we should write the flags setup in a more readable way than trying to do mental maths on the possible combinations, something like:
	flags = NO_PUD_BLOCK_MAPPINGS | NO_PMD_CONT_MAPPINGS;
	if (SUBSECTION_SHIFT < PMD_SHIFT)
		flags |= NO_PMD_BLOCK_MAPPINGS;
	if (SUBSECTION_SHIFT < CONT_PTE_SHIFT)
		flags |= NO_PTE_CONT_MAPPINGS;
Good idea indeed. We no longer need to worry about the page size config.
This way we don't care about the page size and should cover any changes to SUBSECTION_SHIFT making it smaller than 2MB.
On 1/8/25 15:37, Zhenhua Huang wrote:
 			/*
@@ -1175,9 +1178,21 @@ int __meminit vmemmap_check_pmd(pmd_t *pmdp, int node,
 int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 		struct vmem_altmap *altmap)
 {
+	unsigned long start_pfn;
+	struct mem_section *ms;
+
 	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
 
-	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
+	start_pfn = page_to_pfn((struct page *)start);
+	ms = __pfn_to_section(start_pfn);
Hmm, it would have been better if the core code provided the start pfn as it does for vmemmap_populate_compound_pages(), but I'm fine with deducing it from 'start'.
I found another bug: even for an early section, SECTION_IS_EARLY is not yet set when vmemmap_populate() is called, so early_section() always returns false.
Hmm, well that's unexpected.
Since vmemmap_populate() occurs during section initialization, it may be hard to call it a bug... However, should we instead use SECTION_MARKED_PRESENT for the check? It tested well in my setup.
Hot plug flow:
- section_activate -> vmemmap_populate
- mark PRESENT
In contrast, the early flow:
- memblocks_present -> mark PRESENT
- __populate_section_memmap -> vmemmap_populate
But from a semantics perspective, should SECTION_MARKED_PRESENT be marked on a section before SECTION_IS_EARLY? Is this really the expected behaviour here, or does that need to be fixed first?
Although a SYSTEM_BOOTING state check might help, the section flag seems to be the right thing to use here.
On 2025/1/8 18:52, Anshuman Khandual wrote:
I found another bug: even for an early section, SECTION_IS_EARLY is not yet set when vmemmap_populate() is called, so early_section() always returns false.
Hmm, well that's unexpected.
Since vmemmap_populate() occurs during section initialization, it may be hard to call it a bug... However, should we instead use SECTION_MARKED_PRESENT for the check? It tested well in my setup.
Hot plug flow:
- section_activate -> vmemmap_populate
- mark PRESENT
In contrast, the early flow:
- memblocks_present -> mark PRESENT
- __populate_section_memmap -> vmemmap_populate
But from a semantics perspective, should SECTION_MARKED_PRESENT be marked on a section before SECTION_IS_EARLY? Is this really the expected behaviour here, or does that need to be fixed first?
The tricky part is that vmemmap_populate() initializes the mem_map, which happens during the mem_section initialization process; the PRESENT and EARLY flags are set in that same process as well. There doesn't appear to be a compelling reason to enforce a specific sequence.
Although a SYSTEM_BOOTING state check might help, the section flag seems to be the right thing to use here.
Good idea. I'd vote for this alternative rather than the PRESENT flag. As I see it, common mm code already uses this system state to determine whether memmap pages are boot pages: https://elixir.bootlin.com/linux/v6.13-rc3/source/mm/sparse-vmemmap.c#L465
I'd like to hear Catalin's perspective :)
On Thu, Jan 09, 2025 at 03:04:22PM +0800, Zhenhua Huang wrote:
On 2025/1/8 18:52, Anshuman Khandual wrote:
I found another bug: even for an early section, SECTION_IS_EARLY is not yet set when vmemmap_populate() is called, so early_section() always returns false.
[...]
Since vmemmap_populate() occurs during section initialization, it may be hard to call it a bug... However, should we instead use SECTION_MARKED_PRESENT for the check? It tested well in my setup.
Hot plug flow:
- section_activate -> vmemmap_populate
- mark PRESENT
In contrast, the early flow:
- memblocks_present -> mark PRESENT
- __populate_section_memmap -> vmemmap_populate
But from a semantics perspective, should SECTION_MARKED_PRESENT be marked on a section before SECTION_IS_EARLY? Is this really the expected behaviour here, or does that need to be fixed first?
The tricky part is that vmemmap_populate() initializes the mem_map, which happens during the mem_section initialization process; the PRESENT and EARLY flags are set in that same process as well. There doesn't appear to be a compelling reason to enforce a specific sequence.
The order in which a section is marked as present and its vmemmap created does seem a bit arbitrary. At least the early code seems to rely on the for_each_present_section_nr() loop, so we'll always have this first, but it's not some internal kernel API that guarantees this.
Although a SYSTEM_BOOTING state check might help, the section flag seems to be the right thing to use here.
Good idea. I'd vote for this alternative rather than the PRESENT flag. As I see it, common mm code already uses this system state to determine whether memmap pages are boot pages: https://elixir.bootlin.com/linux/v6.13-rc3/source/mm/sparse-vmemmap.c#L465
The advantage of SYSTEM_BOOTING is that we don't need to rely on the section information at all, though we could add a WARN_ON_ONCE if the section is not present.
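A rough sketch of that variant (an assumption, not posted code; 'ms' derived from 'start' as in the patch):

	if (system_state == SYSTEM_BOOTING) {
		/* boot-time vmemmap_populate(): section already marked present */
		WARN_ON_ONCE(!present_section(ms));
		if (IS_ENABLED(CONFIG_ARM64_4K_PAGES))
			return vmemmap_populate_hugepages(start, end,
							  node, altmap);
	}
	/* hotplug (or non-4K config): stick to base pages */
	return vmemmap_populate_basepages(start, end, node, altmap);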
On 2025/1/9 22:32, Catalin Marinas wrote:
On Thu, Jan 09, 2025 at 03:04:22PM +0800, Zhenhua Huang wrote:
On 2025/1/8 18:52, Anshuman Khandual wrote:
I found another bug: even for an early section, SECTION_IS_EARLY is not yet set when vmemmap_populate() is called, so early_section() always returns false.
[...]
Since vmemmap_populate() occurs during section initialization, it may be hard to call it a bug... However, should we instead use SECTION_MARKED_PRESENT for the check? It tested well in my setup.
Hot plug flow:
- section_activate -> vmemmap_populate
- mark PRESENT
In contrast, the early flow:
- memblocks_present -> mark PRESENT
- __populate_section_memmap -> vmemmap_populate
But from a semantics perspective, should SECTION_MARKED_PRESENT be marked on a section before SECTION_IS_EARLY? Is this really the expected behaviour here, or does that need to be fixed first?
The tricky part is that vmemmap_populate() initializes the mem_map, which happens during the mem_section initialization process; the PRESENT and EARLY flags are set in that same process as well. There doesn't appear to be a compelling reason to enforce a specific sequence.
The order in which a section is marked as present and its vmemmap created does seem a bit arbitrary. At least the early code seems to rely on the for_each_present_section_nr() loop, so we'll always have this first, but it's not some internal kernel API that guarantees this.
Although a SYSTEM_BOOTING state check might help, the section flag seems to be the right thing to use here.
Good idea. I'd vote for this alternative rather than the PRESENT flag. As I see it, common mm code already uses this system state to determine whether memmap pages are boot pages: https://elixir.bootlin.com/linux/v6.13-rc3/source/mm/sparse-vmemmap.c#L465
The advantage of SYSTEM_BOOTING is that we don't need to rely on the section information at all, though we could add a WARN_ON_ONCE if the section is not present.
Hi Catalin,
Sorry, but I don't fully understand your comment here. IIUC we shouldn't add WARN_ON_ONCE in vmemmap_populate(): as you mentioned above, the early code relies on the section being present, while the hotplug code does not guarantee that; it sets PRESENT after calling vmemmap_populate(). By the way, it seems you're not opposed to using SYSTEM_BOOTING? If so, please take a look at the latest post: https://lore.kernel.org/linux-mm/20250109093824.452925-1-quic_zhenhuah@quici... Thanks very much!
On 1/8/25 00:52, Catalin Marinas wrote:
On Tue, Jan 07, 2025 at 03:42:52PM +0800, Zhenhua Huang wrote:
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index e2739b69e11b..5e0f514de870 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -42,9 +42,11 @@
 #include <asm/pgalloc.h>
 #include <asm/kfence.h>
 
-#define NO_BLOCK_MAPPINGS	BIT(0)
-#define NO_CONT_MAPPINGS	BIT(1)
-#define NO_EXEC_MAPPINGS	BIT(2)	/* assumes FEAT_HPDS is not used */
+#define NO_PMD_BLOCK_MAPPINGS	BIT(0)
+#define NO_PUD_BLOCK_MAPPINGS	BIT(1)	/* Hotplug case: do not want block mapping for PUD */
+#define NO_BLOCK_MAPPINGS (NO_PMD_BLOCK_MAPPINGS | NO_PUD_BLOCK_MAPPINGS)
Nit: please use a tab instead of space before (NO_PMD_...)
+#define NO_CONT_MAPPINGS	BIT(2)
+#define NO_EXEC_MAPPINGS	BIT(3)	/* assumes FEAT_HPDS is not used */
 
 u64 kimage_voffset __ro_after_init;
 EXPORT_SYMBOL(kimage_voffset);
@@ -254,7 +256,7 @@ static void init_pmd(pmd_t *pmdp, unsigned long addr, unsigned long end,
 
 		/* try section mapping first */
 		if (((addr | next | phys) & ~PMD_MASK) == 0 &&
-		    (flags & NO_BLOCK_MAPPINGS) == 0) {
+		    (flags & NO_PMD_BLOCK_MAPPINGS) == 0) {
 			pmd_set_huge(pmdp, phys, prot);
 			/*
@@ -356,10 +358,11 @@ static void alloc_init_pud(p4d_t *p4dp, unsigned long addr, unsigned long end,
 
 		/*
 		 * For 4K granule only, attempt to put down a 1GB block
+		 * Hotplug case: do not attempt 1GB block
 		 */
I don't think we need this comment added here. The hotplug case is a decision of the caller, so better to have the comment there.
Agreed.
 		if (pud_sect_supported() &&
		   ((addr | next | phys) & ~PUD_MASK) == 0 &&
-		    (flags & NO_BLOCK_MAPPINGS) == 0) {
+		    (flags & NO_PUD_BLOCK_MAPPINGS) == 0) {
 			pud_set_huge(pudp, phys, prot);
Nit: something wrong with the alignment here. I think the unmodified line after the 'if' one above was misaligned before your patch.
 			/*
@@ -1175,9 +1178,21 @@ int __meminit vmemmap_check_pmd(pmd_t *pmdp, int node,
 int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 		struct vmem_altmap *altmap)
 {
+	unsigned long start_pfn;
+	struct mem_section *ms;
+
 	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
 
-	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
+	start_pfn = page_to_pfn((struct page *)start);
+	ms = __pfn_to_section(start_pfn);
Hmm, it would have been better if the core code provided the start pfn as it does for vmemmap_populate_compound_pages(), but I'm fine with deducing it from 'start'.
Right, that will require changing arguments in generic vmemmap_populate().
+	/*
+	 * Hotplugged section does not support hugepages as
+	 * PMD_SIZE (hence PUD_SIZE) section mapping covers
+	 * struct page range that exceeds a SUBSECTION_SIZE
+	 * i.e 2MB - for all available base page sizes.
+	 */
+	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) || !early_section(ms))
 		return vmemmap_populate_basepages(start, end, node, altmap);
 	else
 		return vmemmap_populate_hugepages(start, end, node, altmap);
@@ -1339,9 +1354,25 @@ int arch_add_memory(int nid, u64 start, u64 size,
 		    struct mhp_params *params)
 {
 	int ret, flags = NO_EXEC_MAPPINGS;
+	unsigned long start_pfn = page_to_pfn((struct page *)start);
+	struct mem_section *ms = __pfn_to_section(start_pfn);
This looks wrong. 'start' here is a physical address, you want PFN_DOWN() instead.
Agreed.
VM_BUG_ON(!mhp_range_allowed(start, size, true));
+	/* should not be invoked by early section */
+	WARN_ON(early_section(ms));
+	/*
+	 * 4K base page's PMD_SIZE matches SUBSECTION_SIZE i.e 2MB. Hence
+	 * PMD section mapping can be allowed, but only for 4K base pages.
+	 * Where as PMD_SIZE (hence PUD_SIZE) for other page sizes exceed
+	 * SUBSECTION_SIZE.
+	 */
+	if (IS_ENABLED(CONFIG_ARM64_4K_PAGES))
+		flags |= NO_PUD_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
In theory we can allow contiguous PTE mappings but not PMD. You could probably do the same as a NO_BLOCK_MAPPINGS and split it into multiple components - NO_PTE_CONT_MAPPINGS and so on.
That's a good idea.
+	else
+		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
Similarly with 16K/64K pages we can allow contiguous PTEs as they all go up to 2MB blocks.
I think we should write the flags setup in a more readable way than trying to do mental maths on the possible combinations, something like:
	flags = NO_PUD_BLOCK_MAPPINGS | NO_PMD_CONT_MAPPINGS;
	if (SUBSECTION_SHIFT < PMD_SHIFT)
		flags |= NO_PMD_BLOCK_MAPPINGS;
	if (SUBSECTION_SHIFT < CONT_PTE_SHIFT)
		flags |= NO_PTE_CONT_MAPPINGS;
Just wondering, why not start with the PUD level itself? Although SUBSECTION_SHIFT might never reach the PUD level, this would keep the flags calculation simple and ready for all future changes.
	flags = 0;
	if (SUBSECTION_SHIFT < PUD_SHIFT)
		flags |= NO_PUD_BLOCK_MAPPINGS;
	if (SUBSECTION_SHIFT < CONT_PMD_SHIFT)
		flags |= NO_PMD_CONT_MAPPINGS;
This way we don't care about the page size and should cover any changes to SUBSECTION_SHIFT making it smaller than 2MB.
Agreed.
On 2025/1/8 18:11, Anshuman Khandual wrote:
Just wondering, why not start with the PUD level itself? Although SUBSECTION_SHIFT might never reach the PUD level, this would keep the flags calculation simple and ready for all future changes.
I suppose it's because these are significantly larger than 2M, whereas Catalin assumed SUBSECTION_SIZE would not increase. His comment: "should cover any changes to SUBSECTION_SHIFT making it *smaller* than 2MB."
	flags = 0;
	if (SUBSECTION_SHIFT < PUD_SHIFT)
		flags |= NO_PUD_BLOCK_MAPPINGS;
	if (SUBSECTION_SHIFT < CONT_PMD_SHIFT)
		flags |= NO_PMD_CONT_MAPPINGS;
On Thu, Jan 09, 2025 at 03:04:48PM +0800, Zhenhua Huang wrote:
On 2025/1/8 18:11, Anshuman Khandual wrote:
Just wondering, why not start with the PUD level itself? Although SUBSECTION_SHIFT might never reach the PUD level, this would keep the flags calculation simple and ready for all future changes.
I suppose it's because these are significantly larger than 2M, whereas Catalin assumed SUBSECTION_SIZE would not increase. His comment: "should cover any changes to SUBSECTION_SHIFT making it *smaller* than 2MB."
Yeah, I was thinking of having fewer code lines. Otherwise the compiler would likely optimise them anyway to a single assignment.