In register_mem_sect_under_node(), the system_state value is checked to detect whether the call is made during boot time or during a hot-plug operation. Unfortunately, that check is wrong on some architectures, and may lead to sections being registered under multiple nodes if a node's memory ranges are interleaved.
This can be seen on PowerPC LPAR after multiple memory hot-plug and hot-unplug operations are done. At the next reboot the node's memory ranges can be interleaved and, since the call to link_mem_sections() is made in topology_init() while the system is in the SYSTEM_SCHEDULING state, the node's id is not checked, and the sections are registered multiple times. In that case, the system is able to boot, but a later hot-plug operation may lead to this panic because the node's links are incorrectly broken:
------------[ cut here ]------------
kernel BUG at /Users/laurent/src/linux-ppc/mm/memory_hotplug.c:1084!
Oops: Exception in kernel mode, sig: 5 [#1]
LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
Modules linked in: rpadlpar_io rpaphp pseries_rng rng_core vmx_crypto gf128mul binfmt_misc ip_tables x_tables xfs libcrc32c crc32c_vpmsum autofs4
CPU: 8 PID: 10256 Comm: drmgr Not tainted 5.9.0-rc1+ #25
NIP:  c000000000403f34 LR: c000000000403f2c CTR: 0000000000000000
REGS: c0000004876e3660 TRAP: 0700   Not tainted  (5.9.0-rc1+)
MSR:  800000000282b033 <SF,VEC,VSX,EE,FP,ME,IR,DR,RI,LE>  CR: 24000448  XER: 20040000
CFAR: c000000000846d20 IRQMASK: 0
GPR00: c000000000403f2c c0000004876e38f0 c0000000012f6f00 ffffffffffffffef
GPR04: 0000000000000227 c0000004805ae680 0000000000000000 00000004886f0000
GPR08: 0000000000000226 0000000000000003 0000000000000002 fffffffffffffffd
GPR12: 0000000088000484 c00000001ec96280 0000000000000000 0000000000000000
GPR16: 0000000000000000 0000000000000000 0000000000000004 0000000000000003
GPR20: c00000047814ffe0 c0000007ffff7c08 0000000000000010 c0000000013332c8
GPR24: 0000000000000000 c0000000011f6cc0 0000000000000000 0000000000000000
GPR28: ffffffffffffffef 0000000000000001 0000000150000000 0000000010000000
NIP [c000000000403f34] add_memory_resource+0x244/0x340
LR [c000000000403f2c] add_memory_resource+0x23c/0x340
Call Trace:
[c0000004876e38f0] [c000000000403f2c] add_memory_resource+0x23c/0x340 (unreliable)
[c0000004876e39c0] [c00000000040408c] __add_memory+0x5c/0xf0
[c0000004876e39f0] [c0000000000e2b94] dlpar_add_lmb+0x1b4/0x500
[c0000004876e3ad0] [c0000000000e3888] dlpar_memory+0x1f8/0xb80
[c0000004876e3b60] [c0000000000dc0d0] handle_dlpar_errorlog+0xc0/0x190
[c0000004876e3bd0] [c0000000000dc398] dlpar_store+0x198/0x4a0
[c0000004876e3c90] [c00000000072e630] kobj_attr_store+0x30/0x50
[c0000004876e3cb0] [c00000000051f954] sysfs_kf_write+0x64/0x90
[c0000004876e3cd0] [c00000000051ee40] kernfs_fop_write+0x1b0/0x290
[c0000004876e3d20] [c000000000438dd8] vfs_write+0xe8/0x290
[c0000004876e3d70] [c0000000004391ac] ksys_write+0xdc/0x130
[c0000004876e3dc0] [c000000000034e40] system_call_exception+0x160/0x270
[c0000004876e3e20] [c00000000000d740] system_call_common+0xf0/0x27c
Instruction dump:
48442e35 60000000 0b030000 3cbe0001 7fa3eb78 7bc48402 38a5fffe 7ca5fa14
78a58402 48442db1 60000000 7c7c1b78 <0b030000> 7f23cb78 4bda371d 60000000
---[ end trace 562fd6c109cd0fb2 ]---
This patch addresses the root cause by not relying on the system_state value to detect whether the call is due to a hot-plug operation. An additional parameter is added to link_mem_sections() to tell the context of the call, and this parameter is propagated to register_mem_sect_under_node() through the walk_memory_blocks() call.
Fixes: 4fbce633910e ("mm/memory_hotplug.c: make register_mem_sect_under_node() a callback of walk_memory_range()")
Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com>
Cc: stable@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
---
 drivers/base/node.c  | 20 +++++++++++++++-----
 include/linux/node.h |  6 +++---
 mm/memory_hotplug.c  |  3 ++-
 3 files changed, 20 insertions(+), 9 deletions(-)
diff --git a/drivers/base/node.c b/drivers/base/node.c
index 508b80f6329b..27f828eeb531 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -762,14 +762,19 @@ static int __ref get_nid_for_pfn(unsigned long pfn)
 }
 
 /* register memory section under specified node if it spans that node */
+struct rmsun_args {
+	int nid;
+	bool hotadd;
+};
 static int register_mem_sect_under_node(struct memory_block *mem_blk,
-					void *arg)
+					void *args)
 {
 	unsigned long memory_block_pfns = memory_block_size_bytes() / PAGE_SIZE;
 	unsigned long start_pfn = section_nr_to_pfn(mem_blk->start_section_nr);
 	unsigned long end_pfn = start_pfn + memory_block_pfns - 1;
-	int ret, nid = *(int *)arg;
+	int ret, nid = ((struct rmsun_args *)args)->nid;
 	unsigned long pfn;
+	bool hotadd = ((struct rmsun_args *)args)->hotadd;
 
 	for (pfn = start_pfn; pfn <= end_pfn; pfn++) {
 		int page_nid;
@@ -789,7 +794,7 @@ static int register_mem_sect_under_node(struct memory_block *mem_blk,
 		 * case, during hotplug we know that all pages in the memory
 		 * block belong to the same node.
 		 */
-		if (system_state == SYSTEM_BOOTING) {
+		if (!hotadd) {
 			page_nid = get_nid_for_pfn(pfn);
 			if (page_nid < 0)
 				continue;
@@ -832,10 +837,15 @@ void unregister_memory_block_under_nodes(struct memory_block *mem_blk)
 			  kobject_name(&node_devices[mem_blk->nid]->dev.kobj));
 }
 
-int link_mem_sections(int nid, unsigned long start_pfn, unsigned long end_pfn)
+int link_mem_sections(int nid, unsigned long start_pfn, unsigned long end_pfn,
+		      bool hotadd)
 {
+	struct rmsun_args args;
+
+	args.nid = nid;
+	args.hotadd = hotadd;
 	return walk_memory_blocks(PFN_PHYS(start_pfn),
-				  PFN_PHYS(end_pfn - start_pfn), (void *)&nid,
+				  PFN_PHYS(end_pfn - start_pfn), (void *)&args,
 				  register_mem_sect_under_node);
 }
diff --git a/include/linux/node.h b/include/linux/node.h
index 4866f32a02d8..6df9a4548650 100644
--- a/include/linux/node.h
+++ b/include/linux/node.h
@@ -100,10 +100,10 @@ typedef void (*node_registration_func_t)(struct node *);
 
 #if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_NUMA)
 extern int link_mem_sections(int nid, unsigned long start_pfn,
-			     unsigned long end_pfn);
+			     unsigned long end_pfn, bool hotadd);
 #else
 static inline int link_mem_sections(int nid, unsigned long start_pfn,
-				    unsigned long end_pfn)
+				    unsigned long end_pfn, bool hotadd)
 {
 	return 0;
 }
@@ -128,7 +128,7 @@ static inline int register_one_node(int nid)
 		if (error)
 			return error;
 		/* link memory sections under this node */
-		error = link_mem_sections(nid, start_pfn, end_pfn);
+		error = link_mem_sections(nid, start_pfn, end_pfn, false);
 	}
 
 	return error;
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index e9d5ab5d3ca0..28028db8364a 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1080,7 +1080,8 @@ int __ref add_memory_resource(int nid, struct resource *res)
 	}
 
 	/* link memory sections under this node.*/
-	ret = link_mem_sections(nid, PFN_DOWN(start), PFN_UP(start + size - 1));
+	ret = link_mem_sections(nid, PFN_DOWN(start), PFN_UP(start + size - 1),
+				true);
 	BUG_ON(ret);
 
 	/* create new memmap entry */
On Tue, Sep 08, 2020 at 07:08:35PM +0200, Laurent Dufour wrote:
[...]
Adding a random boolean flag to a function is a horrible way to do anything.
Now you need to look up what that flag means every time you run across a caller, breaking your reading of what is happening.
Make this two different functions please, that describe what they do, and have them call a common "helper function" that does the work with the flag if you really want to do this type of thing.
link_mem_sections() and link_mem_sections_hotadd()?
But not this way, please no.
thanks,
greg k-h
On 08.09.20 19:31, Greg Kroah-Hartman wrote:
On Tue, Sep 08, 2020 at 07:08:35PM +0200, Laurent Dufour wrote:
[...]
Adding a random boolean flag to a function is a horrible way to do anything.
Now you need to look up what that flag means every time you run across a caller, breaking your reading of what is happening.
Make this two different functions please, that describe what they do, and have them call a common "helper function" that does the work with the flag if you really want to do this type of thing.
link_mem_sections() and link_mem_sections_hotadd()?
But not this way, please no.
For memmap_init_zone() we solved it via an enum with MEMMAP_HOTPLUG vs MEMMAP_EARLY. Maybe we can generalize, because it tries to tackle roughly the same thing.
thanks,
greg k-h
Le 08/09/2020 à 19:40, David Hildenbrand a écrit :
On 08.09.20 19:31, Greg Kroah-Hartman wrote:
On Tue, Sep 08, 2020 at 07:08:35PM +0200, Laurent Dufour wrote:
[...]
Adding a random boolean flag to a function is a horrible way to do anything.
Now you need to look up what that flag means every time you run across a caller, breaking your reading of what is happening.
Make this two different functions please, that describe what they do, and have them call a common "helper function" that does the work with the flag if you really want to do this type of thing.
link_mem_sections() and link_mem_sections_hotadd()?
But not this way, please no.
For memmap_init_zone() we solved it via an enum with MEMMAP_HOTPLUG vs MEMMAP_EARLY. Maybe we can generalize, because it tries to tackle roughly the same thing.
Thanks David, I like this idea.
However, I think I'll not reuse the memmap_context enum, but introduce a new one specific to the link operation, to be more explicit, something like:
/*
 * When a hotplug operation is done, all pages in the memory block belong to
 * the same node, so there is no need to do such a check in that case.
 */
enum linkmem_context {
	LINKMEM_NO_CHECK_NODE_ID,
	LINKMEM_CHECK_NODE_ID,
};
I'm bad at naming so feel free to argue.
Cheers, Laurent.
On 09.09.20 10:26, Laurent Dufour wrote:
Le 08/09/2020 à 19:40, David Hildenbrand a écrit :
On 08.09.20 19:31, Greg Kroah-Hartman wrote:
On Tue, Sep 08, 2020 at 07:08:35PM +0200, Laurent Dufour wrote:
In register_mem_sect_under_node() the system_state’s value is checked to detect whether the operation the call is made during boot time or during an hot-plug operation. Unfortunately, that check is wrong on some architecture, and may lead to sections being registered under multiple nodes if node's memory ranges are interleaved.
This can be seen on PowerPC LPAR after multiple memory hot-plug and hot-unplug operations are done. At the next reboot the node's memory ranges can be interleaved and since the call to link_mem_sections() is made in topology_init() while the system is in the SYSTEM_SCHEDULING state, the node's id is not checked, and the sections registered multiple times. In that case, the system is able to boot but later hot-plug operation may lead to this panic because the node's links are correctly broken:
------------[ cut here ]------------ kernel BUG at /Users/laurent/src/linux-ppc/mm/memory_hotplug.c:1084! Oops: Exception in kernel mode, sig: 5 [#1] LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries Modules linked in: rpadlpar_io rpaphp pseries_rng rng_core vmx_crypto gf128mul binfmt_misc ip_tables x_tables xfs libcrc32c crc32c_vpmsum autofs4 CPU: 8 PID: 10256 Comm: drmgr Not tainted 5.9.0-rc1+ #25 NIP: c000000000403f34 LR: c000000000403f2c CTR: 0000000000000000 REGS: c0000004876e3660 TRAP: 0700 Not tainted (5.9.0-rc1+) MSR: 800000000282b033 <SF,VEC,VSX,EE,FP,ME,IR,DR,RI,LE> CR: 24000448 XER: 20040000 CFAR: c000000000846d20 IRQMASK: 0 GPR00: c000000000403f2c c0000004876e38f0 c0000000012f6f00 ffffffffffffffef GPR04: 0000000000000227 c0000004805ae680 0000000000000000 00000004886f0000 GPR08: 0000000000000226 0000000000000003 0000000000000002 fffffffffffffffd GPR12: 0000000088000484 c00000001ec96280 0000000000000000 0000000000000000 GPR16: 0000000000000000 0000000000000000 0000000000000004 0000000000000003 GPR20: c00000047814ffe0 c0000007ffff7c08 0000000000000010 c0000000013332c8 GPR24: 0000000000000000 c0000000011f6cc0 0000000000000000 0000000000000000 GPR28: ffffffffffffffef 0000000000000001 0000000150000000 0000000010000000 NIP [c000000000403f34] add_memory_resource+0x244/0x340 LR [c000000000403f2c] add_memory_resource+0x23c/0x340 Call Trace: [c0000004876e38f0] [c000000000403f2c] add_memory_resource+0x23c/0x340 (unreliable) [c0000004876e39c0] [c00000000040408c] __add_memory+0x5c/0xf0 [c0000004876e39f0] [c0000000000e2b94] dlpar_add_lmb+0x1b4/0x500 [c0000004876e3ad0] [c0000000000e3888] dlpar_memory+0x1f8/0xb80 [c0000004876e3b60] [c0000000000dc0d0] handle_dlpar_errorlog+0xc0/0x190 [c0000004876e3bd0] [c0000000000dc398] dlpar_store+0x198/0x4a0 [c0000004876e3c90] [c00000000072e630] kobj_attr_store+0x30/0x50 [c0000004876e3cb0] [c00000000051f954] sysfs_kf_write+0x64/0x90 [c0000004876e3cd0] [c00000000051ee40] kernfs_fop_write+0x1b0/0x290 [c0000004876e3d20] 
[c000000000438dd8] vfs_write+0xe8/0x290 [c0000004876e3d70] [c0000000004391ac] ksys_write+0xdc/0x130 [c0000004876e3dc0] [c000000000034e40] system_call_exception+0x160/0x270 [c0000004876e3e20] [c00000000000d740] system_call_common+0xf0/0x27c Instruction dump: 48442e35 60000000 0b030000 3cbe0001 7fa3eb78 7bc48402 38a5fffe 7ca5fa14 78a58402 48442db1 60000000 7c7c1b78 <0b030000> 7f23cb78 4bda371d 60000000 ---[ end trace 562fd6c109cd0fb2 ]---
This patch addresses the root cause by not relying on the system_state value to detect whether the call is due to a hot-plug operation or not. An additional parameter is added to link_mem_sections() to tell the context of the call, and this parameter is propagated to register_mem_sect_under_node() through the walk_memory_blocks() call.
Fixes: 4fbce633910e ("mm/memory_hotplug.c: make register_mem_sect_under_node() a callback of walk_memory_range()")
Signed-off-by: Laurent Dufour ldufour@linux.ibm.com
Cc: stable@vger.kernel.org
Cc: Greg Kroah-Hartman gregkh@linuxfoundation.org
Cc: "Rafael J. Wysocki" rafael@kernel.org
Cc: Andrew Morton akpm@linux-foundation.org
 drivers/base/node.c  | 20 +++++++++++++++-----
 include/linux/node.h |  6 +++---
 mm/memory_hotplug.c  |  3 ++-
 3 files changed, 20 insertions(+), 9 deletions(-)
diff --git a/drivers/base/node.c b/drivers/base/node.c
index 508b80f6329b..27f828eeb531 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -762,14 +762,19 @@ static int __ref get_nid_for_pfn(unsigned long pfn)
 }
 
 /* register memory section under specified node if it spans that node */
+struct rmsun_args {
+	int nid;
+	bool hotadd;
+};
 static int register_mem_sect_under_node(struct memory_block *mem_blk,
-					 void *arg)
+					 void *args)
 {
 	unsigned long memory_block_pfns = memory_block_size_bytes() / PAGE_SIZE;
 	unsigned long start_pfn = section_nr_to_pfn(mem_blk->start_section_nr);
 	unsigned long end_pfn = start_pfn + memory_block_pfns - 1;
-	int ret, nid = *(int *)arg;
+	int ret, nid = ((struct rmsun_args *)args)->nid;
 	unsigned long pfn;
+	bool hotadd = ((struct rmsun_args *)args)->hotadd;
 
 	for (pfn = start_pfn; pfn <= end_pfn; pfn++) {
 		int page_nid;
@@ -789,7 +794,7 @@ static int register_mem_sect_under_node(struct memory_block *mem_blk,
 		 * case, during hotplug we know that all pages in the memory
 		 * block belong to the same node.
 		 */
-		if (system_state == SYSTEM_BOOTING) {
+		if (!hotadd) {
 			page_nid = get_nid_for_pfn(pfn);
 			if (page_nid < 0)
 				continue;
@@ -832,10 +837,15 @@ void unregister_memory_block_under_nodes(struct memory_block *mem_blk)
 			  kobject_name(&node_devices[mem_blk->nid]->dev.kobj));
 }
 
-int link_mem_sections(int nid, unsigned long start_pfn, unsigned long end_pfn)
+int link_mem_sections(int nid, unsigned long start_pfn, unsigned long end_pfn,
+		      bool hotadd)
 {
+	struct rmsun_args args;
+
+	args.nid = nid;
+	args.hotadd = hotadd;
 	return walk_memory_blocks(PFN_PHYS(start_pfn),
-				  PFN_PHYS(end_pfn - start_pfn), (void *)&nid,
+				  PFN_PHYS(end_pfn - start_pfn), (void *)&args,
 				  register_mem_sect_under_node);
 }
diff --git a/include/linux/node.h b/include/linux/node.h
index 4866f32a02d8..6df9a4548650 100644
--- a/include/linux/node.h
+++ b/include/linux/node.h
@@ -100,10 +100,10 @@ typedef void (*node_registration_func_t)(struct node *);
 #if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_NUMA)
 extern int link_mem_sections(int nid, unsigned long start_pfn,
-			     unsigned long end_pfn);
+			     unsigned long end_pfn, bool hotadd);
 #else
 static inline int link_mem_sections(int nid, unsigned long start_pfn,
-				    unsigned long end_pfn)
+				    unsigned long end_pfn, bool hotadd)
Adding a random boolean flag to a function is a horrible way to do anything.
Now you need to look up what that flag means every time you run across a caller, breaking your reading of what is happening.
Make this two different functions please, that describe what they do, and have them call a common "helper function" that does the work with the flag if you really want to do this type of thing.
link_mem_sections() and link_mem_sections_hotadd()?
But not this way, please no.
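Sketched concretely, the two-entry-point idea might look like the following. This is a hypothetical illustration, not the kernel's code: the __link_mem_sections() helper name just follows the kernel's double-underscore convention, and the stand-in body merely records which path was taken instead of calling walk_memory_blocks().

```c
#include <stdbool.h>

/* Hypothetical common helper doing the real work; in the kernel it would
 * build the callback arguments and call walk_memory_blocks().  Here the
 * return value only exposes which path was taken, so the sketch is
 * checkable. */
static int __link_mem_sections(int nid, unsigned long start_pfn,
			       unsigned long end_pfn, bool hotadd)
{
	(void)nid; (void)start_pfn; (void)end_pfn;	/* unused in this sketch */
	return hotadd ? 1 : 0;
}

/* Boot-time path: the node id of each pfn still has to be checked. */
int link_mem_sections(int nid, unsigned long start_pfn, unsigned long end_pfn)
{
	return __link_mem_sections(nid, start_pfn, end_pfn, false);
}

/* Hot-add path: every page in the range is known to belong to nid. */
int link_mem_sections_hotadd(int nid, unsigned long start_pfn,
			     unsigned long end_pfn)
{
	return __link_mem_sections(nid, start_pfn, end_pfn, true);
}
```

Call sites then document themselves: add_memory_resource() would call link_mem_sections_hotadd() and register_one_node() the plain link_mem_sections(), with no boolean to look up.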
For memmap_init_zone() we solved it via an enum with MEMMAP_HOTPLUG vs MEMMAP_EARLY. Maybe we can generalize, because it tries to tackle roughly the same thing.
Thanks David, I like this idea.
However, I think I'll not reuse the memmap_context enum, but introduce a new one specific to the link operation, to be more explicit, something like:
/*
 * When a hotplug operation is done, all pages in the memory block belong to the
 * same node, so there is no need to do such a check in that case.
 */
enum linkmem_context {
	LINKMEM_NO_CHECK_NODE_ID,
	LINKMEM_CHECK_NODE_ID,
}
I'm bad at naming so feel free to argue.
"context" does not really fit the two cases that rather tell you what to do (like a single flag).
I would have renamed "enum memmap_context" to something like "enum mp_context" ("memory plug") and used
MP_CONTEXT_EARLY / MP_CONTEXT_HOTPLUG
Instead of using fairly specific "LINKMEM_*_CHECK_NODE_ID" ...
I am also bad at naming, so ... :)
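As a sketch, the generalized enum described above could look like this; the enum name and values are the ones floated in the mail, while the consumer function is purely illustrative:

```c
/* Generalized memory-plug context, per the suggestion above. */
enum mp_context {
	MP_CONTEXT_EARLY,	/* boot: pfn-to-node mapping must be checked */
	MP_CONTEXT_HOTPLUG,	/* hot-add: whole block belongs to one node */
};

/* Illustrative consumer mirroring the decision that
 * register_mem_sect_under_node() has to make. */
int need_node_id_check(enum mp_context ctx)
{
	return ctx == MP_CONTEXT_EARLY;
}
```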
On 09/09/2020 at 10:31, David Hildenbrand wrote:
On 09.09.20 10:26, Laurent Dufour wrote:
On 08/09/2020 at 19:40, David Hildenbrand wrote:
On 08.09.20 19:31, Greg Kroah-Hartman wrote:
On Tue, Sep 08, 2020 at 07:08:35PM +0200, Laurent Dufour wrote:
In register_mem_sect_under_node() the system_state value is checked to detect whether the call is made during boot time or during a hot-plug operation. Unfortunately, that check is wrong on some architectures, and may lead to sections being registered under multiple nodes if a node's memory ranges are interleaved.
This can be seen on a PowerPC LPAR after multiple memory hot-plug and hot-unplug operations are done. At the next reboot the node's memory ranges can be interleaved, and since the call to link_mem_sections() is made in topology_init() while the system is in the SYSTEM_SCHEDULING state, the node's id is not checked, and the sections are registered multiple times. In that case the system is able to boot, but a later hot-plug operation may lead to this panic because the node's links are correctly broken:
------------[ cut here ]------------
kernel BUG at /Users/laurent/src/linux-ppc/mm/memory_hotplug.c:1084!
Oops: Exception in kernel mode, sig: 5 [#1]
LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
Modules linked in: rpadlpar_io rpaphp pseries_rng rng_core vmx_crypto gf128mul binfmt_misc ip_tables x_tables xfs libcrc32c crc32c_vpmsum autofs4
CPU: 8 PID: 10256 Comm: drmgr Not tainted 5.9.0-rc1+ #25
NIP: c000000000403f34 LR: c000000000403f2c CTR: 0000000000000000
REGS: c0000004876e3660 TRAP: 0700 Not tainted (5.9.0-rc1+)
MSR: 800000000282b033 <SF,VEC,VSX,EE,FP,ME,IR,DR,RI,LE> CR: 24000448 XER: 20040000
CFAR: c000000000846d20 IRQMASK: 0
GPR00: c000000000403f2c c0000004876e38f0 c0000000012f6f00 ffffffffffffffef
GPR04: 0000000000000227 c0000004805ae680 0000000000000000 00000004886f0000
GPR08: 0000000000000226 0000000000000003 0000000000000002 fffffffffffffffd
GPR12: 0000000088000484 c00000001ec96280 0000000000000000 0000000000000000
GPR16: 0000000000000000 0000000000000000 0000000000000004 0000000000000003
GPR20: c00000047814ffe0 c0000007ffff7c08 0000000000000010 c0000000013332c8
GPR24: 0000000000000000 c0000000011f6cc0 0000000000000000 0000000000000000
GPR28: ffffffffffffffef 0000000000000001 0000000150000000 0000000010000000
NIP [c000000000403f34] add_memory_resource+0x244/0x340
LR [c000000000403f2c] add_memory_resource+0x23c/0x340
Call Trace:
[c0000004876e38f0] [c000000000403f2c] add_memory_resource+0x23c/0x340 (unreliable)
[c0000004876e39c0] [c00000000040408c] __add_memory+0x5c/0xf0
[c0000004876e39f0] [c0000000000e2b94] dlpar_add_lmb+0x1b4/0x500
[c0000004876e3ad0] [c0000000000e3888] dlpar_memory+0x1f8/0xb80
[c0000004876e3b60] [c0000000000dc0d0] handle_dlpar_errorlog+0xc0/0x190
[c0000004876e3bd0] [c0000000000dc398] dlpar_store+0x198/0x4a0
[c0000004876e3c90] [c00000000072e630] kobj_attr_store+0x30/0x50
[c0000004876e3cb0] [c00000000051f954] sysfs_kf_write+0x64/0x90
[c0000004876e3cd0] [c00000000051ee40] kernfs_fop_write+0x1b0/0x290
[c0000004876e3d20] [c000000000438dd8] vfs_write+0xe8/0x290
[c0000004876e3d70] [c0000000004391ac] ksys_write+0xdc/0x130
[c0000004876e3dc0] [c000000000034e40] system_call_exception+0x160/0x270
[c0000004876e3e20] [c00000000000d740] system_call_common+0xf0/0x27c
Instruction dump:
48442e35 60000000 0b030000 3cbe0001 7fa3eb78 7bc48402 38a5fffe 7ca5fa14
78a58402 48442db1 60000000 7c7c1b78 <0b030000> 7f23cb78 4bda371d 60000000
---[ end trace 562fd6c109cd0fb2 ]---
This patch addresses the root cause by not relying on the system_state value to detect whether the call is due to a hot-plug operation or not. An additional parameter is added to link_mem_sections() to tell the context of the call, and this parameter is propagated to register_mem_sect_under_node() through the walk_memory_blocks() call.
Fixes: 4fbce633910e ("mm/memory_hotplug.c: make register_mem_sect_under_node() a callback of walk_memory_range()")
Signed-off-by: Laurent Dufour ldufour@linux.ibm.com
Cc: stable@vger.kernel.org
Cc: Greg Kroah-Hartman gregkh@linuxfoundation.org
Cc: "Rafael J. Wysocki" rafael@kernel.org
Cc: Andrew Morton akpm@linux-foundation.org
 drivers/base/node.c  | 20 +++++++++++++++-----
 include/linux/node.h |  6 +++---
 mm/memory_hotplug.c  |  3 ++-
 3 files changed, 20 insertions(+), 9 deletions(-)
diff --git a/drivers/base/node.c b/drivers/base/node.c
index 508b80f6329b..27f828eeb531 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -762,14 +762,19 @@ static int __ref get_nid_for_pfn(unsigned long pfn)
 }
 
 /* register memory section under specified node if it spans that node */
+struct rmsun_args {
+	int nid;
+	bool hotadd;
+};
 static int register_mem_sect_under_node(struct memory_block *mem_blk,
-					 void *arg)
+					 void *args)
 {
 	unsigned long memory_block_pfns = memory_block_size_bytes() / PAGE_SIZE;
 	unsigned long start_pfn = section_nr_to_pfn(mem_blk->start_section_nr);
 	unsigned long end_pfn = start_pfn + memory_block_pfns - 1;
-	int ret, nid = *(int *)arg;
+	int ret, nid = ((struct rmsun_args *)args)->nid;
 	unsigned long pfn;
+	bool hotadd = ((struct rmsun_args *)args)->hotadd;
 
 	for (pfn = start_pfn; pfn <= end_pfn; pfn++) {
 		int page_nid;
@@ -789,7 +794,7 @@ static int register_mem_sect_under_node(struct memory_block *mem_blk,
 		 * case, during hotplug we know that all pages in the memory
 		 * block belong to the same node.
 		 */
-		if (system_state == SYSTEM_BOOTING) {
+		if (!hotadd) {
 			page_nid = get_nid_for_pfn(pfn);
 			if (page_nid < 0)
 				continue;
@@ -832,10 +837,15 @@ void unregister_memory_block_under_nodes(struct memory_block *mem_blk)
 			  kobject_name(&node_devices[mem_blk->nid]->dev.kobj));
 }
 
-int link_mem_sections(int nid, unsigned long start_pfn, unsigned long end_pfn)
+int link_mem_sections(int nid, unsigned long start_pfn, unsigned long end_pfn,
+		      bool hotadd)
 {
+	struct rmsun_args args;
+
+	args.nid = nid;
+	args.hotadd = hotadd;
 	return walk_memory_blocks(PFN_PHYS(start_pfn),
-				  PFN_PHYS(end_pfn - start_pfn), (void *)&nid,
+				  PFN_PHYS(end_pfn - start_pfn), (void *)&args,
 				  register_mem_sect_under_node);
 }
diff --git a/include/linux/node.h b/include/linux/node.h
index 4866f32a02d8..6df9a4548650 100644
--- a/include/linux/node.h
+++ b/include/linux/node.h
@@ -100,10 +100,10 @@ typedef void (*node_registration_func_t)(struct node *);
 #if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_NUMA)
 extern int link_mem_sections(int nid, unsigned long start_pfn,
-			     unsigned long end_pfn);
+			     unsigned long end_pfn, bool hotadd);
 #else
 static inline int link_mem_sections(int nid, unsigned long start_pfn,
-				    unsigned long end_pfn)
+				    unsigned long end_pfn, bool hotadd)
Adding a random boolean flag to a function is a horrible way to do anything.
Now you need to look up what that flag means every time you run across a caller, breaking your reading of what is happening.
Make this two different functions please, that describe what they do, and have them call a common "helper function" that does the work with the flag if you really want to do this type of thing.
link_mem_sections() and link_mem_sections_hotadd()?
But not this way, please no.
For memmap_init_zone() we solved it via an enum with MEMMAP_HOTPLUG vs MEMMAP_EARLY. Maybe we can generalize, because it tries to tackle roughly the same thing.
Thanks David, I like this idea.
However, I think I'll not reuse the memmap_context enum, but introduce a new one specific to the link operation, to be more explicit, something like:
/*
 * When a hotplug operation is done, all pages in the memory block belong to the
 * same node, so there is no need to do such a check in that case.
 */
enum linkmem_context {
	LINKMEM_NO_CHECK_NODE_ID,
	LINKMEM_CHECK_NODE_ID,
}
I'm bad at naming so feel free to argue.
"context" does not really fit the two cases that rather tell you what to do (like a single flag).
I agree, "linkmem_option" might fit better.
I would have renamed "enum memmap_context" to something like "enum mp_context" ("memory plug") and used
MP_CONTEXT_EARLY / MP_CONTEXT_HOTPLUG
Instead of using fairly specific "LINKMEM_*_CHECK_NODE_ID" ...
My point was to show explicitly, through the enum name, why the processing differs. Stating that we are at boot or hot-plug time does not tell the caller that the node id check will be bypassed and that all pages are assumed to belong to the same node. I think it is good to be explicit about that.
I am also bad at naming, so ... :)
So I'm arguing, but ... ;)
On 08/09/2020 at 19:31, Greg Kroah-Hartman wrote:
On Tue, Sep 08, 2020 at 07:08:35PM +0200, Laurent Dufour wrote:
In register_mem_sect_under_node() the system_state value is checked to detect whether the call is made during boot time or during a hot-plug operation. Unfortunately, that check is wrong on some architectures, and may lead to sections being registered under multiple nodes if a node's memory ranges are interleaved.
This can be seen on a PowerPC LPAR after multiple memory hot-plug and hot-unplug operations are done. At the next reboot the node's memory ranges can be interleaved, and since the call to link_mem_sections() is made in topology_init() while the system is in the SYSTEM_SCHEDULING state, the node's id is not checked, and the sections are registered multiple times. In that case the system is able to boot, but a later hot-plug operation may lead to this panic because the node's links are correctly broken:
------------[ cut here ]------------
kernel BUG at /Users/laurent/src/linux-ppc/mm/memory_hotplug.c:1084!
Oops: Exception in kernel mode, sig: 5 [#1]
LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
Modules linked in: rpadlpar_io rpaphp pseries_rng rng_core vmx_crypto gf128mul binfmt_misc ip_tables x_tables xfs libcrc32c crc32c_vpmsum autofs4
CPU: 8 PID: 10256 Comm: drmgr Not tainted 5.9.0-rc1+ #25
NIP: c000000000403f34 LR: c000000000403f2c CTR: 0000000000000000
REGS: c0000004876e3660 TRAP: 0700 Not tainted (5.9.0-rc1+)
MSR: 800000000282b033 <SF,VEC,VSX,EE,FP,ME,IR,DR,RI,LE> CR: 24000448 XER: 20040000
CFAR: c000000000846d20 IRQMASK: 0
GPR00: c000000000403f2c c0000004876e38f0 c0000000012f6f00 ffffffffffffffef
GPR04: 0000000000000227 c0000004805ae680 0000000000000000 00000004886f0000
GPR08: 0000000000000226 0000000000000003 0000000000000002 fffffffffffffffd
GPR12: 0000000088000484 c00000001ec96280 0000000000000000 0000000000000000
GPR16: 0000000000000000 0000000000000000 0000000000000004 0000000000000003
GPR20: c00000047814ffe0 c0000007ffff7c08 0000000000000010 c0000000013332c8
GPR24: 0000000000000000 c0000000011f6cc0 0000000000000000 0000000000000000
GPR28: ffffffffffffffef 0000000000000001 0000000150000000 0000000010000000
NIP [c000000000403f34] add_memory_resource+0x244/0x340
LR [c000000000403f2c] add_memory_resource+0x23c/0x340
Call Trace:
[c0000004876e38f0] [c000000000403f2c] add_memory_resource+0x23c/0x340 (unreliable)
[c0000004876e39c0] [c00000000040408c] __add_memory+0x5c/0xf0
[c0000004876e39f0] [c0000000000e2b94] dlpar_add_lmb+0x1b4/0x500
[c0000004876e3ad0] [c0000000000e3888] dlpar_memory+0x1f8/0xb80
[c0000004876e3b60] [c0000000000dc0d0] handle_dlpar_errorlog+0xc0/0x190
[c0000004876e3bd0] [c0000000000dc398] dlpar_store+0x198/0x4a0
[c0000004876e3c90] [c00000000072e630] kobj_attr_store+0x30/0x50
[c0000004876e3cb0] [c00000000051f954] sysfs_kf_write+0x64/0x90
[c0000004876e3cd0] [c00000000051ee40] kernfs_fop_write+0x1b0/0x290
[c0000004876e3d20] [c000000000438dd8] vfs_write+0xe8/0x290
[c0000004876e3d70] [c0000000004391ac] ksys_write+0xdc/0x130
[c0000004876e3dc0] [c000000000034e40] system_call_exception+0x160/0x270
[c0000004876e3e20] [c00000000000d740] system_call_common+0xf0/0x27c
Instruction dump:
48442e35 60000000 0b030000 3cbe0001 7fa3eb78 7bc48402 38a5fffe 7ca5fa14
78a58402 48442db1 60000000 7c7c1b78 <0b030000> 7f23cb78 4bda371d 60000000
---[ end trace 562fd6c109cd0fb2 ]---
This patch addresses the root cause by not relying on the system_state value to detect whether the call is due to a hot-plug operation or not. An additional parameter is added to link_mem_sections() to tell the context of the call, and this parameter is propagated to register_mem_sect_under_node() through the walk_memory_blocks() call.
Fixes: 4fbce633910e ("mm/memory_hotplug.c: make register_mem_sect_under_node() a callback of walk_memory_range()")
Signed-off-by: Laurent Dufour ldufour@linux.ibm.com
Cc: stable@vger.kernel.org
Cc: Greg Kroah-Hartman gregkh@linuxfoundation.org
Cc: "Rafael J. Wysocki" rafael@kernel.org
Cc: Andrew Morton akpm@linux-foundation.org
 drivers/base/node.c  | 20 +++++++++++++++-----
 include/linux/node.h |  6 +++---
 mm/memory_hotplug.c  |  3 ++-
 3 files changed, 20 insertions(+), 9 deletions(-)
diff --git a/drivers/base/node.c b/drivers/base/node.c
index 508b80f6329b..27f828eeb531 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -762,14 +762,19 @@ static int __ref get_nid_for_pfn(unsigned long pfn)
 }
 
 /* register memory section under specified node if it spans that node */
+struct rmsun_args {
+	int nid;
+	bool hotadd;
+};
 static int register_mem_sect_under_node(struct memory_block *mem_blk,
-					 void *arg)
+					 void *args)
 {
 	unsigned long memory_block_pfns = memory_block_size_bytes() / PAGE_SIZE;
 	unsigned long start_pfn = section_nr_to_pfn(mem_blk->start_section_nr);
 	unsigned long end_pfn = start_pfn + memory_block_pfns - 1;
-	int ret, nid = *(int *)arg;
+	int ret, nid = ((struct rmsun_args *)args)->nid;
 	unsigned long pfn;
+	bool hotadd = ((struct rmsun_args *)args)->hotadd;
 
 	for (pfn = start_pfn; pfn <= end_pfn; pfn++) {
 		int page_nid;
@@ -789,7 +794,7 @@ static int register_mem_sect_under_node(struct memory_block *mem_blk,
 		 * case, during hotplug we know that all pages in the memory
 		 * block belong to the same node.
 		 */
-		if (system_state == SYSTEM_BOOTING) {
+		if (!hotadd) {
 			page_nid = get_nid_for_pfn(pfn);
 			if (page_nid < 0)
 				continue;
@@ -832,10 +837,15 @@ void unregister_memory_block_under_nodes(struct memory_block *mem_blk)
 			  kobject_name(&node_devices[mem_blk->nid]->dev.kobj));
 }
 
-int link_mem_sections(int nid, unsigned long start_pfn, unsigned long end_pfn)
+int link_mem_sections(int nid, unsigned long start_pfn, unsigned long end_pfn,
+		      bool hotadd)
 {
+	struct rmsun_args args;
+
+	args.nid = nid;
+	args.hotadd = hotadd;
 	return walk_memory_blocks(PFN_PHYS(start_pfn),
-				  PFN_PHYS(end_pfn - start_pfn), (void *)&nid,
+				  PFN_PHYS(end_pfn - start_pfn), (void *)&args,
 				  register_mem_sect_under_node);
 }
diff --git a/include/linux/node.h b/include/linux/node.h
index 4866f32a02d8..6df9a4548650 100644
--- a/include/linux/node.h
+++ b/include/linux/node.h
@@ -100,10 +100,10 @@ typedef void (*node_registration_func_t)(struct node *);
 #if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_NUMA)
 extern int link_mem_sections(int nid, unsigned long start_pfn,
-			     unsigned long end_pfn);
+			     unsigned long end_pfn, bool hotadd);
 #else
 static inline int link_mem_sections(int nid, unsigned long start_pfn,
-				    unsigned long end_pfn)
+				    unsigned long end_pfn, bool hotadd)
Adding a random boolean flag to a function is a horrible way to do anything.
Now you need to look up what that flag means every time you run across a caller, breaking your reading of what is happening.
Make this two different functions please, that describe what they do, and have them call a common "helper function" that does the work with the flag if you really want to do this type of thing.
link_mem_sections() and link_mem_sections_hotadd()?
But not this way, please no.
Thanks Greg for commenting on this.
I agree, this is too opaque.
Cheers, Laurent.
[reposting because the malformed cc list confused my email client]
On Tue 08-09-20 19:08:35, Laurent Dufour wrote:
In register_mem_sect_under_node() the system_state value is checked to detect whether the call is made during boot time or during a hot-plug operation. Unfortunately, that check is wrong on some architectures, and may lead to sections being registered under multiple nodes if a node's memory ranges are interleaved.
Why is this check arch specific?
This can be seen on PowerPC LPAR after multiple memory hot-plug and hot-unplug operations are done. At the next reboot the node's memory ranges can be interleaved
What is the exact memory layout?
and since the call to link_mem_sections() is made in topology_init() while the system is in the SYSTEM_SCHEDULING state, the node's id is not checked, and the sections registered multiple times.
So a single memory section/memblock belongs to two numa nodes?
In that case, the system is able to boot but later hot-plug operation may lead to this panic because the node's links are correctly broken:
Correctly broken? Could you provide more details on the inconsistency please?
Which physical memory range are you trying to add here, and what is the node affinity?
------------[ cut here ]------------
kernel BUG at /Users/laurent/src/linux-ppc/mm/memory_hotplug.c:1084!
Oops: Exception in kernel mode, sig: 5 [#1]
LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
Modules linked in: rpadlpar_io rpaphp pseries_rng rng_core vmx_crypto gf128mul binfmt_misc ip_tables x_tables xfs libcrc32c crc32c_vpmsum autofs4
CPU: 8 PID: 10256 Comm: drmgr Not tainted 5.9.0-rc1+ #25
NIP: c000000000403f34 LR: c000000000403f2c CTR: 0000000000000000
REGS: c0000004876e3660 TRAP: 0700 Not tainted (5.9.0-rc1+)
MSR: 800000000282b033 <SF,VEC,VSX,EE,FP,ME,IR,DR,RI,LE> CR: 24000448 XER: 20040000
CFAR: c000000000846d20 IRQMASK: 0
GPR00: c000000000403f2c c0000004876e38f0 c0000000012f6f00 ffffffffffffffef
GPR04: 0000000000000227 c0000004805ae680 0000000000000000 00000004886f0000
GPR08: 0000000000000226 0000000000000003 0000000000000002 fffffffffffffffd
GPR12: 0000000088000484 c00000001ec96280 0000000000000000 0000000000000000
GPR16: 0000000000000000 0000000000000000 0000000000000004 0000000000000003
GPR20: c00000047814ffe0 c0000007ffff7c08 0000000000000010 c0000000013332c8
GPR24: 0000000000000000 c0000000011f6cc0 0000000000000000 0000000000000000
GPR28: ffffffffffffffef 0000000000000001 0000000150000000 0000000010000000
NIP [c000000000403f34] add_memory_resource+0x244/0x340
LR [c000000000403f2c] add_memory_resource+0x23c/0x340
Call Trace:
[c0000004876e38f0] [c000000000403f2c] add_memory_resource+0x23c/0x340 (unreliable)
[c0000004876e39c0] [c00000000040408c] __add_memory+0x5c/0xf0
[c0000004876e39f0] [c0000000000e2b94] dlpar_add_lmb+0x1b4/0x500
[c0000004876e3ad0] [c0000000000e3888] dlpar_memory+0x1f8/0xb80
[c0000004876e3b60] [c0000000000dc0d0] handle_dlpar_errorlog+0xc0/0x190
[c0000004876e3bd0] [c0000000000dc398] dlpar_store+0x198/0x4a0
[c0000004876e3c90] [c00000000072e630] kobj_attr_store+0x30/0x50
[c0000004876e3cb0] [c00000000051f954] sysfs_kf_write+0x64/0x90
[c0000004876e3cd0] [c00000000051ee40] kernfs_fop_write+0x1b0/0x290
[c0000004876e3d20] [c000000000438dd8] vfs_write+0xe8/0x290
[c0000004876e3d70] [c0000000004391ac] ksys_write+0xdc/0x130
[c0000004876e3dc0] [c000000000034e40] system_call_exception+0x160/0x270
[c0000004876e3e20] [c00000000000d740] system_call_common+0xf0/0x27c
Instruction dump:
48442e35 60000000 0b030000 3cbe0001 7fa3eb78 7bc48402 38a5fffe 7ca5fa14
78a58402 48442db1 60000000 7c7c1b78 <0b030000> 7f23cb78 4bda371d 60000000
---[ end trace 562fd6c109cd0fb2 ]---
The BUG_ON on failure is absolutely horrendous. There must be a better way to handle a failure like that. The failure means that sysfs_create_link_nowarn has failed. Please describe why that is the case.
This patch addresses the root cause by not relying on the system_state value to detect whether the call is due to a hot-plug operation or not. An additional parameter is added to link_mem_sections() to tell the context of the call, and this parameter is propagated to register_mem_sect_under_node() through the walk_memory_blocks() call.
This looks like a hack to me and it deserves a better explanation. The existing code is a hack on its own and it is inconsistent with other boot time detection. We are using (system_state < SYSTEM_RUNNING) at other places IIRC. Would it help to use the same here as well? Maybe we want to wrap that inside a helper (early_memory_init()) and use it at all places.
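The wrapper suggested above could be sketched as below; system_state and its values are mocked here, and early_memory_init() is only a name floated in this thread, not an existing kernel helper:

```c
#include <stdbool.h>

/* Mocked subset of the kernel's system states, in boot order. */
enum system_states {
	SYSTEM_BOOTING,
	SYSTEM_SCHEDULING,
	SYSTEM_RUNNING,
};
enum system_states system_state = SYSTEM_SCHEDULING;

/* One shared helper instead of open-coded system_state comparisons. */
bool early_memory_init(void)
{
	return system_state < SYSTEM_RUNNING;
}
```

Note that SYSTEM_SCHEDULING, where topology_init() runs, counts as early with this form, whereas the (system_state == SYSTEM_BOOTING) test in register_mem_sect_under_node() classifies it as hot-plug; that mismatch is exactly what the patch is trying to address.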
Fixes: 4fbce633910e ("mm/memory_hotplug.c: make register_mem_sect_under_node() a callback of walk_memory_range()")
Signed-off-by: Laurent Dufour ldufour@linux.ibm.com
Cc: stable@vger.kernel.org
Cc: Greg Kroah-Hartman gregkh@linuxfoundation.org
Cc: "Rafael J. Wysocki" rafael@kernel.org
Cc: Andrew Morton akpm@linux-foundation.org
 drivers/base/node.c  | 20 +++++++++++++++-----
 include/linux/node.h |  6 +++---
 mm/memory_hotplug.c  |  3 ++-
 3 files changed, 20 insertions(+), 9 deletions(-)
diff --git a/drivers/base/node.c b/drivers/base/node.c
index 508b80f6329b..27f828eeb531 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -762,14 +762,19 @@ static int __ref get_nid_for_pfn(unsigned long pfn)
 }
 
 /* register memory section under specified node if it spans that node */
+struct rmsun_args {
+	int nid;
+	bool hotadd;
+};
 static int register_mem_sect_under_node(struct memory_block *mem_blk,
-					 void *arg)
+					 void *args)
 {
 	unsigned long memory_block_pfns = memory_block_size_bytes() / PAGE_SIZE;
 	unsigned long start_pfn = section_nr_to_pfn(mem_blk->start_section_nr);
 	unsigned long end_pfn = start_pfn + memory_block_pfns - 1;
-	int ret, nid = *(int *)arg;
+	int ret, nid = ((struct rmsun_args *)args)->nid;
 	unsigned long pfn;
+	bool hotadd = ((struct rmsun_args *)args)->hotadd;
 
 	for (pfn = start_pfn; pfn <= end_pfn; pfn++) {
 		int page_nid;
@@ -789,7 +794,7 @@ static int register_mem_sect_under_node(struct memory_block *mem_blk,
 		 * case, during hotplug we know that all pages in the memory
 		 * block belong to the same node.
 		 */
-		if (system_state == SYSTEM_BOOTING) {
+		if (!hotadd) {
 			page_nid = get_nid_for_pfn(pfn);
 			if (page_nid < 0)
 				continue;
@@ -832,10 +837,15 @@ void unregister_memory_block_under_nodes(struct memory_block *mem_blk)
 			  kobject_name(&node_devices[mem_blk->nid]->dev.kobj));
 }
 
-int link_mem_sections(int nid, unsigned long start_pfn, unsigned long end_pfn)
+int link_mem_sections(int nid, unsigned long start_pfn, unsigned long end_pfn,
+		      bool hotadd)
 {
+	struct rmsun_args args;
+
+	args.nid = nid;
+	args.hotadd = hotadd;
 	return walk_memory_blocks(PFN_PHYS(start_pfn),
-				  PFN_PHYS(end_pfn - start_pfn), (void *)&nid,
+				  PFN_PHYS(end_pfn - start_pfn), (void *)&args,
 				  register_mem_sect_under_node);
 }
diff --git a/include/linux/node.h b/include/linux/node.h
index 4866f32a02d8..6df9a4548650 100644
--- a/include/linux/node.h
+++ b/include/linux/node.h
@@ -100,10 +100,10 @@ typedef void (*node_registration_func_t)(struct node *);
 #if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_NUMA)
 extern int link_mem_sections(int nid, unsigned long start_pfn,
-			     unsigned long end_pfn);
+			     unsigned long end_pfn, bool hotadd);
 #else
 static inline int link_mem_sections(int nid, unsigned long start_pfn,
-				    unsigned long end_pfn)
+				    unsigned long end_pfn, bool hotadd)
 {
 	return 0;
 }
@@ -128,7 +128,7 @@ static inline int register_one_node(int nid)
 		if (error)
 			return error;
 
 		/* link memory sections under this node */
-		error = link_mem_sections(nid, start_pfn, end_pfn);
+		error = link_mem_sections(nid, start_pfn, end_pfn, false);
 	}
 	return error;
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index e9d5ab5d3ca0..28028db8364a 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1080,7 +1080,8 @@ int __ref add_memory_resource(int nid, struct resource *res)
 	}
 
 	/* link memory sections under this node.*/
-	ret = link_mem_sections(nid, PFN_DOWN(start), PFN_UP(start + size - 1));
+	ret = link_mem_sections(nid, PFN_DOWN(start), PFN_UP(start + size - 1),
+				true);
 	BUG_ON(ret);
 
 	/* create new memmap entry */
-- 
2.28.0
On 09/09/2020 at 09:40, Michal Hocko wrote:
[reposting because the malformed cc list confused my email client]
On Tue 08-09-20 19:08:35, Laurent Dufour wrote:
In register_mem_sect_under_node() the system_state value is checked to detect whether the call is made during boot time or during a hot-plug operation. Unfortunately, that check is wrong on some architectures, and may lead to sections being registered under multiple nodes if a node's memory ranges are interleaved.
Why is this check arch specific?
I was wrong, the check is not arch specific.
This can be seen on PowerPC LPAR after multiple memory hot-plug and hot-unplug operations are done. At the next reboot the node's memory ranges can be interleaved
What is the exact memory layout?
For instance:
[ 0.000000] Early memory node ranges
[ 0.000000] node 1: [mem 0x0000000000000000-0x000000011fffffff]
[ 0.000000] node 2: [mem 0x0000000120000000-0x000000014fffffff]
[ 0.000000] node 1: [mem 0x0000000150000000-0x00000001ffffffff]
[ 0.000000] node 0: [mem 0x0000000200000000-0x000000048fffffff]
[ 0.000000] node 2: [mem 0x0000000490000000-0x00000007ffffffff]
and since the call to link_mem_sections() is made in topology_init() while the system is in the SYSTEM_SCHEDULING state, the node's id is not checked, and the sections registered multiple times.
So a single memory section/memblock belongs to two numa nodes?
If the node id is not checked in register_mem_sect_under_node(), yes, that is the case.
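To make the double registration concrete: with the boot layout quoted earlier and 256MB memory blocks, a block's physical range can fall inside the overall [start, end) span of two nodes at once. This back-of-envelope sketch (the span table is derived from the boot log; nodes_linking_block() is illustrative, not kernel code) links each block under every node whose span contains it, which is what register_mem_sect_under_node() effectively does when the node id check is skipped:

```c
/* 256MB, a typical ppc64 memory block size */
#define BLOCK_SIZE 0x10000000ULL

struct span {
	int nid;
	unsigned long long start, end;	/* overall node span, end exclusive */
};

/* First-range start to last-range end per node, from the boot log above. */
static const struct span spans[] = {
	{ 0, 0x200000000ULL, 0x490000000ULL },
	{ 1, 0x000000000ULL, 0x200000000ULL },	/* covers node 2's first range */
	{ 2, 0x120000000ULL, 0x800000000ULL },	/* covers node 1's second range */
};

/* How many nodes would link this memory block without the node id check? */
int nodes_linking_block(unsigned long long blk)
{
	unsigned long long start = blk * BLOCK_SIZE, end = start + BLOCK_SIZE;
	int i, links = 0;

	for (i = 0; i < 3; i++)
		if (start >= spans[i].start && end <= spans[i].end)
			links++;
	return links;
}
```

Block 21 (0x150000000-0x15fffffff, a node 1 range) sits inside both node 1's and node 2's span, matching the memory21 sysfs listing in this thread with links to both node1 and node2.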
In that case, the system is able to boot but later hot-plug operation may lead to this panic because the node's links are correctly broken:
Correctly broken? Could you provide more details on the inconsistency please?
laurent@ltczep3-lp4:~$ ls -l /sys/devices/system/memory/memory21
total 0
lrwxrwxrwx 1 root root     0 Aug 24 05:27 node1 -> ../../node/node1
lrwxrwxrwx 1 root root     0 Aug 24 05:27 node2 -> ../../node/node2
-rw-r--r-- 1 root root 65536 Aug 24 05:27 online
-r--r--r-- 1 root root 65536 Aug 24 05:27 phys_device
-r--r--r-- 1 root root 65536 Aug 24 05:27 phys_index
drwxr-xr-x 2 root root     0 Aug 24 05:27 power
-r--r--r-- 1 root root 65536 Aug 24 05:27 removable
-rw-r--r-- 1 root root 65536 Aug 24 05:27 state
lrwxrwxrwx 1 root root     0 Aug 24 05:25 subsystem -> ../../../../bus/memory
-rw-r--r-- 1 root root 65536 Aug 24 05:25 uevent
-r--r--r-- 1 root root 65536 Aug 24 05:27 valid_zones
Which physical memory range are you trying to add here, and what is the node affinity?
None is being added here; the root cause of the issue happens at boot time.
------------[ cut here ]------------
kernel BUG at /Users/laurent/src/linux-ppc/mm/memory_hotplug.c:1084!
Oops: Exception in kernel mode, sig: 5 [#1]
LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
Modules linked in: rpadlpar_io rpaphp pseries_rng rng_core vmx_crypto gf128mul binfmt_misc ip_tables x_tables xfs libcrc32c crc32c_vpmsum autofs4
CPU: 8 PID: 10256 Comm: drmgr Not tainted 5.9.0-rc1+ #25
NIP: c000000000403f34 LR: c000000000403f2c CTR: 0000000000000000
REGS: c0000004876e3660 TRAP: 0700 Not tainted (5.9.0-rc1+)
MSR: 800000000282b033 <SF,VEC,VSX,EE,FP,ME,IR,DR,RI,LE> CR: 24000448 XER: 20040000
CFAR: c000000000846d20 IRQMASK: 0
GPR00: c000000000403f2c c0000004876e38f0 c0000000012f6f00 ffffffffffffffef
GPR04: 0000000000000227 c0000004805ae680 0000000000000000 00000004886f0000
GPR08: 0000000000000226 0000000000000003 0000000000000002 fffffffffffffffd
GPR12: 0000000088000484 c00000001ec96280 0000000000000000 0000000000000000
GPR16: 0000000000000000 0000000000000000 0000000000000004 0000000000000003
GPR20: c00000047814ffe0 c0000007ffff7c08 0000000000000010 c0000000013332c8
GPR24: 0000000000000000 c0000000011f6cc0 0000000000000000 0000000000000000
GPR28: ffffffffffffffef 0000000000000001 0000000150000000 0000000010000000
NIP [c000000000403f34] add_memory_resource+0x244/0x340
LR [c000000000403f2c] add_memory_resource+0x23c/0x340
Call Trace:
[c0000004876e38f0] [c000000000403f2c] add_memory_resource+0x23c/0x340 (unreliable)
[c0000004876e39c0] [c00000000040408c] __add_memory+0x5c/0xf0
[c0000004876e39f0] [c0000000000e2b94] dlpar_add_lmb+0x1b4/0x500
[c0000004876e3ad0] [c0000000000e3888] dlpar_memory+0x1f8/0xb80
[c0000004876e3b60] [c0000000000dc0d0] handle_dlpar_errorlog+0xc0/0x190
[c0000004876e3bd0] [c0000000000dc398] dlpar_store+0x198/0x4a0
[c0000004876e3c90] [c00000000072e630] kobj_attr_store+0x30/0x50
[c0000004876e3cb0] [c00000000051f954] sysfs_kf_write+0x64/0x90
[c0000004876e3cd0] [c00000000051ee40] kernfs_fop_write+0x1b0/0x290
[c0000004876e3d20] [c000000000438dd8] vfs_write+0xe8/0x290
[c0000004876e3d70] [c0000000004391ac] ksys_write+0xdc/0x130
[c0000004876e3dc0] [c000000000034e40] system_call_exception+0x160/0x270
[c0000004876e3e20] [c00000000000d740] system_call_common+0xf0/0x27c
Instruction dump:
48442e35 60000000 0b030000 3cbe0001 7fa3eb78 7bc48402 38a5fffe 7ca5fa14
78a58402 48442db1 60000000 7c7c1b78 <0b030000> 7f23cb78 4bda371d 60000000
---[ end trace 562fd6c109cd0fb2 ]---
The BUG_ON on failure is absolutely horrendous. There must be a better way to handle a failure like that. The failure means that sysfs_create_link_nowarn has failed. Please describe why that is the case.
This patch addresses the root cause by not relying on the system_state value to detect whether the call is due to a hot-plug operation or not. An additional parameter is added to link_mem_sections() to convey the context of the call, and this parameter is propagated to register_mem_sect_under_node() through the call to walk_memory_blocks().
This looks like a hack to me and it deserves a better explanation. The existing code is a hack on its own and it is inconsistent with other boot time detection. We are using (system_state < SYSTEM_RUNNING) at other places IIRC. Would it help to use the same here as well? Maybe we want to wrap that inside a helper (early_memory_init()) and use it at all places.
I agree, this looks like a hack to check the system_state value. I'll follow David's proposal and introduce an enum detailing whether the node id check has to be done or not. The option of the wrapper seems good to me too, but it doesn't highlight why the early processing differs from the hot-plug one. Using an enum that explicitly says the node id check is skipped seems better to me.
Fixes: 4fbce633910e ("mm/memory_hotplug.c: make register_mem_sect_under_node() a callback of walk_memory_range()")
Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com>
Cc: stable@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
 drivers/base/node.c  | 20 +++++++++++++++-----
 include/linux/node.h |  6 +++---
 mm/memory_hotplug.c  |  3 ++-
 3 files changed, 20 insertions(+), 9 deletions(-)
diff --git a/drivers/base/node.c b/drivers/base/node.c
index 508b80f6329b..27f828eeb531 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -762,14 +762,19 @@ static int __ref get_nid_for_pfn(unsigned long pfn)
 }
 
 /* register memory section under specified node if it spans that node */
+struct rmsun_args {
+	int nid;
+	bool hotadd;
+};
 static int register_mem_sect_under_node(struct memory_block *mem_blk,
-					 void *arg)
+					 void *args)
 {
 	unsigned long memory_block_pfns = memory_block_size_bytes() / PAGE_SIZE;
 	unsigned long start_pfn = section_nr_to_pfn(mem_blk->start_section_nr);
 	unsigned long end_pfn = start_pfn + memory_block_pfns - 1;
-	int ret, nid = *(int *)arg;
+	int ret, nid = ((struct rmsun_args *)args)->nid;
 	unsigned long pfn;
+	bool hotadd = ((struct rmsun_args *)args)->hotadd;
 
 	for (pfn = start_pfn; pfn <= end_pfn; pfn++) {
 		int page_nid;
@@ -789,7 +794,7 @@ static int register_mem_sect_under_node(struct memory_block *mem_blk,
 		 * case, during hotplug we know that all pages in the memory
 		 * block belong to the same node.
 		 */
-		if (system_state == SYSTEM_BOOTING) {
+		if (!hotadd) {
 			page_nid = get_nid_for_pfn(pfn);
 			if (page_nid < 0)
 				continue;
@@ -832,10 +837,15 @@ void unregister_memory_block_under_nodes(struct memory_block *mem_blk)
 			 kobject_name(&node_devices[mem_blk->nid]->dev.kobj));
 }
 
-int link_mem_sections(int nid, unsigned long start_pfn, unsigned long end_pfn)
+int link_mem_sections(int nid, unsigned long start_pfn, unsigned long end_pfn,
+		      bool hotadd)
 {
+	struct rmsun_args args;
+
+	args.nid = nid;
+	args.hotadd = hotadd;
 	return walk_memory_blocks(PFN_PHYS(start_pfn),
-				  PFN_PHYS(end_pfn - start_pfn), (void *)&nid,
+				  PFN_PHYS(end_pfn - start_pfn), (void *)&args,
 				  register_mem_sect_under_node);
 }
diff --git a/include/linux/node.h b/include/linux/node.h
index 4866f32a02d8..6df9a4548650 100644
--- a/include/linux/node.h
+++ b/include/linux/node.h
@@ -100,10 +100,10 @@ typedef void (*node_registration_func_t)(struct node *);
 
 #if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_NUMA)
 extern int link_mem_sections(int nid, unsigned long start_pfn,
-			     unsigned long end_pfn);
+			     unsigned long end_pfn, bool hotadd);
 #else
 static inline int link_mem_sections(int nid, unsigned long start_pfn,
-				    unsigned long end_pfn)
+				    unsigned long end_pfn, bool hotadd)
 {
 	return 0;
 }
@@ -128,7 +128,7 @@ static inline int register_one_node(int nid)
 			return error;
 
 		/* link memory sections under this node */
-		error = link_mem_sections(nid, start_pfn, end_pfn);
+		error = link_mem_sections(nid, start_pfn, end_pfn, false);
 	}
 
 	return error;
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index e9d5ab5d3ca0..28028db8364a 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1080,7 +1080,8 @@ int __ref add_memory_resource(int nid, struct resource *res)
 	}
 
 	/* link memory sections under this node.*/
-	ret = link_mem_sections(nid, PFN_DOWN(start), PFN_UP(start + size - 1));
+	ret = link_mem_sections(nid, PFN_DOWN(start), PFN_UP(start + size - 1),
+				true);
 	BUG_ON(ret);
 
 	/* create new memmap entry */
-- 
2.28.0
On Wed 09-09-20 09:48:59, Laurent Dufour wrote:
Le 09/09/2020 à 09:40, Michal Hocko a écrit :
[reposting because the malformed cc list confused my email client]
On Tue 08-09-20 19:08:35, Laurent Dufour wrote:
In register_mem_sect_under_node() the system_state value is checked to detect whether the call is made during boot time or during a hot-plug operation. Unfortunately, that check is wrong on some architectures, and may lead to sections being registered under multiple nodes if a node's memory ranges are interleaved.
Why is this check arch specific?
I was wrong, the check is not arch specific.
This can be seen on PowerPC LPAR after multiple memory hot-plug and hot-unplug operations are done. At the next reboot the node's memory ranges can be interleaved
What is the exact memory layout?
For instance: [ 0.000000] Early memory node ranges [ 0.000000] node 1: [mem 0x0000000000000000-0x000000011fffffff] [ 0.000000] node 2: [mem 0x0000000120000000-0x000000014fffffff] [ 0.000000] node 1: [mem 0x0000000150000000-0x00000001ffffffff] [ 0.000000] node 0: [mem 0x0000000200000000-0x000000048fffffff] [ 0.000000] node 2: [mem 0x0000000490000000-0x00000007ffffffff]
Include this into the changelog.
and since the call to link_mem_sections() is made in topology_init() while the system is in the SYSTEM_SCHEDULING state, the node's id is not checked, and the sections are registered multiple times.
So a single memory section/memblock belongs to two numa nodes?
If the node id is not checked in register_mem_sect_under_node(), yes, that's the case.
I do not follow. register_mem_sect_under_node is about user interface. This is independent on the low level memory representation - aka memory section. I do not think we can handle a section in multiple zones/nodes. Memblock in multiple zones/nodes is a different story and interleaving physical memory layout can indeed lead to it. This is something that we do not allow for runtime hotplug but have to somehow live with that - at least not crash.
In that case, the system is able to boot but later hot-plug operation may lead to this panic because the node's links are correctly broken:
Correctly broken? Could you provide more details on the inconsistency please?
laurent@ltczep3-lp4:~$ ls -l /sys/devices/system/memory/memory21
total 0
lrwxrwxrwx 1 root root     0 Aug 24 05:27 node1 -> ../../node/node1
lrwxrwxrwx 1 root root     0 Aug 24 05:27 node2 -> ../../node/node2
-rw-r--r-- 1 root root 65536 Aug 24 05:27 online
-r--r--r-- 1 root root 65536 Aug 24 05:27 phys_device
-r--r--r-- 1 root root 65536 Aug 24 05:27 phys_index
drwxr-xr-x 2 root root     0 Aug 24 05:27 power
-r--r--r-- 1 root root 65536 Aug 24 05:27 removable
-rw-r--r-- 1 root root 65536 Aug 24 05:27 state
lrwxrwxrwx 1 root root     0 Aug 24 05:25 subsystem -> ../../../../bus/memory
-rw-r--r-- 1 root root 65536 Aug 24 05:25 uevent
-r--r--r-- 1 root root 65536 Aug 24 05:27 valid_zones
OK, so there are two nodes referenced here. Not terrible from the user point of view. Such a memory block will refuse to offline or online IIRC.
Which physical memory range you are trying to add here and what is the node affinity?
None is added; the root cause of the issue arises at boot time.
Let me clarify my question. The crash has clearly happened during the hotplug add_memory_resource - which is clearly not a boot time path. I was asking for more information about why this has failed. It is quite clear that the sysfs machinery has failed and that led to the BUG_ON, but we are missing information on why. What was the physical memory range to be added and why did sysfs fail?
[...]
The BUG_ON on failure is absolutely horrendous. There must be a better way to handle a failure like that. The failure means that sysfs_create_link_nowarn has failed. Please describe why that is the case.
This patch addresses the root cause by not relying on the system_state value to detect whether the call is due to a hot-plug operation or not. An additional parameter is added to link_mem_sections() to convey the context of the call, and this parameter is propagated to register_mem_sect_under_node() through the call to walk_memory_blocks().
This looks like a hack to me and it deserves a better explanation. The existing code is a hack on its own and it is inconsistent with other boot time detection. We are using (system_state < SYSTEM_RUNNING) at other places IIRC. Would it help to use the same here as well? Maybe we want to wrap that inside a helper (early_memory_init()) and use it at all places.
I agree, this looks like a hack to check the system_state value. I'll follow David's proposal and introduce an enum detailing whether the node id check has to be done or not.
I am not sure an enum is going to make the existing situation less messy. Sure we somehow have to distinguish boot init and runtime hotplug because they have different constraints. I am arguing that a) we should have a consistent way to check for those and b) we shouldn't blow up easily just because sysfs infrastructure has failed to initialize.
Le 09/09/2020 à 11:09, Michal Hocko a écrit :
On Wed 09-09-20 09:48:59, Laurent Dufour wrote:
Le 09/09/2020 à 09:40, Michal Hocko a écrit :
[reposting because the malformed cc list confused my email client]
On Tue 08-09-20 19:08:35, Laurent Dufour wrote:
In register_mem_sect_under_node() the system_state value is checked to detect whether the call is made during boot time or during a hot-plug operation. Unfortunately, that check is wrong on some architectures, and may lead to sections being registered under multiple nodes if a node's memory ranges are interleaved.
Why is this check arch specific?
I was wrong, the check is not arch specific.
This can be seen on PowerPC LPAR after multiple memory hot-plug and hot-unplug operations are done. At the next reboot the node's memory ranges can be interleaved
What is the exact memory layout?
For instance: [ 0.000000] Early memory node ranges [ 0.000000] node 1: [mem 0x0000000000000000-0x000000011fffffff] [ 0.000000] node 2: [mem 0x0000000120000000-0x000000014fffffff] [ 0.000000] node 1: [mem 0x0000000150000000-0x00000001ffffffff] [ 0.000000] node 0: [mem 0x0000000200000000-0x000000048fffffff] [ 0.000000] node 2: [mem 0x0000000490000000-0x00000007ffffffff]
Include this into the changelog.
and since the call to link_mem_sections() is made in topology_init() while the system is in the SYSTEM_SCHEDULING state, the node's id is not checked, and the sections are registered multiple times.
So a single memory section/memblock belongs to two numa nodes?
If the node id is not checked in register_mem_sect_under_node(), yes, that's the case.
I do not follow. register_mem_sect_under_node is about user interface. This is independent on the low level memory representation - aka memory section. I do not think we can handle a section in multiple zones/nodes. Memblock in multiple zones/nodes is a different story and interleaving physical memory layout can indeed lead to it. This is something that we do not allow for runtime hotplug but have to somehow live with that - at least not crash.
register_mem_sect_under_node() is called at boot time and when memory is hot added. In the latter case the assumption is made that all the pages of the added block are in the same node, and that's a valid assumption. However, at boot time the call is made using the node's whole range, lowest address to highest address for that node. When there are interleaved ranges, this means the interleaved sections are registered for each node, which is not correct.
In that case, the system is able to boot but later hot-plug operation may lead to this panic because the node's links are correctly broken:
Correctly broken? Could you provide more details on the inconsistency please?
laurent@ltczep3-lp4:~$ ls -l /sys/devices/system/memory/memory21
total 0
lrwxrwxrwx 1 root root     0 Aug 24 05:27 node1 -> ../../node/node1
lrwxrwxrwx 1 root root     0 Aug 24 05:27 node2 -> ../../node/node2
-rw-r--r-- 1 root root 65536 Aug 24 05:27 online
-r--r--r-- 1 root root 65536 Aug 24 05:27 phys_device
-r--r--r-- 1 root root 65536 Aug 24 05:27 phys_index
drwxr-xr-x 2 root root     0 Aug 24 05:27 power
-r--r--r-- 1 root root 65536 Aug 24 05:27 removable
-rw-r--r-- 1 root root 65536 Aug 24 05:27 state
lrwxrwxrwx 1 root root     0 Aug 24 05:25 subsystem -> ../../../../bus/memory
-rw-r--r-- 1 root root 65536 Aug 24 05:25 uevent
-r--r--r-- 1 root root 65536 Aug 24 05:27 valid_zones
OK, so there are two nodes referenced here. Not terrible from the user point of view. Such a memory block will refuse to offline or online IIRC.
No, the memory block is still owned by one node; only the sysfs representation is wrong. So the memory block can be hot unplugged, but only one node's link will be cleaned, and a '/sys/devices/system/node#/memory21' link will remain. That will be detected later when the memory block is hot plugged again.
Which physical memory range you are trying to add here and what is the node affinity?
None is added; the root cause of the issue arises at boot time.
Let me clarify my question. The crash has clearly happened during the hotplug add_memory_resource - which is clearly not a boot time path. I was asking for more information about why this has failed. It is quite clear that the sysfs machinery has failed and that led to the BUG_ON, but we are missing information on why. What was the physical memory range to be added and why did sysfs fail?
The BUG_ON is detecting a bad state generated earlier, at boot time because register_mem_sect_under_node() didn't check for the block's node id.
[...]
The BUG_ON on failure is absolutely horrendous. There must be a better way to handle a failure like that. The failure means that sysfs_create_link_nowarn has failed. Please describe why that is the case.
This patch addresses the root cause by not relying on the system_state value to detect whether the call is due to a hot-plug operation or not. An additional parameter is added to link_mem_sections() to convey the context of the call, and this parameter is propagated to register_mem_sect_under_node() through the call to walk_memory_blocks().
This looks like a hack to me and it deserves a better explanation. The existing code is a hack on its own and it is inconsistent with other boot time detection. We are using (system_state < SYSTEM_RUNNING) at other places IIRC. Would it help to use the same here as well? Maybe we want to wrap that inside a helper (early_memory_init()) and use it at all places.
I agree, this looks like a hack to check the system_state value. I'll follow David's proposal and introduce an enum detailing whether the node id check has to be done or not.
I am not sure an enum is going to make the existing situation less messy. Sure we somehow have to distinguish boot init and runtime hotplug because they have different constraints. I am arguing that a) we should have a consistent way to check for those and b) we shouldn't blow up easily just because sysfs infrastructure has failed to initialize.
For point a, using the enum lets register_mem_sect_under_node() know whether the link operation is due to a hotplug operation or done at boot time.
For point b, one option would be to ignore the link error when the link already exists, but the BUG_ON() had the benefit of highlighting the root issue.
Cheers, Laurent.
I am not sure an enum is going to make the existing situation less messy. Sure we somehow have to distinguish boot init and runtime hotplug because they have different constraints. I am arguing that a) we should have a consistent way to check for those and b) we shouldn't blow up easily just because sysfs infrastructure has failed to initialize.
For point a, using the enum lets register_mem_sect_under_node() know whether the link operation is due to a hotplug operation or done at boot time.
For point b, one option would be to ignore the link error when the link already exists, but the BUG_ON() had the benefit of highlighting the root issue.
WARN_ON_ONCE() would be preferred - not crash the system but still highlight the issue.
Cheers, Laurent.
Le 09/09/2020 à 11:24, David Hildenbrand a écrit :
I am not sure an enum is going to make the existing situation less messy. Sure we somehow have to distinguish boot init and runtime hotplug because they have different constraints. I am arguing that a) we should have a consistent way to check for those and b) we shouldn't blow up easily just because sysfs infrastructure has failed to initialize.
For point a, using the enum lets register_mem_sect_under_node() know whether the link operation is due to a hotplug operation or done at boot time.
For point b, one option would be to ignore the link error when the link already exists, but the BUG_ON() had the benefit of highlighting the root issue.
WARN_ON_ONCE() would be preferred - not crash the system but still highlight the issue.
Indeed, calling sysfs_create_link() instead of sysfs_create_link_nowarn() in register_mem_sect_under_node() and ignoring the EEXIST return value should do the job.
I'll do that in a separate patch.
On Wed, Sep 09, 2020 at 11:24:24AM +0200, David Hildenbrand wrote:
I am not sure an enum is going to make the existing situation less messy. Sure we somehow have to distinguish boot init and runtime hotplug because they have different constraints. I am arguing that a) we should have a consistent way to check for those and b) we shouldn't blow up easily just because sysfs infrastructure has failed to initialize.
For point a, using the enum lets register_mem_sect_under_node() know whether the link operation is due to a hotplug operation or done at boot time.
For point b, one option would be to ignore the link error when the link already exists, but the BUG_ON() had the benefit of highlighting the root issue.
WARN_ON_ONCE() would be preferred - not crash the system but still highlight the issue.
Many many systems now run with 'panic on warn' enabled, so that wouldn't change much :(
If you can warn, you can properly just print an error message and recover from the problem.
thanks,
greg k-h
On 09.09.20 14:30, Greg Kroah-Hartman wrote:
On Wed, Sep 09, 2020 at 11:24:24AM +0200, David Hildenbrand wrote:
I am not sure an enum is going to make the existing situation less messy. Sure we somehow have to distinguish boot init and runtime hotplug because they have different constraints. I am arguing that a) we should have a consistent way to check for those and b) we shouldn't blow up easily just because sysfs infrastructure has failed to initialize.
For point a, using the enum lets register_mem_sect_under_node() know whether the link operation is due to a hotplug operation or done at boot time.
For point b, one option would be to ignore the link error when the link already exists, but the BUG_ON() had the benefit of highlighting the root issue.
WARN_ON_ONCE() would be preferred - not crash the system but still highlight the issue.
Many many systems now run with 'panic on warn' enabled, so that wouldn't change much :(
If you can warn, you can properly just print an error message and recover from the problem.
Maybe VM_WARN_ON_ONCE() then to detect this during testing?
(we basically turned WARN_ON_ONCE() useless with 'panic on warn' getting used in production - behaves like BUG_ON and BUG_ON is frowned upon)
On Wed, Sep 09, 2020 at 02:32:57PM +0200, David Hildenbrand wrote:
On 09.09.20 14:30, Greg Kroah-Hartman wrote:
On Wed, Sep 09, 2020 at 11:24:24AM +0200, David Hildenbrand wrote:
I am not sure an enum is going to make the existing situation less messy. Sure we somehow have to distinguish boot init and runtime hotplug because they have different constraints. I am arguing that a) we should have a consistent way to check for those and b) we shouldn't blow up easily just because sysfs infrastructure has failed to initialize.
For point a, using the enum lets register_mem_sect_under_node() know whether the link operation is due to a hotplug operation or done at boot time.
For point b, one option would be to ignore the link error when the link already exists, but the BUG_ON() had the benefit of highlighting the root issue.
WARN_ON_ONCE() would be preferred - not crash the system but still highlight the issue.
Many many systems now run with 'panic on warn' enabled, so that wouldn't change much :(
If you can warn, you can properly just print an error message and recover from the problem.
Maybe VM_WARN_ON_ONCE() then to detect this during testing?
If you all use that, sure.
(we basically turned WARN_ON_ONCE() useless with 'panic on warn' getting used in production - behaves like BUG_ON and BUG_ON is frowned upon)
Yes we have, but in the end it's good: those things should be fixed and not be accessible by anything a user can trigger.
thanks,
greg k-h
On Wed 09-09-20 14:32:57, David Hildenbrand wrote:
On 09.09.20 14:30, Greg Kroah-Hartman wrote:
On Wed, Sep 09, 2020 at 11:24:24AM +0200, David Hildenbrand wrote:
I am not sure an enum is going to make the existing situation less messy. Sure we somehow have to distinguish boot init and runtime hotplug because they have different constraints. I am arguing that a) we should have a consistent way to check for those and b) we shouldn't blow up easily just because sysfs infrastructure has failed to initialize.
For point a, using the enum lets register_mem_sect_under_node() know whether the link operation is due to a hotplug operation or done at boot time.
For point b, one option would be to ignore the link error when the link already exists, but the BUG_ON() had the benefit of highlighting the root issue.
WARN_ON_ONCE() would be preferred - not crash the system but still highlight the issue.
Many many systems now run with 'panic on warn' enabled, so that wouldn't change much :(
If you can warn, you can properly just print an error message and recover from the problem.
Maybe VM_WARN_ON_ONCE() then to detect this during testing?
(we basically turned WARN_ON_ONCE() useless with 'panic on warn' getting used in production - behaves like BUG_ON and BUG_ON is frowned upon)
VM_WARN* is not that much different from panic on warn. Still, one can argue that many workloads enable it just because. And I would disagree that we should care much about those, because those are debugging features and everybody has to accept the consequences.
On the other hand, the question is whether WARN is giving us much. So what is the advantage over a simple pr_err? We will get a backtrace. Interesting, but not really that useful because there are only a few code paths this can trigger from. Registers dump? Not really useful here. Taint flag? Probably useful, because follow-up problems might give us a hint that this might be related. People tend to pay more attention to a WARN splat than to a single-line error. Well, not really a strong reason, I would say.
So while I wouldn't argue against WARN* in general (just because somebody might be setting the system to panic), I would also think of how much useful the splat is.
On Wed 09-09-20 11:21:58, Laurent Dufour wrote:
Le 09/09/2020 à 11:09, Michal Hocko a écrit :
On Wed 09-09-20 09:48:59, Laurent Dufour wrote:
Le 09/09/2020 à 09:40, Michal Hocko a écrit :
[...]
In that case, the system is able to boot but later hot-plug operation may lead to this panic because the node's links are correctly broken:
Correctly broken? Could you provide more details on the inconsistency please?
laurent@ltczep3-lp4:~$ ls -l /sys/devices/system/memory/memory21
total 0
lrwxrwxrwx 1 root root     0 Aug 24 05:27 node1 -> ../../node/node1
lrwxrwxrwx 1 root root     0 Aug 24 05:27 node2 -> ../../node/node2
-rw-r--r-- 1 root root 65536 Aug 24 05:27 online
-r--r--r-- 1 root root 65536 Aug 24 05:27 phys_device
-r--r--r-- 1 root root 65536 Aug 24 05:27 phys_index
drwxr-xr-x 2 root root     0 Aug 24 05:27 power
-r--r--r-- 1 root root 65536 Aug 24 05:27 removable
-rw-r--r-- 1 root root 65536 Aug 24 05:27 state
lrwxrwxrwx 1 root root     0 Aug 24 05:25 subsystem -> ../../../../bus/memory
-rw-r--r-- 1 root root 65536 Aug 24 05:25 uevent
-r--r--r-- 1 root root 65536 Aug 24 05:27 valid_zones
OK, so there are two nodes referenced here. Not terrible from the user point of view. Such a memory block will refuse to offline or online IIRC.
No, the memory block is still owned by one node; only the sysfs representation is wrong. So the memory block can be hot unplugged, but only one node's link will be cleaned, and a '/sys/devices/system/node#/memory21' link will remain. That will be detected later when the memory block is hot plugged again.
OK, so you need to hotremove first and hotadd again to trigger the problem. It is not like you would be hot-adding something new. This is useful information to have in the changelog.
Which physical memory range you are trying to add here and what is the node affinity?
None is added; the root cause of the issue arises at boot time.
Let me clarify my question. The crash has clearly happened during the hotplug add_memory_resource - which is clearly not a boot time path. I was asking for more information about why this has failed. It is quite clear that the sysfs machinery has failed and that led to the BUG_ON, but we are missing information on why. What was the physical memory range to be added and why did sysfs fail?
The BUG_ON is detecting a bad state generated earlier, at boot time because register_mem_sect_under_node() didn't check for the block's node id.
[...]
The BUG_ON on failure is absolutely horrendous. There must be a better way to handle a failure like that. The failure means that sysfs_create_link_nowarn has failed. Please describe why that is the case.
This patch addresses the root cause by not relying on the system_state value to detect whether the call is due to a hot-plug operation or not. An additional parameter is added to link_mem_sections() to tell the context of the call and this parameter is propagated to register_mem_sect_under_node() throuugh the walk_memory_blocks()'s call.
This looks like a hack to me and it deserves a better explanation. The existing code is a hack on its own and it is inconsistent with other boot time detection. We are using (system_state < SYSTEM_RUNNING) at other places IIRC. Would it help to use the same here as well? Maybe we want to wrap that inside a helper (early_memory_init()) and use it at all places.
I agree, this looks like a hack to check for the system_state value. I'll follow the David's proposal and introduce an enum detailing when the node id check has to be done or not.
I am not sure an enum is going to make the existing situation less messy. Sure we somehow have to distinguish boot init and runtime hotplug because they have different constrains. I am arguing that a) we should have a consistent way to check for those and b) we shouldn't blow up easily just because sysfs infrastructure has failed to initialize.
For the point a, using the enum allows to know in register_mem_sect_under_node() if the link operation is due to a hotplug operation or done at boot time.
Yes, but let me repeat. We have a mess here and different paths check for the very same condition by different ways. We need to unify those.
For the point b, one option would be ignore the link error in the case the link is already existing, but that BUG_ON() had the benefit to highlight the root issue.
Yes BUG_ON is obviously an over-reaction. The system is not in a state to die anytime soon.
Le 09/09/2020 à 12:59, Michal Hocko a écrit :
On Wed 09-09-20 11:21:58, Laurent Dufour wrote:
Le 09/09/2020 à 11:09, Michal Hocko a écrit :
On Wed 09-09-20 09:48:59, Laurent Dufour wrote:
Le 09/09/2020 à 09:40, Michal Hocko a écrit :
[...]
In that case, the system is able to boot but later hot-plug operation may lead to this panic because the node's links are correctly broken:
Correctly broken? Could you provide more details on the inconsistency please?
laurent@ltczep3-lp4:~$ ls -l /sys/devices/system/memory/memory21
total 0
lrwxrwxrwx 1 root root     0 Aug 24 05:27 node1 -> ../../node/node1
lrwxrwxrwx 1 root root     0 Aug 24 05:27 node2 -> ../../node/node2
-rw-r--r-- 1 root root 65536 Aug 24 05:27 online
-r--r--r-- 1 root root 65536 Aug 24 05:27 phys_device
-r--r--r-- 1 root root 65536 Aug 24 05:27 phys_index
drwxr-xr-x 2 root root     0 Aug 24 05:27 power
-r--r--r-- 1 root root 65536 Aug 24 05:27 removable
-rw-r--r-- 1 root root 65536 Aug 24 05:27 state
lrwxrwxrwx 1 root root     0 Aug 24 05:25 subsystem -> ../../../../bus/memory
-rw-r--r-- 1 root root 65536 Aug 24 05:25 uevent
-r--r--r-- 1 root root 65536 Aug 24 05:27 valid_zones
OK, so there are two nodes referenced here. Not terrible from the user point of view. Such a memory block will refuse to offline or online IIRC.
No, the memory block is still owned by one node; only the sysfs representation is wrong. So the memory block can be hot-unplugged, but only one node's link will be cleaned, and a '/sys/devices/system/node#/memory21' link will remain. That will be detected later when that memory block is hot-plugged again.
OK, so you need to hotremove first and hotadd again to trigger the problem. It is not like you would be a hot adding something new. This is a useful information to have in the changelog.
Which physical memory range you are trying to add here and what is the node affinity?
None is added, the root cause of the issue is happening at boot time.
Let me clarify my question. The crash has clearly happened during the hotplug add_memory_resource path - which is clearly not a boot time path. I was asking for more information about why this has failed. It is quite clear that the sysfs machinery has failed and that led to the BUG_ON, but we are missing information on why. What was the physical memory range to be added and why did sysfs fail?
The BUG_ON is detecting a bad state generated earlier, at boot time because register_mem_sect_under_node() didn't check for the block's node id.
------------[ cut here ]------------
kernel BUG at /Users/laurent/src/linux-ppc/mm/memory_hotplug.c:1084!
Oops: Exception in kernel mode, sig: 5 [#1]
LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
Modules linked in: rpadlpar_io rpaphp pseries_rng rng_core vmx_crypto gf128mul binfmt_misc ip_tables x_tables xfs libcrc32c crc32c_vpmsum autofs4
CPU: 8 PID: 10256 Comm: drmgr Not tainted 5.9.0-rc1+ #25
NIP: c000000000403f34 LR: c000000000403f2c CTR: 0000000000000000
REGS: c0000004876e3660 TRAP: 0700 Not tainted (5.9.0-rc1+)
MSR: 800000000282b033 <SF,VEC,VSX,EE,FP,ME,IR,DR,RI,LE> CR: 24000448 XER: 20040000
CFAR: c000000000846d20 IRQMASK: 0
GPR00: c000000000403f2c c0000004876e38f0 c0000000012f6f00 ffffffffffffffef
GPR04: 0000000000000227 c0000004805ae680 0000000000000000 00000004886f0000
GPR08: 0000000000000226 0000000000000003 0000000000000002 fffffffffffffffd
GPR12: 0000000088000484 c00000001ec96280 0000000000000000 0000000000000000
GPR16: 0000000000000000 0000000000000000 0000000000000004 0000000000000003
GPR20: c00000047814ffe0 c0000007ffff7c08 0000000000000010 c0000000013332c8
GPR24: 0000000000000000 c0000000011f6cc0 0000000000000000 0000000000000000
GPR28: ffffffffffffffef 0000000000000001 0000000150000000 0000000010000000
NIP [c000000000403f34] add_memory_resource+0x244/0x340
LR [c000000000403f2c] add_memory_resource+0x23c/0x340
Call Trace:
[c0000004876e38f0] [c000000000403f2c] add_memory_resource+0x23c/0x340 (unreliable)
[c0000004876e39c0] [c00000000040408c] __add_memory+0x5c/0xf0
[c0000004876e39f0] [c0000000000e2b94] dlpar_add_lmb+0x1b4/0x500
[c0000004876e3ad0] [c0000000000e3888] dlpar_memory+0x1f8/0xb80
[c0000004876e3b60] [c0000000000dc0d0] handle_dlpar_errorlog+0xc0/0x190
[c0000004876e3bd0] [c0000000000dc398] dlpar_store+0x198/0x4a0
[c0000004876e3c90] [c00000000072e630] kobj_attr_store+0x30/0x50
[c0000004876e3cb0] [c00000000051f954] sysfs_kf_write+0x64/0x90
[c0000004876e3cd0] [c00000000051ee40] kernfs_fop_write+0x1b0/0x290
[c0000004876e3d20] [c000000000438dd8] vfs_write+0xe8/0x290
[c0000004876e3d70] [c0000000004391ac] ksys_write+0xdc/0x130
[c0000004876e3dc0] [c000000000034e40] system_call_exception+0x160/0x270
[c0000004876e3e20] [c00000000000d740] system_call_common+0xf0/0x27c
Instruction dump:
48442e35 60000000 0b030000 3cbe0001 7fa3eb78 7bc48402 38a5fffe 7ca5fa14
78a58402 48442db1 60000000 7c7c1b78 <0b030000> 7f23cb78 4bda371d 60000000
---[ end trace 562fd6c109cd0fb2 ]---
The BUG_ON on failure is absolutely horrendous. There must be a better way to handle a failure like that. The failure means that sysfs_create_link_nowarn has failed. Please describe why that is the case.
This patch addresses the root cause by not relying on the system_state value to detect whether the call is due to a hot-plug operation or not. An additional parameter is added to link_mem_sections() to tell the context of the call, and this parameter is propagated to register_mem_sect_under_node() through the call to walk_memory_blocks().
This looks like a hack to me and it deserves a better explanation. The existing code is a hack on its own and it is inconsistent with other boot time detection. We are using (system_state < SYSTEM_RUNNING) at other places IIRC. Would it help to use the same here as well? Maybe we want to wrap that inside a helper (early_memory_init()) and use it at all places.
I agree, this looks like a hack to check for the system_state value. I'll follow David's proposal and introduce an enum detailing when the node id check has to be done or not.
I am not sure an enum is going to make the existing situation less messy. Sure, we somehow have to distinguish boot init and runtime hotplug because they have different constraints. I am arguing that a) we should have a consistent way to check for those and b) we shouldn't blow up easily just because the sysfs infrastructure has failed to initialize.
For point a, using the enum allows register_mem_sect_under_node() to know whether the link operation is due to a hotplug operation or done at boot time.
Yes, but let me repeat. We have a mess here and different paths check for the very same condition by different ways. We need to unify those.
What are you suggesting to unify these checks (a MP_* enum as suggested by David, or something else)?
For point b, one option would be to ignore the link error when the link already exists, but that BUG_ON() had the benefit of highlighting the root issue.
Yes BUG_ON is obviously an over-reaction. The system is not in a state to die anytime soon.
On Wed 09-09-20 18:07:15, Laurent Dufour wrote:
Le 09/09/2020 à 12:59, Michal Hocko a écrit :
On Wed 09-09-20 11:21:58, Laurent Dufour wrote:
[...]
For the point a, using the enum allows to know in register_mem_sect_under_node() if the link operation is due to a hotplug operation or done at boot time.
Yes, but let me repeat. We have a mess here and different paths check for the very same condition by different ways. We need to unify those.
What are you suggesting to unify these checks (using a MP_* enum as suggested by David, something else)?
We do have system_state checks spread across different places. I would use this one and wrap it behind a helper. Or have I missed any reason why that wouldn't work for this case?
Le 10/09/2020 à 09:23, Michal Hocko a écrit :
On Wed 09-09-20 18:07:15, Laurent Dufour wrote:
Le 09/09/2020 à 12:59, Michal Hocko a écrit :
On Wed 09-09-20 11:21:58, Laurent Dufour wrote:
[...]
For the point a, using the enum allows to know in register_mem_sect_under_node() if the link operation is due to a hotplug operation or done at boot time.
Yes, but let me repeat. We have a mess here and different paths check for the very same condition by different ways. We need to unify those.
What are you suggesting to unify these checks (using a MP_* enum as suggested by David, something else)?
We do have system_state check spread at different places. I would use this one and wrap it behind a helper. Or have I missed any reason why that wouldn't work for this case?
That would not work in this case because memory can be hot-added in the SYSTEM_SCHEDULING system state, and regular memory is also registered in that state. So the system state is not enough to discriminate between the two.
I think I'll go with the option suggested by David, replacing the enum memmap_context with a new enum memplug_context and passing that context to register_mem_sect_under_node(), so that the function will know whether the node id should be checked or not.
Cheers, Laurent.
On Thu 10-09-20 09:51:39, Laurent Dufour wrote:
Le 10/09/2020 à 09:23, Michal Hocko a écrit :
On Wed 09-09-20 18:07:15, Laurent Dufour wrote:
Le 09/09/2020 à 12:59, Michal Hocko a écrit :
On Wed 09-09-20 11:21:58, Laurent Dufour wrote:
[...]
For the point a, using the enum allows to know in register_mem_sect_under_node() if the link operation is due to a hotplug operation or done at boot time.
Yes, but let me repeat. We have a mess here and different paths check for the very same condition by different ways. We need to unify those.
What are you suggesting to unify these checks (using a MP_* enum as suggested by David, something else)?
We do have system_state check spread at different places. I would use this one and wrap it behind a helper. Or have I missed any reason why that wouldn't work for this case?
That would not work in that case because memory can be hot-added at the SYSTEM_SCHEDULING system state and the regular memory is also registered at that system state too. So system state is not enough to discriminate between the both.
If that is really the case, all other places need a fix as well. Btw. could you be more specific about memory hotplug during early boot? How does that happen? I am only aware of https://lkml.kernel.org/r/20200818110046.6664-1-osalvador@suse.de and that doesn't happen as early as SYSTEM_SCHEDULING.
Le 10/09/2020 à 13:12, Michal Hocko a écrit :
On Thu 10-09-20 09:51:39, Laurent Dufour wrote:
Le 10/09/2020 à 09:23, Michal Hocko a écrit :
On Wed 09-09-20 18:07:15, Laurent Dufour wrote:
Le 09/09/2020 à 12:59, Michal Hocko a écrit :
On Wed 09-09-20 11:21:58, Laurent Dufour wrote:
[...]
For the point a, using the enum allows to know in register_mem_sect_under_node() if the link operation is due to a hotplug operation or done at boot time.
Yes, but let me repeat. We have a mess here and different paths check for the very same condition by different ways. We need to unify those.
What are you suggesting to unify these checks (using a MP_* enum as suggested by David, something else)?
We do have system_state check spread at different places. I would use this one and wrap it behind a helper. Or have I missed any reason why that wouldn't work for this case?
That would not work in that case because memory can be hot-added at the SYSTEM_SCHEDULING system state and the regular memory is also registered at that system state too. So system state is not enough to discriminate between the both.
If that is really the case all other places need a fix as well. Btw. could you be more specific about memory hotplug during early boot? How that happens? I am only aware of https://lkml.kernel.org/r/20200818110046.6664-1-osalvador@suse.de and that doesn't happen as early as SYSTEM_SCHEDULING.
That point has been raised by David; quoting him here:
IIRC, ACPI can hotadd memory while SCHEDULING, this patch would break that.
Ccing Oscar, I think he mentioned recently that this is the case with ACPI.
Oscar said that he needs to investigate further on that.
On my side, I can't get these ACPI "early" hot-plug operations to happen, so I can't check that.
If it is clear that ACPI memory hotplug doesn't happen at SYSTEM_SCHEDULING, the patch I proposed at first is enough to fix the issue.
On 10.09.20 13:35, Laurent Dufour wrote:
Le 10/09/2020 à 13:12, Michal Hocko a écrit :
On Thu 10-09-20 09:51:39, Laurent Dufour wrote:
Le 10/09/2020 à 09:23, Michal Hocko a écrit :
On Wed 09-09-20 18:07:15, Laurent Dufour wrote:
Le 09/09/2020 à 12:59, Michal Hocko a écrit :
On Wed 09-09-20 11:21:58, Laurent Dufour wrote:
[...]
> For the point a, using the enum allows to know in > register_mem_sect_under_node() if the link operation is due to a hotplug > operation or done at boot time.
Yes, but let me repeat. We have a mess here and different paths check for the very same condition by different ways. We need to unify those.
What are you suggesting to unify these checks (using a MP_* enum as suggested by David, something else)?
We do have system_state check spread at different places. I would use this one and wrap it behind a helper. Or have I missed any reason why that wouldn't work for this case?
That would not work in that case because memory can be hot-added at the SYSTEM_SCHEDULING system state and the regular memory is also registered at that system state too. So system state is not enough to discriminate between the both.
If that is really the case all other places need a fix as well. Btw. could you be more specific about memory hotplug during early boot? How that happens? I am only aware of https://lkml.kernel.org/r/20200818110046.6664-1-osalvador@suse.de and that doesn't happen as early as SYSTEM_SCHEDULING.
That points has been raised by David, quoting him here:
IIRC, ACPI can hotadd memory while SCHEDULING, this patch would break that.
Ccing Oscar, I think he mentioned recently that this is the case with ACPI.
Oscar told that he need to investigate further on that.
On my side I can't get these ACPI "early" hot-plug operations to happen so I can't check that.
If this is clear that ACPI memory hotplug doesn't happen at SYSTEM_SCHEDULING, the patch I proposed at first is enough to fix the issue.
Booting a qemu guest with 4 coldplugged DIMMs gives me:
:/root# dmesg | grep link_mem
[ 0.302247] link_mem_sections() during 1
[ 0.445086] link_mem_sections() during 1
[ 0.445766] link_mem_sections() during 1
[ 0.446749] link_mem_sections() during 1
[ 0.447746] link_mem_sections() during 1
So AFAICS everything happens during SYSTEM_SCHEDULING - boot memory and ACPI (cold)plug.
To make forward progress with this, relying on the system_state is obviously not sufficient.
1. We have to fix this instance and the instance directly in get_nid_for_pfn() by passing in the context (I once had a patch to clean that up, to not have two state checks, but it got lost somewhere).
2. The "system_state < SYSTEM_RUNNING" check in register_memory_resource() is correct. Actual memory hotplug after boot is not impacted. (I remember we discussed this exact behavior back then)
3. build_all_zonelists() should work as expected, called from start_kernel() before sched_init().
Le 10/09/2020 à 14:00, David Hildenbrand a écrit :
On 10.09.20 13:35, Laurent Dufour wrote:
Le 10/09/2020 à 13:12, Michal Hocko a écrit :
On Thu 10-09-20 09:51:39, Laurent Dufour wrote:
Le 10/09/2020 à 09:23, Michal Hocko a écrit :
On Wed 09-09-20 18:07:15, Laurent Dufour wrote:
Le 09/09/2020 à 12:59, Michal Hocko a écrit : > On Wed 09-09-20 11:21:58, Laurent Dufour wrote:
[...]
>> For the point a, using the enum allows to know in >> register_mem_sect_under_node() if the link operation is due to a hotplug >> operation or done at boot time. > > Yes, but let me repeat. We have a mess here and different paths check > for the very same condition by different ways. We need to unify those.
What are you suggesting to unify these checks (using a MP_* enum as suggested by David, something else)?
We do have system_state check spread at different places. I would use this one and wrap it behind a helper. Or have I missed any reason why that wouldn't work for this case?
That would not work in that case because memory can be hot-added at the SYSTEM_SCHEDULING system state and the regular memory is also registered at that system state too. So system state is not enough to discriminate between the both.
If that is really the case all other places need a fix as well. Btw. could you be more specific about memory hotplug during early boot? How that happens? I am only aware of https://lkml.kernel.org/r/20200818110046.6664-1-osalvador@suse.de and that doesn't happen as early as SYSTEM_SCHEDULING.
That points has been raised by David, quoting him here:
IIRC, ACPI can hotadd memory while SCHEDULING, this patch would break that.
Ccing Oscar, I think he mentioned recently that this is the case with ACPI.
Oscar told that he need to investigate further on that.
On my side I can't get these ACPI "early" hot-plug operations to happen so I can't check that.
If this is clear that ACPI memory hotplug doesn't happen at SYSTEM_SCHEDULING, the patch I proposed at first is enough to fix the issue.
Booting a qemu guest with 4 coldplugged DIMMs gives me:
:/root# dmesg | grep link_mem [ 0.302247] link_mem_sections() during 1 [ 0.445086] link_mem_sections() during 1 [ 0.445766] link_mem_sections() during 1 [ 0.446749] link_mem_sections() during 1 [ 0.447746] link_mem_sections() during 1
So AFAICs everything happens during SYSTEM_SCHEDULING - boot memory and ACPI (cold)plug.
To make forward progress with this, relying on the system_state is obviously not sufficient.
- We have to fix this instance and the instance directly in
get_nid_for_pfn() by passing in the context (I once had a patch to clean that up, to not have two state checks, but it got lost somewhere).
- The "system_state < SYSTEM_RUNNING" check in
register_memory_resource() is correct. Actual memory hotplug after boot is not impacted. (I remember we discussed this exact behavior back then)
- build_all_zonelists() should work as expected, called from
start_kernel() before sched_init().
I'm a bit confused now. Since the hotplug operation happens at SYSTEM_SCHEDULING like the regular memory registration, would it be enough to add a parameter to register_mem_sect_under_node() (reworking the memmap_context enum)? That way the check is based not on the system state but on the calling path.
On 10.09.20 14:36, Laurent Dufour wrote:
Le 10/09/2020 à 14:00, David Hildenbrand a écrit :
On 10.09.20 13:35, Laurent Dufour wrote:
Le 10/09/2020 à 13:12, Michal Hocko a écrit :
On Thu 10-09-20 09:51:39, Laurent Dufour wrote:
Le 10/09/2020 à 09:23, Michal Hocko a écrit :
On Wed 09-09-20 18:07:15, Laurent Dufour wrote: > Le 09/09/2020 à 12:59, Michal Hocko a écrit : >> On Wed 09-09-20 11:21:58, Laurent Dufour wrote: [...] >>> For the point a, using the enum allows to know in >>> register_mem_sect_under_node() if the link operation is due to a hotplug >>> operation or done at boot time. >> >> Yes, but let me repeat. We have a mess here and different paths check >> for the very same condition by different ways. We need to unify those. > > What are you suggesting to unify these checks (using a MP_* enum as > suggested by David, something else)?
We do have system_state check spread at different places. I would use this one and wrap it behind a helper. Or have I missed any reason why that wouldn't work for this case?
That would not work in that case because memory can be hot-added at the SYSTEM_SCHEDULING system state and the regular memory is also registered at that system state too. So system state is not enough to discriminate between the both.
If that is really the case all other places need a fix as well. Btw. could you be more specific about memory hotplug during early boot? How that happens? I am only aware of https://lkml.kernel.org/r/20200818110046.6664-1-osalvador@suse.de and that doesn't happen as early as SYSTEM_SCHEDULING.
That points has been raised by David, quoting him here:
IIRC, ACPI can hotadd memory while SCHEDULING, this patch would break that.
Ccing Oscar, I think he mentioned recently that this is the case with ACPI.
Oscar told that he need to investigate further on that.
On my side I can't get these ACPI "early" hot-plug operations to happen so I can't check that.
If this is clear that ACPI memory hotplug doesn't happen at SYSTEM_SCHEDULING, the patch I proposed at first is enough to fix the issue.
Booting a qemu guest with 4 coldplugged DIMMs gives me:
:/root# dmesg | grep link_mem [ 0.302247] link_mem_sections() during 1 [ 0.445086] link_mem_sections() during 1 [ 0.445766] link_mem_sections() during 1 [ 0.446749] link_mem_sections() during 1 [ 0.447746] link_mem_sections() during 1
So AFAICs everything happens during SYSTEM_SCHEDULING - boot memory and ACPI (cold)plug.
To make forward progress with this, relying on the system_state is obviously not sufficient.
- We have to fix this instance and the instance directly in
get_nid_for_pfn() by passing in the context (I once had a patch to clean that up, to not have two state checks, but it got lost somewhere).
- The "system_state < SYSTEM_RUNNING" check in
register_memory_resource() is correct. Actual memory hotplug after boot is not impacted. (I remember we discussed this exact behavior back then)
- build_all_zonelists() should work as expected, called from
start_kernel() before sched_init().
I'm bit confused now. Since hotplug operation is happening at SYSTEM_SCHEDULING like the regular memory registration, would it be enough to add a parameter to register_mem_sect_under_node() (reworking the memmap_context enum)? That way the check is not based on the system state but on the calling path.
That would have been my suggestion to definitely fix it - maybe Michal/Oscar have a better suggestion now that we know what's going on.
On Thu 10-09-20 13:35:32, Laurent Dufour wrote:
Le 10/09/2020 à 13:12, Michal Hocko a écrit :
On Thu 10-09-20 09:51:39, Laurent Dufour wrote:
Le 10/09/2020 à 09:23, Michal Hocko a écrit :
On Wed 09-09-20 18:07:15, Laurent Dufour wrote:
Le 09/09/2020 à 12:59, Michal Hocko a écrit :
On Wed 09-09-20 11:21:58, Laurent Dufour wrote:
[...]
> For the point a, using the enum allows to know in > register_mem_sect_under_node() if the link operation is due to a hotplug > operation or done at boot time.
Yes, but let me repeat. We have a mess here and different paths check for the very same condition by different ways. We need to unify those.
What are you suggesting to unify these checks (using a MP_* enum as suggested by David, something else)?
We do have system_state check spread at different places. I would use this one and wrap it behind a helper. Or have I missed any reason why that wouldn't work for this case?
That would not work in that case because memory can be hot-added at the SYSTEM_SCHEDULING system state and the regular memory is also registered at that system state too. So system state is not enough to discriminate between the both.
If that is really the case all other places need a fix as well. Btw. could you be more specific about memory hotplug during early boot? How that happens? I am only aware of https://lkml.kernel.org/r/20200818110046.6664-1-osalvador@suse.de and that doesn't happen as early as SYSTEM_SCHEDULING.
That points has been raised by David, quoting him here:
IIRC, ACPI can hotadd memory while SCHEDULING, this patch would break that.
Ccing Oscar, I think he mentioned recently that this is the case with ACPI.
: Please, note that upstream has fixed that differently (and unintentionally) by : adding another boot state (SYSTEM_SCHEDULING), which is set before smp_init(). : That should happen before memory hotplug events even with memhp_default_state=online. : Backporting that would be too intrusive.
Either I am confused or the above says that no hotplug should happen during SYSTEM_SCHEDULING even in the above case. I really have a hard time imagining how an early boot hotplug would even work. We start with a memory layout provided by the BIOS/FW and initialize it statically. How would a hotplug actually trigger that early?
On Thu, Sep 10, 2020 at 01:35:32PM +0200, Laurent Dufour wrote:
That points has been raised by David, quoting him here:
IIRC, ACPI can hotadd memory while SCHEDULING, this patch would break that.
Ccing Oscar, I think he mentioned recently that this is the case with ACPI.
Oscar told that he need to investigate further on that.
I think my reply got lost.
We can see acpi hotplugs during SYSTEM_SCHEDULING:
$QEMU -enable-kvm -machine pc -smp 4,sockets=4,cores=1,threads=1 -cpu host -monitor pty \
 -m size=$MEM,slots=255,maxmem=4294967296k \
 -numa node,nodeid=0,cpus=0-3,mem=512 -numa node,nodeid=1,mem=512 \
 -object memory-backend-ram,id=memdimm0,size=134217728 -device pc-dimm,node=0,memdev=memdimm0,id=dimm0,slot=0 \
 -object memory-backend-ram,id=memdimm1,size=134217728 -device pc-dimm,node=0,memdev=memdimm1,id=dimm1,slot=1 \
 -object memory-backend-ram,id=memdimm2,size=134217728 -device pc-dimm,node=0,memdev=memdimm2,id=dimm2,slot=2 \
 -object memory-backend-ram,id=memdimm3,size=134217728 -device pc-dimm,node=0,memdev=memdimm3,id=dimm3,slot=3 \
 -object memory-backend-ram,id=memdimm4,size=134217728 -device pc-dimm,node=1,memdev=memdimm4,id=dimm4,slot=4 \
 -object memory-backend-ram,id=memdimm5,size=134217728 -device pc-dimm,node=1,memdev=memdimm5,id=dimm5,slot=5 \
 -object memory-backend-ram,id=memdimm6,size=134217728 -device pc-dimm,node=1,memdev=memdimm6,id=dimm6,slot=6 \

kernel: [ 0.753643] __add_memory: nid: 0 start: 0100000000 - 0108000000 (size: 134217728)
kernel: [ 0.756950] register_mem_sect_under_node: system_state= 1

kernel: [ 0.760811] register_mem_sect_under_node+0x4f/0x230
kernel: [ 0.760811] walk_memory_blocks+0x80/0xc0
kernel: [ 0.760811] link_mem_sections+0x32/0x40
kernel: [ 0.760811] add_memory_resource+0x148/0x250
kernel: [ 0.760811] __add_memory+0x5b/0x90
kernel: [ 0.760811] acpi_memory_device_add+0x130/0x300
kernel: [ 0.760811] acpi_bus_attach+0x13c/0x1c0
kernel: [ 0.760811] acpi_bus_attach+0x60/0x1c0
kernel: [ 0.760811] acpi_bus_scan+0x33/0x70
kernel: [ 0.760811] acpi_scan_init+0xea/0x21b
kernel: [ 0.760811] acpi_init+0x2f1/0x33c
kernel: [ 0.760811] do_one_initcall+0x46/0x1f4
Le 10/09/2020 à 14:03, Oscar Salvador a écrit :
On Thu, Sep 10, 2020 at 01:35:32PM +0200, Laurent Dufour wrote:
That points has been raised by David, quoting him here:
IIRC, ACPI can hotadd memory while SCHEDULING, this patch would break that.
Ccing Oscar, I think he mentioned recently that this is the case with ACPI.
Oscar told that he need to investigate further on that.
I think my reply got lost.
We can see acpi hotplugs during SYSTEM_SCHEDULING:
$QEMU -enable-kvm -machine pc -smp 4,sockets=4,cores=1,threads=1 -cpu host -monitor pty \ -m size=$MEM,slots=255,maxmem=4294967296k \ -numa node,nodeid=0,cpus=0-3,mem=512 -numa node,nodeid=1,mem=512 \ -object memory-backend-ram,id=memdimm0,size=134217728 -device pc-dimm,node=0,memdev=memdimm0,id=dimm0,slot=0 \ -object memory-backend-ram,id=memdimm1,size=134217728 -device pc-dimm,node=0,memdev=memdimm1,id=dimm1,slot=1 \ -object memory-backend-ram,id=memdimm2,size=134217728 -device pc-dimm,node=0,memdev=memdimm2,id=dimm2,slot=2 \ -object memory-backend-ram,id=memdimm3,size=134217728 -device pc-dimm,node=0,memdev=memdimm3,id=dimm3,slot=3 \ -object memory-backend-ram,id=memdimm4,size=134217728 -device pc-dimm,node=1,memdev=memdimm4,id=dimm4,slot=4 \ -object memory-backend-ram,id=memdimm5,size=134217728 -device pc-dimm,node=1,memdev=memdimm5,id=dimm5,slot=5 \ -object memory-backend-ram,id=memdimm6,size=134217728 -device pc-dimm,node=1,memdev=memdimm6,id=dimm6,slot=6 \
kernel: [ 0.753643] __add_memory: nid: 0 start: 0100000000 - 0108000000 (size: 134217728) kernel: [ 0.756950] register_mem_sect_under_node: system_state= 1
kernel: [ 0.760811] register_mem_sect_under_node+0x4f/0x230 kernel: [ 0.760811] walk_memory_blocks+0x80/0xc0 kernel: [ 0.760811] link_mem_sections+0x32/0x40 kernel: [ 0.760811] add_memory_resource+0x148/0x250 kernel: [ 0.760811] __add_memory+0x5b/0x90 kernel: [ 0.760811] acpi_memory_device_add+0x130/0x300 kernel: [ 0.760811] acpi_bus_attach+0x13c/0x1c0 kernel: [ 0.760811] acpi_bus_attach+0x60/0x1c0 kernel: [ 0.760811] acpi_bus_scan+0x33/0x70 kernel: [ 0.760811] acpi_scan_init+0xea/0x21b kernel: [ 0.760811] acpi_init+0x2f1/0x33c kernel: [ 0.760811] do_one_initcall+0x46/0x1f4
Thanks Oscar!
On Thu 10-09-20 14:03:48, Oscar Salvador wrote:
On Thu, Sep 10, 2020 at 01:35:32PM +0200, Laurent Dufour wrote:
That points has been raised by David, quoting him here:
IIRC, ACPI can hotadd memory while SCHEDULING, this patch would break that.
Ccing Oscar, I think he mentioned recently that this is the case with ACPI.
Oscar told that he need to investigate further on that.
I think my reply got lost.
We can see acpi hotplugs during SYSTEM_SCHEDULING:
$QEMU -enable-kvm -machine pc -smp 4,sockets=4,cores=1,threads=1 -cpu host -monitor pty \ -m size=$MEM,slots=255,maxmem=4294967296k \ -numa node,nodeid=0,cpus=0-3,mem=512 -numa node,nodeid=1,mem=512 \ -object memory-backend-ram,id=memdimm0,size=134217728 -device pc-dimm,node=0,memdev=memdimm0,id=dimm0,slot=0 \ -object memory-backend-ram,id=memdimm1,size=134217728 -device pc-dimm,node=0,memdev=memdimm1,id=dimm1,slot=1 \ -object memory-backend-ram,id=memdimm2,size=134217728 -device pc-dimm,node=0,memdev=memdimm2,id=dimm2,slot=2 \ -object memory-backend-ram,id=memdimm3,size=134217728 -device pc-dimm,node=0,memdev=memdimm3,id=dimm3,slot=3 \ -object memory-backend-ram,id=memdimm4,size=134217728 -device pc-dimm,node=1,memdev=memdimm4,id=dimm4,slot=4 \ -object memory-backend-ram,id=memdimm5,size=134217728 -device pc-dimm,node=1,memdev=memdimm5,id=dimm5,slot=5 \ -object memory-backend-ram,id=memdimm6,size=134217728 -device pc-dimm,node=1,memdev=memdimm6,id=dimm6,slot=6 \
kernel: [ 0.753643] __add_memory: nid: 0 start: 0100000000 - 0108000000 (size: 134217728)
kernel: [ 0.756950] register_mem_sect_under_node: system_state= 1
kernel: [ 0.760811] register_mem_sect_under_node+0x4f/0x230
kernel: [ 0.760811] walk_memory_blocks+0x80/0xc0
kernel: [ 0.760811] link_mem_sections+0x32/0x40
kernel: [ 0.760811] add_memory_resource+0x148/0x250
kernel: [ 0.760811] __add_memory+0x5b/0x90
kernel: [ 0.760811] acpi_memory_device_add+0x130/0x300
kernel: [ 0.760811] acpi_bus_attach+0x13c/0x1c0
kernel: [ 0.760811] acpi_bus_attach+0x60/0x1c0
kernel: [ 0.760811] acpi_bus_scan+0x33/0x70
kernel: [ 0.760811] acpi_scan_init+0xea/0x21b
kernel: [ 0.760811] acpi_init+0x2f1/0x33c
kernel: [ 0.760811] do_one_initcall+0x46/0x1f4
Is there any actual use case for a configuration like this? What is the point of statically defining additional memory like this when the same can be achieved on the same command line?
On Thu 10-09-20 14:47:56, Michal Hocko wrote:
On Thu 10-09-20 14:03:48, Oscar Salvador wrote:
On Thu, Sep 10, 2020 at 01:35:32PM +0200, Laurent Dufour wrote:
That point has been raised by David; quoting him here:
IIRC, ACPI can hotadd memory while SCHEDULING; this patch would break that.
Ccing Oscar; I think he mentioned recently that this is the case with ACPI.
Oscar said that he needs to investigate this further.
I think my reply got lost.
We can see acpi hotplugs during SYSTEM_SCHEDULING:
$QEMU -enable-kvm -machine pc -smp 4,sockets=4,cores=1,threads=1 -cpu host -monitor pty \
  -m size=$MEM,slots=255,maxmem=4294967296k \
  -numa node,nodeid=0,cpus=0-3,mem=512 -numa node,nodeid=1,mem=512 \
  -object memory-backend-ram,id=memdimm0,size=134217728 -device pc-dimm,node=0,memdev=memdimm0,id=dimm0,slot=0 \
  -object memory-backend-ram,id=memdimm1,size=134217728 -device pc-dimm,node=0,memdev=memdimm1,id=dimm1,slot=1 \
  -object memory-backend-ram,id=memdimm2,size=134217728 -device pc-dimm,node=0,memdev=memdimm2,id=dimm2,slot=2 \
  -object memory-backend-ram,id=memdimm3,size=134217728 -device pc-dimm,node=0,memdev=memdimm3,id=dimm3,slot=3 \
  -object memory-backend-ram,id=memdimm4,size=134217728 -device pc-dimm,node=1,memdev=memdimm4,id=dimm4,slot=4 \
  -object memory-backend-ram,id=memdimm5,size=134217728 -device pc-dimm,node=1,memdev=memdimm5,id=dimm5,slot=5 \
  -object memory-backend-ram,id=memdimm6,size=134217728 -device pc-dimm,node=1,memdev=memdimm6,id=dimm6,slot=6 \
kernel: [ 0.753643] __add_memory: nid: 0 start: 0100000000 - 0108000000 (size: 134217728)
kernel: [ 0.756950] register_mem_sect_under_node: system_state= 1
kernel: [ 0.760811] register_mem_sect_under_node+0x4f/0x230
kernel: [ 0.760811] walk_memory_blocks+0x80/0xc0
kernel: [ 0.760811] link_mem_sections+0x32/0x40
kernel: [ 0.760811] add_memory_resource+0x148/0x250
kernel: [ 0.760811] __add_memory+0x5b/0x90
kernel: [ 0.760811] acpi_memory_device_add+0x130/0x300
kernel: [ 0.760811] acpi_bus_attach+0x13c/0x1c0
kernel: [ 0.760811] acpi_bus_attach+0x60/0x1c0
kernel: [ 0.760811] acpi_bus_scan+0x33/0x70
kernel: [ 0.760811] acpi_scan_init+0xea/0x21b
kernel: [ 0.760811] acpi_init+0x2f1/0x33c
kernel: [ 0.760811] do_one_initcall+0x46/0x1f4
Is there any actual use case for a configuration like this? What is the point of statically defining additional memory like this when the same can be achieved on the same command line?
Forgot to ask one more thing. Who is going to online that memory when userspace is not running yet?
On Thu, Sep 10, 2020 at 02:48:47PM +0200, Michal Hocko wrote:
Is there any actual use case for a configuration like this? What is the point of statically defining additional memory like this when the same can be achieved on the same command line?
Well, for qemu I am not sure, but if David is right, it seems you can face the same if you reboot a VM with hotplugged memory. Moreover, it seems that the problem we spotted in [1] was a VM running on Proxmox (KVM). The hypervisor probably said at boot time "Hey, I do have these ACPI devices, care to enable them now?"
As always, there are all sorts of configurations/scenarios out there in the wild.
Forgot to ask one more thing. Who is going to online that memory when userspace is not running yet?
It depends: if you have CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE set, or you specify memhp_default_online_type=[online,online_*], memory will get onlined right after the hot-adding stage:
/* online pages if requested */
if (memhp_default_online_type != MMOP_OFFLINE)
	walk_memory_blocks(start, size, NULL, online_memory_block);
If not, systemd-udev will do the magic once the system is up.
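For the curious, that udev "magic" boils down to onlining any memory block that shows up offline in sysfs. A minimal sketch (the helper name is hypothetical and this is not the actual rule or script shipped by systemd or any distro; the sysfs path is parameterized so it can be exercised against a mock tree):

```shell
# online_all_memory_blocks: walk the memory-block entries under the given
# sysfs directory (default: the real /sys/devices/system/memory) and write
# "online" into the state file of every block that reports "offline".
# This is roughly what a udev rule along the lines of
#   SUBSYSTEM=="memory", ACTION=="add", ATTR{state}=="offline", ATTR{state}="online"
# achieves, one uevent at a time.
online_all_memory_blocks() {
    sysfs="${1:-/sys/devices/system/memory}"
    for state in "$sysfs"/memory*/state; do
        [ -f "$state" ] || continue          # glob may match nothing
        if [ "$(cat "$state")" = offline ]; then
            echo online > "$state"           # on real sysfs this onlines the block
        fi
    done
}
```

On a real system this needs root, and the write to the state file is what actually triggers the onlining in the kernel.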
On Thu 10-09-20 15:39:00, Oscar Salvador wrote:
On Thu, Sep 10, 2020 at 02:48:47PM +0200, Michal Hocko wrote:
Is there any actual use case for a configuration like this? What is the point of statically defining additional memory like this when the same can be achieved on the same command line?
Well, for qemu I am not sure, but if David is right, it seems you can face the same if you reboot a VM with hotplugged memory.
OK, thanks for the clarification. I was not aware of the reboot.
Moreover, it seems that the problem we spotted in [1] was a VM running on Proxmox (KVM). The hypervisor probably said at boot time "Hey, I do have these ACPI devices, care to enable them now?"
As always, there are all sorts of configurations/scenarios out there in the wild.
Forgot to ask one more thing. Who is going to online that memory when userspace is not running yet?
It depends: if you have CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE set, or you specify memhp_default_online_type=[online,online_*], memory will get onlined right after the hot-adding stage:
/* online pages if requested */
if (memhp_default_online_type != MMOP_OFFLINE)
	walk_memory_blocks(start, size, NULL, online_memory_block);
If not, systemd-udev will do the magic once the system is up.
Does that imply that we need udev to scan all existing devices and reprobe them?
On Thu 10-09-20 15:51:07, Michal Hocko wrote:
On Thu 10-09-20 15:39:00, Oscar Salvador wrote:
On Thu, Sep 10, 2020 at 02:48:47PM +0200, Michal Hocko wrote:
[...]
Forgot to ask one more thing. Who is going to online that memory when userspace is not running yet?
It depends: if you have CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE set, or you specify memhp_default_online_type=[online,online_*], memory will get onlined right after the hot-adding stage:
/* online pages if requested */
if (memhp_default_online_type != MMOP_OFFLINE)
	walk_memory_blocks(start, size, NULL, online_memory_block);
If not, systemd-udev will do the magic once the system is up.
Does that imply that we need udev to scan all existing devices and reprobe them?
I've checked the sysfs side of things and it seems that the KOBJ_ADD event gets lost because there are no listeners (create_memory_block_devices -> .... -> device_register -> ... -> device_add -> kobject_uevent(&dev->kobj, KOBJ_ADD) -> kobject_uevent_net_broadcast). So the only way to find out about those devices, once init is up and something can intercept those events, is to rescan the devices.
This is really unfortunate because this solution doesn't scale: most use cases do not do early boot hotplug, and this can get more than interesting on machines like ppc, which have gazillions of memory block devices because they use insanely small blocks. Just imagine how that scales on a multi-TB machine. Sigh...
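The rescan being alluded to is usually done by replaying "add" uevents so that the now-running udev sees the devices it missed; on a live system something like udevadm trigger --subsystem-match=memory --action=add does this. A hedged sketch of the same idea (hypothetical helper name, again parameterized so it can run against a mock tree instead of real sysfs):

```shell
# retrigger_memory_uevents: write "add" into each memory block's uevent
# file. On real sysfs this makes the kernel re-emit the KOBJ_ADD event
# that was lost while nobody was listening during early boot.
retrigger_memory_uevents() {
    sysfs="${1:-/sys/devices/system/memory}"
    for uevent in "$sysfs"/memory*/uevent; do
        [ -f "$uevent" ] || continue   # glob may match nothing
        echo add > "$uevent"
    done
}
```

This is exactly the kind of full rescan Michal laments above: every memory block device must be touched, however many there are.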
On 10.09.20 14:47, Michal Hocko wrote:
On Thu 10-09-20 14:03:48, Oscar Salvador wrote:
On Thu, Sep 10, 2020 at 01:35:32PM +0200, Laurent Dufour wrote:
That point has been raised by David; quoting him here:
IIRC, ACPI can hotadd memory while SCHEDULING; this patch would break that.
Ccing Oscar; I think he mentioned recently that this is the case with ACPI.
Oscar said that he needs to investigate this further.
I think my reply got lost.
We can see acpi hotplugs during SYSTEM_SCHEDULING:
$QEMU -enable-kvm -machine pc -smp 4,sockets=4,cores=1,threads=1 -cpu host -monitor pty \
  -m size=$MEM,slots=255,maxmem=4294967296k \
  -numa node,nodeid=0,cpus=0-3,mem=512 -numa node,nodeid=1,mem=512 \
  -object memory-backend-ram,id=memdimm0,size=134217728 -device pc-dimm,node=0,memdev=memdimm0,id=dimm0,slot=0 \
  -object memory-backend-ram,id=memdimm1,size=134217728 -device pc-dimm,node=0,memdev=memdimm1,id=dimm1,slot=1 \
  -object memory-backend-ram,id=memdimm2,size=134217728 -device pc-dimm,node=0,memdev=memdimm2,id=dimm2,slot=2 \
  -object memory-backend-ram,id=memdimm3,size=134217728 -device pc-dimm,node=0,memdev=memdimm3,id=dimm3,slot=3 \
  -object memory-backend-ram,id=memdimm4,size=134217728 -device pc-dimm,node=1,memdev=memdimm4,id=dimm4,slot=4 \
  -object memory-backend-ram,id=memdimm5,size=134217728 -device pc-dimm,node=1,memdev=memdimm5,id=dimm5,slot=5 \
  -object memory-backend-ram,id=memdimm6,size=134217728 -device pc-dimm,node=1,memdev=memdimm6,id=dimm6,slot=6 \
kernel: [ 0.753643] __add_memory: nid: 0 start: 0100000000 - 0108000000 (size: 134217728)
kernel: [ 0.756950] register_mem_sect_under_node: system_state= 1
kernel: [ 0.760811] register_mem_sect_under_node+0x4f/0x230
kernel: [ 0.760811] walk_memory_blocks+0x80/0xc0
kernel: [ 0.760811] link_mem_sections+0x32/0x40
kernel: [ 0.760811] add_memory_resource+0x148/0x250
kernel: [ 0.760811] __add_memory+0x5b/0x90
kernel: [ 0.760811] acpi_memory_device_add+0x130/0x300
kernel: [ 0.760811] acpi_bus_attach+0x13c/0x1c0
kernel: [ 0.760811] acpi_bus_attach+0x60/0x1c0
kernel: [ 0.760811] acpi_bus_scan+0x33/0x70
kernel: [ 0.760811] acpi_scan_init+0xea/0x21b
kernel: [ 0.760811] acpi_init+0x2f1/0x33c
kernel: [ 0.760811] do_one_initcall+0x46/0x1f4
Is there any actual use case for a configuration like this? What is the point of statically defining additional memory like this when the same can be achieved on the same command line?
You can online it movable right away to unplug later.
Also, under QEMU, just do a reboot with hotplugged memory and you're in the very same situation.
On Thu 10-09-20 14:49:28, David Hildenbrand wrote:
On 10.09.20 14:47, Michal Hocko wrote:
On Thu 10-09-20 14:03:48, Oscar Salvador wrote:
On Thu, Sep 10, 2020 at 01:35:32PM +0200, Laurent Dufour wrote:
That point has been raised by David; quoting him here:
IIRC, ACPI can hotadd memory while SCHEDULING; this patch would break that.
Ccing Oscar; I think he mentioned recently that this is the case with ACPI.
Oscar said that he needs to investigate this further.
I think my reply got lost.
We can see acpi hotplugs during SYSTEM_SCHEDULING:
$QEMU -enable-kvm -machine pc -smp 4,sockets=4,cores=1,threads=1 -cpu host -monitor pty \
  -m size=$MEM,slots=255,maxmem=4294967296k \
  -numa node,nodeid=0,cpus=0-3,mem=512 -numa node,nodeid=1,mem=512 \
  -object memory-backend-ram,id=memdimm0,size=134217728 -device pc-dimm,node=0,memdev=memdimm0,id=dimm0,slot=0 \
  -object memory-backend-ram,id=memdimm1,size=134217728 -device pc-dimm,node=0,memdev=memdimm1,id=dimm1,slot=1 \
  -object memory-backend-ram,id=memdimm2,size=134217728 -device pc-dimm,node=0,memdev=memdimm2,id=dimm2,slot=2 \
  -object memory-backend-ram,id=memdimm3,size=134217728 -device pc-dimm,node=0,memdev=memdimm3,id=dimm3,slot=3 \
  -object memory-backend-ram,id=memdimm4,size=134217728 -device pc-dimm,node=1,memdev=memdimm4,id=dimm4,slot=4 \
  -object memory-backend-ram,id=memdimm5,size=134217728 -device pc-dimm,node=1,memdev=memdimm5,id=dimm5,slot=5 \
  -object memory-backend-ram,id=memdimm6,size=134217728 -device pc-dimm,node=1,memdev=memdimm6,id=dimm6,slot=6 \
kernel: [ 0.753643] __add_memory: nid: 0 start: 0100000000 - 0108000000 (size: 134217728)
kernel: [ 0.756950] register_mem_sect_under_node: system_state= 1
kernel: [ 0.760811] register_mem_sect_under_node+0x4f/0x230
kernel: [ 0.760811] walk_memory_blocks+0x80/0xc0
kernel: [ 0.760811] link_mem_sections+0x32/0x40
kernel: [ 0.760811] add_memory_resource+0x148/0x250
kernel: [ 0.760811] __add_memory+0x5b/0x90
kernel: [ 0.760811] acpi_memory_device_add+0x130/0x300
kernel: [ 0.760811] acpi_bus_attach+0x13c/0x1c0
kernel: [ 0.760811] acpi_bus_attach+0x60/0x1c0
kernel: [ 0.760811] acpi_bus_scan+0x33/0x70
kernel: [ 0.760811] acpi_scan_init+0xea/0x21b
kernel: [ 0.760811] acpi_init+0x2f1/0x33c
kernel: [ 0.760811] do_one_initcall+0x46/0x1f4
Is there any actual use case for a configuration like this? What is the point of statically defining additional memory like this when the same can be achieved on the same command line?
You can online it movable right away to unplug later.
You can use movable_node for that. IIRC this would online all hotpluggable memory as movable.
Also, under QEMU, just do a reboot with hotplugged memory and you're in the very same situation.
OK, I didn't know that. I thought the memory would be presented as normal memory after reboot. Thanks for the clarification.
Also, under QEMU, just do a reboot with hotplugged memory and you're in the very same situation.
OK, I didn't know that. I thought the memory would be presented as normal memory after reboot. Thanks for the clarification.
That's one of the cases where QEMU differs from actual hardware: the hotplugged memory is not added to e820, so ACPI always probes+detects+adds the DIMMs during boot.
Some people (me :)) consider that a feature and not a BUG.