alloc_percpu() can return NULL, but its return value is not checked here. Our static analysis tool reports a possible NULL pointer dereference through the pointer zone->per_cpu_pageset. Checking the pointer for NULL before use avoids the dereference.
Signed-off-by: Qiu-ji Chen <chenqiuji666@gmail.com>
Cc: stable@vger.kernel.org
Fixes: 9420f89db2dd ("mm: move most of core MM initialization to mm/mm_init.c")
---
 mm/page_alloc.c | 6 ++++++
 1 file changed, 6 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8afab64814dc..5deae1193dc3 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5703,8 +5703,14 @@ void __meminit setup_zone_pageset(struct zone *zone)
 	/* Size may be 0 on !SMP && !NUMA */
 	if (sizeof(struct per_cpu_zonestat) > 0)
 		zone->per_cpu_zonestats = alloc_percpu(struct per_cpu_zonestat);
+	if (!zone->per_cpu_pageset)
+		return;
 
 	zone->per_cpu_pageset = alloc_percpu(struct per_cpu_pages);
+	if (!zone->per_cpu_pageset) {
+		free_percpu(zone->per_cpu_pageset);
+		return;
+	}
 	for_each_possible_cpu(cpu) {
 		struct per_cpu_pages *pcp;
 		struct per_cpu_zonestat *pzstats;
On 07.11.24 12:34, Qiu-ji Chen wrote:
> alloc_percpu() can return NULL, but its return value is not checked here. Our static analysis tool reports a possible NULL pointer dereference through the pointer zone->per_cpu_pageset. Checking the pointer for NULL before use avoids the dereference.
> 
> Signed-off-by: Qiu-ji Chen <chenqiuji666@gmail.com>
> Cc: stable@vger.kernel.org
> Fixes: 9420f89db2dd ("mm: move most of core MM initialization to mm/mm_init.c")
> ---
>  mm/page_alloc.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 8afab64814dc..5deae1193dc3 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -5703,8 +5703,14 @@ void __meminit setup_zone_pageset(struct zone *zone)
>  	/* Size may be 0 on !SMP && !NUMA */
>  	if (sizeof(struct per_cpu_zonestat) > 0)
>  		zone->per_cpu_zonestats = alloc_percpu(struct per_cpu_zonestat);
> +	if (!zone->per_cpu_pageset)
> +		return;
Don't we initialize this for all with &boot_pageset? How could this ever happen?
>  	zone->per_cpu_pageset = alloc_percpu(struct per_cpu_pages);
> +	if (!zone->per_cpu_pageset) {
> +		free_percpu(zone->per_cpu_pageset);
> +		return;
If it's NULL, we free it. Why?
> +	}
>  	for_each_possible_cpu(cpu) {
>  		struct per_cpu_pages *pcp;
>  		struct per_cpu_zonestat *pzstats;
Also, how could core code ever recover if this function would return early, leaving something partially initialized?
The missing NULL check is concerning, but looking into alloc_percpu() we treat these as atomic allocations and would print a warning in case this would ever happen. So likely it never really happens in practice.
I wonder if we simply want to leave it unmodified (IOW set to &boot_pageset) in case the allocation fails. We'd already print a warning in this unexpected scenario.