On Sun, 01 Jan 2012 17:06:53 +0100, Gilad Ben-Yossef <gilad@benyossef.com> wrote:
> 2012/1/1 Michal Nazarewicz <mina86@mina86.com>:
>> Looks interesting. I'm not entirely sure why it does not end up in a
>> race condition, but in case of __zone_drain_all_pages() we already hold
>
> If a page is in the PCP list when we check, you'll send the IPI and all
> is well.
>
> If it isn't there when we check and gets added later, you could just the
> same have a situation where we send the IPI, try to drain an empty PCP
> list and then the page gets added.  So we are not adding a race
> condition that is not there already :-)
Right, makes sense.
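Just to spell the window out for my own benefit (this is my illustration,
not anything from your patch):

/*
 * CPU A (requests drain)             CPU B (frees a page)
 * ----------------------             --------------------
 * reads pcp.count == 0
 * skips CPU B, sends no IPI
 *                                    free_hot_cold_page() puts the page
 *                                    on CPU B's PCP list
 *
 * The unconditional on_each_cpu() has the same window: the IPI can drain
 * an empty list and a page can land on it right after the handler
 * returns, so filtering on pcp.count does not add anything new.
 */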
>> zone->lock, so my fears are somehow gone.  I'll give it a try, and
>> prepare a patch for __zone_drain_all_pages().
> I plan to send V5 of the IPI noise patch after some testing.  It has a
> new version of drain_all_pages(), with no allocation in the reclaim path
> and no locking.  You might want to wait till that one is out to base on
> it.
This shouldn't be a problem for my case as set_migratetype_isolate() is hardly ever called in reclaim path. :)
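I have not seen V5, so I am only guessing, but I assume "no allocation"
means something like a static cpumask that gets recomputed on each call.
A rough sketch of what I have in mind (the name and all details below are
my guess, not your code):

static void __zone_drain_all_pages_noalloc(struct zone *zone)
{
	/* Static mask, so no GFP_ATOMIC allocation in the reclaim path. */
	static cpumask_t cpus_with_pcps;
	int cpu;

	/*
	 * Concurrent callers may race on the mask bits, but the worst
	 * case is a spurious IPI to a CPU whose PCP list is already
	 * empty, or a drain that another caller asked for anyway.
	 */
	for_each_online_cpu(cpu) {
		if (per_cpu_ptr(zone->pageset, cpu)->pcp.count)
			cpumask_set_cpu(cpu, &cpus_with_pcps);
		else
			cpumask_clear_cpu(cpu, &cpus_with_pcps);
	}
	on_each_cpu_mask(&cpus_with_pcps, __zone_drain_local_pages, zone, 1);
}

If that is roughly what V5 does, the version below can switch to it once
it lands.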
The change so far seems rather obvious:
 mm/page_alloc.c |   14 +++++++++++++-
 1 files changed, 13 insertions(+), 1 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 424d36a..eaa686b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1181,7 +1181,19 @@ static void __zone_drain_local_pages(void *arg)
  */
 static void __zone_drain_all_pages(struct zone *zone)
 {
-	on_each_cpu(__zone_drain_local_pages, zone, 1);
+	struct per_cpu_pageset *pcp;
+	cpumask_var_t cpus;
+	int cpu;
+
+	if (likely(zalloc_cpumask_var(&cpus, GFP_ATOMIC | __GFP_NOWARN))) {
+		for_each_online_cpu(cpu)
+			if (per_cpu_ptr(zone->pageset, cpu)->pcp.count)
+				cpumask_set_cpu(cpu, cpus);
+		on_each_cpu_mask(cpus, __zone_drain_local_pages, zone, 1);
+		free_cpumask_var(cpus);
+	} else {
+		on_each_cpu(__zone_drain_local_pages, zone, 1);
+	}
 }
 
 #ifdef CONFIG_HIBERNATION
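If the cpumask allocation fails we simply fall back to the old broadcast
on_each_cpu(), so the worst case is no worse than what we do today.

For reference, the IPI handler the mask is aimed at presumably does for
one zone what drain_pages() does for every zone on the local CPU.  A
rough sketch (this is an assumption, not the actual
__zone_drain_local_pages() from the series):

static void __zone_drain_local_pages(void *arg)
{
	struct zone *zone = arg;
	struct per_cpu_pages *pcp;
	unsigned long flags;

	local_irq_save(flags);
	pcp = &this_cpu_ptr(zone->pageset)->pcp;
	if (pcp->count) {
		/* Return the CPU-local cached pages to the buddy lists. */
		free_pcppages_bulk(zone, pcp->count, pcp);
		pcp->count = 0;
	}
	local_irq_restore(flags);
}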