On 2019/10/25 23:17, Matthew Wilcox wrote:
On Thu, Oct 24, 2019 at 11:03:20PM +0800, zhong jiang wrote:
+	xa_lock_irq(&mapping->i_pages);
...
 		if (need_resched()) {
 			slot = radix_tree_iter_resume(slot, &iter);
-			cond_resched_rcu();
+			cond_resched_lock(&mapping->i_pages.xa_lock);
 		}
Ooh, this isn't right. We're taking the lock, disabling interrupts, then dropping the lock and rescheduling without re-enabling interrupts. If this ever triggers, we'll get a scheduling-while-atomic error.
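To spell it out, here is a simplified sketch of the sequence the backport creates (not the literal kernel implementation of cond_resched_lock(), which lives in kernel/sched/core.c, but the locking pattern is the point):

	/*
	 * Sketch only: cond_resched_lock() drops and retakes the lock
	 * with the plain spin_unlock()/spin_lock() variants, which
	 * never touch the interrupt state.
	 */
	static void tag_pins_broken(struct address_space *mapping)
	{
		xa_lock_irq(&mapping->i_pages);	/* spin_lock_irq(): IRQs off */
		/* ... walk the tree ... */
		if (need_resched()) {
			/* cond_resched_lock() boils down to: */
			spin_unlock(&mapping->i_pages.xa_lock);	/* IRQs *still* off */
			schedule();	/* scheduling with IRQs disabled: the bug */
			spin_lock(&mapping->i_pages.xa_lock);
		}
		xa_unlock_irq(&mapping->i_pages);
	}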
Fortunately (?), need_resched() can almost never be set while we're holding a spinlock with interrupts disabled (thanks to peterz for telling me that when I asked for a cond_resched_lock_irq() a few years ago). So we need to take this backport further towards what the current upstream code does.
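For what it's worth, the helper being asked for back then would have had to look something like this; note it is purely hypothetical, there is no cond_resched_lock_irq() in the tree:

	/*
	 * Hypothetical, NOT in the kernel: like cond_resched_lock(), but
	 * for locks taken with spin_lock_irq(), re-enabling interrupts
	 * across the reschedule.  As peterz pointed out, need_resched()
	 * is almost never true with IRQs off, so this would rarely do
	 * anything -- which is why it doesn't exist.
	 */
	static inline int cond_resched_lock_irq(spinlock_t *lock)
	{
		if (need_resched()) {
			spin_unlock_irq(lock);	/* re-enables interrupts */
			cond_resched();
			spin_lock_irq(lock);	/* disables them again */
			return 1;
		}
		return 0;
	}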
I missed that. Thank you for pointing it out.
Thanks, zhong jiang
Here's a version for 4.14.y. Compile tested only.
diff --git a/mm/shmem.c b/mm/shmem.c
index 6c10f1d92251..deaea74ec1b3 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2657,11 +2657,12 @@ static void shmem_tag_pins(struct address_space *mapping)
 	void **slot;
 	pgoff_t start;
 	struct page *page;
+	unsigned int tagged = 0;
 
 	lru_add_drain();
 	start = 0;
-	rcu_read_lock();
 
+	spin_lock_irq(&mapping->tree_lock);
 	radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, start) {
 		page = radix_tree_deref_slot(slot);
 		if (!page || radix_tree_exception(page)) {
@@ -2670,18 +2671,19 @@ static void shmem_tag_pins(struct address_space *mapping)
 				continue;
 			}
 		} else if (page_count(page) - page_mapcount(page) > 1) {
-			spin_lock_irq(&mapping->tree_lock);
 			radix_tree_tag_set(&mapping->page_tree, iter.index,
 					   SHMEM_TAG_PINNED);
-			spin_unlock_irq(&mapping->tree_lock);
 		}
 
-		if (need_resched()) {
-			slot = radix_tree_iter_resume(slot, &iter);
-			cond_resched_rcu();
-		}
+		if (++tagged % 1024)
+			continue;
+
+		slot = radix_tree_iter_resume(slot, &iter);
+		spin_unlock_irq(&mapping->tree_lock);
+		cond_resched();
+		spin_lock_irq(&mapping->tree_lock);
 	}
-	rcu_read_unlock();
+	spin_unlock_irq(&mapping->tree_lock);
 }
 
 /*
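For readability, here is roughly how the function reads with the diff applied to the 4.14.y base (nothing new beyond the patch itself): tree_lock is now held across the whole walk, and instead of polling need_resched() under the lock we drop it, reschedule, and retake it every 1024 slots.

	static void shmem_tag_pins(struct address_space *mapping)
	{
		struct radix_tree_iter iter;
		void **slot;
		pgoff_t start;
		struct page *page;
		unsigned int tagged = 0;

		lru_add_drain();
		start = 0;

		spin_lock_irq(&mapping->tree_lock);
		radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, start) {
			page = radix_tree_deref_slot(slot);
			if (!page || radix_tree_exception(page)) {
				if (radix_tree_deref_retry(page)) {
					slot = radix_tree_iter_retry(&iter);
					continue;
				}
			} else if (page_count(page) - page_mapcount(page) > 1) {
				radix_tree_tag_set(&mapping->page_tree, iter.index,
						   SHMEM_TAG_PINNED);
			}

			if (++tagged % 1024)
				continue;

			/* Drop the lock (re-enabling IRQs) before rescheduling. */
			slot = radix_tree_iter_resume(slot, &iter);
			spin_unlock_irq(&mapping->tree_lock);
			cond_resched();
			spin_lock_irq(&mapping->tree_lock);
		}
		spin_unlock_irq(&mapping->tree_lock);
	}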