The patch titled
Subject: mm: memcg: synchronize objcg lists with a dedicated spinlock
has been added to the -mm tree. Its filename is
mm-memcg-synchronize-objcg-lists-with-a-dedicated-spinlock.patch
This patch should soon appear at
https://ozlabs.org/~akpm/mmots/broken-out/mm-memcg-synchronize-objcg-lists-…
and later at
https://ozlabs.org/~akpm/mmotm/broken-out/mm-memcg-synchronize-objcg-lists-…
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: Roman Gushchin <guro(a)fb.com>
Subject: mm: memcg: synchronize objcg lists with a dedicated spinlock
Alexander reported a circular lock dependency revealed by the mmap1 ltp
test:
LOCKDEP_CIRCULAR (suite: ltp, case: mtest06 (mmap1))
WARNING: possible circular locking dependency detected
5.17.0-20220113.rc0.git0.f2211f194038.300.fc35.s390x+debug #1 Not tainted
------------------------------------------------------
mmap1/202299 is trying to acquire lock:
00000001892c0188 (css_set_lock){..-.}-{2:2}, at: obj_cgroup_release+0x4a/0xe0
but task is already holding lock:
00000000ca3b3818 (&sighand->siglock){-.-.}-{2:2}, at: force_sig_info_to_task+0x38/0x180
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (&sighand->siglock){-.-.}-{2:2}:
__lock_acquire+0x604/0xbd8
lock_acquire.part.0+0xe2/0x238
lock_acquire+0xb0/0x200
_raw_spin_lock_irqsave+0x6a/0xd8
__lock_task_sighand+0x90/0x190
cgroup_freeze_task+0x2e/0x90
cgroup_migrate_execute+0x11c/0x608
cgroup_update_dfl_csses+0x246/0x270
cgroup_subtree_control_write+0x238/0x518
kernfs_fop_write_iter+0x13e/0x1e0
new_sync_write+0x100/0x190
vfs_write+0x22c/0x2d8
ksys_write+0x6c/0xf8
__do_syscall+0x1da/0x208
system_call+0x82/0xb0
-> #0 (css_set_lock){..-.}-{2:2}:
check_prev_add+0xe0/0xed8
validate_chain+0x736/0xb20
__lock_acquire+0x604/0xbd8
lock_acquire.part.0+0xe2/0x238
lock_acquire+0xb0/0x200
_raw_spin_lock_irqsave+0x6a/0xd8
obj_cgroup_release+0x4a/0xe0
percpu_ref_put_many.constprop.0+0x150/0x168
drain_obj_stock+0x94/0xe8
refill_obj_stock+0x94/0x278
obj_cgroup_charge+0x164/0x1d8
kmem_cache_alloc+0xac/0x528
__sigqueue_alloc+0x150/0x308
__send_signal+0x260/0x550
send_signal+0x7e/0x348
force_sig_info_to_task+0x104/0x180
force_sig_fault+0x48/0x58
__do_pgm_check+0x120/0x1f0
pgm_check_handler+0x11e/0x180
other info that might help us debug this:
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(&sighand->siglock);
                               lock(css_set_lock);
                               lock(&sighand->siglock);
  lock(css_set_lock);
*** DEADLOCK ***
2 locks held by mmap1/202299:
#0: 00000000ca3b3818 (&sighand->siglock){-.-.}-{2:2}, at: force_sig_info_to_task+0x38/0x180
#1: 00000001892ad560 (rcu_read_lock){....}-{1:2}, at: percpu_ref_put_many.constprop.0+0x0/0x168
stack backtrace:
CPU: 15 PID: 202299 Comm: mmap1 Not tainted 5.17.0-20220113.rc0.git0.f2211f194038.300.fc35.s390x+debug #1
Hardware name: IBM 3906 M04 704 (LPAR)
Call Trace:
[<00000001888aacfe>] dump_stack_lvl+0x76/0x98
[<0000000187c6d7be>] check_noncircular+0x136/0x158
[<0000000187c6e888>] check_prev_add+0xe0/0xed8
[<0000000187c6fdb6>] validate_chain+0x736/0xb20
[<0000000187c71e54>] __lock_acquire+0x604/0xbd8
[<0000000187c7301a>] lock_acquire.part.0+0xe2/0x238
[<0000000187c73220>] lock_acquire+0xb0/0x200
[<00000001888bf9aa>] _raw_spin_lock_irqsave+0x6a/0xd8
[<0000000187ef6862>] obj_cgroup_release+0x4a/0xe0
[<0000000187ef6498>] percpu_ref_put_many.constprop.0+0x150/0x168
[<0000000187ef9674>] drain_obj_stock+0x94/0xe8
[<0000000187efa464>] refill_obj_stock+0x94/0x278
[<0000000187eff55c>] obj_cgroup_charge+0x164/0x1d8
[<0000000187ed8aa4>] kmem_cache_alloc+0xac/0x528
[<0000000187bf2eb8>] __sigqueue_alloc+0x150/0x308
[<0000000187bf4210>] __send_signal+0x260/0x550
[<0000000187bf5f06>] send_signal+0x7e/0x348
[<0000000187bf7274>] force_sig_info_to_task+0x104/0x180
[<0000000187bf7758>] force_sig_fault+0x48/0x58
[<00000001888ae160>] __do_pgm_check+0x120/0x1f0
[<00000001888c0cde>] pgm_check_handler+0x11e/0x180
INFO: lockdep is turned off.
In this example a slab allocation from __send_signal() caused a refill
and drain of a percpu objcg stock, which in turn released another,
unrelated objcg.  The objcg release path requires taking css_set_lock,
which is used to synchronize the objcg lists.
This can create a circular dependency with the sighand lock, which the
freezer code takes while already holding css_set_lock (to freeze a task).
In general, using css_set_lock to synchronize the objcg lists makes any
slab allocation or deallocation performed with css_set_lock held, or with
any lock nested inside it, risky.
To fix the problem and make the code more robust, let's stop using
css_set_lock to synchronize the objcg lists and use a new dedicated
spinlock instead.
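Condensed, the locking pattern the patch switches to looks roughly like
the sketch below.  It is illustrative only: objcg_list_del() and
objcg_list_reparent() are hypothetical helpers, not functions added by
the patch, which open-codes the same locking in obj_cgroup_release() and
memcg_reparent_objcgs().

#include <linux/spinlock.h>
#include <linux/list.h>

/* Dedicated lock for the objcg lists, replacing the shared css_set_lock. */
static DEFINE_SPINLOCK(objcg_lock);

/*
 * Release can be triggered from almost any context (e.g. underneath a
 * slab allocation draining the percpu objcg stock), so interrupts must
 * be disabled while the lock is held.
 */
static void objcg_list_del(struct list_head *entry)
{
	unsigned long flags;

	spin_lock_irqsave(&objcg_lock, flags);
	list_del(entry);
	spin_unlock_irqrestore(&objcg_lock, flags);
}

/* Reparenting runs in process context with interrupts enabled. */
static void objcg_list_reparent(struct list_head *from, struct list_head *to)
{
	spin_lock_irq(&objcg_lock);
	list_splice(from, to);
	spin_unlock_irq(&objcg_lock);
}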
Link: https://lkml.kernel.org/r/Yfm1IHmoGdyUR81T@carbon.dhcp.thefacebook.com
Fixes: bf4f059954dc ("mm: memcg/slab: obj_cgroup API")
Signed-off-by: Roman Gushchin <guro(a)fb.com>
Reported-by: Alexander Egorenkov <egorenar(a)linux.ibm.com>
Tested-by: Alexander Egorenkov <egorenar(a)linux.ibm.com>
Reviewed-by: Waiman Long <longman(a)redhat.com>
Acked-by: Tejun Heo <tj(a)kernel.org>
Reviewed-by: Shakeel Butt <shakeelb(a)google.com>
Reviewed-by: Jeremy Linton <jeremy.linton(a)arm.com>
Tested-by: Jeremy Linton <jeremy.linton(a)arm.com>
Cc: Johannes Weiner <hannes(a)cmpxchg.org>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
include/linux/memcontrol.h | 5 +++--
mm/memcontrol.c | 10 +++++-----
2 files changed, 8 insertions(+), 7 deletions(-)
--- a/include/linux/memcontrol.h~mm-memcg-synchronize-objcg-lists-with-a-dedicated-spinlock
+++ a/include/linux/memcontrol.h
@@ -219,7 +219,7 @@ struct obj_cgroup {
struct mem_cgroup *memcg;
atomic_t nr_charged_bytes;
union {
- struct list_head list;
+ struct list_head list; /* protected by objcg_lock */
struct rcu_head rcu;
};
};
@@ -315,7 +315,8 @@ struct mem_cgroup {
#ifdef CONFIG_MEMCG_KMEM
int kmemcg_id;
struct obj_cgroup __rcu *objcg;
- struct list_head objcg_list; /* list of inherited objcgs */
+ /* list of inherited objcgs, protected by objcg_lock */
+ struct list_head objcg_list;
#endif
MEMCG_PADDING(_pad2_);
--- a/mm/memcontrol.c~mm-memcg-synchronize-objcg-lists-with-a-dedicated-spinlock
+++ a/mm/memcontrol.c
@@ -254,7 +254,7 @@ struct mem_cgroup *vmpressure_to_memcg(s
}
#ifdef CONFIG_MEMCG_KMEM
-extern spinlock_t css_set_lock;
+static DEFINE_SPINLOCK(objcg_lock);
bool mem_cgroup_kmem_disabled(void)
{
@@ -298,9 +298,9 @@ static void obj_cgroup_release(struct pe
if (nr_pages)
obj_cgroup_uncharge_pages(objcg, nr_pages);
- spin_lock_irqsave(&css_set_lock, flags);
+ spin_lock_irqsave(&objcg_lock, flags);
list_del(&objcg->list);
- spin_unlock_irqrestore(&css_set_lock, flags);
+ spin_unlock_irqrestore(&objcg_lock, flags);
percpu_ref_exit(ref);
kfree_rcu(objcg, rcu);
@@ -332,7 +332,7 @@ static void memcg_reparent_objcgs(struct
objcg = rcu_replace_pointer(memcg->objcg, NULL, true);
- spin_lock_irq(&css_set_lock);
+ spin_lock_irq(&objcg_lock);
/* 1) Ready to reparent active objcg. */
list_add(&objcg->list, &memcg->objcg_list);
@@ -342,7 +342,7 @@ static void memcg_reparent_objcgs(struct
/* 3) Move already reparented objcgs to the parent's list */
list_splice(&memcg->objcg_list, &parent->objcg_list);
- spin_unlock_irq(&css_set_lock);
+ spin_unlock_irq(&objcg_lock);
percpu_ref_kill(&objcg->refcnt);
}
_
Patches currently in -mm which might be from guro(a)fb.com are
mm-memcg-synchronize-objcg-lists-with-a-dedicated-spinlock.patch
The patch titled
Subject: mm: vmscan: remove deadlock due to throttling failing to make progress
has been added to the -mm tree. Its filename is
mm-vmscan-remove-deadlock-due-to-throttling-failing-to-make-progress.patch
This patch should soon appear at
https://ozlabs.org/~akpm/mmots/broken-out/mm-vmscan-remove-deadlock-due-to-…
and later at
https://ozlabs.org/~akpm/mmotm/broken-out/mm-vmscan-remove-deadlock-due-to-…
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: Mel Gorman <mgorman(a)suse.de>
Subject: mm: vmscan: remove deadlock due to throttling failing to make progress
A soft lockup bug in kcompactd was reported in a private bugzilla with
the following visible in dmesg;
[15980.045209][ C33] watchdog: BUG: soft lockup - CPU#33 stuck for 26s! [kcompactd0:479]
[16008.044989][ C33] watchdog: BUG: soft lockup - CPU#33 stuck for 52s! [kcompactd0:479]
[16036.044768][ C33] watchdog: BUG: soft lockup - CPU#33 stuck for 78s! [kcompactd0:479]
[16064.044548][ C33] watchdog: BUG: soft lockup - CPU#33 stuck for 104s! [kcompactd0:479]
The machine had 256G of RAM with no swap and an earlier failed allocation
indicated that node 0 where kcompactd was run was potentially
unreclaimable;
Node 0 active_anon:29355112kB inactive_anon:2913528kB active_file:0kB
inactive_file:0kB unevictable:64kB isolated(anon):0kB isolated(file):0kB
mapped:8kB dirty:0kB writeback:0kB shmem:26780kB shmem_thp:
0kB shmem_pmdmapped: 0kB anon_thp: 23480320kB writeback_tmp:0kB
kernel_stack:2272kB pagetables:24500kB all_unreclaimable? yes
Vlastimil Babka investigated a crash dump and found that a task migrating pages
was trying to drain PCP lists;
PID: 52922 TASK: ffff969f820e5000 CPU: 19 COMMAND: "kworker/u128:3"
#0 [ffffaf4e4f4c3848] __schedule at ffffffffb840116d
#1 [ffffaf4e4f4c3908] schedule at ffffffffb8401e81
#2 [ffffaf4e4f4c3918] schedule_timeout at ffffffffb84066e8
#3 [ffffaf4e4f4c3990] wait_for_completion at ffffffffb8403072
#4 [ffffaf4e4f4c39d0] __flush_work at ffffffffb7ac3e4d
#5 [ffffaf4e4f4c3a48] __drain_all_pages at ffffffffb7cb707c
#6 [ffffaf4e4f4c3a80] __alloc_pages_slowpath.constprop.114 at ffffffffb7cbd9dd
#7 [ffffaf4e4f4c3b60] __alloc_pages at ffffffffb7cbe4f5
#8 [ffffaf4e4f4c3bc0] alloc_migration_target at ffffffffb7cf329c
#9 [ffffaf4e4f4c3bf0] migrate_pages at ffffffffb7cf6d15
#10 [ffffaf4e4f4c3cb0] migrate_to_node at ffffffffb7cdb5aa
#11 [ffffaf4e4f4c3da8] do_migrate_pages at ffffffffb7cdcf26
#12 [ffffaf4e4f4c3e88] cpuset_migrate_mm_workfn at ffffffffb7b859d2
#13 [ffffaf4e4f4c3e98] process_one_work at ffffffffb7ac45f3
#14 [ffffaf4e4f4c3ed8] worker_thread at ffffffffb7ac47fd
#15 [ffffaf4e4f4c3f10] kthread at ffffffffb7acbdc6
#16 [ffffaf4e4f4c3f50] ret_from_fork at ffffffffb7a047e2
This failure is specific to CONFIG_PREEMPT=n builds.  The root of the
problem is that kcompactd0 is not rescheduling on a CPU while a task that
has isolated a large number of pages from the LRU is waiting on
kcompactd0 to reschedule so the pages can be released.  While
shrink_inactive_list() only loops once around too_many_isolated, reclaim
can continue without rescheduling if sc->skipped_deactivate == 1, which
could happen if there was no file LRU and the inactive anon list was not
low.
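The fix below boils down to giving such a kernel thread an explicit
reschedule point before it returns from the throttling path.  A minimal
sketch of the idea, with an illustrative throttle_or_yield() helper
rather than the real reclaim_throttle() signature:

#include <linux/sched.h>

/*
 * Kernel threads and IO workers are not allowed to sleep here, but with
 * CONFIG_PREEMPT=n they must still offer a reschedule point; otherwise
 * nothing lets the per-CPU worker that is waiting to drain the PCP lists
 * run, which is what produced the soft lockup above.
 */
static void throttle_or_yield(bool is_kswapd, bool is_worker_or_kthread)
{
	if (!is_kswapd && is_worker_or_kthread) {
		cond_resched();
		return;
	}

	/* ... otherwise sleep on the node's reclaim_wait waitqueue ... */
}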
Link: https://lkml.kernel.org/r/20220203100326.GD3301@suse.de
Fixes: d818fca1cac3 ("mm/vmscan: throttle reclaim and compaction when too may pages are isolated")
Signed-off-by: Mel Gorman <mgorman(a)suse.de>
Debugged-by: Vlastimil Babka <vbabka(a)suse.cz>
Reviewed-by: Vlastimil Babka <vbabka(a)suse.cz>
Acked-by: Michal Hocko <mhocko(a)suse.com>
Cc: Hugh Dickins <hughd(a)google.com>
Cc: Michal Hocko <mhocko(a)suse.com>
Cc: Rik van Riel <riel(a)surriel.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/vmscan.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
--- a/mm/vmscan.c~mm-vmscan-remove-deadlock-due-to-throttling-failing-to-make-progress
+++ a/mm/vmscan.c
@@ -1066,8 +1066,10 @@ void reclaim_throttle(pg_data_t *pgdat,
* forward progress (e.g. journalling workqueues or kthreads).
*/
if (!current_is_kswapd() &&
- current->flags & (PF_IO_WORKER|PF_KTHREAD))
+ current->flags & (PF_IO_WORKER|PF_KTHREAD)) {
+ cond_resched();
return;
+ }
/*
* These figures are pulled out of thin air.
_
Patches currently in -mm which might be from mgorman(a)suse.de are
mm-vmscan-remove-deadlock-due-to-throttling-failing-to-make-progress.patch
The patch titled
Subject: fs-proc-task_mmuc-dont-read-mapcount-for-migration-entry-v4
has been added to the -mm tree. Its filename is
fs-proc-task_mmuc-dont-read-mapcount-for-migration-entry-v4.patch
This patch should soon appear at
https://ozlabs.org/~akpm/mmots/broken-out/fs-proc-task_mmuc-dont-read-mapco…
and later at
https://ozlabs.org/~akpm/mmotm/broken-out/fs-proc-task_mmuc-dont-read-mapco…
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: Yang Shi <shy828301(a)gmail.com>
Subject: fs-proc-task_mmuc-dont-read-mapcount-for-migration-entry-v4
v4: * s/Treated/Treat per David
    * Collected acked-by tag from David
v3: * Fixed the fix tag; the one used by v2 was not accurate
    * Added comment about the risk of calling page_mapcount() per David
    * Fix pagemap
Link: https://lkml.kernel.org/r/20220203182641.824731-1-shy828301@gmail.com
Fixes: e9b61f19858a ("thp: reintroduce split_huge_page()")
Signed-off-by: Yang Shi <shy828301(a)gmail.com>
Reported-by: syzbot+1f52b3a18d5633fa7f82(a)syzkaller.appspotmail.com
Acked-by: David Hildenbrand <david(a)redhat.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov(a)linux.intel.com>
Cc: Jann Horn <jannh(a)google.com>
Cc: Matthew Wilcox <willy(a)infradead.org>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
fs/proc/task_mmu.c | 17 ++++++++++++-----
1 file changed, 12 insertions(+), 5 deletions(-)
--- a/fs/proc/task_mmu.c~fs-proc-task_mmuc-dont-read-mapcount-for-migration-entry-v4
+++ a/fs/proc/task_mmu.c
@@ -469,9 +469,12 @@ static void smaps_account(struct mem_siz
* If any subpage of the compound page mapped with PTE it would elevate
* page_count().
*
- * Treated regular migration entries as mapcount == 1 without reading
- * mapcount since calling page_mapcount() for migration entries is
- * racy against THP splitting.
+ * The page_mapcount() is called to get a snapshot of the mapcount.
+ * Without holding the page lock this snapshot can be slightly wrong as
+ * we cannot always read the mapcount atomically. It is not safe to
+ * call page_mapcount() even with PTL held if the page is not mapped,
+ * especially for migration entries. Treat regular migration entries
+ * as mapcount == 1.
*/
if ((page_count(page) == 1) || migration) {
smaps_page_accumulate(mss, page, size, size << PSS_SHIFT, dirty,
@@ -1393,6 +1396,7 @@ static pagemap_entry_t pte_to_pagemap_en
{
u64 frame = 0, flags = 0;
struct page *page = NULL;
+ bool migration = false;
if (pte_present(pte)) {
if (pm->show_pfn)
@@ -1414,13 +1418,14 @@ static pagemap_entry_t pte_to_pagemap_en
frame = swp_type(entry) |
(swp_offset(entry) << MAX_SWAPFILES_SHIFT);
flags |= PM_SWAP;
+ migration = is_migration_entry(entry);
if (is_pfn_swap_entry(entry))
page = pfn_swap_entry_to_page(entry);
}
if (page && !PageAnon(page))
flags |= PM_FILE;
- if (page && page_mapcount(page) == 1)
+ if (page && !migration && page_mapcount(page) == 1)
flags |= PM_MMAP_EXCLUSIVE;
if (vma->vm_flags & VM_SOFTDIRTY)
flags |= PM_SOFT_DIRTY;
@@ -1436,6 +1441,7 @@ static int pagemap_pmd_range(pmd_t *pmdp
spinlock_t *ptl;
pte_t *pte, *orig_pte;
int err = 0;
+ bool migration = false;
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
ptl = pmd_trans_huge_lock(pmdp, vma);
@@ -1476,11 +1482,12 @@ static int pagemap_pmd_range(pmd_t *pmdp
if (pmd_swp_uffd_wp(pmd))
flags |= PM_UFFD_WP;
VM_BUG_ON(!is_pmd_migration_entry(pmd));
+ migration = is_migration_entry(entry);
page = pfn_swap_entry_to_page(entry);
}
#endif
- if (page && page_mapcount(page) == 1)
+ if (page && !migration && page_mapcount(page) == 1)
flags |= PM_MMAP_EXCLUSIVE;
for (; addr != end; addr += PAGE_SIZE) {
_
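The check added above can be read as the following stand-alone sketch
(hypothetical helper; pte_to_pagemap_entry() builds the pagemap flags
inline rather than calling such a function):

#include <linux/mm.h>
#include <linux/swapops.h>

/*
 * Only trust page_mapcount() when the entry is not a migration entry:
 * page_mapcount() is not safe to call, even with the PTL held, when the
 * page is not mapped, which is the case for migration entries.
 */
static bool swap_entry_maps_exclusive_page(swp_entry_t entry)
{
	struct page *page = NULL;
	bool migration = is_migration_entry(entry);

	if (is_pfn_swap_entry(entry))
		page = pfn_swap_entry_to_page(entry);

	return page && !migration && page_mapcount(page) == 1;
}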
Patches currently in -mm which might be from shy828301(a)gmail.com are
fs-proc-task_mmuc-dont-read-mapcount-for-migration-entry.patch
fs-proc-task_mmuc-dont-read-mapcount-for-migration-entry-v4.patch