Percpu sheaves caching was introduced as opt-in but the goal was to eventually move all caches to them. This is the next step, enabling sheaves for all caches (except the two bootstrap ones) and then removing the per cpu (partial) slabs and lots of associated code.
Besides (hopefully) improved performance, this removes the rather complicated code for the lockless fastpaths (using this_cpu_try_cmpxchg128/64), along with their complications with PREEMPT_RT and kmalloc_nolock().
The lockless slab freelist+counters update operation using try_cmpxchg128/64 remains and is crucial for freeing remote NUMA objects without repeating the "alien" array flushing scheme of SLAB, and to allow flushing objects from sheaves to slabs mostly without taking the node's list_lock.
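As a rough illustration of that double-width update, here is a standalone userspace sketch (not the kernel code: the struct layout, the counters packing and the fake_slab/push_free_object names are invented for the example, and it needs a 64-bit compiler with 128-bit atomics, e.g. gcc -mcx16 -latomic):

/*
 * Illustration only: the freelist pointer and a packed counters word
 * are updated as one atomic 128-bit unit, so a remote free needs no
 * lock on the slab.
 */
#include <stdatomic.h>
#include <stdint.h>

struct freelist_counters {
	void *freelist;		/* first free object in the slab */
	uint64_t counters;	/* packed in-use/objects/... fields */
};

struct fake_slab {
	_Atomic __int128 fc;	/* both words updated as one unit */
};

/* Push a freed object onto the slab's freelist; the freelist link is
 * stored inside the object itself, as SLUB does. */
static void push_free_object(struct fake_slab *slab, void **object)
{
	union { struct freelist_counters s; __int128 raw; } old, new;

	old.raw = atomic_load_explicit(&slab->fc, memory_order_relaxed);
	do {
		*object = old.s.freelist;		/* link in front */
		new.s.freelist = object;
		new.s.counters = old.s.counters - 1;	/* drop in-use (illustrative packing) */
	} while (!atomic_compare_exchange_weak_explicit(&slab->fc,
			&old.raw, new.raw,
			memory_order_release, memory_order_relaxed));
}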
This v2 is the first non-RFC. I would consider exposing the series to linux-next at this point.
Git branch for the v2: https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=shea...
Based on: https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git/log/?h=slab/...
- includes a sheaves optimization that seemed minor, but an lkp test robot result showed significant improvements: https://lore.kernel.org/all/202512291555.56ce2e53-lkp@intel.com/ (could be an uncommon corner-case workload, though)
Significant (but not critical) remaining TODOs:
- Integration of rcu sheaves handling with kfree_rcu batching.
  - Currently the kfree_rcu batching is almost completely bypassed. I'm thinking it could be adjusted to handle rcu sheaves in addition to individual objects, to get the best of both.
- Performance evaluation. Petr Tesarik has been doing that on the RFC with some promising results (thanks!) and also found a memory leak.
Note that, as with many things, this caching scheme change is a tradeoff, as summarized by Christoph:
https://lore.kernel.org/all/f7c33974-e520-387e-9e2f-1e523bfe1545@gentwo.org/
- Objects allocated from sheaves should have better temporal locality (likely recently freed, thus cache hot) but worse spatial locality (likely from many different slabs, increasing memory usage and possibly TLB pressure on the kernel's direct map).
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
Changes in v2:
- Rebased to v6.19-rc1+slab.git slab/for-7.0/sheaves
  - Some of the preliminary patches from the RFC went in there.
- Incorporate feedback/reports from many people (thanks!), including:
  - Make caches with sheaves mergeable.
  - Fix a major memory leak.
  - Cleanup of stat items.
- Link to v1: https://patch.msgid.link/20251023-sheaves-for-all-v1-0-6ffa2c9941c0@suse.cz
---
Vlastimil Babka (20):
      mm/slab: add rcu_barrier() to kvfree_rcu_barrier_on_cache()
      mm/slab: move and refactor __kmem_cache_alias()
      mm/slab: make caches with sheaves mergeable
      slab: add sheaves to most caches
      slab: introduce percpu sheaves bootstrap
      slab: make percpu sheaves compatible with kmalloc_nolock()/kfree_nolock()
      slab: handle kmalloc sheaves bootstrap
      slab: add optimized sheaf refill from partial list
      slab: remove cpu (partial) slabs usage from allocation paths
      slab: remove SLUB_CPU_PARTIAL
      slab: remove the do_slab_free() fastpath
      slab: remove defer_deactivate_slab()
      slab: simplify kmalloc_nolock()
      slab: remove struct kmem_cache_cpu
      slab: remove unused PREEMPT_RT specific macros
      slab: refill sheaves from all nodes
      slab: update overview comments
      slab: remove frozen slab checks from __slab_free()
      mm/slub: remove DEACTIVATE_TO_* stat items
      mm/slub: cleanup and repurpose some stat items
 include/linux/slab.h |    6 -
 mm/Kconfig           |   11 -
 mm/internal.h        |    1 +
 mm/page_alloc.c      |    5 +
 mm/slab.h            |   53 +-
 mm/slab_common.c     |   56 +-
 mm/slub.c            | 2591 +++++++++++++++++---------------------------------
 7 files changed, 950 insertions(+), 1773 deletions(-)
---
base-commit: aff9fb2fffa1175bd5ae3b4630f3d4ae53af450b
change-id: 20251002-sheaves-for-all-86ac13dc47a5
Best regards,
After we submit the rcu_free sheaves to call_rcu(), we need to make sure the rcu callbacks complete. kvfree_rcu_barrier() does that via flush_all_rcu_sheaves(), but kvfree_rcu_barrier_on_cache() doesn't. Fix that.
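To illustrate the ordering requirement in isolation (a generic sketch with made-up my_* names, not code from this patch):

/*
 * call_rcu() only queues the callback to run after a grace period;
 * rcu_barrier() is what waits for all already-queued callbacks to
 * finish.  Teardown of anything those callbacks use (here: the
 * kmem_cache being destroyed) must therefore call rcu_barrier() first.
 */
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct my_obj {
	struct rcu_head rcu;
};

static void my_free_rcu(struct rcu_head *head)
{
	kfree(container_of(head, struct my_obj, rcu));
}

static void my_retire(struct my_obj *obj)
{
	call_rcu(&obj->rcu, my_free_rcu);	/* queues, does not wait */
}

static void my_teardown(void)
{
	rcu_barrier();	/* wait for every queued my_free_rcu() to run */
	/* only now is it safe to destroy what the callbacks relied on */
}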
Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202601121442.c530bed3-lkp@intel.com
Fixes: 0f35040de593 ("mm/slab: introduce kvfree_rcu_barrier_on_cache() for cache destruction")
Cc: stable@vger.kernel.org
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 mm/slab_common.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index eed7ea556cb1..ee994ec7f251 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -2133,8 +2133,11 @@ EXPORT_SYMBOL_GPL(kvfree_rcu_barrier);
  */
 void kvfree_rcu_barrier_on_cache(struct kmem_cache *s)
 {
-	if (s->cpu_sheaves)
+	if (s->cpu_sheaves) {
 		flush_rcu_sheaves_on_cache(s);
+		rcu_barrier();
+	}
+
 	/*
 	 * TODO: Introduce a version of __kvfree_rcu_barrier() that works
 	 * on a specific slab cache.
On 1/12/26 16:16, Vlastimil Babka wrote:
> This v2 is the first non-RFC. I would consider exposing the series to linux-next at this point.
Well if only I didn't forget to remove the RFC prefix before sending...
Best regards,