From: Long Li <leo.lilong@huawei.com>
[ Upstream commit 2298abcbe11e9b553d03c0f1d084da786f7eff88 ]
When cache cleanup runs concurrently with cache entry removal, a race condition can occur that leads to incorrect nextcheck times. This can delay cache cleanup for the cache_detail by up to 1800 seconds:
1. cache_clean() sets nextcheck to current time plus 1800 seconds
2. While scanning a non-empty bucket, concurrent cache entry removal can empty that bucket
3. cache_clean() finds no cache entries in the now-empty bucket to update the nextcheck time
4. This may delay the next scan of the cache_detail by up to 1800 seconds, even when it should be scanned earlier based on the remaining entries
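For illustration, the sequence above can be replayed in a small userspace model. This is a sketch, not kernel code: bucket_len, entry_expiry, now, saw_nonempty and the cleaner_*/remover_* helpers are hypothetical stand-ins for the real cache_detail fields and code paths, and the steps are called in the racy order so the outcome is deterministic.

```c
/* Toy userspace replay of the race window; not kernel code. */
#include <stdbool.h>
#include <stdio.h>
#include <pthread.h>

static pthread_mutex_t hash_lock = PTHREAD_MUTEX_INITIALIZER;

static int  bucket_len   = 1;    /* one cache entry left in the bucket */
static long entry_expiry = 100;  /* that entry's expiry time           */
static long now          = 90;   /* stand-in for seconds_since_boot()  */
static long nextcheck;
static bool saw_nonempty;

/* Step 1: a cleanup pass over this cache_detail begins. */
static void cleaner_start_pass(void)
{
	nextcheck = now + 30 * 60;              /* now + 1800 seconds */
}

/* Step 2 (pre-fix): the bucket is inspected WITHOUT hash_lock held. */
static void cleaner_find_nonempty_bucket(void)
{
	saw_nonempty = (bucket_len > 0);
}

/* Concurrent removal empties the bucket under hash_lock, inside the
 * window left open by step 2. */
static void remover_empty_bucket(void)
{
	pthread_mutex_lock(&hash_lock);
	bucket_len = 0;
	pthread_mutex_unlock(&hash_lock);
}

/* Step 3: the cleaner finally takes hash_lock and walks the bucket,
 * but nothing is left whose expiry time could lower nextcheck. */
static void cleaner_clean_bucket(void)
{
	if (!saw_nonempty)
		return;
	pthread_mutex_lock(&hash_lock);
	if (bucket_len > 0 && nextcheck > entry_expiry)
		nextcheck = entry_expiry;       /* never reached */
	pthread_mutex_unlock(&hash_lock);
}

int main(void)
{
	cleaner_start_pass();
	cleaner_find_nonempty_bucket();
	remover_empty_bucket();
	cleaner_clean_bucket();

	/* Prints 1800: nextcheck was never lowered, so the next scan of
	 * this cache_detail is half an hour away (step 4 above). */
	printf("nextcheck - now = %lds\n", nextcheck - now);
	return 0;
}
```

Compile with `cc -pthread` if you want to run it; the mutex only marks which steps hold hash_lock in the real code.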
Fix this by moving the hash_lock acquisition earlier in cache_clean(). This ensures bucket emptiness checks and nextcheck updates happen atomically, preventing the race between cleanup and entry removal.
Signed-off-by: Long Li <leo.lilong@huawei.com>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
Now let me analyze this commit in detail based on what I've learned:

**ANALYSIS:**

**YES** - This commit should be backported to stable kernel trees.

## Detailed Analysis:

### 1. **Bug Impact Assessment**

This commit fixes a **race condition** that can cause cache cleanup delays of up to **1800 seconds (30 minutes)**. This is a significant performance regression that affects the SUNRPC cache subsystem's ability to clean expired entries in a timely manner.

### 2. **Root Cause Analysis**

The race condition occurs in the `cache_clean()` function between these steps:

- **Line 461**: `current_detail->nextcheck = seconds_since_boot()+30*60;` (sets nextcheck to current time + 1800 seconds)
- **Lines 466-469**: The code searches for non-empty buckets **without holding hash_lock**
- **Lines 479-496**: The code then acquires `hash_lock` and processes entries, updating nextcheck based on entry expiry times

**The race window**: Between lines 466-469 and 479, concurrent cache operations (like `sunrpc_cache_unhash()`) can remove entries from buckets, making them empty. When `cache_clean()` later acquires the hash_lock, it finds empty buckets and cannot update the nextcheck time appropriately.

### 3. **Code Change Analysis**

The fix is **minimal and surgical**:

```c
// BEFORE: hash_lock acquired late (line 479)
if (current_detail && current_index < current_detail->hash_size) {
	// ... variable declarations ...
	spin_lock(&current_detail->hash_lock);	// TOO LATE!

// AFTER: hash_lock acquired early (before bucket search)
spin_lock(&current_detail->hash_lock);
while (current_index < current_detail->hash_size &&
       hlist_empty(&current_detail->hash_table[current_index]))
	current_index++;
```

This ensures that:

1. **Atomicity**: Bucket emptiness checks and nextcheck updates happen atomically
2. **Race prevention**: No concurrent entry removal can happen during bucket scanning
3. **Correctness**: nextcheck times are updated based on the actual state of buckets at scan time

### 4. **Risk Assessment**

**Low Risk**:
- **Scope**: Changes only affect the locking order in `cache_clean()`
- **Backwards compatibility**: No API changes
- **Side effects**: Minimal - just holds the lock slightly longer, which is safe
- **Testing**: The change preserves existing logic while fixing the race

### 5. **Stable Tree Criteria Compliance**

✅ **Fixes important bug**: 30-minute cache cleanup delays significantly impact performance
✅ **Small and contained**: Only 15 changed diff lines (7 insertions, 8 deletions) in a single function
✅ **No architectural changes**: Just reorders lock acquisition
✅ **Minimal regression risk**: The change is conservative and preserves existing behavior
✅ **Clear side effects**: Only improves correctness, no negative impacts

### 6. **Comparison with Similar Commits**

Looking at the historical examples:
- **Similar to Commit #3 (YES)**: Fixes a race condition with minimal code changes and clear correctness improvement
- **Unlike Commits #1, #2, #4, #5 (NO)**: Those were either reverted, cosmetic, or changed behavior significantly

### 7. **Critical System Impact**

SUNRPC cache is used by:
- **NFS client/server operations**
- **RPC authentication caching**
- **Various network filesystem operations**

A 30-minute delay in cache cleanup can cause:
- Memory bloat from stale entries
- Performance degradation
- Potential service disruptions

**Conclusion**: This is a clear bug fix with minimal risk that addresses a significant performance issue in a critical kernel subsystem. It meets all criteria for stable tree backporting.
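As a complement to the atomicity point in section 3, the same kind of userspace model as above (again a sketch with hypothetical names, not kernel code) shows why the post-fix ordering closes the window: with the emptiness check and the bucket walk inside one hash_lock critical section, a concurrent remover can only run entirely before or entirely after the pass, so nextcheck always matches what the cleaner actually observed.

```c
/* Toy userspace model of the post-fix ordering; not kernel code. */
#include <stdio.h>
#include <pthread.h>

static pthread_mutex_t hash_lock = PTHREAD_MUTEX_INITIALIZER;

static int  bucket_len   = 1;    /* one cache entry in the bucket */
static long entry_expiry = 100;  /* that entry's expiry time      */
static long now          = 90;
static long nextcheck;

/* cache_clean()-style pass: emptiness check and walk under one lock. */
static void *cleaner(void *arg)
{
	(void)arg;
	nextcheck = now + 30 * 60;
	pthread_mutex_lock(&hash_lock);
	if (bucket_len > 0 && nextcheck > entry_expiry)
		nextcheck = entry_expiry;
	pthread_mutex_unlock(&hash_lock);
	return NULL;
}

/* Concurrent remover: it must wait for hash_lock, so it runs entirely
 * before or entirely after the cleaner's check-and-walk. */
static void *remover(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&hash_lock);
	bucket_len = 0;
	pthread_mutex_unlock(&hash_lock);
	return NULL;
}

int main(void)
{
	pthread_t c, r;

	pthread_create(&c, NULL, cleaner, NULL);
	pthread_create(&r, NULL, remover, NULL);
	pthread_join(c, NULL);
	pthread_join(r, NULL);

	/* Either the cleaner saw the entry (prints 10, entry_expiry - now)
	 * or the bucket was already empty when it looked (prints 1800,
	 * which is then the right answer); no stale-nextcheck window. */
	printf("nextcheck - now = %lds\n", nextcheck - now);
	return 0;
}
```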
 net/sunrpc/cache.c | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)
diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
index 0a91945db88fe..81a780c1226c9 100644
--- a/net/sunrpc/cache.c
+++ b/net/sunrpc/cache.c
@@ -451,24 +451,21 @@ static int cache_clean(void)
 		}
 	}
 
+	spin_lock(&current_detail->hash_lock);
+
 	/* find a non-empty bucket in the table */
-	while (current_detail &&
-	       current_index < current_detail->hash_size &&
+	while (current_index < current_detail->hash_size &&
 	       hlist_empty(&current_detail->hash_table[current_index]))
 		current_index++;
 
 	/* find a cleanable entry in the bucket and clean it, or set to next bucket */
-
-	if (current_detail && current_index < current_detail->hash_size) {
+	if (current_index < current_detail->hash_size) {
 		struct cache_head *ch = NULL;
 		struct cache_detail *d;
 		struct hlist_head *head;
 		struct hlist_node *tmp;
 
-		spin_lock(&current_detail->hash_lock);
-
 		/* Ok, now to clean this strand */
-
 		head = &current_detail->hash_table[current_index];
 		hlist_for_each_entry_safe(ch, tmp, head, cache_list) {
 			if (current_detail->nextcheck > ch->expiry_time)
@@ -489,8 +486,10 @@ static int cache_clean(void)
 		spin_unlock(&cache_list_lock);
 		if (ch)
 			sunrpc_end_cache_remove_entry(ch, d);
-	} else
+	} else {
+		spin_unlock(&current_detail->hash_lock);
 		spin_unlock(&cache_list_lock);
+	}
 
 	return rv;
 }