On Thu 14-07-22 17:17:02, Ritesh Harjani wrote:
> On 22/07/12 12:54PM, Jan Kara wrote:
> > Do not reclaim entries that are currently used by somebody from a
> > shrinker. Firstly, these entries are likely useful. Secondly, we will
> > need to keep such entries to protect pending increment of xattr block
> > refcount.
> > 
> > CC: stable@vger.kernel.org
> > Fixes: 82939d7999df ("ext4: convert to mbcache2")
> > Signed-off-by: Jan Kara <jack@suse.cz>
> > ---
> >  fs/mbcache.c | 10 +++++++++-
> >  1 file changed, 9 insertions(+), 1 deletion(-)
> > 
> > diff --git a/fs/mbcache.c b/fs/mbcache.c
> > index 97c54d3a2227..cfc28129fb6f 100644
> > --- a/fs/mbcache.c
> > +++ b/fs/mbcache.c
> > @@ -288,7 +288,7 @@ static unsigned long mb_cache_shrink(struct mb_cache *cache,
> >  	while (nr_to_scan-- && !list_empty(&cache->c_list)) {
> >  		entry = list_first_entry(&cache->c_list,
> >  					 struct mb_cache_entry, e_list);
> > -		if (entry->e_referenced) {
> > +		if (entry->e_referenced || atomic_read(&entry->e_refcnt) > 2) {
> >  			entry->e_referenced = 0;
> >  			list_move_tail(&entry->e_list, &cache->c_list);
> >  			continue;
> > @@ -302,6 +302,14 @@ static unsigned long mb_cache_shrink(struct mb_cache *cache,
> >  		spin_unlock(&cache->c_list_lock);
> >  		head = mb_cache_entry_head(cache, entry->e_key);
> >  		hlist_bl_lock(head);
> > +		/* Now a reliable check if the entry didn't get used... */
> > +		if (atomic_read(&entry->e_refcnt) > 2) {
> 
> Taking a look at this patchset again: if we move this "if" condition
> checking the refcnt up, i.e. before we delete the entry from c_list,
> then we can avoid removing the entry, checking its refcnt under the
> lock, and adding it back if the refcnt turns out to be elevated.
> 
> Thoughts?
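
Concretely, that reordering would look something like this (an untested
sketch of mine, not code from the series; note that a reliable refcnt
check needs the hash list lock, so hlist_bl_lock() ends up nested inside
c_list_lock):

	spin_lock(&cache->c_list_lock);
	while (nr_to_scan-- && !list_empty(&cache->c_list)) {
		entry = list_first_entry(&cache->c_list,
					 struct mb_cache_entry, e_list);
		if (entry->e_referenced) {
			entry->e_referenced = 0;
			list_move_tail(&entry->e_list, &cache->c_list);
			continue;
		}
		/*
		 * Check "is the entry used?" before taking it off the LRU.
		 * To be reliable the check must be done under the hash list
		 * lock -- which this version acquires while still holding
		 * c_list_lock.
		 */
		head = mb_cache_entry_head(cache, entry->e_key);
		hlist_bl_lock(head);
		if (atomic_read(&entry->e_refcnt) > 2) {
			hlist_bl_unlock(head);
			list_move_tail(&entry->e_list, &cache->c_list);
			continue;
		}
		list_del_init(&entry->e_list);
		cache->c_entry_count--;
		spin_unlock(&cache->c_list_lock);
		/* ... unhash and free the entry as before ... */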
Well, but the synchronization would get more complicated because we don't
want to acquire hlist_bl_lock() under c_list_lock (technically we could at
this point in the series, but it would make life harder for the last patch
in the series). And we need c_list_lock to remove the entry from the LRU
list. It could all be done, but I don't think what you suggest is really
that much simpler, and this code goes away later in the patchset anyway...
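
For contrast, a sketch of the ordering the patch keeps (my reconstruction
from the quoted hunk plus the re-add step discussed above, so not
authoritative; the "2" in the refcount check being, as I read it, the hash
table's reference plus the LRU list's):

		/* Removing the entry from the LRU needs c_list_lock... */
		list_del_init(&entry->e_list);
		cache->c_entry_count--;
		/* ...which must be dropped before the hash list lock. */
		spin_unlock(&cache->c_list_lock);

		head = mb_cache_entry_head(cache, entry->e_key);
		hlist_bl_lock(head);
		/* Now a reliable check if the entry didn't get used... */
		if (atomic_read(&entry->e_refcnt) > 2) {
			/* It did get used: undo the LRU removal. */
			hlist_bl_unlock(head);
			spin_lock(&cache->c_list_lock);
			list_add_tail(&entry->e_list, &cache->c_list);
			cache->c_entry_count++;
			continue;
		}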
Honza