In raid5_cache_count():
	if (conf->max_nr_stripes < conf->min_nr_stripes)
		return 0;
	return conf->max_nr_stripes - conf->min_nr_stripes;
The current check is ineffective, as the values could change immediately
after being checked.
In raid5_set_cache_size():
	...
	conf->min_nr_stripes = size;
	...
	while (size > conf->max_nr_stripes)
		conf->min_nr_stripes = conf->max_nr_stripes;
	...
Because raid5_set_cache_size() updates conf->min_nr_stripes through
intermediate values, concurrent execution of raid5_cache_count() and
raid5_set_cache_size() may read inconsistent snapshots of
conf->max_nr_stripes and conf->min_nr_stripes. The check in
raid5_cache_count() offers no protection, since either value can change
immediately after it is tested; conf->min_nr_stripes can then exceed
conf->max_nr_stripes, and the subtraction overflows, returning a huge
unsigned shrinker count.
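To make the consequence concrete, here is a minimal userspace sketch (an
illustration only, not kernel code; the field values are made up) showing
that once a racing update lets min_nr_stripes exceed max_nr_stripes, the
subtraction that raid5_cache_count() returns wraps to an enormous
unsigned count:

	#include <stdio.h>

	int main(void)
	{
		/* Snapshot that a racing raid5_set_cache_size() could
		 * leave behind between the reader's check and its
		 * subtraction. */
		int max_nr_stripes = 256;
		int min_nr_stripes = 257;

		/* raid5_cache_count() returns unsigned long, so the
		 * negative int difference wraps to a huge object count. */
		unsigned long count = max_nr_stripes - min_nr_stripes;

		printf("shrinker sees %lu objects\n", count); /* ~2^64 */
		return 0;
	}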
This possible bug was found by an experimental static analysis tool
developed by our team. The tool analyzes locking APIs to extract pairs of
functions that can execute concurrently, and then analyzes the
instructions in the paired functions to identify possible concurrency
bugs, including data races and atomicity violations. The above possible
bug was reported when our tool analyzed the source code of Linux 6.2.
To resolve this issue, this patch introduces local variables
'min_stripes' and 'max_stripes' in raid5_cache_count(), so the values
remain stable throughout the check. Adding a lock in raid5_cache_count()
would not fix the atomicity violation, because raid5_set_cache_size() may
still hold an intermediate value of conf->min_nr_stripes while outside
the mutex. With this patch applied, our tool no longer reports the bug,
with the kernel configuration allyesconfig for x86_64. Due to the lack of
the associated hardware we could not do runtime testing, and have
verified the patch against the code logic only.
Fixes: edbe83ab4c27 ("md/raid5: allow the stripe_cache to grow and shrink.")
Cc: stable@vger.kernel.org
Signed-off-by: Gui-Dong Han <2045gemini@gmail.com>
---
v2:
* Use READ_ONCE() instead of direct reads for max_nr_stripes and
  min_nr_stripes, since the reads and writes can be concurrent. Thanks
  to Yu Kuai for the helpful advice.
---
v3:
* Use WRITE_ONCE() in raid5_set_cache_size(), grow_one_stripe() and
  drop_one_stripe(), so that each READ_ONCE() is paired with a
  WRITE_ONCE(). Thanks to Yu Kuai for the helpful advice.
---
v4:
* Address several code style issues. Thanks to Yu Kuai for the helpful
  advice.
---
 drivers/md/raid5.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 8497880135ee..30e118d10c0b 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -2412,7 +2412,7 @@ static int grow_one_stripe(struct r5conf *conf, gfp_t gfp)
 	atomic_inc(&conf->active_stripes);

 	raid5_release_stripe(sh);
-	conf->max_nr_stripes++;
+	WRITE_ONCE(conf->max_nr_stripes, conf->max_nr_stripes + 1);
 	return 1;
 }

@@ -2707,7 +2707,7 @@ static int drop_one_stripe(struct r5conf *conf)
 	shrink_buffers(sh);
 	free_stripe(conf->slab_cache, sh);
 	atomic_dec(&conf->active_stripes);
-	conf->max_nr_stripes--;
+	WRITE_ONCE(conf->max_nr_stripes, conf->max_nr_stripes - 1);
 	return 1;
 }

@@ -6820,7 +6820,7 @@ raid5_set_cache_size(struct mddev *mddev, int size)
 	if (size <= 16 || size > 32768)
 		return -EINVAL;

-	conf->min_nr_stripes = size;
+	WRITE_ONCE(conf->min_nr_stripes, size);
 	mutex_lock(&conf->cache_size_mutex);
 	while (size < conf->max_nr_stripes &&
 	       drop_one_stripe(conf))
@@ -6832,7 +6832,7 @@ raid5_set_cache_size(struct mddev *mddev, int size)
 	mutex_lock(&conf->cache_size_mutex);
 	while (size > conf->max_nr_stripes)
 		if (!grow_one_stripe(conf, GFP_KERNEL)) {
-			conf->min_nr_stripes = conf->max_nr_stripes;
+			WRITE_ONCE(conf->min_nr_stripes, conf->max_nr_stripes);
 			result = -ENOMEM;
 			break;
 		}
@@ -7390,11 +7390,13 @@ static unsigned long raid5_cache_count(struct shrinker *shrink,
 				       struct shrink_control *sc)
 {
 	struct r5conf *conf = shrink->private_data;
+	int max_stripes = READ_ONCE(conf->max_nr_stripes);
+	int min_stripes = READ_ONCE(conf->min_nr_stripes);

-	if (conf->max_nr_stripes < conf->min_nr_stripes)
+	if (max_stripes < min_stripes)
 		/* unlikely, but not impossible */
 		return 0;
-	return conf->max_nr_stripes - conf->min_nr_stripes;
+	return max_stripes - min_stripes;
 }

 static struct r5conf *setup_conf(struct mddev *mddev)
On 2024/01/12 15:10, Gui-Dong Han wrote:
> [...]
LGTM
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
On Thu, Jan 11, 2024 at 11:10 PM Gui-Dong Han <2045gemini@gmail.com> wrote:
>
> [...]
>
>  	raid5_release_stripe(sh);
> -	conf->max_nr_stripes++;
> +	WRITE_ONCE(conf->max_nr_stripes, conf->max_nr_stripes + 1);

This is weird. We are reading max_nr_stripes without READ_ONCE.

>  	return 1;
>  }
>
> @@ -2707,7 +2707,7 @@ static int drop_one_stripe(struct r5conf *conf)
>  	shrink_buffers(sh);
>  	free_stripe(conf->slab_cache, sh);
>  	atomic_dec(&conf->active_stripes);
> -	conf->max_nr_stripes--;
> +	WRITE_ONCE(conf->max_nr_stripes, conf->max_nr_stripes - 1);

Same here.

>  	return 1;
>  }
>
> @@ -6820,7 +6820,7 @@ raid5_set_cache_size(struct mddev *mddev, int size)
>  	if (size <= 16 || size > 32768)
>  		return -EINVAL;
>
> -	conf->min_nr_stripes = size;
> +	WRITE_ONCE(conf->min_nr_stripes, size);
>  	mutex_lock(&conf->cache_size_mutex);
>  	while (size < conf->max_nr_stripes &&
>  	       drop_one_stripe(conf))
>
> @@ -6832,7 +6832,7 @@ raid5_set_cache_size(struct mddev *mddev, int size)
>  	mutex_lock(&conf->cache_size_mutex);
>  	while (size > conf->max_nr_stripes)
>  		if (!grow_one_stripe(conf, GFP_KERNEL)) {
> -			conf->min_nr_stripes = conf->max_nr_stripes;
> +			WRITE_ONCE(conf->min_nr_stripes, conf->max_nr_stripes);

And here.

> 			result = -ENOMEM;
> 			break;
> 		}

Thanks,
Song
Hi,

On 2024/01/30 15:37, Song Liu wrote:
> On Thu, Jan 11, 2024 at 11:10 PM Gui-Dong Han <2045gemini@gmail.com> wrote:
>> [...]
>>
>>  	raid5_release_stripe(sh);
>> -	conf->max_nr_stripes++;
>> +	WRITE_ONCE(conf->max_nr_stripes, conf->max_nr_stripes + 1);
>
> This is weird. We are reading max_nr_stripes without READ_ONCE.

We don't need READ_ONCE() here because writers are protected by
'cache_size_mutex': there are no concurrent writers, so it's safe to read
'max_nr_stripes' directly.

Thanks,
Kuai
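[Editor's note: the asymmetry Kuai describes can be sketched in userspace.
In this sketch the names grow_one() and cache_count() are hypothetical
stand-ins, READ_ONCE()/WRITE_ONCE() are emulated with volatile casts as
the kernel does, and a pthread mutex stands in for cache_size_mutex; only
the lockless reader needs READ_ONCE(), while the mutex-serialized writer
may read the field plainly.]

	#include <pthread.h>
	#include <stdio.h>

	/* Userspace emulation of the kernel's one-time accessors. */
	#define READ_ONCE(x)	 (*(volatile __typeof__(x) *)&(x))
	#define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

	static pthread_mutex_t cache_size_mutex = PTHREAD_MUTEX_INITIALIZER;
	static int max_nr_stripes = 256;

	/* Writer side: serialized by the mutex, so its own read of
	 * max_nr_stripes can be plain; WRITE_ONCE() only ensures that
	 * lockless readers never observe a torn or re-fetched value. */
	static void grow_one(void)
	{
		pthread_mutex_lock(&cache_size_mutex);
		WRITE_ONCE(max_nr_stripes, max_nr_stripes + 1);
		pthread_mutex_unlock(&cache_size_mutex);
	}

	/* Reader side: runs without the mutex, so it must snapshot the
	 * field once with READ_ONCE(), as raid5_cache_count() now does. */
	static int cache_count(void)
	{
		return READ_ONCE(max_nr_stripes);
	}

	int main(void)
	{
		grow_one();
		printf("%d\n", cache_count());
		return 0;
	}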
On Thu, Feb 1, 2024 at 11:11 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> [...]
>
> We don't need READ_ONCE() here because writers are protected by
> 'cache_size_mutex', there are no concurrent writers, it's safe to read
> 'max_nr_stripes' directly.
OK, that makes sense. Applied to md-6.9.
Thanks,
Song