From: Thorsten Blum <thorsten.blum@linux.dev>
[ Upstream commit a7f3dfb8293c4cee99743132d69863a92e8f4875 ]
Replace max_t() followed by min_t() with a single clamp().
As pointed out by David Laight in https://lore.kernel.org/linux-btrfs/20250906122458.75dfc8f0@pumpkin/ the calculation may overflow u32 when the input value is too large, so clamp_t() is not used. In practice the expected values are in the range of megabytes to gigabytes (the throughput limit), so the bug would not happen.
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Reviewed-by: David Sterba <dsterba@suse.com>
[ Use clamp() and add explanation. ]
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
LLM Generated explanations, may be completely bogus:
**Backport Status: YES**
## Analysis
### Change Classification

This commit is a **code quality improvement** that also addresses a **theoretical overflow bug**. It replaces two lines with a single, cleaner `clamp()` call:
**Before (fs/btrfs/scrub.c:1372-1373):**

```c
div = max_t(u32, 1, (u32)(bwlimit / (16 * 1024 * 1024)));
div = min_t(u32, 64, div);
```
**After:**

```c
div = clamp(bwlimit / (16 * 1024 * 1024), 1, 64);
```
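For intuition (worked arithmetic, not taken from the commit message): the quotient `bwlimit / (16 * 1024 * 1024)` counts 16 MiB intervals per second, so a limit below 32 MiB/s yields `div = 1`, 256 MiB/s yields `div = 16`, and anything at or above 1 GiB/s saturates at `div = 64` — the same values the old max/min pair produced for these realistic inputs.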
### Key Points
1. **Already Backported**: This commit has already been backported to stable (commit 185af233e0914) by Sasha Levin on Oct 1, 2025, confirming it meets stable backport criteria.
2. **Bug Fixed (Theoretical)**: The original code casts `(bwlimit / (16 * 1024 * 1024))` to u32, which could overflow if bwlimit exceeds ~64 PiB/s (see the sketch after this list). However, as the commit message explicitly states: "In practice the expected values are in range of megabytes to gigabytes (throughput limit) so the bug would not happen."
3. **Real Improvement**: By using `clamp()` instead of `clamp_t()`, the code avoids the explicit u32 cast, allowing the macro to handle types correctly. This was specifically recommended by David Laight in the mailing list discussion.
4. **Low Risk**:
   - Minimal code change (3 lines: +1, -2)
   - Functionality remains identical for all realistic values
   - No regression reports or follow-up fixes found
   - Reviewed by David Sterba (btrfs maintainer)
5. **Code Quality**: Improves readability by consolidating the min/max pattern into a single, more expressive `clamp()` call - a common kernel code modernization.
6. **Stable Tree Criteria**:
   - ✓ Small, contained change
   - ✓ No architectural modifications
   - ✓ Minimal regression risk
   - ✓ Fixes a (theoretical) bug
   - ✓ Already proven safe through upstream testing
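To make the overflow scenario from point 2 concrete, here is a minimal, hypothetical userspace sketch (not kernel code; the 2^56 B/s input and the variable names are assumptions chosen purely so the quotient no longer fits in a u32):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Deliberately unrealistic throughput limit: 2^56 B/s (~64 PiB/s). */
	uint64_t bwlimit = 1ULL << 56;

	/* Full 64-bit quotient: 2^56 / 2^24 = 2^32, which needs 33 bits. */
	uint64_t quotient = bwlimit / (16 * 1024 * 1024);

	/* Old code path: the (u32) cast truncates 2^32 to 0, max/min then gives 1. */
	uint32_t truncated = (uint32_t)quotient;
	uint32_t old_div = truncated < 1 ? 1 : truncated;
	old_div = old_div > 64 ? 64 : old_div;

	/* New code path: clamping the untruncated quotient saturates at 64. */
	uint64_t new_div = quotient < 1 ? 1 : (quotient > 64 ? 64 : quotient);

	printf("quotient=%llu old_div=%u new_div=%llu\n",
	       (unsigned long long)quotient, old_div,
	       (unsigned long long)new_div);
	return 0;
}
```

For such an extreme input the old sequence would collapse `div` to 1 (a single huge interval), while `clamp()` on the full-width quotient correctly yields 64; for any realistic bwlimit both versions compute the same value.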
### Conclusion

While this primarily improves code quality rather than fixing a critical bug, it addresses a legitimate (if theoretical) overflow issue identified during code review. The fact that it has already been selected for stable backport by the stable tree maintainer confirms its suitability. The change is extremely low-risk and represents the type of defensive programming improvement appropriate for stable trees.
 fs/btrfs/scrub.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index 6776e6ab8d108..fd4c1ca34b5e4 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -1369,8 +1369,7 @@ static void scrub_throttle_dev_io(struct scrub_ctx *sctx, struct btrfs_device *d
 	 * Slice is divided into intervals when the IO is submitted, adjust by
 	 * bwlimit and maximum of 64 intervals.
 	 */
-	div = max_t(u32, 1, (u32)(bwlimit / (16 * 1024 * 1024)));
-	div = min_t(u32, 64, div);
+	div = clamp(bwlimit / (16 * 1024 * 1024), 1, 64);

 	/* Start new epoch, set deadline */
 	now = ktime_get();