On Aug 31, 2018, at 11:41 AM, Theodore Ts'o <tytso@mit.edu> wrote:
A maliciously crafted file system can cause an overflow when the result of a 64-bit calculation is stored into a 32-bit length parameter.
https://bugzilla.kernel.org/show_bug.cgi?id=200623
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Reported-by: Wen Xu <wen.xu@gatech.edu>
Cc: stable@vger.kernel.org

 fs/ext4/inode.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 8f6ad7667974..1134c3473673 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -3414,6 +3414,7 @@ static int ext4_iomap_begin(struct inode *inode,
 	unsigned int blkbits = inode->i_blkbits;
 	unsigned long first_block = offset >> blkbits;
 	unsigned long last_block = (offset + length - 1) >> blkbits;
+	unsigned long len;
 	struct ext4_map_blocks map;
 	bool delalloc = false;
 	int ret;
@@ -3434,7 +3435,8 @@ static int ext4_iomap_begin(struct inode *inode,
 	}

 	map.m_lblk = first_block;
-	map.m_len = last_block - first_block + 1;
+	len = last_block - first_block + 1;
+	map.m_len = (len < UINT_MAX) ? len : UINT_MAX;
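
A minimal userspace sketch of the overflow the patch guards against, assuming an LP64 build where "unsigned long" is 64 bits; the struct, block size, and request length below are illustrative stand-ins, not the ext4 code itself:

#include <limits.h>
#include <stdio.h>

/* Stand-in for ext4_map_blocks: m_len is a 32-bit unsigned int. */
struct fake_map_blocks {
	unsigned long first_lblk;
	unsigned int m_len;
};

int main(void)
{
	unsigned int blkbits = 12;                  /* 4 KiB blocks */
	unsigned long long offset = 0;
	unsigned long long length = 1ULL << 44;     /* oversized request */

	/* Mirrors the LP64 kernel types: unsigned long is 64 bits here. */
	unsigned long first_block = offset >> blkbits;
	unsigned long last_block = (offset + length - 1) >> blkbits;
	unsigned long len;
	struct fake_map_blocks map;

	map.first_lblk = first_block;

	/* Unpatched: the 64-bit result is silently truncated to 32 bits. */
	map.m_len = last_block - first_block + 1;
	printf("unpatched m_len = %u\n", map.m_len);

	/* Patched: clamp before the narrowing assignment. */
	len = last_block - first_block + 1;
	map.m_len = (len < UINT_MAX) ? len : UINT_MAX;
	printf("patched   m_len = %u\n", map.m_len);

	return 0;
}

With these illustrative numbers the unpatched assignment wraps to 0, while the clamped version reports UINT_MAX.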
Wouldn't "(len < UINT_MAX)" always be true on a 32-bit system, or is there some other limitation in that case (e.g. filesystem < 16TB) that prevents it from being an issue? Otherwise, this should use "unsigned long long len".
Cheers, Andreas
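
A compilable sketch of the 32-bit concern above; the values and the uint32_t stand-in for a 32-bit "unsigned long" are illustrative assumptions, not the kernel types. On an ILP32 target the subtraction wraps before the (len < UINT_MAX) test ever sees an out-of-range value, whereas doing the arithmetic in unsigned long long lets the clamp fire:

#include <limits.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	unsigned int blkbits = 12;
	long long offset = 0;                  /* iomap passes 64-bit loff_t */
	long long length = 1LL << 44;

	/* uint32_t stands in for a 32-bit "unsigned long" (ILP32). */
	uint32_t first_block = (uint32_t)(offset >> blkbits);
	uint32_t last_block = (uint32_t)((offset + length - 1) >> blkbits);
	uint32_t len = last_block - first_block + 1;   /* wraps to 0 here */
	unsigned int m_len = (len < UINT_MAX) ? len : UINT_MAX;

	printf("32-bit len = %u, clamp taken: %s, m_len = %u\n",
	       (unsigned int)len, len < UINT_MAX ? "no" : "yes", m_len);

	/* With 64-bit arithmetic the clamp works as intended. */
	unsigned long long len64 = ((unsigned long long)(offset + length - 1)
				    >> blkbits) - (offset >> blkbits) + 1;
	m_len = (len64 < UINT_MAX) ? (unsigned int)len64 : UINT_MAX;
	printf("64-bit len = %llu, m_len = %u\n", len64, m_len);

	return 0;
}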