[ Sorry for forgetting to Cc dgilmore on this reply; it would be much better/more helpful to have a "Tested-by:" for v2... Let me resend, sorry for the noise. ]
Hi,
On Thu, Nov 12, 2020 at 10:30:04AM -0800, Darrick J. Wong wrote:
On Thu, Nov 12, 2020 at 09:53:53AM -0600, Eric Sandeen wrote:
On 11/12/20 12:30 AM, Gao Xiang wrote:
Commit e9e2eae89ddb dropped the (int) cast from XFS_LITINO(mp), and since a sizeof() expression is involved, the result of XFS_LITINO(mp) now simply has type size_t (commonly unsigned long).
Thanks for finding this! Let me think through it a little.
Consider the expression in xfs_attr_shortform_bytesfit():

  offset = (XFS_LITINO(mp) - bytes) >> 3;

Let "bytes" be (int)340, and "XFS_LITINO(mp)" be (unsigned long)336.
On a 64-bit platform, the expression is

  offset = ((unsigned long)336 - (int)340) >> 8 =
This should be >> 3, right.
(int)(0xfffffffffffffffcUL >> 3) = -1
but on a 32-bit platform, the expression is

  offset = ((unsigned long)336 - (int)340) >> 8 =
and >> 3 here as well.
(int)(0xfffffffcUL >> 3) = 0x1fffffff
instead.
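For reference, the promotion behavior can be reproduced outside the kernel with a small stand-alone program (the constants 336/340 are taken from the example above; this is only a sketch, not kernel code):

#include <stdio.h>

int main(void)
{
	size_t litino = 336;	/* XFS_LITINO(mp) once the (int) cast is dropped */
	int bytes = 340;	/* total attr fork bytes requested */

	/* Buggy form: the subtraction happens in size_t, wraps to a huge
	 * unsigned value, and the >> 3 is a logical shift. */
	int buggy = (int)((litino - bytes) >> 3);

	/* Fixed form: converting back to int first recovers -4, and the
	 * shift is then an arithmetic shift on a signed value. */
	int fixed = (int)(litino - bytes) >> 3;

	/* On a 64-bit platform "buggy" happens to truncate back to -1,
	 * but on a 32-bit platform it is 0x1fffffff; "fixed" is -1 on
	 * both (assuming the usual two's-complement behavior). */
	printf("buggy = %d, fixed = %d\n", buggy, fixed);
	return 0;
}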
Ok. But wow, that magical cast to (int) in XFS_LITINO isn't at all clear to me.
XFS_LITINO() indicates a /size/ - fixing this problem by making it a signed value feels very strange, but I'm not sure what would be better, yet.
TBH I think this needs to be cleaned up -- what is "LITINO", anyway?
I'm pretty sure it's the size of the literal area, aka everything after the inode core, where the forks live?
And, uh, can these things get turned into static inline helpers instead of weird macros? Separate patches, obviously.
Thanks, I've addressed all comments in these two reviews in v2: https://lore.kernel.org/r/20201113015044.844213-1-hsiangkao@redhat.com
As for LITINO itself, btw, what would be a suggested name for it? I'm not good at naming; I'll find time to work on cleaning this up.
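For the eventual cleanup, one possible shape would be a typed helper along these lines (hypothetical name and placement, just a sketch, not part of this patch):

/* Hypothetical static inline replacement for the XFS_LITINO() macro:
 * returns the size of the literal area (everything after the inode
 * core, where the forks live) as a plain int. */
static inline int
xfs_litino(struct xfs_mount *mp)
{
	return (int)(mp->m_sb.sb_inodesize - XFS_DINODE_SIZE(&mp->m_sb));
}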
Therefore, one result is the "ASSERT(new_size <= XFS_IFORK_SIZE(ip, whichfork));" assertion failure in xfs_idata_realloc(), which can be triggered on a 32-bit platform with the following commands:

  touch a
  setfattr -n user.0 -v "`seq 0 80`" a
  setfattr -n user.1 -v "`seq 0 80`" a
Can you please write an xfstest for this? :)
Seconded.
Will find time for this later as well.
If this is the fix for the corruption that dgilmore reported on IRC, this should credit him as the reporter and/or tester, especially because I thought this was just an "oh, I found this via code review" fix until someone else pointed out that it actually fixes something a user hit in the field.
The difference is that we're headed towards -rc4 and I'm much more willing to hold up posting the xfs-5.11-merge branch to get in fixes for user-reported problems.
Yeah, sorry for not mentioning the original bug report; I thought the Bugzilla entry couldn't be viewed publicly: https://bugzilla.redhat.com/show_bug.cgi?id=1894177
It would be great to get a "Tested-by:" tag confirming the original case against v2. :)
Fix it by restoring the (int) cast to XFS_LITINO(mp), since the int type is safe for XFS_LITINO(mp) and keeps all pre-existing signed calculations correct.
Fixes: e9e2eae89ddb ("xfs: only check the superblock version for dinode size calculation")
Cc: stable@vger.kernel.org # 5.7+
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
I'm not sure whether this is the preferred way, or whether we should just fix xfs_attr_shortform_bytesfit(), since I haven't looked into the rest of the XFS_LITINO(mp) users. Adding (int) to XFS_LITINO(mp) will at least avoid any other potential regressions.
 fs/xfs/libxfs/xfs_format.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/xfs/libxfs/xfs_format.h b/fs/xfs/libxfs/xfs_format.h
index dd764da08f6f..f58f0a44c024 100644
--- a/fs/xfs/libxfs/xfs_format.h
+++ b/fs/xfs/libxfs/xfs_format.h
@@ -1061,7 +1061,7 @@ enum xfs_dinode_fmt {
 	 sizeof(struct xfs_dinode) : \
 	 offsetof(struct xfs_dinode, di_crc))
 #define XFS_LITINO(mp) \
-	((mp)->m_sb.sb_inodesize - XFS_DINODE_SIZE(&(mp)->m_sb))
+	((int)((mp)->m_sb.sb_inodesize - XFS_DINODE_SIZE(&(mp)->m_sb)))
 
 /*
  * Inode data & attribute fork sizes, per inode.
If we do keep the (int) cast here we at least need a comment explaining why it cannot be removed, unless there is a better way to solve the problem.
It's still weird, because "size of literal inode area" isn't a signed quantity; you can't have a negative size.
I'm fine either way, since my starting point was to address the regression from e9e2eae89ddb, as I mentioned on IRC. It could also simply be fixed directly.
Thanks, Gao Xiang
I wonder if explicitly making XFS_LITINO() cast to a size_t would be best, and then in xfs_attr_shortform_bytesfit() we just quickly reject the query if the bytes are larger than the literal area:
diff --git a/fs/xfs/libxfs/xfs_attr_leaf.c b/fs/xfs/libxfs/xfs_attr_leaf.c
index bb128db..5588c2e 100644
--- a/fs/xfs/libxfs/xfs_attr_leaf.c
+++ b/fs/xfs/libxfs/xfs_attr_leaf.c
@@ -535,6 +535,10 @@ STATIC void xfs_attr3_leaf_moveents(struct xfs_da_args *args,
 	int			maxforkoff;
 	int			offset;
 
+	/* Is there no chance we can fit? */
+	if (bytes > XFS_LITINO(mp))
+		return 0;
+
 	/* rounded down */
 	offset = (XFS_LITINO(mp) - bytes) >> 3;
So if LITINO is 336 and the caller asked for 350 bytes, offset will be negative here. However, offset is the proposed forkoff, right? It doesn't make any sense to have a negative forkoff, so I think Eric's (bytes > LITINO) suggestion above is correct.
This patch was hard to review because the comment for xfs_attr_shortform_bytesfit mentions "...the requested number of additional bytes", but the bytes parameter represents the total number of attr fork bytes we want, not a delta over what we have right now. Can someone please fix that comment too?
--D
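As a sketch, the reworded comment could say something like this (hypothetical wording, not the in-tree text):

/*
 * Check whether "bytes" (the TOTAL attr fork size we want, not a
 * delta over the current size) fits in the inode's literal area.
 * Returns the largest usable di_forkoff in 8-byte units, or 0 if
 * the attributes will not fit inline.
 */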
or, maybe simply:
-	offset = (XFS_LITINO(mp) - bytes) >> 3;
+	offset = (int)(XFS_LITINO(mp) - bytes) >> 3;
to be sure that the arithmetic doesn't get promoted to unsigned before the shift?
or maybe others have better ideas.
-Eric