On Thu, Sep 05, 2019 at 07:26:22PM +1000, Aleksa Sarai wrote:
On 2019-09-05, Peter Zijlstra peterz@infradead.org wrote:
On Thu, Sep 05, 2019 at 06:19:22AM +1000, Aleksa Sarai wrote:
/**
 * copy_struct_to_user: copy a struct to user space
 * @dst:   Destination address, in user space.
 * @usize: Size of @dst struct.
 * @src:   Source address, in kernel space.
 * @ksize: Size of @src struct.
 *
 * Copies a struct from kernel space to user space, in a way that guarantees
 * backwards-compatibility for struct syscall arguments (as long as future
 * struct extensions are made such that all new fields are *appended* to the
 * old struct, and zeroed-out new fields have the same meaning as the old
 * struct).
 *
 * @ksize is just sizeof(*src), and @usize should've been passed by user space.
 * The recommended usage is something like the following:
 *   SYSCALL_DEFINE2(foobar, struct foo __user *, uarg, size_t, usize)
 *   {
 *           int err;
 *           struct foo karg = {};
 *
 *           // do something with karg
 *
 *           err = copy_struct_to_user(uarg, usize, &karg, sizeof(karg));
 *           if (err)
 *                   return err;
 *
 *           // ...
 *   }
 * There are three cases to consider:
 *  - If @usize == @ksize, then it's copied verbatim.
 *  - If @usize < @ksize, then kernel space is "returning" a newer struct to an
 *    older user space. In order to avoid user space getting incomplete
 *    information (new fields might be important), all trailing bytes in @src
 *    (@ksize - @usize) must be zerored
s/zerored/zero/, right?
It should've been "zeroed".
That reads wrong to me; phrased that way, it sounds like this function must take that action and zero out the 'rest', which is just wrong. This function must verify those bytes are zero, not make them zero.
, otherwise -EFBIG is returned.
'Funny' that, copy_struct_from_user() below seems to use E2BIG.
This is a copy of the semantics that sched_[sg]etattr(2) uses -- E2BIG for a "too big" struct passed to the kernel, and EFBIG for a "too big" struct passed to user-space. I would personally have preferred EMSGSIZE instead of EFBIG, but felt using the existing error codes would be less confusing.
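For what it's worth, the intended contract can be sketched in plain user-space C. This is only an illustration of the semantics (the model_* name is made up; error values come from <errno.h>), not the kernel implementation:

```c
#include <errno.h>
#include <stddef.h>
#include <string.h>

/*
 * User-space model of copy_struct_to_user(): copy min(usize, ksize)
 * bytes. If the user struct is larger, zero-fill its tail; if the
 * kernel struct is larger, the dropped trailing bytes must already
 * be zero, otherwise fail with EFBIG.
 */
static int model_copy_struct_to_user(void *dst, size_t usize,
				     const void *src, size_t ksize)
{
	size_t size = usize < ksize ? usize : ksize;

	if (usize > ksize) {
		/* Older kernel struct: zero the trailing user bytes. */
		memset((char *)dst + ksize, 0, usize - ksize);
	} else if (usize < ksize) {
		/* Newer kernel struct: verify the dropped tail is zero. */
		const char *p = (const char *)src + usize;

		for (size_t i = 0; i < ksize - usize; i++)
			if (p[i] != 0)
				return -EFBIG;
	}
	memcpy(dst, src, size);
	return 0;
}
```

Note that the "verify, don't zero" direction Peter pointed out is exactly the @usize < @ksize branch here.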
Sadly a recent commit:

  1251201c0d34 ("sched/core: Fix uclamp ABI bug, clean up and robustify sched_read_attr() ABI logic and code")

made the situation even 'worse'.
	if (unlikely(!access_ok(src, usize)))
		return -EFAULT;

	/* Deal with trailing bytes. */
	if (usize < ksize) {
		memset(dst + size, 0, rest);
	} else if (usize > ksize) {
		const void __user *addr = src + size;
		char buffer[BUFFER_SIZE] = {};
Isn't that too big for on-stack?
Is a 64-byte buffer too big? I picked the number "at random" to be the size of a cache line, but I could shrink it down to 32 bytes if the size is an issue (I wanted to avoid needless allocations -- hence it being on-stack).
Ah, my ctags gave me a definition of BUFFER_SIZE that was 512. I suppose 64 should be OK.
		while (rest > 0) {
			size_t bufsize = min(rest, sizeof(buffer));

			if (__copy_from_user(buffer, addr, bufsize))
				return -EFAULT;
			if (memchr_inv(buffer, 0, bufsize))
				return -E2BIG;

			addr += bufsize;
			rest -= bufsize;
		}
The perf implementation uses get_user(); but if that is too slow, surely we can do something with uaccess_try() here?
Is there a non-x86-specific way to do that (unless I'm mistaken only x86 has uaccess_try() or the other *_try() wrappers)? The main "performance improvement" (if you can even call it that) is that we use memchr_inv() which finds non-matching characters more efficiently than just doing a loop.
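Roughly, the win is scanning a word at a time rather than a byte (or get_user()) at a time. A user-space sketch of the idea behind memchr_inv()'s zero scan (memchr_inv itself is kernel-internal; this is_all_zero() helper is hypothetical, and a portable approximation rather than the kernel's actual implementation):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Check a buffer for any non-zero byte, a word at a time where possible. */
static bool is_all_zero(const void *buf, size_t len)
{
	const unsigned char *p = buf;

	/* Head: byte-at-a-time until the pointer is word-aligned. */
	while (len > 0 && (uintptr_t)p % sizeof(uint64_t)) {
		if (*p++)
			return false;
		len--;
	}
	/* Body: test eight bytes per iteration. */
	while (len >= sizeof(uint64_t)) {
		uint64_t word;

		memcpy(&word, p, sizeof(word)); /* avoids strict-aliasing issues */
		if (word)
			return false;
		p += sizeof(word);
		len -= sizeof(word);
	}
	/* Tail: remaining bytes. */
	while (len--) {
		if (*p++)
			return false;
	}
	return true;
}
```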
Oh, you're right, that's x86 only :/