On Wed, Feb 22, 2017 at 7:33 PM, Albert ARIBAUD <albert.aribaud@3adev.fr> wrote:
On Wed, 22 Feb 2017 16:39:53 +0100, Arnd Bergmann <arnd@arndb.de> wrote:
On Wed, Feb 22, 2017 at 9:05 AM, Albert ARIBAUD <albert.aribaud@3adev.fr> wrote:
- Ergo, sched_rr_get_interval should be classified as Y2038-incompatible.
Ok.
I've discussed the kernel side for "Y2038-incompatible socket timestamping" with Deepa a while ago, and I think we came to a new conclusion about what the best approach would be. I'll let her comment here.
For "Y2038-compatible types", please clarify whether time32_t and time64_t (and related types) are internal-only types or visible to applications through header files. I assume they are internal only, but it is not 100% clear. Related to that, what is the expected definition of time32_t on architectures that never had a 32-bit time_t, such as existing 64-bit architectures? Is it left undefined and all code referring to time32_t compiled conditionally?
(written after reading Joseph's reply, and type names adapted accordingly)
Yes, there would be two internal types, __time_t and __time64_t, whose sizes are fixed regardless of the default type sizes, whereas time_t, in the user-facing public API, would be defined as either __time_t or __time64_t depending on which time bit size the user code chooses, as sketched below.
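A minimal sketch of that mapping, assuming a user-selectable feature macro in the spirit of _FILE_OFFSET_BITS (the name _TIME_BITS below is an assumption, not a settled interface):

    /* Internal types with fixed widths, independent of the ABI defaults. */
    typedef long int      __time_t;    /* traditional width: 32-bit on ILP32 */
    typedef long long int __time64_t;  /* always 64 bits */

    /* The public time_t selects one of the two, per the user's choice;
       the macro name _TIME_BITS is hypothetical here. */
    #if defined _TIME_BITS && _TIME_BITS == 64
    typedef __time64_t time_t;
    #else
    typedef __time_t time_t;
    #endif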
Ok, good.
As far as 64-bit architectures are concerned:
- pure 64-bit architectures already have a 64-bit time_t, and are out of the scope of my project; a 64-bit GLIBC is assumed to be Y2038-compatible as far as APIs go (there may be bugs, though; again, if I see any, I'll raise a GLIBC issue, but outside of this project).
My question here was about the way it gets handled internally in glibc, as we want to compile the same source code on both 32-bit and 64-bit architectures. When the existing code that uses time_t gets changed to __time32_t, we probably need to do one of these:
- leave __time32_t undefined on 64-bit architectures and don't compile any of the files referencing it, but always provide the version that uses __time64_t internally for the redirect. This is probably the cleanest.
- leave the internal name as __time_t rather than __time32_t for all architectures, leave it defined as 'long' (with a special case for x32), and provide the __time64_t variant only on targets that have a 32-bit __time_t (a sketch of this option follows below).
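A hedged sketch of that second option; __TIME_T_IS_32BIT is an illustrative name, not an existing glibc identifier:

    /* Keep the internal name __time_t on all architectures, defined as
       'long' except on x32, where 'long' is 32-bit but time_t is 64-bit. */
    #if defined __x86_64__ && defined __ILP32__  /* the x32 special case */
    typedef long long int __time_t;
    #else
    typedef long int __time_t;
    #endif

    /* Provide the 64-bit variant only on targets whose __time_t is 32-bit. */
    #ifdef __TIME_T_IS_32BIT
    typedef long long int __time64_t;
    #endif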
- this leaves the case of a 64-bit architecture kernel providing a 32-bit ABI to 32-bit code. I am not planning on supporting such a scenario.
That is not what I was interested in here, and seems to have started a whole new sub-thread, see my reply there.
Also, it's worth pointing out the known problems with the padding:
- on big-endian systems, any code doing a variation of "struct timespec ts = { seconds, nanos };" is broken, because it assigns the nanoseconds to the wrong struct member (see the sketch after this list). The example code is nonconforming, as neither POSIX nor C11 requires a particular order of the struct members, but I could also easily find examples of existing programs doing this. Note that NetBSD, OpenBSD and Windows have no padding but do use a 64-bit time_t.
- If the padding is uninitialized, we have to explicitly zero it before calling a kernel function that assumes the 64-bit layout. This can be done in glibc before calling into the kernel, or at the kernel entry (where my last patch set does it), but it is awkward either way.
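A minimal sketch of both problems, assuming an explicit-padding layout for a big-endian 32-bit target (the member names are illustrative, not settled glibc names):

    /* Big-endian layout: the padding sits before tv_nsec, so the kernel
       can read the pair as a single 64-bit nanoseconds field. */
    struct timespec {
        long long tv_sec;
        int       __pad;    /* must be zero when the kernel reads 64 bits */
        long      tv_nsec;
    };

    /* Problem 1: the positional initializer puts the nanoseconds into
       __pad and leaves tv_nsec zero, with no compiler diagnostic. */
    struct timespec ts = { 1500000000, 999999999 };

    /* Problem 2: if __pad arrives uninitialized from the application,
       someone (glibc, or the kernel entry path) has to zero it before
       the kernel interprets the pair as a 64-bit value. */
    void prepare_for_kernel(struct timespec *t) { t->__pad = 0; }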
Unfortunately, there doesn't seem to be a good solution for either of the two problems; maybe someone else has more ideas. Using a pointer type for the padding would at least cause a compile-time warning for broken code; other solutions might require GCC extensions or break C11 in some other way.
Agreed all along the line, and I will add these points to the document.
However, this makes me consider an option which would keep source code as happy as possible: keep tv_nsec a long even in struct timespec64 (so the struct would be 12 bytes: 8-byte tv_sec, 4-byte tv_nsec).
The size is actually architecture specific: most 32-bit ABIs would insert 4 bytes of implicit padding at the end, leaving it compatible with the 64-bit definition on little-endian but not big-endian systems, while x86-32 would be a rare exception that uses a 12-byte structure with 4-byte alignment. This leads to slightly different structure layouts between the most common architecture on one side and everything else on the other, as the sketch below illustrates.
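To spell out the layout variance (a sketch; C11 _Static_assert is assumed to be available, and the struct name follows the thread's usage):

    struct timespec64 {
        long long tv_sec;   /* 8 bytes */
        long      tv_nsec;  /* 4 bytes on 32-bit ABIs */
        /* most 32-bit ABIs insert 4 bytes of implicit tail padding here,
           because long long is 8-byte aligned; x86-32 does not, because
           long long is only 4-byte aligned there */
    };

    #if defined __i386__
    _Static_assert (sizeof (struct timespec64) == 12, "x86-32: no tail padding");
    #else
    _Static_assert (sizeof (struct timespec64) == 16, "padded to 16 bytes");
    #endif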
It would be ABI-incompatible with 64-bit code, but it would conform to POSIX, and the exact same user source code would then compile equally well in all three cases (32-bit time on a 32-bit arch, 64-bit time on a 32-bit arch, and a 64-bit arch), including the struct initializers you mentioned above.
There would be a rough edge left when running 32-bit-arch, 64-bit-time user code over a 64-bit-arch GLIBC and kernel, because then we'd have to copy between 64-bit and 32-bit nanosecond fields; but then again, that is not a scenario I am aiming for.
The kernel will use a layout that matches the 64-bit architecture, and use the same layout on all architectures, so this would imply having to copy it everywhere; that is still better than having it work sometimes and break in very subtle ways otherwise. I think if we end up with nanoseconds that are sometimes in a different bit position between kernel and user space, we should mix up the entire structure, e.g. like this:
    struct timespec {
        void  *__padding;
        long   tv_nsec;
        time_t tv_sec;
    };
This would cause a compile-time error for anyone trying to initialize the first two members as integers, and it would cause an obvious runtime failure if anyone tried to interpret the first four or eight bytes as epoch seconds.
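For instance, the broken pattern from earlier would now fail loudly at compile time (the diagnostic wording below is typical of GCC; it varies by compiler):

    struct timespec ts = { 1500000000, 999999999 };
    /* error: initialization of 'void *' from 'int' makes pointer from
       integer without a cast */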
The disadvantage would be that we have to modify the contents both for data passed into the kernel and out of it, compared to the version you are proposing right now, which only requires special treatment of the padding data when passing it into the kernel.
Arnd