'struct timespec' uses a 32-bit field for seconds, which will overflow in the year 2038 and beyond. This patch replaces timespec with timespec64. The code is correct as-is; the patch is merely part of a larger effort to remove all 32-bit timekeeping types (timespec, timeval, time_t) from the kernel.
Signed-off-by: Tina Ruchandani <ruchandani.tina@gmail.com>
On Wednesday 13 April 2016 02:52:14 Tina Ruchandani wrote:
> 	ktime_t rem = ktime_sub(*timeout, now);
> -	struct timespec ts = ktime_to_timespec(rem);
> -	remaining_jiffies = timespec_to_jiffies(&ts);
> +	struct timespec64 ts = ktime_to_timespec64(rem);
> +	remaining_jiffies = timespec64_to_jiffies(&ts);
Hi Tina,
The change looks correct to me, but I wonder if we should optimize this code a little more, as it does two expensive 64-bit divisions.
How about
remaining_jiffies = msecs_to_jiffies(ktime_ms_delta(*timeout, now));
which only does one 64-bit division, and it's one that we can probably optimize out in the future (we can check in ktime_ms_delta whether the difference is more than 2^32 nanoseconds as the fast path).
Arnd
> How about
>
> 	remaining_jiffies = msecs_to_jiffies(ktime_ms_delta(*timeout, now));
>
> which only does one 64-bit division, and it's one that we can probably optimize out in the future (we can check in ktime_ms_delta whether the difference is more than 2^32 nanoseconds as the fast path).
Hi Arnd, I had thought about that, but discarded that approach, being confused about the truncation: ktime_ms_delta() returns an s64, and msecs_to_jiffies() will truncate that input to an unsigned int. However, I now realize that for the msecs value to be greater than 32 bits, the time delta has to be >= ((2^29)/(60*60*24*365)) or >= 17 years. So your solution is safe.
If that sounds ok, I will send out a v3.
> which only does one 64-bit division, and it's one that we can probably optimize out in the future (we can check in ktime_ms_delta whether the difference is more than 2^32 nanoseconds as the fast path).

It looks like ktime_divns already has that optimization for a 32-bit divisor, so your solution should avoid the 64-bit division.
On Thursday 21 April 2016 04:39:04 Tina Ruchandani wrote:
> > which only does one 64-bit division, and it's one that we can probably optimize out in the future (we can check in ktime_ms_delta whether the difference is more than 2^32 nanoseconds as the fast path).
>
> It looks like ktime_divns already has that optimization for a 32-bit divisor, so your solution should avoid the 64-bit division.
I meant an optimization for a 32-bit dividend, not divisor, e.g. doing:
diff --git a/include/linux/ktime.h b/include/linux/ktime.h
index 2b6a204bd8d4..4fbf735ec0af 100644
--- a/include/linux/ktime.h
+++ b/include/linux/ktime.h
@@ -169,13 +169,17 @@ static inline bool ktime_before(const ktime_t cmp1, const ktime_t cmp2)
 extern s64 __ktime_divns(const ktime_t kt, s64 div);

 static inline s64 ktime_divns(const ktime_t kt, s64 div)
 {
+	s64 ns = kt.tv64;
+
 	/*
 	 * Negative divisors could cause an inf loop,
 	 * so bug out here.
 	 */
 	BUG_ON(div < 0);
-	if (__builtin_constant_p(div) && !(div >> 32)) {
-		s64 ns = kt.tv64;
+
+	if ((ns >> 32) == 0) {
+		return (s32)ns / div;
+	} else if (__builtin_constant_p(div) && !(div >> 32)) {
 		u64 tmp = ns < 0 ? -ns : ns;

 		do_div(tmp, div);
I also just looked at the implementation of do_div() in include/asm-generic/div64.h, and it already does that for non-constant divisors, but I don't understand __div64_const32() well enough to know whether the compiler ends up doing the same optimization for the constant divisor we have here.
Arnd