On Tue, Jul 10, 2018 at 11:27:00PM +0200, Arnd Bergmann wrote:
On Tue, Jul 10, 2018 at 10:47 PM, Sean Paul <seanpaul@chromium.org> wrote:
On Mon, Jun 18, 2018 at 05:39:42PM +0200, Arnd Bergmann wrote:
The timespec structure and associated interfaces are deprecated and will be removed in the future because of the y2038 overflow.
The use of ktime_to_timespec() in timeout_to_jiffies() does not suffer from that overflow, but is easy to avoid by just converting the ktime_t into jiffies directly.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
 drivers/gpu/drm/msm/msm_drv.h | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index b2da1fbf81e0..cc8977476a41 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -353,8 +353,7 @@ static inline unsigned long timeout_to_jiffies(const ktime_t *timeout)
 		remaining_jiffies = 0;
 	} else {
 		ktime_t rem = ktime_sub(*timeout, now);
-		struct timespec ts = ktime_to_timespec(rem);
-		remaining_jiffies = timespec_to_jiffies(&ts);
+		remaining_jiffies = ktime_divns(rem, NSEC_PER_SEC / HZ);
Do you need to wrap rem in ktime_to_ns() just to be safe?
The ktime_t interfaces are still defined to use an opaque type, as previously it was a union that could be a seconds/nanoseconds pair depending on the architecture. These days, ktime_t is just a 64-bit integer, so div_u64() would work just as well as ktime_divns(), but this is the documented way to do it.
Hey Arnd,

Ahh, ok, I think I realize my confusion now. If ktime_t were not ns, ktime_divns() would do the conversion for us. Since it is ns, the conversion is a no-op (which is why I didn't see ktime_to_ns() in ktime_divns()).
Thanks for breaking that down for me,
Reviewed-by: Sean Paul <seanpaul@chromium.org>
Arnd