On Thursday 29 October 2015 22:40:03 Alison Schofield wrote:
> Replace the use of struct timeval and do_gettimeofday() with the 64-bit
> ktime_get_real_seconds(). Prevents 32-bit type overflow in year 2038 on
> 32-bit systems.
>
> Signed-off-by: Alison Schofield <amsfield22@gmail.com>

In the subject line, please be more specific and name the driver, not just the subsystem.
The patch looks correct, but it would be nice to explain in the changelog how the time is used here (it gets passed to the firmware as 64-bit milliseconds).

> @@ -45,6 +45,7 @@
>  #include <asm/processor.h>
>  #include <linux/libata.h>
>  #include <linux/mutex.h>
> +#include <linux/ktime.h>
>  #include <scsi/scsi.h>
>  #include <scsi/scsi_host.h>
>  #include <scsi/scsi_device.h>
> @@ -5563,11 +5564,9 @@ static void pmcraid_set_timestamp(struct pmcraid_cmd *cmd)
>  	__be32 time_stamp_len = cpu_to_be32(PMCRAID_TIMESTAMP_LEN);
>  	struct pmcraid_ioadl_desc *ioadl = ioarcb->add_data.u.ioadl;
> -	struct timeval tv;
>  	__le64 timestamp;
>  
> -	do_gettimeofday(&tv);
> -	timestamp = tv.tv_sec * 1000;
> +	timestamp = ktime_get_real_seconds() * 1000;

This is a nice simplification on top of the fix.
I see that the driver for some reason leaves the sub-second portion of the 64-bit milliseconds as zero, which is a bit odd. Did you notice that too?
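
(To make that concrete: with the patch as posted, the value handed to the
firmware is

	timestamp = ktime_get_real_seconds() * 1000;

which is always a whole multiple of 1000, so the millisecond digits of the
64-bit value are never used.)
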
What I'd do in a case like this is to list in the changelog an alternative approach, so the maintainer can decide whether to apply the patch as-is or to ask for the alternative. The other approach would be to fix the milliseconds as well and do:
	struct timespec64 ts;

	ktime_get_real_ts64(&ts);
	timestamp = ts.tv_sec * MSEC_PER_SEC + ts.tv_nsec / NSEC_PER_MSEC;
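
For reference, the corresponding hunk would then look roughly like this
(untested, just pieced together from the code quoted above):

-	struct timeval tv;
+	struct timespec64 ts;
 	__le64 timestamp;
 
-	do_gettimeofday(&tv);
-	timestamp = tv.tv_sec * 1000;
+	ktime_get_real_ts64(&ts);
+	timestamp = ts.tv_sec * MSEC_PER_SEC + ts.tv_nsec / NSEC_PER_MSEC;

That keeps the multiplication in 64 bits on 32-bit machines because
ts.tv_sec is a time64_t, and it fills in the millisecond part that the
current code leaves at zero. Either way works for 2038, so I'd leave the
choice to the maintainer.
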
Arnd