On Wed, Jun 22, 2016 at 6:54 AM, Arnd Bergmann <arnd@arndb.de> wrote:
> On Sunday, June 19, 2016 5:27:08 PM CEST Deepa Dinamani wrote:
>>>>  	mutex_lock(&sbi->s_alloc_mutex);
>>>>  	lvidiu->impIdent.identSuffix[0] = UDF_OS_CLASS_UNIX;
>>>>  	lvidiu->impIdent.identSuffix[1] = UDF_OS_ID_LINUX;
>>>> +	ktime_get_real_ts(&ts);
>>>>  	udf_time_to_disk_stamp(&lvid->recordingDateAndTime,
>>>> -			       CURRENT_TIME);
>>>> +			       timespec_trunc(ts, sb->s_time_gran));
>>>>  	lvid->integrityType = cpu_to_le32(LVID_INTEGRITY_TYPE_OPEN);
>>>>  	lvid->descTag.descCRC = cpu_to_le16(
>>> I think we don't need the timespec_trunc() here, and introducing the call might complicate matters in the future.
>> This was done so that it would be consistent with inode timestamps, as both have the same format and use the same formatting functions.
> IMHO timespec_trunc() really only makes sense when assigning into an inode timestamp, whereas udf_time_to_disk_stamp() already truncates the resulting nanoseconds to microseconds.
I reconsidered our discussion, and I think you are correct. I missed that udf_time_to_disk_stamp() already converts the fractional seconds into microsecond-granularity on-disk fields, so nothing finer than a microsecond survives the conversion and truncating the timespec beforehand gains nothing.
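
For reference, the sub-second handling in udf_time_to_disk_stamp() boils
down to arithmetic along these lines (a sketch based on the conversion in
fs/udf/udftime.c, with the date and timezone handling omitted):

	/*
	 * Split tv_nsec across the three sub-second fields of the on-disk
	 * struct timestamp. Anything finer than a microsecond is dropped
	 * here no matter what the caller passed in.
	 */
	dest->centiseconds = ts.tv_nsec / 10000000;
	dest->hundredsOfMicroseconds = (ts.tv_nsec / 1000 -
					dest->centiseconds * 10000) / 100;
	dest->microseconds = (ts.tv_nsec / 1000 -
			      dest->centiseconds * 10000 -
			      dest->hundredsOfMicroseconds * 100);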
I will remove the timespec_trunc here and post a v3.
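
For completeness, timespec_trunc() only rounds tv_nsec down to the
superblock granularity; conceptually it is nothing more than the following
(a simplified sketch of its semantics, not the exact kernel body):

	#include <linux/time.h>

	/* Sketch of timespec_trunc() semantics: round tv_nsec down to a
	 * multiple of the granularity, with 1 ns and 1 s as the edge cases. */
	static struct timespec trunc_sketch(struct timespec t, unsigned gran)
	{
		if (gran == NSEC_PER_SEC)
			t.tv_nsec = 0;		/* whole-second granularity */
		else if (gran > 1)
			t.tv_nsec -= t.tv_nsec % gran;
		return t;
	}

Since UDF sets sb->s_time_gran to 1000 (microsecond resolution, if I
remember correctly) and the disk-stamp conversion above already discards
everything below a microsecond, the call is effectively a no-op in this
path.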
Thanks,
-Deepa