> From: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
> Date: 2018-05-21 23:10 GMT+02:00
> Subject: [PATCH 4.14 00/95] 4.14.43-stable review
> To: linux-kernel(a)vger.kernel.org
> Cc: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>,
> torvalds(a)linux-foundation.org, akpm(a)linux-foundation.org,
> linux(a)roeck-us.net, shuah(a)kernel.org, patches(a)kernelci.org,
> ben.hutchings(a)codethink.co.uk, lkft-triage(a)lists.linaro.org,
> stable(a)vger.kernel.org
>
>
> This is the start of the stable review cycle for the 4.14.43 release.
> There are 95 patches in this series, all of which will be posted as
> responses to this one. If anyone has any issues with these being
> applied, please let me know.
>
> Responses should be made by Tue May 22 21:04:09 UTC 2018.
> Anything received after that time might be too late.
>
> The whole patch series can be found in one patch at:
> https://www.kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.14.43-rc…
> or in the git tree and branch at:
> git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
> linux-4.14.y
> and the diffstat can be found below.
>
> thanks,
>
> greg k-h
>
Merged, tested on my local test machine; no regressions found.
Thanks,
--
Jack Wang
Linux Kernel Developer
ProfitBricks GmbH
Greifswalder Str. 207
D - 10405 Berlin
Tel: +49 30 577 008 042
Fax: +49 30 577 008 299
Email: jinpu.wang(a)profitbricks.com
URL: https://www.profitbricks.de
Registered office: Berlin
Court of registration: Amtsgericht Charlottenburg, HRB 125506 B
Managing directors: Achim Weiss, Matthias Steinberg, Christoph Steffens

Hello,

thanks for your effort and the patch. Is this eligible for stable?

Best regards

On 22.05.2018 at 13:02, Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki <rafael.j.wysocki(a)intel.com>
>
> Commit 08810a4119aa (PM / core: Add NEVER_SKIP and SMART_PREPARE
> driver flags) inadvertently prevented the power.direct_complete flag
> from being set for devices without PM callbacks and with disabled
> runtime PM, which in turn prevents power.direct_complete from being
> set for their parents. That led to problems including a resume crash
> on the HP ZBook 14u.
>
> Restore the previous behavior by causing power.direct_complete to be
> set for those devices again, but do that in a more direct way to
> avoid overlooking that case in the future.
>
> Link: https://bugzilla.kernel.org/show_bug.cgi?id=199693
> Fixes: 08810a4119aa (PM / core: Add NEVER_SKIP and SMART_PREPARE driver flags)
> Reported-by: Thomas Martitz <kugel(a)rockbox.org>
> Tested-by: Thomas Martitz <kugel(a)rockbox.org>
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki(a)intel.com>
> ---
> drivers/base/power/main.c | 7 +++----
> 1 file changed, 3 insertions(+), 4 deletions(-)
>
> Index: linux-pm/drivers/base/power/main.c
> ===================================================================
> --- linux-pm.orig/drivers/base/power/main.c
> +++ linux-pm/drivers/base/power/main.c
> @@ -1920,10 +1920,8 @@ static int device_prepare(struct device
>
> dev->power.wakeup_path = false;
>
> - if (dev->power.no_pm_callbacks) {
> - ret = 1; /* Let device go direct_complete */
> + if (dev->power.no_pm_callbacks)
> goto unlock;
> - }
>
> if (dev->pm_domain)
> callback = dev->pm_domain->ops.prepare;
> @@ -1957,7 +1955,8 @@ unlock:
> */
> spin_lock_irq(&dev->power.lock);
> dev->power.direct_complete = state.event == PM_EVENT_SUSPEND &&
> - pm_runtime_suspended(dev) && ret > 0 &&
> + ((pm_runtime_suspended(dev) && ret > 0) ||
> + dev->power.no_pm_callbacks) &&
> !dev_pm_test_driver_flags(dev, DPM_FLAG_NEVER_SKIP);
> spin_unlock_irq(&dev->power.lock);
> return 0;
>
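For readers following the thread, the net effect of the hunks above can be modeled with a small standalone program. This is a sketch, not kernel code: the function names and the main() driver are made up for illustration, and the predicate is paraphrased from the diff. It shows why a device with no PM callbacks and runtime PM disabled (the HP ZBook 14u case) lost direct_complete after 08810a4119aa and regains it with this patch:

/* Sketch: model of the direct_complete predicate in device_prepare(),
 * paraphrased from the diff above; not the verbatim kernel source. */
#include <stdbool.h>
#include <stdio.h>

/* With 08810a4119aa (the regressed code): a callback-less device still
 * set ret = 1, but pm_runtime_suspended() is false when runtime PM is
 * disabled, so the whole predicate fails. */
static bool direct_complete_regressed(bool suspend_event,
                                      bool rpm_suspended, int ret,
                                      bool never_skip)
{
        return suspend_event && rpm_suspended && ret > 0 && !never_skip;
}

/* With this patch: no_pm_callbacks bypasses that check explicitly. */
static bool direct_complete_fixed(bool suspend_event, bool rpm_suspended,
                                  int ret, bool no_pm_callbacks,
                                  bool never_skip)
{
        return suspend_event &&
               ((rpm_suspended && ret > 0) || no_pm_callbacks) &&
               !never_skip;
}

int main(void)
{
        /* No PM callbacks, runtime PM disabled => not runtime-suspended. */
        printf("regressed: %d\n",
               direct_complete_regressed(true, false, 1, false)); /* 0 */
        printf("fixed:     %d\n",
               direct_complete_fixed(true, false, 0, true, false)); /* 1 */
        return 0;
}

The regressed predicate returns false even though ret was forced to 1, which is exactly why the flag also stopped propagating to parents; the fixed one returns true again without relying on ret.
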
From: John Stultz <john.stultz(a)linaro.org>
commit 3d88d56c5873f6eebe23e05c3da701960146b801 upstream.

Due to how the MONOTONIC_RAW accumulation logic was handled,
there is the potential for a 1ns discontinuity when we do
accumulations. This small discontinuity has for the most part
gone unnoticed, but since ARM64 enabled CLOCK_MONOTONIC_RAW
in their vDSO clock_gettime implementation, we've seen failures
with the inconsistency-check test in kselftest.

This patch addresses the issue by using the same sub-ns
accumulation handling that CLOCK_MONOTONIC uses, which avoids
the issue for in-kernel users.

Since the ARM64 vDSO implementation has its own clock_gettime
calculation logic, this patch reduces the frequency of errors,
but failures are still seen. The ARM64 vDSO will need to be
updated to include the sub-nanosecond xtime_nsec values in its
calculation for this issue to be completely fixed.
Signed-off-by: John Stultz <john.stultz(a)linaro.org>
Tested-by: Daniel Mentz <danielmentz(a)google.com>
Cc: Prarit Bhargava <prarit(a)redhat.com>
Cc: Kevin Brodsky <kevin.brodsky(a)arm.com>
Cc: Richard Cochran <richardcochran(a)gmail.com>
Cc: Stephen Boyd <stephen.boyd(a)linaro.org>
Cc: Will Deacon <will.deacon(a)arm.com>
Cc: "stable #4 . 8+" <stable(a)vger.kernel.org>
Cc: Miroslav Lichvar <mlichvar(a)redhat.com>
Link: http://lkml.kernel.org/r/1496965462-20003-3-git-send-email-john.stultz@lina…
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
[fabrizio: cherry-pick to 4.4. Kept cycle_t type for function
logarithmic_accumulation local variable "interval". Dropped
casting of "interval" variable]
Signed-off-by: Fabrizio Castro <fabrizio.castro(a)bp.renesas.com>
Signed-off-by: Biju Das <biju.das(a)bp.renesas.com>
---
Hello Greg,

I am reposting this patch to include the relevant people in the email.
Could you please consider this patch for 4.4.y?

Without this patch, 4.4.y fails the
tools/testing/selftests/timers/clocksource-switch.c selftest on the
Koelsch board while running "Consistent CLOCK_MONOTONIC_RAW", with the
message "Delta: 1 ns". This patch fixes the problem.

Thanks,
Fab
include/linux/timekeeper_internal.h | 4 ++--
kernel/time/timekeeping.c | 20 ++++++++++----------
2 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/include/linux/timekeeper_internal.h b/include/linux/timekeeper_internal.h
index f0f1793..115216e 100644
--- a/include/linux/timekeeper_internal.h
+++ b/include/linux/timekeeper_internal.h
@@ -56,7 +56,7 @@ struct tk_read_base {
* interval.
* @xtime_remainder: Shifted nano seconds left over when rounding
* @cycle_interval
- * @raw_interval: Raw nano seconds accumulated per NTP interval.
+ * @raw_interval: Shifted raw nano seconds accumulated per NTP interval.
* @ntp_error: Difference between accumulated time and NTP time in ntp
* shifted nano seconds.
* @ntp_error_shift: Shift conversion between clock shifted nano seconds and
@@ -97,7 +97,7 @@ struct timekeeper {
cycle_t cycle_interval;
u64 xtime_interval;
s64 xtime_remainder;
- u32 raw_interval;
+ u64 raw_interval;
/* The ntp_tick_length() value currently being used.
* This cached copy ensures we consistently apply the tick
* length for an entire tick, as ntp_tick_length may change
diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 6e48668..fed86b2 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -277,8 +277,7 @@ static void tk_setup_internals(struct timekeeper *tk, struct clocksource *clock)
/* Go back from cycles -> shifted ns */
tk->xtime_interval = (u64) interval * clock->mult;
tk->xtime_remainder = ntpinterval - tk->xtime_interval;
- tk->raw_interval =
- ((u64) interval * clock->mult) >> clock->shift;
+ tk->raw_interval = interval * clock->mult;
/* if changing clocks, convert xtime_nsec shift units */
if (old_clock) {
@@ -1767,7 +1766,7 @@ static cycle_t logarithmic_accumulation(struct timekeeper *tk, cycle_t offset,
unsigned int *clock_set)
{
cycle_t interval = tk->cycle_interval << shift;
- u64 raw_nsecs;
+ u64 snsec_per_sec;
/* If the offset is smaller than a shifted interval, do nothing */
if (offset < interval)
@@ -1782,14 +1781,15 @@ static cycle_t logarithmic_accumulation(struct timekeeper *tk, cycle_t offset,
*clock_set |= accumulate_nsecs_to_secs(tk);
/* Accumulate raw time */
- raw_nsecs = (u64)tk->raw_interval << shift;
- raw_nsecs += tk->raw_time.tv_nsec;
- if (raw_nsecs >= NSEC_PER_SEC) {
- u64 raw_secs = raw_nsecs;
- raw_nsecs = do_div(raw_secs, NSEC_PER_SEC);
- tk->raw_time.tv_sec += raw_secs;
+ tk->tkr_raw.xtime_nsec += (u64)tk->raw_time.tv_nsec << tk->tkr_raw.shift;
+ tk->tkr_raw.xtime_nsec += tk->raw_interval << shift;
+ snsec_per_sec = (u64)NSEC_PER_SEC << tk->tkr_raw.shift;
+ while (tk->tkr_raw.xtime_nsec >= snsec_per_sec) {
+ tk->tkr_raw.xtime_nsec -= snsec_per_sec;
+ tk->raw_time.tv_sec++;
}
- tk->raw_time.tv_nsec = raw_nsecs;
+ tk->raw_time.tv_nsec = tk->tkr_raw.xtime_nsec >> tk->tkr_raw.shift;
+ tk->tkr_raw.xtime_nsec -= (u64)tk->raw_time.tv_nsec << tk->tkr_raw.shift;
/* Accumulate error between NTP and clock interval */
tk->ntp_error += tk->ntp_tick << shift;
--
2.7.4
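
As an aside for anyone puzzling over where exactly the 1ns step comes from: below is a toy model, standalone C rather than kernel code, with invented values mult = 3 and shift = 1 (so each cycle is 1.5ns). It models a reader that interpolates with (cycles * mult) >> shift across an accumulation boundary, the way getrawmonotonic() does:

/* Toy model of the MONOTONIC_RAW discontinuity; not kernel code.
 * mult = 3, shift = 1: each cycle is 1.5ns, a 3-cycle interval 4.5ns. */
#include <stdint.h>
#include <stdio.h>

#define MULT  3u
#define SHIFT 1u

/* Reader: base plus interpolated cycles, truncated to whole ns. */
static uint64_t read_ns(uint64_t base_ns, uint64_t delta_cycles)
{
        return base_ns + ((delta_cycles * MULT) >> SHIFT);
}

int main(void)
{
        /* Sample at cycle 4, just before the 3-cycle accumulation: */
        uint64_t before = read_ns(0, 4);           /* (4*3)>>1 = 6ns */

        /* Old scheme: the base accumulates truncated nanoseconds. */
        uint64_t base = (3 * MULT) >> SHIFT;       /* 4ns; 0.5ns lost */
        uint64_t after_old = read_ns(base, 4 - 3); /* 4 + 1 = 5ns */

        /* Patched scheme: the base is kept in shifted (sub-ns) units,
         * like tkr_raw.xtime_nsec, and only the reader truncates. */
        uint64_t base_snsec = 3 * MULT;            /* 9 shifted-ns */
        uint64_t after_new = (base_snsec + 1 * MULT) >> SHIFT; /* 6ns */

        printf("before accumulation: %llu ns\n",
               (unsigned long long)before);
        printf("old accumulation:    %llu ns (time went backwards)\n",
               (unsigned long long)after_old);
        printf("new accumulation:    %llu ns\n",
               (unsigned long long)after_new);
        return 0;
}

The old path loses the 0.5ns remainder at every accumulation, so a caller straddling the boundary can observe time going backwards by 1ns, which is precisely the "Delta: 1 ns" failure the inconsistency check reports; carrying the remainder forward keeps the readout monotonic.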