Hi Greg,
Commit 3f1e53abff84c fixes commit 7d7d7e02111e9 ("netfilter: compat: reject huge
allocation requests"), which has found its way into 4.14.y and 4.16.y.
This causes syzkaller hiccups at least in 4.14.y (more specifically chromeos-4.14).
Please apply 3f1e53abff84c to v4.14.y and v4.16.y to fix the problem. Copying Dave
and netdev in case he wants to handle it.
Sorry for the noise if this has already been queued.
Thanks,
Guenter
Decided to add Enric's commit as well, since it is also a bug fix, instead
of modifying Chris's commit.
Chris Chiu (1):
tpm: self test failure should not cause suspend to fail
Enric Balletbo i Serra (1):
tpm: do not suspend/resume if power stays on
drivers/char/tpm/tpm-interface.c | 7 +++++++
drivers/char/tpm/tpm.h | 2 ++
drivers/char/tpm/tpm_of.c | 3 +++
3 files changed, 12 insertions(+)
--
2.17.0
Changes since v8 [1]:
* Rebase on v4.17-rc2
* Fix get_user_pages_fast() for ZONE_DEVICE pages to revalidate the pte,
pmd, pud after taking references (Jan); the revalidation pattern is
sketched just after the link below
* Kill dax_layout_lock(). With get_user_pages_fast() for ZONE_DEVICE
fixed we can then rely on the {pte,pmd}_lock to synchronize
dax_layout_busy_page() vs new page references (Jan)
* Hold the iolock over repeated invocations of dax_layout_busy_page() to
enable truncate/hole-punch to make forward progress in the presence of
a constant stream of new direct-I/O requests (Jan).
[1]: https://lists.01.org/pipermail/linux-nvdimm/2018-March/015058.html
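For illustration, here is a minimal sketch of the lockless-GUP
revalidation pattern referenced above (the helper name is made up, and
the real code in mm/gup.c also handles pmd/pud entries and dev_pagemap
references):

#include <linux/mm.h>

/*
 * Hypothetical, simplified helper: take the page reference first, then
 * re-read the page table entry and back out if it changed while we were
 * not holding the page table lock.
 */
static bool gup_fast_take_ref(pte_t *ptep, pte_t orig, struct page *page)
{
	get_page(page);

	/* The mapping may have been zapped (e.g. by truncate) meanwhile. */
	if (unlikely(pte_val(READ_ONCE(*ptep)) != pte_val(orig))) {
		put_page(page);
		return false;	/* caller falls back to the slow path */
	}

	return true;
}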
---
Background:
get_user_pages() on file-backed mappings pins the memory pages for
access by devices performing dma. However, it only pins the memory
pages, not the page-to-file-offset association. If a file is truncated,
the pages are unmapped from the file and dma may continue indefinitely
into a page that is now owned by the device driver. This breaks
coherency of the file vs dma, but the assumption is that if userspace
wants the file-space truncated it does not matter what data is inbound
from the device; it is not relevant anymore. The only expectation is
that dma can safely continue while the filesystem reallocates the
block(s).
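To make the background concrete, here is a hypothetical driver-side
sketch of this pinning pattern (the helper name and locking details are
illustrative, not taken from any particular driver):

#include <linux/mm.h>
#include <linux/rwsem.h>
#include <linux/sched.h>

/*
 * Hypothetical example: pin nr_pages of a (possibly file-backed) user
 * buffer so a device can dma into it.  Only the pages are pinned; the
 * page-to-file-offset association is not.
 */
static long pin_user_buffer_for_dma(unsigned long uaddr,
				    unsigned long nr_pages,
				    struct page **pages)
{
	long pinned;

	down_read(&current->mm->mmap_sem);
	pinned = get_user_pages(uaddr, nr_pages, FOLL_WRITE, pages, NULL);
	up_read(&current->mm->mmap_sem);

	/*
	 * dma may now target these pages until the driver drops the
	 * references with put_page(); a truncate of the backing file does
	 * not wait for that.
	 */
	return pinned;
}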
Problem:
This expectation that dma can safely continue while the filesystem
changes the block map is broken by dax. With dax the target dma page
*is* the filesystem block. The model of leaving the page pinned for dma,
but truncating the file block out of the file, means that the filesystem
is free to reallocate a block under active dma to another file, and the
expected data-incoherency situation turns into active data corruption.
Solution:
Defer all filesystem operations (fallocate(), truncate()) on a dax-mode
file while any page/block in the file is under active dma. This solution
assumes that dma is transient. Cases where dma operations are known not
to be transient, like RDMA, have been explicitly disabled via commits
like 5f1d43de5416 ("IB/core: disable memory registration of
filesystem-dax vmas").
The dax_layout_busy_page() routine is called by filesystems with a lock
held against mm faults (i_mmap_lock) to find pinned / busy dax pages.
Looking up a busy page invalidates all mappings so that any subsequent
get_user_pages() blocks on i_mmap_lock. The filesystem keeps calling
dax_layout_busy_page() until it finally returns no more active pages.
This approach assumes that the page pinning is transient; if that
assumption were violated, the system would likely have hung on the
uncompleted I/O anyway.
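A rough sketch of how a filesystem consumes this interface (illustrative
only; the function name is invented, and the real XFS caller also drops
and retakes its locks around the wait):

#include <linux/dax.h>
#include <linux/fs.h>
#include <linux/page_ref.h>
#include <linux/wait_bit.h>

/*
 * Hypothetical caller: keep asking for busy pages and wait for each one
 * to lose its extra (dma) reference before letting the truncate or
 * hole-punch proceed.
 */
static int fs_break_dax_layouts(struct inode *inode)
{
	struct page *page;
	int error;

	while ((page = dax_layout_busy_page(inode->i_mapping))) {
		/*
		 * A real filesystem drops its locks here so the pinning
		 * I/O can complete; the dax core wakes this waiter once
		 * the page reference count returns to one.
		 */
		error = wait_var_event_killable(&page->_refcount,
						page_ref_count(page) == 1);
		if (error)
			return error;
	}

	return 0;
}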
---
Dan Williams (9):
dax, dm: introduce ->fs_{claim,release}() dax_device infrastructure
mm, dax: enable filesystems to trigger dev_pagemap ->page_free callbacks
memremap: split devm_memremap_pages() and memremap() infrastructure
mm, dev_pagemap: introduce CONFIG_DEV_PAGEMAP_OPS
mm: fix __gup_device_huge vs unmap
mm, fs, dax: handle layout changes to pinned dax mappings
xfs: prepare xfs_break_layouts() to be called with XFS_MMAPLOCK_EXCL
xfs: prepare xfs_break_layouts() for another layout type
xfs, dax: introduce xfs_break_dax_layouts()
drivers/dax/super.c | 99 ++++++++++++++++++++--
drivers/md/dm.c | 57 +++++++++++++
drivers/nvdimm/pmem.c | 3 -
fs/Kconfig | 2
fs/dax.c | 97 +++++++++++++++++++++
fs/ext2/super.c | 6 +
fs/ext4/super.c | 6 +
fs/xfs/xfs_file.c | 72 +++++++++++++++-
fs/xfs/xfs_inode.h | 16 ++++
fs/xfs/xfs_ioctl.c | 8 --
fs/xfs/xfs_iops.c | 16 ++--
fs/xfs/xfs_pnfs.c | 16 ++--
fs/xfs/xfs_pnfs.h | 6 +
fs/xfs/xfs_super.c | 20 ++--
include/linux/dax.h | 71 +++++++++++++++-
include/linux/memremap.h | 25 ++----
include/linux/mm.h | 71 ++++++++++++----
kernel/Makefile | 3 -
kernel/iomem.c | 167 +++++++++++++++++++++++++++++++++++++
kernel/memremap.c | 208 ++++++----------------------------------------
mm/Kconfig | 5 +
mm/gup.c | 37 ++++++--
mm/hmm.c | 13 ---
mm/swap.c | 3 -
24 files changed, 730 insertions(+), 297 deletions(-)
create mode 100644 kernel/iomem.c
The schedutil governor sets sg_policy->next_freq to UINT_MAX on certain
occasions:
- In sugov_start(), when the schedutil governor is started for a group
of CPUs.
- Whenever we need to force a frequency update before the rate-limit
duration has elapsed, which happens when:
- the cpufreq policy limits change, or
- the utilization of the DL scheduling class increases.
In response, get_next_freq() does not return a cached next_freq value but
recalculates the next frequency. This has side effects, though, and may
significantly delay a required increase in frequency.
In sugov_update_single() we try to avoid decreasing the frequency if the
CPU has not been idle recently. Consider this scenario: the available
range of frequencies for a CPU is 800 MHz to 2.5 GHz and the current
frequency is 800 MHz. From one of the call paths
sg_policy->need_freq_update is set to true and hence
sg_policy->next_freq is set to UINT_MAX. Now if the CPU has been busy,
the freshly computed next_f will always be less than UINT_MAX, whatever
its value is. So even when we want to increase the frequency, next_f is
overwritten with UINT_MAX (the cached next_freq), the commit path sees no
change, and the frequency is never actually raised. This continues for as
long as the CPU stays busy. This has not been cross-checked against
specific test cases, but is based on general code review.
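To make the interaction concrete, here is a small stand-alone model of
the two pre-patch checks that matter for the scenario above (a toy
reconstruction for illustration, not the kernel code itself):

#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

/* Toy model of the relevant checks in sugov_update_single() and
 * sugov_update_commit() before this patch. */
struct policy_model {
	unsigned int next_freq;	/* UINT_MAX after a limits change */
	unsigned int cur_freq;	/* what the hardware currently runs at */
};

static void update_model(struct policy_model *sg, bool busy,
			 unsigned int next_f)
{
	/* "Do not reduce the frequency if the CPU has not been idle
	 * recently": always true for a busy CPU while next_freq is
	 * UINT_MAX, so next_f gets clamped up to UINT_MAX. */
	if (busy && next_f < sg->next_freq)
		next_f = sg->next_freq;

	/* The commit path bails out when nothing appears to change. */
	if (sg->next_freq == next_f)
		return;

	sg->next_freq = next_f;
	sg->cur_freq = next_f;
}

int main(void)
{
	struct policy_model sg = { .next_freq = UINT_MAX, .cur_freq = 800000 };

	update_model(&sg, true, 2500000);		/* want 2.5 GHz */
	printf("cur_freq = %u kHz\n", sg.cur_freq);	/* still 800000 */
	return 0;
}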
Fix that by resetting the sg_policy->need_freq_update flag in
get_next_freq() rather than in sugov_should_update_freq(); then there is
no need to overwrite sg_policy->next_freq with UINT_MAX anymore.
Cc: 4.12+ <stable@vger.kernel.org> # 4.12+
Fixes: b7eaf1aab9f8 ("cpufreq: schedutil: Avoid reducing frequency of busy CPUs prematurely")
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
kernel/sched/cpufreq_schedutil.c | 15 +++++----------
1 file changed, 5 insertions(+), 10 deletions(-)
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index d2c6083304b4..daaca23697dc 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -95,15 +95,8 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
if (sg_policy->work_in_progress)
return false;
- if (unlikely(sg_policy->need_freq_update)) {
- sg_policy->need_freq_update = false;
- /*
- * This happens when limits change, so forget the previous
- * next_freq value and force an update.
- */
- sg_policy->next_freq = UINT_MAX;
+ if (unlikely(sg_policy->need_freq_update))
return true;
- }
delta_ns = time - sg_policy->last_freq_update_time;
@@ -165,8 +158,10 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
freq = (freq + (freq >> 2)) * util / max;
- if (freq == sg_policy->cached_raw_freq && sg_policy->next_freq != UINT_MAX)
+ if (freq == sg_policy->cached_raw_freq && !sg_policy->need_freq_update)
return sg_policy->next_freq;
+
+ sg_policy->need_freq_update = false;
sg_policy->cached_raw_freq = freq;
return cpufreq_driver_resolve_freq(policy, freq);
}
@@ -670,7 +665,7 @@ static int sugov_start(struct cpufreq_policy *policy)
sg_policy->freq_update_delay_ns = sg_policy->tunables->rate_limit_us * NSEC_PER_USEC;
sg_policy->last_freq_update_time = 0;
- sg_policy->next_freq = UINT_MAX;
+ sg_policy->next_freq = 0;
sg_policy->work_in_progress = false;
sg_policy->need_freq_update = false;
sg_policy->cached_raw_freq = 0;
--
2.15.0.194.g9af6a3dea062
The patch
spi: pxa2xx: Allow 64-bit DMA
has been applied to the spi tree at
https://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi.git
All being well, this means that it will be integrated into the linux-next
tree (usually sometime in the next 24 hours) and sent to Linus during
the next merge window (or sooner if it is a bug fix); however, if
problems are discovered then the patch may be dropped or reverted.
You may get further e-mails resulting from automated or manual testing
and review of the tree; please engage with people reporting problems and
send follow-up patches addressing any issues that are reported, if needed.
If any updates are required or you are submitting further changes, they
should be sent as incremental updates against current git; existing
patches will not be replaced.
Please add any relevant lists and maintainers to the CCs when replying
to this mail.
Thanks,
Mark
From efc4a13724b852ddaa3358402a8dec024ffbcb17 Mon Sep 17 00:00:00 2001
From: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Date: Thu, 19 Apr 2018 19:53:32 +0300
Subject: [PATCH] spi: pxa2xx: Allow 64-bit DMA
Currently only a 32-bit device address is supported for DMA. However,
starting from the Intel Sunrisepoint PCH, the DMA address of the device
FIFO can be 64-bit.
Change the respective variable to be compatible with the DMA engine's
expectations, i.e. to phys_addr_t.
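For illustration, the FIFO address eventually lands in a dmaengine slave
config roughly like this (a hypothetical sketch, not the actual pxa2xx
code); the dma_slave_config address fields are dma_addr_t, so keeping the
intermediate value in a u32 would silently truncate an address above
4 GiB:

#include <linux/dmaengine.h>
#include <linux/types.h>

/* Illustrative only: point the DMA engine at the SSP data register. */
static int example_configure_tx_dma(struct dma_chan *chan,
				    phys_addr_t ssdr_physical)
{
	struct dma_slave_config cfg = {
		.direction = DMA_MEM_TO_DEV,
		/* dst_addr is dma_addr_t; a u32 intermediate would have
		 * chopped off the upper 32 bits of the FIFO address. */
		.dst_addr = ssdr_physical,
		.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
		.dst_maxburst = 16,
	};

	return dmaengine_slave_config(chan, &cfg);
}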
Fixes: 34cadd9c1bcb ("spi: pxa2xx: Add support for Intel Sunrisepoint")
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Cc: stable@vger.kernel.org
---
drivers/spi/spi-pxa2xx.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/spi/spi-pxa2xx.h b/drivers/spi/spi-pxa2xx.h
index 513ec6c6e25b..0ae7defd3492 100644
--- a/drivers/spi/spi-pxa2xx.h
+++ b/drivers/spi/spi-pxa2xx.h
@@ -38,7 +38,7 @@ struct driver_data {
/* SSP register addresses */
void __iomem *ioaddr;
- u32 ssdr_physical;
+ phys_addr_t ssdr_physical;
/* SSP masks*/
u32 dma_cr1;
--
2.17.0