The patch titled
Subject: mm/readahead: fix large folio support in async readahead
has been added to the -mm mm-hotfixes-unstable branch. Its filename is
mm-readahead-fix-large-folio-support-in-async-readahead.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patche…
This patch will later appear in the mm-hotfixes-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Yafang Shao <laoar.shao(a)gmail.com>
Subject: mm/readahead: fix large folio support in async readahead
Date: Fri, 8 Nov 2024 22:17:10 +0800
When testing large folio support with XFS on our servers, we observed that
only a few large folios are mapped when reading large files via mmap.
After a thorough analysis, I identified the cause as the
`/sys/block/*/queue/read_ahead_kb` setting. On our test servers, this
parameter is set to 128KB, well below the 2MB needed for a PMD-sized
huge page, so the readahead window is clamped before it can reach huge
page size. After tuning it to 2MB, large folios work as expected.
However, I believe large folio behavior should not depend on the value
of read_ahead_kb. It would be more robust if the kernel adapted to it
automatically.
With /sys/block/*/queue/read_ahead_kb set to 128KB and performing a
sequential read on a 1GB file using MADV_HUGEPAGE, the differences in
/proc/meminfo are as follows:
- before this patch
FileHugePages: 18432 kB
FilePmdMapped: 4096 kB
- after this patch
FileHugePages: 1067008 kB
FilePmdMapped: 1048576 kB
This shows that after applying the patch, the entire 1GB file is mapped to
huge pages. The stable list is CCed, as without this patch, large folios
don't function optimally in the readahead path.
It's worth noting that if read_ahead_kb is set to a larger value that
isn't aligned with the huge page size (e.g., 4MB + 128KB), readahead may
still fail to map huge pages.
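For illustration, here is a minimal userspace sketch of the patched
ramp-up logic (an approximation for illustration only, not the kernel
code; sizes are in 4KB pages, so max = 32 corresponds to
read_ahead_kb = 128):

    #include <stdio.h>

    /* Approximation of get_next_ra_size() after this patch. */
    static unsigned long get_next_ra_size(unsigned long cur, unsigned long max)
    {
            if (cur < max / 16)
                    return 4 * cur;
            if (cur <= max / 2)
                    return 2 * cur;
            if (cur > max)          /* new: don't shrink a window that  */
                    return cur;     /* MADV_HUGEPAGE already made large */
            return max;
    }

    int main(void)
    {
            /* The VM_HUGEPAGE path bumps the window to a PMD-sized 512
             * pages. Before the patch, the next window was clamped back
             * to max = 32 pages (128KB), so order-9 folios could not be
             * sustained in the async readahead path. */
            printf("next ra size: %lu pages\n", get_next_ra_size(512, 32));
            return 0;
    }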
Link: https://lkml.kernel.org/r/20241108141710.9721-1-laoar.shao@gmail.com
Fixes: 4687fdbb805a ("mm/filemap: Support VM_HUGEPAGE for file mappings")
Suggested-by: Matthew Wilcox <willy(a)infradead.org>
Signed-off-by: Yafang Shao <laoar.shao(a)gmail.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/readahead.c | 2 ++
1 file changed, 2 insertions(+)
--- a/mm/readahead.c~mm-readahead-fix-large-folio-support-in-async-readahead
+++ a/mm/readahead.c
@@ -385,6 +385,8 @@ static unsigned long get_next_ra_size(st
return 4 * cur;
if (cur <= max / 2)
return 2 * cur;
+ if (cur > max)
+ return cur;
return max;
}
_
Patches currently in -mm which might be from laoar.shao(a)gmail.com are
mm-readahead-fix-large-folio-support-in-async-readahead.patch
The patch titled
Subject: nommu: pass NULL argument to vma_iter_prealloc()
has been added to the -mm mm-hotfixes-unstable branch. Its filename is
nommu-pass-null-argument-to-vma_iter_prealloc.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patche…
This patch will later appear in the mm-hotfixes-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Hajime Tazaki <thehajime(a)gmail.com>
Subject: nommu: pass NULL argument to vma_iter_prealloc()
Date: Sat, 9 Nov 2024 07:28:34 +0900
When deleting a vma entry from the maple tree, NULL has to be passed to
vma_iter_prealloc() so that it can correctly calculate the internal
state of the tree, but a wrong argument (the vma itself) was passed
instead. As a result, nommu kernels crashed upon accessing a vma
iterator, for example when acct_collect() read the size of vma entries
after do_munmap().
This commit fixes the issue by passing the right argument to the
preallocation call.
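For context, the fixed pattern in delete_vma_from_mm() looks roughly
like this (a simplified sketch, not the complete function); the second
argument to vma_iter_prealloc() is the entry about to be stored, so
NULL tells the maple tree to size its preallocation for erasing the
range:

    VMA_ITERATOR(vmi, vma->vm_mm, vma->vm_start);

    vma_iter_config(&vmi, vma->vm_start, vma->vm_end);
    if (vma_iter_prealloc(&vmi, NULL))      /* NULL: range will be erased */
            return -ENOMEM;
    vma_iter_clear(&vmi);                   /* store NULL over the range  */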
Link: https://lkml.kernel.org/r/20241108222834.3625217-1-thehajime@gmail.com
Fixes: b5df09226450 ("mm: set up vma iterator for vma_iter_prealloc() calls")
Signed-off-by: Hajime Tazaki <thehajime(a)gmail.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett(a)Oracle.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/nommu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/mm/nommu.c~nommu-pass-null-argument-to-vma_iter_prealloc
+++ a/mm/nommu.c
@@ -573,7 +573,7 @@ static int delete_vma_from_mm(struct vm_
VMA_ITERATOR(vmi, vma->vm_mm, vma->vm_start);
vma_iter_config(&vmi, vma->vm_start, vma->vm_end);
- if (vma_iter_prealloc(&vmi, vma)) {
+ if (vma_iter_prealloc(&vmi, NULL)) {
pr_warn("Allocation of vma tree for process %d failed\n",
current->pid);
return -ENOMEM;
_
Patches currently in -mm which might be from thehajime(a)gmail.com are
nommu-pass-null-argument-to-vma_iter_prealloc.patch
Fixes file corruption issues when reading file contents via the ceph
client.
Call netfs_reset_subreq_iter() to align subreq->io_iter before
calling netfs_clear_unread() to clear the tail, as the subreq->io_iter
count and subreq->transferred might not be aligned after an incomplete
I/O that has the subreq's NETFS_SREQ_CLEAR_TAIL flag set.
Based on ee4cdf7b ("netfs: Speed up buffered reading"), which
introduced a fix for this issue in mainline.
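Conceptually, the realignment amounts to something like the following
(a sketch of what netfs_reset_subreq_iter() is expected to do, assuming
the iterator holds more bytes than remain to be transferred; not the
exact implementation):

    /* Realign subreq->io_iter with subreq->transferred so that
     * netfs_clear_unread() zeroes exactly the unread tail. */
    size_t remaining = subreq->len - subreq->transferred;
    size_t count = iov_iter_count(&subreq->io_iter);

    if (count != remaining)
            iov_iter_advance(&subreq->io_iter, count - remaining);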
Fixes: 92b6cc5d ("netfs: Add iov_iters to (sub)requests to describe various buffers")
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=219237
Signed-off-by: Christian Ebner <c.ebner(a)proxmox.com>
---
Sending this patch in an attempt to backport the fix introduced by
commit ee4cdf7b ("netfs: Speed up buffered reading"), which however
cannot be cherry-picked for older kernels, as that patch is not
independent of other commits and touches a lot of code unrelated to
the fix.
fs/netfs/io.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/fs/netfs/io.c b/fs/netfs/io.c
index d6ada4eba744..500119285346 100644
--- a/fs/netfs/io.c
+++ b/fs/netfs/io.c
@@ -528,6 +528,7 @@ void netfs_subreq_terminated(struct netfs_io_subrequest *subreq,
incomplete:
if (test_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags)) {
+ netfs_reset_subreq_iter(rreq, subreq);
netfs_clear_unread(subreq);
subreq->transferred = subreq->len;
goto complete;
--
2.39.5
From: Mark Brown <broonie(a)kernel.org>
[ Upstream commit 6098475d4cb48d821bdf453c61118c56e26294f0 ]
Currently we have a global spi_add_lock which we take when adding new
devices so that we can check that we're not trying to reuse a chip
select that's already controlled. This means that if the SPI device is
itself a SPI controller and triggers the instantiation of further SPI
devices, we trigger a deadlock as we try to register and instantiate
those devices while already holding the global spi_add_lock for the
parent controller. Since we only care about concurrency within a single
SPI bus, move the lock to be per-controller, avoiding the deadlock.
This can be easily triggered in the case of spi-mux.
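The recursive acquisition looks roughly like this (an illustrative call
chain, assuming a spi-mux device on the parent bus):

    spi_add_device(parent_spi_dev)
        mutex_lock(&spi_add_lock)                 /* global lock taken     */
        device_add()
            -> spi-mux probe
                spi_register_controller(child_ctlr)
                    spi_add_device(child_spi_dev)
                        mutex_lock(&spi_add_lock) /* deadlock: held above  */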
Reported-by: Uwe Kleine-König <u.kleine-koenig(a)pengutronix.de>
Signed-off-by: Mark Brown <broonie(a)kernel.org>
Signed-off-by: Hardik Gohil <hgohil(a)mvista.com>
---
This fix was not backported to v5.4 and v5.10.
Along with this fix, please also apply the following fix on top of it:
spi: fix use-after-free of the add_lock mutex
commit 6c53b45c71b4920b5e62f0ea8079a1da382b9434 upstream.
Commit 6098475d4cb4 ("spi: Fix deadlock when adding SPI controllers on
SPI buses") introduced a per-controller mutex. But mutex_unlock() of
said lock is called after the controller is already freed.
drivers/spi/spi.c | 15 +++++----------
include/linux/spi/spi.h | 3 +++
2 files changed, 8 insertions(+), 10 deletions(-)
diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
index 0f9410e..58f1947 100644
--- a/drivers/spi/spi.c
+++ b/drivers/spi/spi.c
@@ -472,12 +472,6 @@ static LIST_HEAD(spi_controller_list);
*/
static DEFINE_MUTEX(board_lock);
-/*
- * Prevents addition of devices with same chip select and
- * addition of devices below an unregistering controller.
- */
-static DEFINE_MUTEX(spi_add_lock);
-
/**
* spi_alloc_device - Allocate a new SPI device
* @ctlr: Controller to which device is connected
@@ -580,7 +574,7 @@ int spi_add_device(struct spi_device *spi)
* chipselect **BEFORE** we call setup(), else we'll trash
* its configuration. Lock against concurrent add() calls.
*/
- mutex_lock(&spi_add_lock);
+ mutex_lock(&ctlr->add_lock);
status = bus_for_each_dev(&spi_bus_type, NULL, spi, spi_dev_check);
if (status) {
@@ -624,7 +618,7 @@ int spi_add_device(struct spi_device *spi)
}
done:
- mutex_unlock(&spi_add_lock);
+ mutex_unlock(&ctlr->add_lock);
return status;
}
EXPORT_SYMBOL_GPL(spi_add_device);
@@ -2512,6 +2506,7 @@ int spi_register_controller(struct spi_controller *ctlr)
spin_lock_init(&ctlr->bus_lock_spinlock);
mutex_init(&ctlr->bus_lock_mutex);
mutex_init(&ctlr->io_mutex);
+ mutex_init(&ctlr->add_lock);
ctlr->bus_lock_flag = 0;
init_completion(&ctlr->xfer_completion);
if (!ctlr->max_dma_len)
@@ -2657,7 +2652,7 @@ void spi_unregister_controller(struct spi_controller *ctlr)
/* Prevent addition of new devices, unregister existing ones */
if (IS_ENABLED(CONFIG_SPI_DYNAMIC))
- mutex_lock(&spi_add_lock);
+ mutex_lock(&ctlr->add_lock);
device_for_each_child(&ctlr->dev, NULL, __unregister);
@@ -2688,7 +2683,7 @@ void spi_unregister_controller(struct spi_controller *ctlr)
mutex_unlock(&board_lock);
if (IS_ENABLED(CONFIG_SPI_DYNAMIC))
- mutex_unlock(&spi_add_lock);
+ mutex_unlock(&ctlr->add_lock);
}
EXPORT_SYMBOL_GPL(spi_unregister_controller);
diff --git a/include/linux/spi/spi.h b/include/linux/spi/spi.h
index ca39b33..1b9cb90 100644
--- a/include/linux/spi/spi.h
+++ b/include/linux/spi/spi.h
@@ -483,6 +483,9 @@ struct spi_controller {
/* I/O mutex */
struct mutex io_mutex;
+ /* Used to avoid adding the same CS twice */
+ struct mutex add_lock;
+
/* lock and mutex for SPI bus locking */
spinlock_t bus_lock_spinlock;
struct mutex bus_lock_mutex;
--
2.7.4
The code for detecting and updating the connector status in
cdn_dp_pd_event_work() has a number of problems.
- It does not acquire the locks required to call the detect helper and
update the connector status, namely struct drm_mode_config.connection_mutex
and struct drm_mode_config.mutex.
- It does not use drm_helper_probe_detect(), which handles the
details of locking and detection.
- It uses the connector's status field to detect a change to the
connector status. The epoch_counter field is the correct one to use, as
it signals a change even if the connector status value did not change.
Replace the code with a call to drm_connector_helper_hpd_irq_event(),
which fixes all these problems.
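For reference, the helper performs roughly the following steps on the
driver's behalf (a simplified sketch of the probe-helper logic, not a
verbatim copy):

    u64 old_epoch_counter;
    bool changed;

    mutex_lock(&dev->mode_config.mutex);
    old_epoch_counter = connector->epoch_counter;
    /* drm_helper_probe_detect() takes connection_mutex internally */
    drm_helper_probe_detect(connector, NULL, false);
    changed = connector->epoch_counter != old_epoch_counter;
    mutex_unlock(&dev->mode_config.mutex);

    if (changed)
            drm_kms_helper_connector_hotplug_event(connector);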
Signed-off-by: Thomas Zimmermann <tzimmermann(a)suse.de>
Fixes: 81632df69772 ("drm/rockchip: cdn-dp: do not use drm_helper_hpd_irq_event")
Cc: Chris Zhong <zyw(a)rock-chips.com>
Cc: Guenter Roeck <groeck(a)chromium.org>
Cc: Sandy Huang <hjc(a)rock-chips.com>
Cc: "Heiko Stübner" <heiko(a)sntech.de>
Cc: Andy Yan <andy.yan(a)rock-chips.com>
Cc: dri-devel(a)lists.freedesktop.org
Cc: linux-arm-kernel(a)lists.infradead.org
Cc: linux-rockchip(a)lists.infradead.org
Cc: <stable(a)vger.kernel.org> # v4.11+
---
drivers/gpu/drm/rockchip/cdn-dp-core.c | 9 +--------
1 file changed, 1 insertion(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/rockchip/cdn-dp-core.c b/drivers/gpu/drm/rockchip/cdn-dp-core.c
index b04538907f95..f576b1aa86d1 100644
--- a/drivers/gpu/drm/rockchip/cdn-dp-core.c
+++ b/drivers/gpu/drm/rockchip/cdn-dp-core.c
@@ -947,9 +947,6 @@ static void cdn_dp_pd_event_work(struct work_struct *work)
{
struct cdn_dp_device *dp = container_of(work, struct cdn_dp_device,
event_work);
- struct drm_connector *connector = &dp->connector;
- enum drm_connector_status old_status;
-
int ret;
mutex_lock(&dp->lock);
@@ -1009,11 +1006,7 @@ static void cdn_dp_pd_event_work(struct work_struct *work)
out:
mutex_unlock(&dp->lock);
-
- old_status = connector->status;
- connector->status = connector->funcs->detect(connector, false);
- if (old_status != connector->status)
- drm_kms_helper_hotplug_event(dp->drm_dev);
+ drm_connector_helper_hpd_irq_event(&dp->connector);
}
static int cdn_dp_pd_event(struct notifier_block *nb,
--
2.47.0
Hi,
6.11 already supports most functionality of AMD family 0x1a model 0x60,
but the amd-pmf driver doesn't load due to a missing device ID.
The device ID was added in 6.12 with:
commit 8ca8d07857c69 ("platform/x86/amd/pmf: Add SMU metrics table
support for 1Ah family 60h model")
Can this please be backported to 6.11.y to enable it more widely?
Thanks!