The quilt patch titled
Subject: mm: add a mapping_clear_large_folios helper
has been removed from the -mm tree. Its filename was
mm-add-a-mapping_clear_large_folios-helper.patch
This patch was dropped because it is obsolete
------------------------------------------------------
From: Christoph Hellwig <hch(a)lst.de>
Subject: mm: add a mapping_clear_large_folios helper
Date: Wed, 10 Jan 2024 10:21:08 +0100
Patch series "disable large folios for shmem file used by xfs xfile".
Darrick reported that the fairly new XFS xfile code blows up when large
folios are force-enabled for shmem. This series fixes this quickly by
disabling large folios for this particular shmem file for now, until the
issue can be fixed properly, which will be a lot more invasive.
This patch (of 2):
Users of shmem_kernel_file_setup might not be able to deal with large
folios (yet). Give them a way to disable large folio support on their
mapping.
Link: https://lkml.kernel.org/r/20240110092109.1950011-1-hch@lst.de
Link: https://lkml.kernel.org/r/20240110092109.1950011-2-hch@lst.de
Fixes: 3934e8ebb7cc ("xfs: create a big array data structure")
Signed-off-by: Christoph Hellwig <hch(a)lst.de>
Cc: Chandan Babu R <chandan.babu(a)oracle.com>
Cc: Christian König <christian.koenig(a)amd.com>
Cc: Daniel Vetter <daniel(a)ffwll.ch>
Cc: "Darrick J. Wong" <djwong(a)kernel.org>
Cc: Dave Airlie <airlied(a)gmail.com>
Cc: Dave Hansen <dave.hansen(a)linux.intel.com>
Cc: David Howells <dhowells(a)redhat.com>
Cc: Huang Rui <ray.huang(a)amd.com>
Cc: Hugh Dickins <hughd(a)google.com>
Cc: Jani Nikula <jani.nikula(a)linux.intel.com>
Cc: Jarkko Sakkinen <jarkko(a)kernel.org>
Cc: Joonas Lahtinen <joonas.lahtinen(a)linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst(a)linux.intel.com>
Cc: Matthew Wilcox (Oracle) <willy(a)infradead.org>
Cc: Maxime Ripard <mripard(a)kernel.org>
Cc: Rodrigo Vivi <rodrigo.vivi(a)intel.com>
Cc: Thomas Zimmermann <tzimmermann(a)suse.de>
Cc: Tvrtko Ursulin <tvrtko.ursulin(a)linux.intel.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
include/linux/pagemap.h | 14 ++++++++++++++
1 file changed, 14 insertions(+)
--- a/include/linux/pagemap.h~mm-add-a-mapping_clear_large_folios-helper
+++ a/include/linux/pagemap.h
@@ -360,6 +360,20 @@ static inline void mapping_set_large_fol
__set_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags);
}
+/**
+ * mapping_clear_large_folios() - Disable large folio support for a mapping
+ * @mapping: The mapping.
+ *
+ * This can be called to undo the effect of mapping_set_large_folios().
+ *
+ * Context: This should not be called while the inode is active as it
+ * is non-atomic.
+ */
+static inline void mapping_clear_large_folios(struct address_space *mapping)
+{
+ __clear_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags);
+}
+
/*
* Large folio support currently depends on THP. These dependencies are
* being worked on but are not yet fixed.
_
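As a usage note (not part of the patch above), here is a minimal sketch of
how a kernel-internal user of shmem_kernel_file_setup() might opt out of
large folios with the new helper; the function name, file name, and size
handling below are made up for illustration:

	#include <linux/err.h>
	#include <linux/fs.h>
	#include <linux/pagemap.h>
	#include <linux/shmem_fs.h>

	/* Hypothetical helper: create an internal shmem file without large folios. */
	static struct file *example_create_small_folio_shmem(loff_t size)
	{
		struct file *file;

		file = shmem_kernel_file_setup("example-internal-file", size, 0);
		if (IS_ERR(file))
			return file;

		/*
		 * This user cannot handle large folios (yet).  Clear the flag
		 * right after creation, before the inode is in active use, as
		 * the kerneldoc of mapping_clear_large_folios() requires.
		 */
		mapping_clear_large_folios(file_inode(file)->i_mapping);

		return file;
	}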
Patches currently in -mm which might be from hch(a)lst.de are
xfs-disable-large-folio-support-in-xfile_create.patch
Hi Greg, Sasha,
Please consider applying commit e08ff622c91a to 6.6.y. It requires the
following chain:
828176d037e2 ("rust: arc: add explicit `drop()` around `Box::from_raw()`")
ae6df65dabc3 ("rust: upgrade to Rust 1.72.1")
c61bcc278b19 ("rust: task: remove redundant explicit link")
a53d8cdd5a0a ("rust: print: use explicit link in documentation")
e08ff622c91a ("rust: upgrade to Rust 1.73.0")
which applies cleanly to 6.6.y. This upgrades the Rust compiler
version from 1.71.1 to 1.73.0 (2 version upgrades + 3 prerequisites
for the upgrades), fixing a couple of issues with the Rust compiler
version currently used in 6.6.y. In particular:
- A build error with `CONFIG_RUST_DEBUG_ASSERTIONS` enabled
(unexpected generation of an `.eh_frame` section). This is solved by
applying the patches up to ae6df65dabc3.
- A developer-only Make target error (building `.rsi` single-target
files, i.e. the equivalent of requesting a preprocessed file in C).
This is solved by applying all of them.
Thanks!
Cheers,
Miguel
If the directory passed to the '.. kernel-feat::' directive does not
exist or the get_feat.pl script does not find any files to extract
features from, Sphinx will report the following error:
Sphinx parallel build error:
UnboundLocalError: local variable 'fname' referenced before assignment
make[2]: *** [Documentation/Makefile:102: htmldocs] Error 2
This is due to how I changed the script in c48a7c44a1d0 ("docs:
kernel_feat.py: fix potential command injection"). Before that, the
filename passed along to self.nestedParse() in this case was weirdly
just the whole get_feat.pl invocation.
We can fix it by doing what kernel_abi.py does -- just pass
self.arguments[0] as 'fname'.
Fixes: c48a7c44a1d0 ("docs: kernel_feat.py: fix potential command injection")
Cc: Justin Forbes <jforbes(a)fedoraproject.org>
Cc: Salvatore Bonaccorso <carnil(a)debian.org>
Cc: Jani Nikula <jani.nikula(a)intel.com>
Cc: Mauro Carvalho Chehab <mchehab(a)kernel.org>
Cc: stable(a)vger.kernel.org
Signed-off-by: Vegard Nossum <vegard.nossum(a)oracle.com>
---
Documentation/sphinx/kernel_feat.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/Documentation/sphinx/kernel_feat.py b/Documentation/sphinx/kernel_feat.py
index b9df61eb4501..03ace5f01b5c 100644
--- a/Documentation/sphinx/kernel_feat.py
+++ b/Documentation/sphinx/kernel_feat.py
@@ -109,7 +109,7 @@ class KernelFeat(Directive):
else:
out_lines += line + "\n"
- nodeList = self.nestedParse(out_lines, fname)
+ nodeList = self.nestedParse(out_lines, self.arguments[0])
return nodeList
def nestedParse(self, lines, fname):
--
2.34.1
Commit d5e01266e7f5 ("leds: trigger: netdev: add additional specific link
speed mode"), among its various changes, reworked the way the LINKUP mode
(introduced in commit cee4bd16c319 ("leds: trigger: netdev: Recheck
NETDEV_LED_MODE_LINKUP on dev rename")) is set and moved it to a generic
function.
This changed the logic: the previous implementation used the dev from the
trigger event to check whether the carrier was OK, but the new
implementation with the generic function uses the dev stored in
trigger_data instead.
This is problematic and can cause a kernel panic, because the dev in
trigger_data still references the old device: the new one (passed from
the trigger event) has yet to be held and saved in the trigger_data
struct (which is done in the NETDEV_REGISTER case).
When get_device_state() is then called, an invalid net_dev is used and
this causes a kernel panic.
To handle this correctly, move the call to get_device_state() to after
the new net_dev has been set in trigger_data (in the NETDEV_REGISTER
case), so the new dev is parsed correctly.
Fixes: d5e01266e7f5 ("leds: trigger: netdev: add additional specific link speed mode")
Cc: stable(a)vger.kernel.org
Signed-off-by: Christian Marangi <ansuelsmth(a)gmail.com>
---
drivers/leds/trigger/ledtrig-netdev.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/leds/trigger/ledtrig-netdev.c b/drivers/leds/trigger/ledtrig-netdev.c
index 8e5475819590..df1b1d8468e6 100644
--- a/drivers/leds/trigger/ledtrig-netdev.c
+++ b/drivers/leds/trigger/ledtrig-netdev.c
@@ -504,12 +504,12 @@ static int netdev_trig_notify(struct notifier_block *nb,
trigger_data->duplex = DUPLEX_UNKNOWN;
switch (evt) {
case NETDEV_CHANGENAME:
- get_device_state(trigger_data);
- fallthrough;
case NETDEV_REGISTER:
dev_put(trigger_data->net_dev);
dev_hold(dev);
trigger_data->net_dev = dev;
+ if (evt == NETDEV_CHANGENAME)
+ get_device_state(trigger_data);
break;
case NETDEV_UNREGISTER:
dev_put(trigger_data->net_dev);
--
2.43.0
When we added mount_setattr(), I added additional checks compared to the
legacy do_reconfigure_mnt() and do_change_type() helpers used by regular
mount(2): if the mount has a parent, verify that the caller and the mount
namespace the mount is attached to match; if not, make sure that it's an
anonymous mount.
The real rootfs falls into neither category. It is not an anonymous
mount, because it is obviously attached to the initial mount namespace,
but it also obviously doesn't have a parent mount. That means legacy
mount(2) allows changing mount properties on the real rootfs while
mount_setattr(2) blocks this. I never thought much about this, but of
course someone on this planet of earth changes properties on the real
rootfs, as can be seen in [1].
Since util-linux switched to the new mount API in 2.39 not so long ago,
it also relies on mount_setattr(), and that surfaced this issue when
Fedora 39 switched to it. Fix this.
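For illustration only (not part of this patch), a rough userspace sketch
of the kind of mount_setattr(2) call on the real rootfs that this change
allows again, roughly what a flag-only remount of "/" through the new
mount API boils down to. It needs root, uses the raw syscall because older
glibc has no wrapper, and assumes headers new enough (>= 5.12) to provide
__NR_mount_setattr and struct mount_attr:

	#include <fcntl.h>          /* AT_FDCWD */
	#include <linux/mount.h>    /* struct mount_attr, MOUNT_ATTR_NOSUID */
	#include <stdio.h>
	#include <string.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	int main(void)
	{
		struct mount_attr attr;

		memset(&attr, 0, sizeof(attr));
		attr.attr_set = MOUNT_ATTR_NOSUID;

		/*
		 * "/" is the real rootfs: attached to the initial mount
		 * namespace but without a parent mount.  Before this fix,
		 * mount_setattr(2) rejected it even though legacy mount(2)
		 * allowed the equivalent remount.
		 */
		if (syscall(__NR_mount_setattr, AT_FDCWD, "/", 0,
			    &attr, sizeof(attr)) < 0) {
			perror("mount_setattr");
			return 1;
		}

		return 0;
	}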
Link: https://bugzilla.redhat.com/show_bug.cgi?id=2256843
Reported-by: Karel Zak <kzak(a)redhat.com>
Cc: stable(a)vger.kernel.org # v5.12+
Signed-off-by: Christian Brauner <brauner(a)kernel.org>
---
fs/namespace.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/fs/namespace.c b/fs/namespace.c
index 437f60e96d40..fb0286920bce 100644
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -4472,10 +4472,15 @@ static int do_mount_setattr(struct path *path, struct mount_kattr *kattr)
/*
* If this is an attached mount make sure it's located in the callers
* mount namespace. If it's not don't let the caller interact with it.
- * If this is a detached mount make sure it has an anonymous mount
- * namespace attached to it, i.e. we've created it via OPEN_TREE_CLONE.
+ *
+ * If this mount doesn't have a parent it's most often simply a
+ * detached mount with an anonymous mount namespace. IOW, something
+ * that's simply not attached yet. But there are apparently also users
+ * that do change mount properties on the rootfs itself. That obviously
+ * neither has a parent nor is it a detached mount so we cannot
+ * unconditionally check for detached mounts.
*/
- if (!(mnt_has_parent(mnt) ? check_mnt(mnt) : is_anon_ns(mnt->mnt_ns)))
+ if (mnt_has_parent(mnt) && !check_mnt(mnt))
goto out;
/*
---
base-commit: 2a42e144dd0b62eaf79148394ab057145afbc3c5
change-id: 20240206-vfs-mount-rootfs-70aff2e3956d
Syzkaller reports "memory leak in cpu_map_update_elem" in the 5.10 stable
release. The problem has been fixed by the following patches, which can be
applied cleanly to the 5.10 branch.
Found by Linux Verification Center (linuxtesting.org) with Syzkaller.
Symptom:
In case of a bad cable connection (e.g. dirty optics) a fast sequence of
network DOWN-UP-DOWN-UP can happen. UP triggers recovery of the qeth
interface. In case of a second DOWN while recovery is still ongoing, it
can happen that the IP address of a Layer 3 qeth interface is lost and
will not be recovered by the second UP.
Problem:
When registration of IP addresses with Layer 3 qeth devices fails (e.g.
because of a bad address format), the respective IP address is deleted
from its hash table in the driver. If registration fails because of an
ENETDOWN condition, the address should stay in the hash table so that a
subsequent recovery can restore it.
3caa4af834df ("qeth: keep ip-address after LAN_OFFLINE failure")
fixes this for registration failures during normal operation, but not
during recovery.
Solution:
Keep the L3 IP address in case of ENETDOWN in qeth_l3_recover_ip(). For
consistency with qeth_l3_add_ip(), we also keep it in case of EADDRINUSE,
i.e. when for some reason the card already/still has this address
registered.
Fixes: 4a71df50047f ("qeth: new qeth device driver")
Cc: stable(a)vger.kernel.org
Signed-off-by: Alexandra Winter <wintera(a)linux.ibm.com>
---
drivers/s390/net/qeth_l3_main.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c
index b92a32b4b114..04c64ce0a1ca 100644
--- a/drivers/s390/net/qeth_l3_main.c
+++ b/drivers/s390/net/qeth_l3_main.c
@@ -255,9 +255,10 @@ static void qeth_l3_clear_ip_htable(struct qeth_card *card, int recover)
if (!recover) {
hash_del(&addr->hnode);
kfree(addr);
- continue;
+ } else {
+ /* prepare for recovery */
+ addr->disp_flag = QETH_DISP_ADDR_ADD;
}
- addr->disp_flag = QETH_DISP_ADDR_ADD;
}
mutex_unlock(&card->ip_lock);
@@ -278,9 +279,11 @@ static void qeth_l3_recover_ip(struct qeth_card *card)
if (addr->disp_flag == QETH_DISP_ADDR_ADD) {
rc = qeth_l3_register_addr_entry(card, addr);
- if (!rc) {
+ if (!rc || rc == -EADDRINUSE || rc == -ENETDOWN) {
+ /* keep it in the records */
addr->disp_flag = QETH_DISP_ADDR_DO_NOTHING;
} else {
+ /* bad address */
hash_del(&addr->hnode);
kfree(addr);
}
--
2.40.1
RHEL people reported some errors when compiling rtla and rv with
clang. The command line used to compile the tools is:
$ make HOSTCC=clang CC=clang LLVM_IAS=1
The first problem is two unsupported flags passed to the compiler:
-ffat-lto-objects and -Wno-maybe-uninitialized. They are removed
if the compiler is clang.
Also, the clang linker does not automatically recognize the
-flto=auto option used at compilation time, so it is explicitly
set.
With the compiler working, it starts pointing out some warnings
and errors about uninitialized variables, variable sizes, and an
unused function. These problems are also fixed.
Daniel Bristot de Oliveira (6):
tools/rtla: Fix Makefile compiler options for clang
tools/rtla: Fix uninitialized bucket/data->bucket_size warning
tools/rtla: Fix clang warning about mount_point var size
tools/rtla: Remove unused sched_getattr() function
tools/rv: Fix Makefile compiler options for clang
tools/rv: Fix curr_reactor uninitialized variable
tools/tracing/rtla/Makefile | 7 ++++++-
tools/tracing/rtla/src/osnoise_hist.c | 3 +--
tools/tracing/rtla/src/timerlat_hist.c | 3 +--
tools/tracing/rtla/src/utils.c | 8 +-------
tools/verification/rv/Makefile | 7 ++++++-
tools/verification/rv/src/in_kernel.c | 2 +-
6 files changed, 16 insertions(+), 14 deletions(-)
--
2.43.0