Hi,
this mainline patch 33121347fb1c359bd6e3e680b9f2c6ced5734a8 should be
applied to 5.15 as well.
Without it, loading of some modules fails if:
1. CONFIG_MODULE_UNLOAD=n
2. the architecture is aarch64 (maybe others as well)
3. KASLR is active
Without this patch, the .exit.text section is not relocated. When the
linker generates a relative 32-bit relocation (PREL32) and the module is
loaded far enough away from the default load address, this triggers a
relocation overflow like:
module algif_hash: overflow in relocation type 261 val ffff800010051c20
This happens to all modules that use BUG() in the exit section, or
whenever the compiler generates a jump table in the exit section.
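As an illustration only (a hypothetical demo module, not taken from the
report), an exit path containing a BUG() is enough to produce such a
relative 32-bit reference into .exit.text:

  #include <linux/module.h>
  #include <linux/bug.h>

  static int __init demo_init(void)
  {
          return 0;
  }

  static void __exit demo_exit(void)
  {
          /*
           * BUG() emits a bug-table entry that, with relative bug
           * pointers (as on arm64), refers back into .exit.text via a
           * 32-bit PC-relative (PREL32) relocation. If .exit.text is
           * not relocated under KASLR, this reference can overflow at
           * module load time.
           */
          BUG();
  }

  module_init(demo_init);
  module_exit(demo_exit);
  MODULE_LICENSE("GPL");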
Thanks,
Joerg
commit 581dd69830341d299b0c097fc366097ab497d679 upstream.
Device drivers may decide to not load firmware when probed to avoid
slowing down the boot process should the firmware filesystem not be
available yet. In this case, the firmware loading request may be done
when a device file associated with the driver is first accessed. The
credentials of the userspace process accessing the device file may be
used to validate access to the firmware files requested by the driver.
Ensure that the kernel assumes the responsibility of reading the
firmware.
This was observed on Android with a graphics driver loading its firmware
when the device file (e.g. /dev/mali0) was first opened by userspace
(i.e. surfaceflinger). The security context of surfaceflinger was then
used to validate access to the firmware file (e.g.
/vendor/firmware/mali.bin).
Previously, Android configurations were not setting up the
firmware_class.path command line argument and were relying on the
userspace fallback mechanism. In this case, the security context of the
userspace daemon (i.e. ueventd) was consistently used to read firmware
files. More Android devices are now found to set firmware_class.path
which gives the kernel the opportunity to read the firmware directly
(via kernel_read_file_from_path_initns). In this scenario, the current
process credentials were used, even if unrelated to the loading of the
firmware file.
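The fix applies the kernel's standard temporary credential override
pattern; stripped of the firmware-loader context, it reduces to this
minimal sketch (read_as_kernel is a hypothetical name):

  #include <linux/cred.h>
  #include <linux/errno.h>

  static int read_as_kernel(void)
  {
          struct cred *kern_cred;
          const struct cred *old_cred;

          /* Build a fresh set of kernel credentials (init_cred based). */
          kern_cred = prepare_kernel_cred(NULL);
          if (!kern_cred)
                  return -ENOMEM;

          /* Act as the kernel for the duration of the file access. */
          old_cred = override_creds(kern_cred);

          /* ... read the firmware file here ... */

          /* Restore the caller's credentials and drop our reference. */
          revert_creds(old_cred);
          put_cred(kern_cred);
          return 0;
  }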
Signed-off-by: Thiébaud Weksteen <tweek@google.com>
Cc: <stable@vger.kernel.org> # 5.4
Reviewed-by: Paul Moore <paul@paul-moore.com>
Acked-by: Luis Chamberlain <mcgrof@kernel.org>
Link: https://lore.kernel.org/r/20220502004952.3970800-1-tweek@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
drivers/base/firmware_loader/main.c | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
diff --git a/drivers/base/firmware_loader/main.c b/drivers/base/firmware_loader/main.c
index 4f6b76bd957e..12ab50d29548 100644
--- a/drivers/base/firmware_loader/main.c
+++ b/drivers/base/firmware_loader/main.c
@@ -761,6 +761,8 @@ _request_firmware(const struct firmware **firmware_p, const char *name,
enum fw_opt opt_flags)
{
struct firmware *fw = NULL;
+ struct cred *kern_cred = NULL;
+ const struct cred *old_cred;
int ret;
if (!firmware_p)
@@ -776,6 +778,18 @@ _request_firmware(const struct firmware **firmware_p, const char *name,
if (ret <= 0) /* error or already assigned */
goto out;
+ /*
+ * We are about to try to access the firmware file. Because we may have been
+ * called by a driver when serving an unrelated request from userland, we use
+ * the kernel credentials to read the file.
+ */
+ kern_cred = prepare_kernel_cred(NULL);
+ if (!kern_cred) {
+ ret = -ENOMEM;
+ goto out;
+ }
+ old_cred = override_creds(kern_cred);
+
ret = fw_get_filesystem_firmware(device, fw->priv, "", NULL);
#ifdef CONFIG_FW_LOADER_COMPRESS
if (ret == -ENOENT)
@@ -792,6 +806,9 @@ _request_firmware(const struct firmware **firmware_p, const char *name,
} else
ret = assign_fw(fw, device, opt_flags);
+ revert_creds(old_cred);
+ put_cred(kern_cred);
+
out:
if (ret < 0) {
fw_abort_batch_reqs(fw);
--
2.36.1.124.g0e6072fb45-goog
commit 47f753c1108e287edb3e27fad8a7511a9d55578e upstream.
Based on the DesignWare Ethernet QoS datasheet, the Split Header (SPH)
feature is not supported for IPv4 fragmented packets. This SPH
limitation causes ping failures when the packet size exceeds the MTU.
For example, once a basic ping packet is larger than the configured
MTU, the data inside the fragmented packet is lost, replaced by
zeros/corrupted values, and the ping fails.
So, disable the Split Header for Intel platforms.
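With the flag added below, any platform glue code can opt out of SPH the
same way; a minimal sketch (example_plat_setup is a hypothetical name):

  #include <linux/stmmac.h>

  static void example_plat_setup(struct plat_stmmacenet_data *plat)
  {
          /*
           * Keep SPH off even when the DMA capability register
           * (dma_cap.sphen) advertises support, matching the Intel
           * quirk introduced by this patch.
           */
          plat->sph_disable = 1;
  }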
v2: Add fixes tag in commit message.
Fixes: 67afd6d1cfdf ("net: stmmac: Add Split Header support and enable it in XGMAC cores")
Cc: <stable@vger.kernel.org> # 5.4.x
Suggested-by: Ong, Boon Leong <boon.leong.ong@intel.com>
Signed-off-by: Mohammad Athari Bin Ismail <mohammad.athari.ismail@intel.com>
Signed-off-by: Wong Vee Khee <vee.khee.wong@linux.intel.com>
Signed-off-by: Tan Tee Min <tee.min.tan@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 2 +-
drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c | 1 +
include/linux/stmmac.h | 1 +
3 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index 9cbc0179d24e..9931724c4727 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -4531,7 +4531,7 @@ int stmmac_dvr_probe(struct device *device,
dev_info(priv->device, "TSO feature enabled\n");
}
- if (priv->dma_cap.sphen) {
+ if (priv->dma_cap.sphen && !priv->plat->sph_disable) {
ndev->hw_features |= NETIF_F_GRO;
priv->sph = true;
dev_info(priv->device, "SPH feature enabled\n");
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
index 292045f4581f..d46e3795899f 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
@@ -119,6 +119,7 @@ static int intel_mgbe_common_data(struct pci_dev *pdev,
plat->has_gmac4 = 1;
plat->force_sf_dma_mode = 0;
plat->tso_en = 1;
+ plat->sph_disable = 1;
plat->rx_sched_algorithm = MTL_RX_ALGORITHM_SP;
diff --git a/include/linux/stmmac.h b/include/linux/stmmac.h
index dc60d03c4b60..0b35747c9837 100644
--- a/include/linux/stmmac.h
+++ b/include/linux/stmmac.h
@@ -179,5 +179,6 @@ struct plat_stmmacenet_data {
int mac_port_sel_speed;
bool en_tx_lpi_clockgating;
int has_xgmac;
+ bool sph_disable;
};
#endif
--
2.25.1
From: Ming Lei <ming.lei@redhat.com>
When merging one bio into a request, if both are discard IO and the
queue supports multi-range discard, we need to return
ELEVATOR_DISCARD_MERGE because neither the block core nor the related
drivers (nvme, virtio-blk) handle mixed discard IO merges (traditional
IO merge together with discard merge) well.
Fix the issue by returning ELEVATOR_DISCARD_MERGE in this situation,
so blk-mq and the drivers only need to handle multi-range discard.
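For reference, the check each elevator hook gains in the hunks below
reduces to this minimal sketch (classify_merge is a hypothetical name):

  #include <linux/blkdev.h>
  #include <linux/elevator.h>

  static enum elv_merge classify_merge(struct request *__rq)
  {
          /*
           * A multi-range discard must be reported as a discard merge
           * so blk-mq and the driver handle it, instead of being mixed
           * into a traditional front/back merge.
           */
          if (blk_discard_mergable(__rq))
                  return ELEVATOR_DISCARD_MERGE;
          return ELEVATOR_BACK_MERGE;
  }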
Reported-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Fixes: 2705dfb20947 ("block: fix discard request merge")
Link: https://lore.kernel.org/r/20210729034226.1591070-1-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
commit 866663b7b52d2 upstream.
Similar to commit 87aa69aa10b42 ("block: return ELEVATOR_DISCARD_MERGE if possible")
in the 5.10 kernel.
Conflicts:
block/blk-merge.c: function at a different place.
block/mq-deadline-main.c: not in 5.4, use mq-deadline.c instead.
Cc: <stable@vger.kernel.org> # 5.4.y
Signed-off-by: Gwendal Grignou <gwendal@chromium.org>
---
block/bfq-iosched.c | 3 +++
block/blk-merge.c | 15 ---------------
block/elevator.c | 3 +++
block/mq-deadline.c | 2 ++
include/linux/blkdev.h | 16 ++++++++++++++++
5 files changed, 24 insertions(+), 15 deletions(-)
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 1d443d17cf7c5..d46806182b051 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -2251,6 +2251,9 @@ static int bfq_request_merge(struct request_queue *q, struct request **req,
__rq = bfq_find_rq_fmerge(bfqd, bio, q);
if (__rq && elv_bio_merge_ok(__rq, bio)) {
*req = __rq;
+
+ if (blk_discard_mergable(__rq))
+ return ELEVATOR_DISCARD_MERGE;
return ELEVATOR_FRONT_MERGE;
}
diff --git a/block/blk-merge.c b/block/blk-merge.c
index a62692d135660..5219064cd72bb 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -721,21 +721,6 @@ static void blk_account_io_merge(struct request *req)
part_stat_unlock();
}
}
-/*
- * Two cases of handling DISCARD merge:
- * If max_discard_segments > 1, the driver takes every bio
- * as a range and send them to controller together. The ranges
- * needn't to be contiguous.
- * Otherwise, the bios/requests will be handled as same as
- * others which should be contiguous.
- */
-static inline bool blk_discard_mergable(struct request *req)
-{
- if (req_op(req) == REQ_OP_DISCARD &&
- queue_max_discard_segments(req->q) > 1)
- return true;
- return false;
-}
static enum elv_merge blk_try_req_merge(struct request *req,
struct request *next)
diff --git a/block/elevator.c b/block/elevator.c
index 78805c74ea8a4..3ba826230c578 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -337,6 +337,9 @@ enum elv_merge elv_merge(struct request_queue *q, struct request **req,
__rq = elv_rqhash_find(q, bio->bi_iter.bi_sector);
if (__rq && elv_bio_merge_ok(__rq, bio)) {
*req = __rq;
+
+ if (blk_discard_mergable(__rq))
+ return ELEVATOR_DISCARD_MERGE;
return ELEVATOR_BACK_MERGE;
}
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 19c6922e85f1b..6d6dda5cfffa3 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -452,6 +452,8 @@ static int dd_request_merge(struct request_queue *q, struct request **rq,
if (elv_bio_merge_ok(__rq, bio)) {
*rq = __rq;
+ if (blk_discard_mergable(__rq))
+ return ELEVATOR_DISCARD_MERGE;
return ELEVATOR_FRONT_MERGE;
}
}
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 8cc766743270f..308c2d8cdca19 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1409,6 +1409,22 @@ static inline int queue_limit_discard_alignment(struct queue_limits *lim, sector
return offset << SECTOR_SHIFT;
}
+/*
+ * Two cases of handling DISCARD merge:
+ * If max_discard_segments > 1, the driver takes every bio
+ * as a range and send them to controller together. The ranges
+ * needn't to be contiguous.
+ * Otherwise, the bios/requests will be handled as same as
+ * others which should be contiguous.
+ */
+static inline bool blk_discard_mergable(struct request *req)
+{
+ if (req_op(req) == REQ_OP_DISCARD &&
+ queue_max_discard_segments(req->q) > 1)
+ return true;
+ return false;
+}
+
static inline int bdev_discard_alignment(struct block_device *bdev)
{
struct request_queue *q = bdev_get_queue(bdev);
--
2.36.1.124.g0e6072fb45-goog