Hello,
New build issue found on stable-rc/linux-6.12.y:
---
./include/linux/blkdev.h:1668:24: error: field ‘req_list’ has incomplete type in init/main.o (init/main.c)
[logspec:kbuild,kbuild.compiler.error]
---
- dashboard: https://d.kernelci.org/i/maestro:cd168bc9c0a1dd7790c10e1a2ad1b0bac8ae366a
- giturl: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
- commit HEAD: 98d0e5b8e62e87e5cdc3c131d5c8660e17e487a1
Log excerpt:
=====================================================
In file included from init/main.c:85:
./include/linux/blkdev.h:1668:24: error: field ‘req_list’ has incomplete type
1668 | struct rq_list req_list;
| ^~~~~~~~
AR drivers/clk/imx/built-in.a
AR drivers/clk/ingenic/built-in.a
=====================================================
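For context, a minimal standalone repro of this error class (illustrative only, not the kernel code): a struct can embed another struct by value only when the full definition of that type is visible, so with just a forward declaration the compiler cannot size the field and emits exactly this diagnostic.

struct rq_list;				/* forward declaration: type is incomplete */

struct embeds_by_value {
	struct rq_list req_list;	/* error: field ‘req_list’ has incomplete type */
};

struct embeds_by_pointer {
	struct rq_list *req_list;	/* fine: a pointer's size is always known */
};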
# Builds where the incident occurred:
## tinyconfig on (i386):
- compiler: gcc-12
- dashboard: https://d.kernelci.org/build/maestro:6807b01843948caad94dd153
## tinyconfig+kselftest on (x86_64):
- compiler: gcc-12
- dashboard: https://d.kernelci.org/build/maestro:6807b04243948caad94dd172
#kernelci issue maestro:cd168bc9c0a1dd7790c10e1a2ad1b0bac8ae366a
Reported-by: kernelci.org bot <bot(a)kernelci.org>
--
This is an experimental report format. Please send in your feedback!
Talk to us at kernelci(a)lists.linux.dev
Made with love by the KernelCI team - https://kernelci.org
From: P Praneesh <quic_ppranees(a)quicinc.com>
[ Upstream commit 63fdc4509bcf483e79548de6bc08bf3c8e504bb3 ]
Currently, ath12k_dp_mon_srng_process uses ath12k_hal_srng_src_get_next_entry
to fetch the next entry from the destination ring. This is incorrect because
ath12k_hal_srng_src_get_next_entry is intended for source rings, not destination
rings. This leads to invalid entry fetches, causing potential data corruption or
crashes due to accessing incorrect memory locations. This happens because the
source ring and destination ring have different handling mechanisms, and using
the wrong function results in incorrect pointer arithmetic and ring management.
To fix this issue, replace the call to ath12k_hal_srng_src_get_next_entry with
ath12k_hal_srng_dst_get_next_entry in ath12k_dp_mon_srng_process. This ensures
that the correct function is used for fetching entries from the destination
ring, preventing invalid memory accesses.
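(Illustrative sketch with invented names, not the actual ath12k HAL code: source and destination rings advance different software indices, so applying the src accessor to a destination ring walks the wrong index, as described above.)

#include <linux/types.h>

struct demo_srng {
	u32 *ring_base;
	u32 entry_size;			/* entry size in 32-bit words */
	u32 num_entries;
	u32 sw_head;			/* producer index, used on source rings */
	u32 sw_tail;			/* consumer index, used on destination rings */
};

static u32 *demo_srng_src_get_next(struct demo_srng *srng)
{
	u32 *desc = srng->ring_base + srng->sw_head * srng->entry_size;

	srng->sw_head = (srng->sw_head + 1) % srng->num_entries;
	return desc;
}

static u32 *demo_srng_dst_get_next(struct demo_srng *srng)
{
	u32 *desc = srng->ring_base + srng->sw_tail * srng->entry_size;

	srng->sw_tail = (srng->sw_tail + 1) % srng->num_entries;
	return desc;
}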
Tested-on: QCN9274 hw2.0 PCI WLAN.WBE.1.3.1-00173-QCAHKSWPL_SILICONZ-1
Tested-on: WCN7850 hw2.0 WLAN.HMT.1.0.c5-00481-QCAHMTSWPL_V1.0_V2.0_SILICONZ-3
Signed-off-by: P Praneesh <quic_ppranees(a)quicinc.com>
Link: https://patch.msgid.link/20241223060132.3506372-7-quic_ppranees@quicinc.com
Signed-off-by: Jeff Johnson <jeff.johnson(a)oss.qualcomm.com>
Signed-off-by: Alexander Tsoy <alexander(a)tsoy.me>
---
drivers/net/wireless/ath/ath12k/dp_mon.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/wireless/ath/ath12k/dp_mon.c b/drivers/net/wireless/ath/ath12k/dp_mon.c
index 5c6749bc4039..4c98b9de1e58 100644
--- a/drivers/net/wireless/ath/ath12k/dp_mon.c
+++ b/drivers/net/wireless/ath/ath12k/dp_mon.c
@@ -2118,7 +2118,7 @@ int ath12k_dp_mon_srng_process(struct ath12k *ar, int mac_id, int *budget,
dest_idx = 0;
move_next:
ath12k_dp_mon_buf_replenish(ab, buf_ring, 1);
- ath12k_hal_srng_src_get_next_entry(ab, srng);
+ ath12k_hal_srng_dst_get_next_entry(ab, srng);
num_buffs_reaped++;
}
--
2.49.0
From: P Praneesh <quic_ppranees(a)quicinc.com>
[ Upstream commit 63fdc4509bcf483e79548de6bc08bf3c8e504bb3 ]
Currently, ath12k_dp_mon_srng_process uses ath12k_hal_srng_src_get_next_entry
to fetch the next entry from the destination ring. This is incorrect because
ath12k_hal_srng_src_get_next_entry is intended for source rings, not destination
rings. This leads to invalid entry fetches, causing potential data corruption or
crashes due to accessing incorrect memory locations. This happens because the
source ring and destination ring have different handling mechanisms, and using
the wrong function results in incorrect pointer arithmetic and ring management.
To fix this issue, replace the call to ath12k_hal_srng_src_get_next_entry with
ath12k_hal_srng_dst_get_next_entry in ath12k_dp_mon_srng_process. This ensures
that the correct function is used for fetching entries from the destination
ring, preventing invalid memory accesses.
Tested-on: QCN9274 hw2.0 PCI WLAN.WBE.1.3.1-00173-QCAHKSWPL_SILICONZ-1
Tested-on: WCN7850 hw2.0 WLAN.HMT.1.0.c5-00481-QCAHMTSWPL_V1.0_V2.0_SILICONZ-3
Signed-off-by: P Praneesh <quic_ppranees(a)quicinc.com>
Link: https://patch.msgid.link/20241223060132.3506372-7-quic_ppranees@quicinc.com
Signed-off-by: Jeff Johnson <jeff.johnson(a)oss.qualcomm.com>
Signed-off-by: Alexander Tsoy <alexander(a)tsoy.me>
---
drivers/net/wireless/ath/ath12k/dp_mon.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/wireless/ath/ath12k/dp_mon.c b/drivers/net/wireless/ath/ath12k/dp_mon.c
index 8005d30a4dbe..b952e79179d0 100644
--- a/drivers/net/wireless/ath/ath12k/dp_mon.c
+++ b/drivers/net/wireless/ath/ath12k/dp_mon.c
@@ -2054,7 +2054,7 @@ int ath12k_dp_mon_srng_process(struct ath12k *ar, int mac_id, int *budget,
dest_idx = 0;
move_next:
ath12k_dp_mon_buf_replenish(ab, buf_ring, 1);
- ath12k_hal_srng_src_get_next_entry(ab, srng);
+ ath12k_hal_srng_dst_get_next_entry(ab, srng);
num_buffs_reaped++;
}
--
2.49.0
Recently _pgd_alloc() was switched from using __get_free_pages() to
pagetable_alloc_noprof(), which might return a compound page in case
the allocation order is larger than 0.
On x86 this will be the case if CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
is set, even if PTI has been disabled at runtime.
When running as a Xen PV guest (this will always disable PTI), using
a compound page for a PGD will result in VM_BUG_ON_PGFLAGS being
triggered when the Xen code tries to pin the PGD.
Fix the Xen issue, and at the same time avoid the unneeded 8k allocation for
a PGD when PTI is disabled, by replacing PGD_ALLOCATION_ORDER with an
inline helper that returns the required order for PGD allocations.
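(A hedged illustration of the failure mode, not code from this patch: an order > 0 allocation made with __GFP_COMP yields a compound page, for which PageCompound() is true, and page-flag helpers whose policy forbids compound pages fire VM_BUG_ON_PGFLAGS() on such a page.)

#include <linux/gfp.h>
#include <linux/page-flags.h>

/* Illustrative only: shows when an allocation becomes a compound page. */
static bool demo_alloc_is_compound(gfp_t gfp, unsigned int order)
{
	struct page *page = alloc_pages(gfp | __GFP_COMP, order);
	bool compound;

	if (!page)
		return false;
	compound = PageCompound(page);	/* true for any order > 0 here */
	__free_pages(page, order);
	return compound;
}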
Reported-by: Petr Vaněk <arkamar(a)atlas.cz>
Fixes: a9b3c355c2e6 ("asm-generic: pgalloc: provide generic __pgd_{alloc,free}")
Cc: stable(a)vger.kernel.org
Signed-off-by: Juergen Gross <jgross(a)suse.com>
---
V2:
- use pgd_allocation_order() instead of PGD_ALLOCATION_ORDER (Dave Hansen)
---
arch/x86/include/asm/pgalloc.h | 19 +++++++++++--------
arch/x86/kernel/machine_kexec_32.c | 4 ++--
arch/x86/mm/pgtable.c | 4 ++--
arch/x86/platform/efi/efi_64.c | 4 ++--
4 files changed, 17 insertions(+), 14 deletions(-)
diff --git a/arch/x86/include/asm/pgalloc.h b/arch/x86/include/asm/pgalloc.h
index a33147520044..c88691b15f3c 100644
--- a/arch/x86/include/asm/pgalloc.h
+++ b/arch/x86/include/asm/pgalloc.h
@@ -6,6 +6,8 @@
#include <linux/mm.h> /* for struct page */
#include <linux/pagemap.h>
+#include <asm/cpufeature.h>
+
#define __HAVE_ARCH_PTE_ALLOC_ONE
#define __HAVE_ARCH_PGD_FREE
#include <asm-generic/pgalloc.h>
@@ -29,16 +31,17 @@ static inline void paravirt_release_pud(unsigned long pfn) {}
static inline void paravirt_release_p4d(unsigned long pfn) {}
#endif
-#ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
/*
- * Instead of one PGD, we acquire two PGDs. Being order-1, it is
- * both 8k in size and 8k-aligned. That lets us just flip bit 12
- * in a pointer to swap between the two 4k halves.
+ * In case of Page Table Isolation active, we acquire two PGDs instead of one.
+ * Being order-1, it is both 8k in size and 8k-aligned. That lets us just
+ * flip bit 12 in a pointer to swap between the two 4k halves.
*/
-#define PGD_ALLOCATION_ORDER 1
-#else
-#define PGD_ALLOCATION_ORDER 0
-#endif
+static inline unsigned int pgd_allocation_order(void)
+{
+ if (cpu_feature_enabled(X86_FEATURE_PTI))
+ return 1;
+ return 0;
+}
/*
* Allocate and free page tables.
diff --git a/arch/x86/kernel/machine_kexec_32.c b/arch/x86/kernel/machine_kexec_32.c
index 80265162aeff..1f325304c4a8 100644
--- a/arch/x86/kernel/machine_kexec_32.c
+++ b/arch/x86/kernel/machine_kexec_32.c
@@ -42,7 +42,7 @@ static void load_segments(void)
static void machine_kexec_free_page_tables(struct kimage *image)
{
- free_pages((unsigned long)image->arch.pgd, PGD_ALLOCATION_ORDER);
+ free_pages((unsigned long)image->arch.pgd, pgd_allocation_order());
image->arch.pgd = NULL;
#ifdef CONFIG_X86_PAE
free_page((unsigned long)image->arch.pmd0);
@@ -59,7 +59,7 @@ static void machine_kexec_free_page_tables(struct kimage *image)
static int machine_kexec_alloc_page_tables(struct kimage *image)
{
image->arch.pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
- PGD_ALLOCATION_ORDER);
+ pgd_allocation_order());
#ifdef CONFIG_X86_PAE
image->arch.pmd0 = (pmd_t *)get_zeroed_page(GFP_KERNEL);
image->arch.pmd1 = (pmd_t *)get_zeroed_page(GFP_KERNEL);
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index a05fcddfc811..f7ae44d3dd9e 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -360,7 +360,7 @@ static inline pgd_t *_pgd_alloc(struct mm_struct *mm)
* We allocate one page for pgd.
*/
if (!SHARED_KERNEL_PMD)
- return __pgd_alloc(mm, PGD_ALLOCATION_ORDER);
+ return __pgd_alloc(mm, pgd_allocation_order());
/*
* Now PAE kernel is not running as a Xen domain. We can allocate
@@ -380,7 +380,7 @@ static inline void _pgd_free(struct mm_struct *mm, pgd_t *pgd)
static inline pgd_t *_pgd_alloc(struct mm_struct *mm)
{
- return __pgd_alloc(mm, PGD_ALLOCATION_ORDER);
+ return __pgd_alloc(mm, pgd_allocation_order());
}
static inline void _pgd_free(struct mm_struct *mm, pgd_t *pgd)
diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c
index ac57259a432b..a4b4ebd41b8f 100644
--- a/arch/x86/platform/efi/efi_64.c
+++ b/arch/x86/platform/efi/efi_64.c
@@ -73,7 +73,7 @@ int __init efi_alloc_page_tables(void)
gfp_t gfp_mask;
gfp_mask = GFP_KERNEL | __GFP_ZERO;
- efi_pgd = (pgd_t *)__get_free_pages(gfp_mask, PGD_ALLOCATION_ORDER);
+ efi_pgd = (pgd_t *)__get_free_pages(gfp_mask, pgd_allocation_order());
if (!efi_pgd)
goto fail;
@@ -96,7 +96,7 @@ int __init efi_alloc_page_tables(void)
if (pgtable_l5_enabled())
free_page((unsigned long)pgd_page_vaddr(*pgd));
free_pgd:
- free_pages((unsigned long)efi_pgd, PGD_ALLOCATION_ORDER);
+ free_pages((unsigned long)efi_pgd, pgd_allocation_order());
fail:
return -ENOMEM;
}
--
2.43.0
From: Peter Ujfalusi <peter.ujfalusi(a)ti.com>
Like paRAM slots, channels could be used by other cores, and in that case
we need to make sure that the driver does not alter these channels.
Handle the generic dma-channel-mask property to mark, in a bitmap, the
channels which cannot be used by Linux, and convert the legacy rsv_chans
information if it is provided via platform_data.
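(A minimal sketch of the reservation scheme, mirroring the logic in the patch below; the helper name is ours: every channel starts out usable, then each reserved {offset, count} range is cleared, so a cleared bit means the channel belongs to the DSP and Linux must not touch it.)

#include <linux/bitmap.h>
#include <linux/types.h>

static void demo_mark_reserved_channels(unsigned long *mask,
					unsigned int num_channels,
					const s16 (*rsv_chans)[2])
{
	int i;

	bitmap_fill(mask, num_channels);	/* all channels usable at first */

	/* rsv_chans entries are {channel offset, count}, terminated by -1 */
	for (i = 0; rsv_chans && rsv_chans[i][0] != -1; i++)
		bitmap_clear(mask, rsv_chans[i][0], rsv_chans[i][1]);
}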
Signed-off-by: Peter Ujfalusi <peter.ujfalusi(a)ti.com>
Link: https://lore.kernel.org/r/20191025073056.25450-4-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul(a)kernel.org>
Signed-off-by: Hardik Gohil <hgohil(a)mvista.com>
---
The patch [dmaengine: ti: edma: Add some null pointer checks to the edma_probe], the fix for CVE-2024-26771, needs to be backported to the v5.4.y kernel.
patch 2/3
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?h=…
patch 3/3
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?h=…
Patches 2 and 3 apply cleanly to v5.4.y; the build test was successful.
drivers/dma/ti/edma.c | 59 ++++++++++++++++++++++++++++++++++++++-----
1 file changed, 53 insertions(+), 6 deletions(-)
diff --git a/drivers/dma/ti/edma.c b/drivers/dma/ti/edma.c
index 01089e5c565f..47423bbd7bc7 100644
--- a/drivers/dma/ti/edma.c
+++ b/drivers/dma/ti/edma.c
@@ -259,6 +259,13 @@ struct edma_cc {
*/
unsigned long *slot_inuse;
+ /*
+ * For tracking reserved channels used by DSP.
+ * If the bit is cleared, the channel is allocated to be used by DSP
+ * and Linux must not touch it.
+ */
+ unsigned long *channels_mask;
+
struct dma_device dma_slave;
struct dma_device *dma_memcpy;
struct edma_chan *slave_chans;
@@ -715,6 +722,12 @@ static int edma_alloc_channel(struct edma_chan *echan,
struct edma_cc *ecc = echan->ecc;
int channel = EDMA_CHAN_SLOT(echan->ch_num);
+ if (!test_bit(echan->ch_num, ecc->channels_mask)) {
+ dev_err(ecc->dev, "Channel%d is reserved, can not be used!\n",
+ echan->ch_num);
+ return -EINVAL;
+ }
+
/* ensure access through shadow region 0 */
edma_or_array2(ecc, EDMA_DRAE, 0, EDMA_REG_ARRAY_INDEX(channel),
EDMA_CHANNEL_BIT(channel));
@@ -2249,7 +2262,7 @@ static int edma_probe(struct platform_device *pdev)
struct edma_soc_info *info = pdev->dev.platform_data;
s8 (*queue_priority_mapping)[2];
int i, off;
- const s16 (*rsv_slots)[2];
+ const s16 (*reserved)[2];
const s16 (*xbar_chans)[2];
int irq;
char *irq_name;
@@ -2330,15 +2343,32 @@ static int edma_probe(struct platform_device *pdev)
if (!ecc->slot_inuse)
return -ENOMEM;
+ ecc->channels_mask = devm_kcalloc(dev,
+ BITS_TO_LONGS(ecc->num_channels),
+ sizeof(unsigned long), GFP_KERNEL);
+ if (!ecc->channels_mask)
+ return -ENOMEM;
+
+ /* Mark all channels available initially */
+ bitmap_fill(ecc->channels_mask, ecc->num_channels);
+
ecc->default_queue = info->default_queue;
if (info->rsv) {
/* Set the reserved slots in inuse list */
- rsv_slots = info->rsv->rsv_slots;
- if (rsv_slots) {
- for (i = 0; rsv_slots[i][0] != -1; i++)
- bitmap_set(ecc->slot_inuse, rsv_slots[i][0],
- rsv_slots[i][1]);
+ reserved = info->rsv->rsv_slots;
+ if (reserved) {
+ for (i = 0; reserved[i][0] != -1; i++)
+ bitmap_set(ecc->slot_inuse, reserved[i][0],
+ reserved[i][1]);
+ }
+
+ /* Clear channels not usable for Linux */
+ reserved = info->rsv->rsv_chans;
+ if (reserved) {
+ for (i = 0; reserved[i][0] != -1; i++)
+ bitmap_clear(ecc->channels_mask, reserved[i][0],
+ reserved[i][1]);
}
}
@@ -2398,6 +2428,7 @@ static int edma_probe(struct platform_device *pdev)
if (!ecc->legacy_mode) {
int lowest_priority = 0;
+ unsigned int array_max;
struct of_phandle_args tc_args;
ecc->tc_list = devm_kcalloc(dev, ecc->num_tc,
@@ -2421,6 +2452,18 @@ static int edma_probe(struct platform_device *pdev)
}
of_node_put(tc_args.np);
}
+
+ /* See if we have optional dma-channel-mask array */
+ array_max = DIV_ROUND_UP(ecc->num_channels, BITS_PER_TYPE(u32));
+ ret = of_property_read_variable_u32_array(node,
+ "dma-channel-mask",
+ (u32 *)ecc->channels_mask,
+ 1, array_max);
+ if (ret > 0 && ret != array_max)
+ dev_warn(dev, "dma-channel-mask is not complete.\n");
+ else if (ret == -EOVERFLOW || ret == -ENODATA)
+ dev_warn(dev,
+ "dma-channel-mask is out of range or empty\n");
}
/* Event queue priority mapping */
@@ -2438,6 +2481,10 @@ static int edma_probe(struct platform_device *pdev)
edma_dma_init(ecc, legacy_mode);
for (i = 0; i < ecc->num_channels; i++) {
+ /* Do not touch reserved channels */
+ if (!test_bit(i, ecc->channels_mask))
+ continue;
+
/* Assign all channels to the default queue */
edma_assign_channel_eventq(&ecc->slave_chans[i],
info->default_queue);
--
2.25.1