This series of swiotlb patches fixes an iwlwifi regression on the i.MX8MM IoT Gateway board running kernel 6.6.
This was noticed when updating the kernel from 5.10 to 6.6.
Without this series, the board cannot boot kernel 6.6 due to a storm of alignment errors from the iwlwifi driver.
This has been reported and discussed in the linux-wireless list: https://lore.kernel.org/linux-wireless/CAOMZO5D2Atb=rnvmNLvu8nrsn+3L9X9NbG1b...
Will Deacon suggested:
"If you want to backport that change, then I think you should probably take the whole series:
https://lore.kernel.org/all/20240308152829.25754-1-will@kernel.org/
(and there were some follow-ups from Michael iirc; you're best off checking the git history for kernel/dma/swiotlb.c).
FWIW: we have this series backported to 6.6 in the android15-6.6 tree."
From this series, only the two patches below are not present in the 6.6 stable tree:
swiotlb: Enforce page alignment in swiotlb_alloc()
swiotlb: Reinstate page-alignment for mappings >= PAGE_SIZE
While at it, also backport:

swiotlb: extend buffer pre-padding to alloc_align_mask if necessary
as it fixes a commit that is present in 6.6.
Petr Tesarik (1):
  swiotlb: extend buffer pre-padding to alloc_align_mask if necessary

Will Deacon (2):
  swiotlb: Enforce page alignment in swiotlb_alloc()
  swiotlb: Reinstate page-alignment for mappings >= PAGE_SIZE
 kernel/dma/swiotlb.c | 83 +++++++++++++++++++++++++++++++++-----------
 1 file changed, 63 insertions(+), 20 deletions(-)
From: Will Deacon <will@kernel.org>
commit 823353b7cf0ea9dfb09f5181d5fb2825d727200b upstream.
When allocating pages from a restricted DMA pool in swiotlb_alloc(), the buffer address is blindly converted to a 'struct page *' that is returned to the caller. In the unlikely event of an allocation bug, page-unaligned addresses are not detected and slots can silently be double-allocated.
Add a simple check of the buffer alignment in swiotlb_alloc() to make debugging a little easier if something has gone wonky.
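In userspace terms, the new check is nothing more than a test that the low PAGE_SHIFT bits of the returned bounce-buffer address are zero. A minimal standalone sketch of that test, assuming 4 KiB pages (page_aligned() below is an illustrative stand-in for the kernel's PAGE_ALIGNED(), not kernel code):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL			/* assumed 4 KiB pages */

/* Userspace stand-in for the kernel's PAGE_ALIGNED() macro. */
static int page_aligned(uint64_t addr)
{
	return (addr & (PAGE_SIZE - 1)) == 0;
}

int main(void)
{
	uint64_t good = 0x81000000;	/* page-aligned slot address */
	uint64_t bad  = 0x81000800;	/* 2 KiB into a page: would trip the new warning */

	printf("%#llx aligned: %d\n", (unsigned long long)good, page_aligned(good));
	printf("%#llx aligned: %d\n", (unsigned long long)bad, page_aligned(bad));
	return 0;
}

If such an address ever shows up, the new code warns, releases the slots and fails the allocation instead of returning a page that overlaps a neighbouring mapping.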
Cc: stable@vger.kernel.org # v6.6+
Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Michael Kelley <mhklinux@outlook.com>
Reviewed-by: Petr Tesarik <petr.tesarik1@huawei-partners.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Fabio Estevam <festevam@denx.de>
---
 kernel/dma/swiotlb.c | 6 ++++++
 1 file changed, 6 insertions(+)
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index a7d5fb473b32..4c10700c61d2 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -1627,6 +1627,12 @@ struct page *swiotlb_alloc(struct device *dev, size_t size)
 		return NULL;
 
 	tlb_addr = slot_addr(pool->start, index);
+	if (unlikely(!PAGE_ALIGNED(tlb_addr))) {
+		dev_WARN_ONCE(dev, 1, "Cannot allocate pages from non page-aligned swiotlb addr 0x%pa.\n",
+			      &tlb_addr);
+		swiotlb_release_slots(dev, tlb_addr);
+		return NULL;
+	}
 
 	return pfn_to_page(PFN_DOWN(tlb_addr));
 }
From: Will Deacon <will@kernel.org>
commit 14cebf689a78e8a1c041138af221ef6eac6bc7da upstream.
For swiotlb allocations >= PAGE_SIZE, the slab search historically adjusted the stride to avoid checking unaligned slots. This had the side-effect of aligning large mapping requests to PAGE_SIZE, but that was broken by 0eee5ae10256 ("swiotlb: fix slot alignment checks").
Since this alignment could be relied upon by drivers, reinstate PAGE_SIZE alignment for swiotlb mappings >= PAGE_SIZE.
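To illustrate why the zero alloc_align_mask case needs special-casing: with 4 KiB pages and 2 KiB IO TLB slots, forcing alloc_align_mask to PAGE_SIZE - 1 yields the same slot stride that the removed PAGE_SHIFT-based clamp used to enforce. A standalone sketch of the arithmetic (the constants and helpers are simplified copies for illustration, not the kernel code itself):

#include <stdio.h>

#define IO_TLB_SHIFT	11			/* 2 KiB swiotlb slots */
#define IO_TLB_SIZE	(1UL << IO_TLB_SHIFT)
#define PAGE_SHIFT	12			/* assumed 4 KiB pages */
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

/* Simplified copies of the helpers used by swiotlb_area_find_slots(). */
static unsigned long nr_slots(unsigned long val)
{
	return (val + IO_TLB_SIZE - 1) >> IO_TLB_SHIFT;	/* DIV_ROUND_UP */
}

static unsigned long get_max_slots(unsigned long boundary_mask)
{
	return nr_slots(boundary_mask + 1);
}

int main(void)
{
	unsigned long alloc_align_mask = PAGE_SIZE - 1;		/* forced by the new hunk */
	unsigned long new_stride = get_max_slots(alloc_align_mask);
	unsigned long old_stride = PAGE_SHIFT - IO_TLB_SHIFT + 1;	/* removed clamp */

	/* Both print 2: the search steps through the pool one page at a time. */
	printf("new stride = %lu slots, old stride = %lu slots\n",
	       new_stride, old_stride);
	return 0;
}

On top of the stride, the mask is also checked against each candidate slot address in the search loop, so the chosen slot is genuinely page-aligned rather than merely reached in page-sized steps.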
Cc: stable@vger.kernel.org # v6.6+
Reported-by: Michael Kelley <mhklinux@outlook.com>
Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Petr Tesarik <petr.tesarik1@huawei-partners.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Fabio Estevam <festevam@denx.de>
---
 kernel/dma/swiotlb.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 4c10700c61d2..0dc3ec199fe4 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -992,6 +992,17 @@ static int swiotlb_area_find_slots(struct device *dev, struct io_tlb_pool *pool,
 	BUG_ON(!nslots);
 	BUG_ON(area_index >= pool->nareas);
 
+	/*
+	 * Historically, swiotlb allocations >= PAGE_SIZE were guaranteed to be
+	 * page-aligned in the absence of any other alignment requirements.
+	 * 'alloc_align_mask' was later introduced to specify the alignment
+	 * explicitly, however this is passed as zero for streaming mappings
+	 * and so we preserve the old behaviour there in case any drivers are
+	 * relying on it.
+	 */
+	if (!alloc_align_mask && !iotlb_align_mask && alloc_size >= PAGE_SIZE)
+		alloc_align_mask = PAGE_SIZE - 1;
+
 	/*
 	 * Ensure that the allocation is at least slot-aligned and update
 	 * 'iotlb_align_mask' to ignore bits that will be preserved when
@@ -1006,13 +1017,6 @@ static int swiotlb_area_find_slots(struct device *dev, struct io_tlb_pool *pool,
 	 */
 	stride = get_max_slots(max(alloc_align_mask, iotlb_align_mask));
 
-	/*
-	 * For allocations of PAGE_SIZE or larger only look for page aligned
-	 * allocations.
-	 */
-	if (alloc_size >= PAGE_SIZE)
-		stride = umax(stride, PAGE_SHIFT - IO_TLB_SHIFT + 1);
-
 	spin_lock_irqsave(&area->lock, flags);
 	if (unlikely(nslots > pool->area_nslabs - area->used))
 		goto not_found;
From: Petr Tesarik <petr.tesarik1@huawei-partners.com>
commit af133562d5aff41fcdbe51f1a504ae04788b5fc0 upstream.
Allow a buffer pre-padding of up to alloc_align_mask, even if it requires allocating additional IO TLB slots.
If the allocation alignment is bigger than IO_TLB_SIZE and min_align_mask covers any non-zero bits in the original address between IO_TLB_SIZE and alloc_align_mask, these bits are not preserved in the swiotlb buffer address.
To fix this case, increase the allocation size and use a larger offset within the allocated buffer. As a result, extra padding slots may be allocated before the mapping start address.
Leave orig_addr in these padding slots initialized to INVALID_PHYS_ADDR. These slots do not correspond to any CPU buffer, so attempts to sync the data should be ignored.
The padding slots should be automatically released when the buffer is unmapped. However, swiotlb_tbl_unmap_single() takes only the address of the DMA buffer slot, not the first padding slot. Save the number of padding slots in struct io_tlb_slot and use it to adjust the slot index in swiotlb_release_slots(), so all allocated slots are properly freed.
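A worked example of the new arithmetic may help. All values below are made up for illustration, and align_offset() is a simplified stand-in for the patched swiotlb_align_offset(), not the kernel function: with 2 KiB slots, min_align_mask = alloc_align_mask = 0xfff and an original address ending in 0xa80, the required offset is 0xa80, i.e. one full padding slot plus 0x280 bytes into the slot after it.

#include <stdio.h>

#define IO_TLB_SHIFT	11			/* 2 KiB swiotlb slots */
#define IO_TLB_SIZE	(1UL << IO_TLB_SHIFT)

/* Simplified stand-in for the patched swiotlb_align_offset(). */
static unsigned int align_offset(unsigned int min_align_mask,
				 unsigned int alloc_align_mask,
				 unsigned long orig_addr)
{
	return orig_addr & min_align_mask & (alloc_align_mask | (IO_TLB_SIZE - 1));
}

int main(void)
{
	/* Example values only: 4 KiB min_align_mask and alloc_align_mask. */
	unsigned int min_align_mask = 0xfff;
	unsigned int alloc_align_mask = 0xfff;
	unsigned long orig_addr = 0x40000a80;	/* low 12 bits must be preserved */

	unsigned int offset = align_offset(min_align_mask, alloc_align_mask, orig_addr);
	unsigned int pad_slots = offset >> IO_TLB_SHIFT;	/* extra leading slots */

	offset &= IO_TLB_SIZE - 1;		/* offset within the first data slot */

	/* Prints "pad_slots = 1, in-slot offset = 0x280". */
	printf("pad_slots = %u, in-slot offset = %#x\n", pad_slots, offset);
	return 0;
}

Because the first allocated slot honours alloc_align_mask (4 KiB aligned here), the mapping ends up at page offset 0x800 + 0x280 = 0xa80, so the bits covered by min_align_mask are preserved, and pad_slots is recorded in the first non-padding slot so swiotlb_release_slots() can free the whole allocation later.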
Cc: stable@vger.kernel.org # v6.6+
Fixes: 2fd4fa5d3fb5 ("swiotlb: Fix alignment checks when both allocation and DMA masks are present")
Link: https://lore.kernel.org/linux-iommu/20240311210507.217daf8b@meshulam.tesaric...
Signed-off-by: Petr Tesarik <petr.tesarik1@huawei-partners.com>
Reviewed-by: Michael Kelley <mhklinux@outlook.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Fabio Estevam <festevam@denx.de>
---
 kernel/dma/swiotlb.c | 59 ++++++++++++++++++++++++++++++++++----------
 1 file changed, 46 insertions(+), 13 deletions(-)
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 0dc3ec199fe4..e7c3fbd0737e 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -69,11 +69,14 @@
  * @alloc_size:	Size of the allocated buffer.
  * @list:	The free list describing the number of free entries available
  *		from each index.
+ * @pad_slots:	Number of preceding padding slots. Valid only in the first
+ *		allocated non-padding slot.
  */
 struct io_tlb_slot {
 	phys_addr_t orig_addr;
 	size_t alloc_size;
-	unsigned int list;
+	unsigned short list;
+	unsigned short pad_slots;
 };
 
 static bool swiotlb_force_bounce;
@@ -287,6 +290,7 @@ static void swiotlb_init_io_tlb_pool(struct io_tlb_pool *mem, phys_addr_t start,
 					 mem->nslabs - i);
 		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
 		mem->slots[i].alloc_size = 0;
+		mem->slots[i].pad_slots = 0;
 	}
 
 	memset(vaddr, 0, bytes);
@@ -821,12 +825,30 @@ void swiotlb_dev_init(struct device *dev)
 #endif
 }
 
-/*
- * Return the offset into a iotlb slot required to keep the device happy.
+/**
+ * swiotlb_align_offset() - Get required offset into an IO TLB allocation.
+ * @dev:         Owning device.
+ * @align_mask:  Allocation alignment mask.
+ * @addr:        DMA address.
+ *
+ * Return the minimum offset from the start of an IO TLB allocation which is
+ * required for a given buffer address and allocation alignment to keep the
+ * device happy.
+ *
+ * First, the address bits covered by min_align_mask must be identical in the
+ * original address and the bounce buffer address. High bits are preserved by
+ * choosing a suitable IO TLB slot, but bits below IO_TLB_SHIFT require extra
+ * padding bytes before the bounce buffer.
+ *
+ * Second, @align_mask specifies which bits of the first allocated slot must
+ * be zero. This may require allocating additional padding slots, and then the
+ * offset (in bytes) from the first such padding slot is returned.
  */
-static unsigned int swiotlb_align_offset(struct device *dev, u64 addr)
+static unsigned int swiotlb_align_offset(struct device *dev,
+					 unsigned int align_mask, u64 addr)
 {
-	return addr & dma_get_min_align_mask(dev) & (IO_TLB_SIZE - 1);
+	return addr & dma_get_min_align_mask(dev) &
+		(align_mask | (IO_TLB_SIZE - 1));
 }
 
 /*
@@ -847,7 +869,7 @@ static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size
 		return;
 
 	tlb_offset = tlb_addr & (IO_TLB_SIZE - 1);
-	orig_addr_offset = swiotlb_align_offset(dev, orig_addr);
+	orig_addr_offset = swiotlb_align_offset(dev, 0, orig_addr);
 	if (tlb_offset < orig_addr_offset) {
 		dev_WARN_ONCE(dev, 1,
 			"Access before mapping start detected. orig offset %u, requested offset %u.\n",
@@ -983,7 +1005,7 @@ static int swiotlb_area_find_slots(struct device *dev, struct io_tlb_pool *pool,
 	unsigned long max_slots = get_max_slots(boundary_mask);
 	unsigned int iotlb_align_mask = dma_get_min_align_mask(dev);
 	unsigned int nslots = nr_slots(alloc_size), stride;
-	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
+	unsigned int offset = swiotlb_align_offset(dev, 0, orig_addr);
 	unsigned int index, slots_checked, count = 0, i;
 	unsigned long flags;
 	unsigned int slot_base;
@@ -1282,11 +1304,12 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 		unsigned long attrs)
 {
 	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
-	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
+	unsigned int offset;
 	struct io_tlb_pool *pool;
 	unsigned int i;
 	int index;
 	phys_addr_t tlb_addr;
+	unsigned short pad_slots;
 
 	if (!mem || !mem->nslabs) {
 		dev_warn_ratelimited(dev,
@@ -1303,6 +1326,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 		return (phys_addr_t)DMA_MAPPING_ERROR;
 	}
 
+	offset = swiotlb_align_offset(dev, alloc_align_mask, orig_addr);
 	index = swiotlb_find_slots(dev, orig_addr, alloc_size + offset,
 				   alloc_align_mask, &pool);
 	if (index == -1) {
@@ -1318,6 +1342,10 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	 * This is needed when we sync the memory.  Then we sync the buffer if
 	 * needed.
 	 */
+	pad_slots = offset >> IO_TLB_SHIFT;
+	offset &= (IO_TLB_SIZE - 1);
+	index += pad_slots;
+	pool->slots[index].pad_slots = pad_slots;
 	for (i = 0; i < nr_slots(alloc_size + offset); i++)
 		pool->slots[index + i].orig_addr = slot_addr(orig_addr, i);
 	tlb_addr = slot_addr(pool->start, index) + offset;
@@ -1336,13 +1364,17 @@ static void swiotlb_release_slots(struct device *dev, phys_addr_t tlb_addr)
 {
 	struct io_tlb_pool *mem = swiotlb_find_pool(dev, tlb_addr);
 	unsigned long flags;
-	unsigned int offset = swiotlb_align_offset(dev, tlb_addr);
-	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
-	int nslots = nr_slots(mem->slots[index].alloc_size + offset);
-	int aindex = index / mem->area_nslabs;
-	struct io_tlb_area *area = &mem->areas[aindex];
+	unsigned int offset = swiotlb_align_offset(dev, 0, tlb_addr);
+	int index, nslots, aindex;
+	struct io_tlb_area *area;
 	int count, i;
 
+	index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
+	index -= mem->slots[index].pad_slots;
+	nslots = nr_slots(mem->slots[index].alloc_size + offset);
+	aindex = index / mem->area_nslabs;
+	area = &mem->areas[aindex];
+
 	/*
 	 * Return the buffer to the free list by setting the corresponding
 	 * entries to indicate the number of contiguous entries available.
@@ -1365,6 +1397,7 @@ static void swiotlb_release_slots(struct device *dev, phys_addr_t tlb_addr)
 		mem->slots[i].list = ++count;
 		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
 		mem->slots[i].alloc_size = 0;
+		mem->slots[i].pad_slots = 0;
 	}
 
 /*
Don't expect a qualified review of the stable backports from me, as I don't have the 6.6 base in mind, but the patches do look sane to me.
On Mon, Jun 17, 2024 at 11:23:12AM -0300, Fabio Estevam wrote:
All now queued up, thanks.
greg k-h