6.7-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <will@kernel.org>
[ Upstream commit cbf53074a528191df82b4dba1e3d21191102255e ]
core-api/dma-api-howto.rst states the following properties of dma_alloc_coherent():
  | The CPU virtual address and the DMA address are both guaranteed to
  | be aligned to the smallest PAGE_SIZE order which is greater than or
  | equal to the requested size.
However, swiotlb_alloc() passes zero for the 'alloc_align_mask' parameter of swiotlb_find_slots() and so this property is not upheld. Instead, allocations larger than a page are aligned only to PAGE_SIZE.
Calculate the mask corresponding to the page order suitable for holding the allocation and pass that to swiotlb_find_slots().
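For illustration only (not part of the patch): a minimal userspace sketch, assuming 4 KiB pages, of the mask the new code computes. The get_order() below mimics the kernel helper of the same name, and the sample sizes are arbitrary.

#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

/* Smallest order n such that (PAGE_SIZE << n) >= size, as in the kernel. */
static unsigned int get_order(unsigned long size)
{
	unsigned int order = 0;

	while ((PAGE_SIZE << order) < size)
		order++;
	return order;
}

int main(void)
{
	unsigned long sizes[] = { 512, 4096, 8192, 20480, 65536 };

	for (unsigned int i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
		/* Mask with all bits below the allocation's page order set. */
		unsigned long align =
			(1UL << (get_order(sizes[i]) + PAGE_SHIFT)) - 1;

		printf("size %6lu -> align mask 0x%05lx (%lu KiB boundary)\n",
		       sizes[i], align, (align + 1) / 1024);
	}
	return 0;
}

E.g. a 20 KiB request yields order 3 and mask 0x7fff, i.e. a 32 KiB boundary: swiotlb_find_slots() will then only return a buffer with all of those address bits clear.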
Fixes: e81e99bacc9f ("swiotlb: Support aligned swiotlb buffers")
Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Michael Kelley <mhklinux@outlook.com>
Reviewed-by: Petr Tesarik <petr.tesarik1@huawei-partners.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 kernel/dma/swiotlb.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 8fba61069b84d..2d347685cf566 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -1610,12 +1610,14 @@ struct page *swiotlb_alloc(struct device *dev, size_t size)
 	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	struct io_tlb_pool *pool;
 	phys_addr_t tlb_addr;
+	unsigned int align;
 	int index;
 
 	if (!mem)
 		return NULL;
 
-	index = swiotlb_find_slots(dev, 0, size, 0, &pool);
+	align = (1 << (get_order(size) + PAGE_SHIFT)) - 1;
+	index = swiotlb_find_slots(dev, 0, size, align, &pool);
 	if (index == -1)
 		return NULL;