6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <will@kernel.org>
commit 823353b7cf0ea9dfb09f5181d5fb2825d727200b upstream.
When allocating pages from a restricted DMA pool in swiotlb_alloc(), the buffer address is blindly converted to a 'struct page *' that is returned to the caller. In the unlikely event of an allocation bug, page-unaligned addresses are not detected and slots can silently be double-allocated.
Add a simple check of the buffer alignment in swiotlb_alloc() to make debugging a little easier if something has gone wonky.
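For reference, a minimal standalone sketch (not kernel code; it assumes a 4 KiB page size and mirrors the kernel's PAGE_ALIGNED/PFN_DOWN macros) of why an unaligned buffer address is harmful here: pfn_to_page(PFN_DOWN(addr)) rounds the address down to a page boundary, so the returned page would also cover bytes belonging to the neighbouring slot.

	#include <stdio.h>
	#include <stdint.h>

	#define PAGE_SHIFT	12
	#define PAGE_SIZE	(1UL << PAGE_SHIFT)
	#define PAGE_ALIGNED(a)	(((a) & (PAGE_SIZE - 1)) == 0)
	#define PFN_DOWN(a)	((a) >> PAGE_SHIFT)

	int main(void)
	{
		uint64_t aligned   = 0x100000;	/* valid, page-aligned slot start */
		uint64_t unaligned = 0x100800;	/* mid-page address */

		/*
		 * Both addresses round down to the same pfn (0x100), i.e. the
		 * caller of pfn_to_page() would receive a page that overlaps
		 * whatever slot lives at the start of that page.
		 */
		printf("aligned:   PAGE_ALIGNED=%d pfn=%llu\n",
		       PAGE_ALIGNED(aligned),
		       (unsigned long long)PFN_DOWN(aligned));
		printf("unaligned: PAGE_ALIGNED=%d pfn=%llu\n",
		       PAGE_ALIGNED(unaligned),
		       (unsigned long long)PFN_DOWN(unaligned));
		return 0;
	}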
Cc: <stable@vger.kernel.org> # v6.6+
Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Michael Kelley <mhklinux@outlook.com>
Reviewed-by: Petr Tesarik <petr.tesarik1@huawei-partners.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Fabio Estevam <festevam@denx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 kernel/dma/swiotlb.c |    6 ++++++
 1 file changed, 6 insertions(+)
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -1627,6 +1627,12 @@ struct page *swiotlb_alloc(struct device
 		return NULL;
 
 	tlb_addr = slot_addr(pool->start, index);
+	if (unlikely(!PAGE_ALIGNED(tlb_addr))) {
+		dev_WARN_ONCE(dev, 1, "Cannot allocate pages from non page-aligned swiotlb addr 0x%pa.\n",
+			      &tlb_addr);
+		swiotlb_release_slots(dev, tlb_addr);
+		return NULL;
+	}
 
 	return pfn_to_page(PFN_DOWN(tlb_addr));
 }