The patch 099606a7b2d5 didn't apply cleanly to 5.15 due to the significant difference in codebases.
I manually backported it to 5.15 with some minor conflict resolution, and by invoking the newly introduced API with inverted logic, since the conditional statements in 5.15's xen/swiotlb code are the opposite of those in 6.1.
Harshvardhan Jha (1): xen/swiotlb: relax alignment requirements
 drivers/xen/swiotlb-xen.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)
[ Upstream commit 85fcb57c983f423180ba6ec5d0034242da05cc54 ]
When mapping a buffer for DMA via the .map_page or .map_sg DMA operations, there is no need to check that the machine frames are aligned to the mapped area's size. All that is needed in these cases is that the buffer is contiguous at machine level.
So carve out the alignment check from range_straddles_page_boundary() and move it to a helper called by xen_swiotlb_alloc_coherent() and xen_swiotlb_free_coherent() directly.
Fixes: 9f40ec84a797 ("xen/swiotlb: add alignment check for dma buffers")
Signed-off-by: Harshvardhan Jha <harshvardhan.j.jha@oracle.com>
---
 drivers/xen/swiotlb-xen.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 0392841a822fa..65da97be06285 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -75,19 +75,21 @@ static inline phys_addr_t xen_dma_to_phys(struct device *dev,
 	return xen_bus_to_phys(dev, dma_to_phys(dev, dma_addr));
 }
 
+static inline bool range_requires_alignment(phys_addr_t p, size_t size)
+{
+	phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);
+	phys_addr_t bus_addr = pfn_to_bfn(XEN_PFN_DOWN(p)) << XEN_PAGE_SHIFT;
+
+	return IS_ALIGNED(p, algn) && !IS_ALIGNED(bus_addr, algn);
+}
+
 static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
 {
 	unsigned long next_bfn, xen_pfn = XEN_PFN_DOWN(p);
 	unsigned int i, nr_pages = XEN_PFN_UP(xen_offset_in_page(p) + size);
-	phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);
 
 	next_bfn = pfn_to_bfn(xen_pfn);
 
-	/* If buffer is physically aligned, ensure DMA alignment. */
-	if (IS_ALIGNED(p, algn) &&
-	    !IS_ALIGNED((phys_addr_t)next_bfn << XEN_PAGE_SHIFT, algn))
-		return 1;
-
 	for (i = 1; i < nr_pages; i++)
 		if (pfn_to_bfn(++xen_pfn) != ++next_bfn)
 			return 1;
@@ -306,7 +308,8 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 	phys = dma_to_phys(hwdev, *dma_handle);
 	dev_addr = xen_phys_to_dma(hwdev, phys);
 	if (((dev_addr + size - 1 <= dma_mask)) &&
-	    !range_straddles_page_boundary(phys, size))
+	    !range_straddles_page_boundary(phys, size) &&
+	    !range_requires_alignment(phys, size))
 		*dma_handle = dev_addr;
 	else {
 		if (xen_create_contiguous_region(phys, order,
@@ -347,6 +350,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 
 	if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
 		     range_straddles_page_boundary(phys, size)) &&
+	    !range_requires_alignment(phys, size) &&
 	    TestClearPageXenRemapped(page))
 		xen_destroy_contiguous_region(phys, order);
[ Sasha's backport helper bot ]
Hi,
✅ All tests passed successfully. No issues detected. No action required from the submitter.
The upstream commit SHA1 provided is correct: 85fcb57c983f423180ba6ec5d0034242da05cc54
WARNING: Author mismatch between patch and upstream commit:
Backport author: Harshvardhan Jha <harshvardhan.j.jha@oracle.com>
Commit author: Juergen Gross <jgross@suse.com>
Status in newer kernel trees:
6.14.y | Present (exact SHA1)
6.13.y | Present (different SHA1: d8027f173a99)
6.12.y | Present (different SHA1: 5a10af375347)
6.6.y  | Present (different SHA1: 461d9e8acaa4)
6.1.y  | Present (different SHA1: 099606a7b2d5)
Note: The patch differs from the upstream commit:
---
1:  85fcb57c983f4 ! 1:  80548811d6af4 xen/swiotlb: relax alignment requirements
    @@ ## Metadata ##
    -Author: Juergen Gross <jgross@suse.com>
    +Author: Harshvardhan Jha <harshvardhan.j.jha@oracle.com>

     ## Commit message ##
        xen/swiotlb: relax alignment requirements

    +    [ Upstream commit 85fcb57c983f423180ba6ec5d0034242da05cc54 ]
    +
         When mapping a buffer for DMA via .map_page or .map_sg DMA operations,
         there is no need to check the machine frames to be aligned according
         to the mapped areas size. All what is needed in these cases is that the
    @@ Commit message
         xen_swiotlb_free_coherent() directly.

         Fixes: 9f40ec84a797 ("xen/swiotlb: add alignment check for dma buffers")
    -    Reported-by: Jan Vejvalka <jan.vejvalka@lfmotol.cuni.cz>
    -    Tested-by: Jan Vejvalka <jan.vejvalka@lfmotol.cuni.cz>
    -    Signed-off-by: Juergen Gross <jgross@suse.com>
    -    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    -    Signed-off-by: Juergen Gross <jgross@suse.com>
    +    Signed-off-by: Harshvardhan Jha <harshvardhan.j.jha@oracle.com>

     ## drivers/xen/swiotlb-xen.c ##
    @@ drivers/xen/swiotlb-xen.c: static inline phys_addr_t xen_dma_to_phys(struct device *dev,
    @@ drivers/xen/swiotlb-xen.c: static inline phys_addr_t xen_dma_to_phys(struct device *dev,
         for (i = 1; i < nr_pages; i++)
             if (pfn_to_bfn(++xen_pfn) != ++next_bfn)
                 return 1;
    -@@ drivers/xen/swiotlb-xen.c: xen_swiotlb_alloc_coherent(struct device *dev, size_t size,
    -
    -    *dma_handle = xen_phys_to_dma(dev, phys);
    -    if (*dma_handle + size - 1 > dma_mask ||
    --        range_straddles_page_boundary(phys, size)) {
    -+        range_straddles_page_boundary(phys, size) ||
    -+        range_requires_alignment(phys, size)) {
    -        if (xen_create_contiguous_region(phys, order, fls64(dma_mask),
    -                 dma_handle) != 0)
    -            goto out_free_pages;
    -@@ drivers/xen/swiotlb-xen.c: xen_swiotlb_free_coherent(struct device *dev, size_t size, void *vaddr,
    -    size = ALIGN(size, XEN_PAGE_SIZE);
    +@@ drivers/xen/swiotlb-xen.c: xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
    +    phys = dma_to_phys(hwdev, *dma_handle);
    +    dev_addr = xen_phys_to_dma(hwdev, phys);
    +    if (((dev_addr + size - 1 <= dma_mask)) &&
    +-        !range_straddles_page_boundary(phys, size))
    ++        !range_straddles_page_boundary(phys, size) &&
    ++        !range_requires_alignment(phys, size))
    +    *dma_handle = dev_addr;
    +    else {
    +        if (xen_create_contiguous_region(phys, order,
    +@@ drivers/xen/swiotlb-xen.c: xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,

    -    if (WARN_ON_ONCE(dma_handle + size - 1 > dev->coherent_dma_mask) ||
    --        WARN_ON_ONCE(range_straddles_page_boundary(phys, size)))
    -+        WARN_ON_ONCE(range_straddles_page_boundary(phys, size) ||
    -+                     range_requires_alignment(phys, size)))
    -        return;
    +    if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
    +             range_straddles_page_boundary(phys, size)) &&
    ++        !range_requires_alignment(phys, size) &&
    +    TestClearPageXenRemapped(page))
    +        xen_destroy_contiguous_region(phys, order);

    -    if (TestClearPageXenRemapped(virt_to_page(vaddr)))
---
Results of testing on various branches:
| Branch                    | Patch Apply | Build Test |
|---------------------------|-------------|------------|
| stable/linux-5.15.y       | Success     | Success    |
linux-stable-mirror@lists.linaro.org