Jason Gunthorpe jgg@ziepe.ca writes:
On Tue, Apr 21, 2026 at 01:53:31PM +0200, Jiri Pirko wrote:
You reach that path when is_swiotlb_force_bounce(dev) is true and DMA_ATTR_CC_SHARED is set. What am I missing?
So a swiotlb_force_bounce device will not use swiotlb bouncing if DMA_ATTR_CC_SHARED is set?
Correct. Bouncing does not make sense in this case, as shared memory is already being mapped.
It is a little bit mangled: there are many reasons force_swiotlb can be set, but we lose them as the flag flows through - swiotlb_init() just has a simple SWIOTLB_FORCE.
Ideally DMA_ATTR_CC_SHARED would skip swiotlb only if it is being selected for CC reasons. For instance, if you have the swiotlb force command line parameter I would still expect it to bounce shared memory.
Arguably I think this arch flow is misdesigned, the is_swiotlb_force_bounce() should not be used for CC. dma_capable() is the correct API to check if the device can DMA to the presented address, and it will trigger swiotlb_map() just the same without creating this gap.
Jason
Something like this?
static inline dma_addr_t dma_direct_map_phys(struct device *dev,
		phys_addr_t phys, size_t size, enum dma_data_direction dir,
		unsigned long attrs, bool flush)
{
	dma_addr_t dma_addr;

	if (is_swiotlb_force_bounce(dev)) {
		if (attrs & (DMA_ATTR_MMIO | DMA_ATTR_REQUIRE_COHERENT))
			return DMA_MAPPING_ERROR;

		return swiotlb_map(dev, phys, size, dir, attrs);
	}

	if (attrs & DMA_ATTR_MMIO) {
		dma_addr = phys;
		if (unlikely(!dma_capable(dev, dma_addr, size, false, attrs)))
			goto err_overflow;
		goto dma_mapped;
	} else if (attrs & DMA_ATTR_CC_SHARED) {
		dma_addr = phys_to_dma_unencrypted(dev, phys);
	} else {
		dma_addr = phys_to_dma_encrypted(dev, phys);
	}

	if (unlikely(!dma_capable(dev, dma_addr, size, true, attrs)) ||
	    dma_kmalloc_needs_bounce(dev, size, dir)) {
		if (is_swiotlb_active(dev) &&
		    !(attrs & DMA_ATTR_REQUIRE_COHERENT))
			return swiotlb_map(dev, phys, size, dir, attrs);

		goto err_overflow;
	}

dma_mapped:
	if (!dev_is_dma_coherent(dev) &&
	    !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO))) {
		arch_sync_dma_for_device(phys, size, dir);
		if (flush)
			arch_sync_dma_flush();
	}

	return dma_addr;
and dma_capable() now does

static inline bool dma_capable(struct device *dev, dma_addr_t addr,
			       size_t size, bool is_ram, unsigned long attrs)
{
	....
	/*
	 * The caller asked for an encrypted phys addr but the
	 * device is forcing an unencrypted dma addr.
	 */
	if (!(attrs & DMA_ATTR_CC_SHARED) && force_dma_unencrypted(dev))
		return false;
	...
}
-aneesh