This patch updates dma_direct_unmap_sg() to mark each scatter/gather
entry invalid after it's unmapped. This fixes two issues:

1. It makes the unmapping code able to tolerate a double unmap.
2. It prevents the NVMe driver from erroneously treating an unmapped DMA
   address as mapped.
The bug that motivated this patch was the following sequence, which
occurred within the NVMe driver, with the kernel flag `swiotlb=force`:

* NVMe driver calls dma_direct_map_sg()
* dma_direct_map_sg() fails partway through the scatter/gather list
* dma_direct_map_sg() calls dma_direct_unmap_sg() to unmap any entries
  that succeeded
* NVMe driver calls dma_direct_unmap_sg(), redundantly, leading to a
  double unmap, which is a bug
With this patch, a Hadoop workload running on a cluster of three AMD SEV
VMs is able to succeed. Without the patch, the Hadoop workload suffers
application-level and even VM-level failures.
Tested-by: Jianxiong Gao <jxgao@google.com>
Tested-by: Marc Orr <marcorr@google.com>
Reviewed-by: Jianxiong Gao <jxgao@google.com>
Signed-off-by: Marc Orr <marcorr@google.com>
---
 kernel/dma/direct.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 0a4881e59aa7..3d9b17fe5771 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -374,9 +374,11 @@ void dma_direct_unmap_sg(struct device *dev, struct scatterlist *sgl,
 	struct scatterlist *sg;
 	int i;
 
-	for_each_sg(sgl, sg, nents, i)
+	for_each_sg(sgl, sg, nents, i) {
 		dma_direct_unmap_page(dev, sg->dma_address, sg_dma_len(sg),
 				dir, attrs);
+		sg->dma_address = DMA_MAPPING_ERROR;
+	}
 }
 EXPORT_SYMBOL(dma_direct_unmap_sg);
 #endif
On 2021-01-11 15:43, Marc Orr wrote:
> This patch updates dma_direct_unmap_sg() to mark each scatter/gather
> entry invalid after it's unmapped. This fixes two issues:
s/fixes/bodges around (badly)/
> - It makes the unmapping code able to tolerate a double unmap.
> - It prevents the NVMe driver from erroneously treating an unmapped DMA
>   address as mapped.
> The bug that motivated this patch was the following sequence, which
> occurred within the NVMe driver, with the kernel flag `swiotlb=force`:
>
> - NVMe driver calls dma_direct_map_sg()
> - dma_direct_map_sg() fails partway through the scatter/gather list
> - dma_direct_map_sg() calls dma_direct_unmap_sg() to unmap any entries
>   that succeeded
> - NVMe driver calls dma_direct_unmap_sg(), redundantly, leading to a
>   double unmap, which is a bug
So why not just fix that actual bug?
> With this patch, a Hadoop workload running on a cluster of three AMD
> SEV VMs is able to succeed. Without the patch, the Hadoop workload
> suffers application-level and even VM-level failures.
> Tested-by: Jianxiong Gao <jxgao@google.com>
> Tested-by: Marc Orr <marcorr@google.com>
> Reviewed-by: Jianxiong Gao <jxgao@google.com>
> Signed-off-by: Marc Orr <marcorr@google.com>
>
>  kernel/dma/direct.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 0a4881e59aa7..3d9b17fe5771 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -374,9 +374,11 @@ void dma_direct_unmap_sg(struct device *dev, struct scatterlist *sgl,
>  	struct scatterlist *sg;
>  	int i;
> 
> -	for_each_sg(sgl, sg, nents, i)
> +	for_each_sg(sgl, sg, nents, i) {
>  		dma_direct_unmap_page(dev, sg->dma_address, sg_dma_len(sg),
>  				dir, attrs);
> +		sg->dma_address = DMA_MAPPING_ERROR;
There are more DMA API backends than just dma-direct, so while this might help paper over bugs when SWIOTLB is in use, it's not going to have any effect when those same bugs are hit under other circumstances. Once again, the moral of the story is that effort is better spent just fixing the bugs ;)
Robin.
> +	}
>  }
>  EXPORT_SYMBOL(dma_direct_unmap_sg);
>  #endif
On Mon, Jan 11, 2021 at 07:43:35AM -0800, Marc Orr wrote:
> This patch updates dma_direct_unmap_sg() to mark each scatter/gather
> entry invalid after it's unmapped. This fixes two issues:
>
> - It makes the unmapping code able to tolerate a double unmap.
> - It prevents the NVMe driver from erroneously treating an unmapped DMA
>   address as mapped.
>
> The bug that motivated this patch was the following sequence, which
> occurred within the NVMe driver, with the kernel flag `swiotlb=force`:
>
> - NVMe driver calls dma_direct_map_sg()
> - dma_direct_map_sg() fails partway through the scatter/gather list
> - dma_direct_map_sg() calls dma_direct_unmap_sg() to unmap any entries
>   that succeeded
> - NVMe driver calls dma_direct_unmap_sg(), redundantly, leading to a
>   double unmap, which is a bug
>
> With this patch, a Hadoop workload running on a cluster of three AMD
> SEV VMs is able to succeed. Without the patch, the Hadoop workload
> suffers application-level and even VM-level failures.
>
> Tested-by: Jianxiong Gao <jxgao@google.com>
> Tested-by: Marc Orr <marcorr@google.com>
> Reviewed-by: Jianxiong Gao <jxgao@google.com>
> Signed-off-by: Marc Orr <marcorr@google.com>
>
>  kernel/dma/direct.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
<formletter>
This is not the correct way to submit patches for inclusion in the stable kernel tree. Please read: https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html for how to do this properly.
</formletter>
linux-stable-mirror@lists.linaro.org