This is a backport of ddbd89deb7d3 ("swiotlb: fix info leak with DMA_FROM_DEVICE") and aa6f8dcbab47 ("swiotlb: rework "fix info leak with DMA_FROM_DEVICE"") for 5.4.y.
I had to resolve some merge conflicts: at this point 5.4.y still has swiotlb_tbl_sync_single() rather than swiotlb_sync_single_for_device(), and Documentation/DMA-attributes.txt has not yet been renamed to Documentation/core-api/dma-attributes.rst.
Halil Pasic (2):
  swiotlb: fix info leak with DMA_FROM_DEVICE
  swiotlb: rework "fix info leak with DMA_FROM_DEVICE"
 kernel/dma/swiotlb.c | 24 ++++++++++++++++--------
 1 file changed, 16 insertions(+), 8 deletions(-)
base-commit: 7f44fdc1563d6bca95ee9fb4414e4b8286bccb0c
commit ddbd89deb7d32b1fbb879f48d68fda1a8ac58e8e upstream.
The problem I'm addressing was discovered by the LTP test covering cve-2018-1000204.
A short description of what happens follows; a minimal reproducer sketch comes right after the list.

1) The test case issues a command code 00 (TEST UNIT READY) via the
   SG_IO interface with: dxfer_len == 524288, dxfer_dir ==
   SG_DXFER_FROM_DEV and a corresponding dxferp. The peculiar thing
   about this is that TUR does not read from the device.
2) In sg_start_req() the invocation of blk_rq_map_user() effectively
   bounces the user-space buffer, as if the device were going to
   transfer into it. Since commit a45b599ad808 ("scsi: sg: allocate
   with __GFP_ZERO in sg_build_indirect()") we make sure this first
   bounce buffer is allocated with GFP_ZERO.
3) For the rest of the story we keep ignoring that we have a TUR, so
   the device won't touch the buffer we prepare as if we had a
   DMA_FROM_DEVICE type of situation. My setup uses a virtio-scsi
   device, and the buffer allocated by SG is mapped by the function
   virtqueue_add_split(), which uses DMA_FROM_DEVICE for the "in" sgs
   (here scatter-gather, not scsi generics). This mapping involves
   bouncing via the swiotlb (we need swiotlb to do virtio in protected
   guests like s390 Secure Execution or AMD SEV).
4) When the SCSI TUR is done, we first copy back the content of the
   second (that is, swiotlb) bounce buffer (which most likely contains
   some previous I/O data) to the first bounce buffer, which contains
   all zeros. Then we copy back the content of the first bounce buffer
   to the user-space buffer.
5) The test case detects that the buffer, which it zero-initialized,
   is not all zeros and fails.
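For reference, here is a minimal user-space sketch of the kind of request issued in step 1. It is only an illustration: the device path, timeout and error handling are made up, not taken from the LTP source.

/* Issue a TEST UNIT READY via SG_IO with a large SG_DXFER_FROM_DEV
 * transfer length, then check that the zero-initialized buffer is
 * still all zeros. Compiles as an ordinary user-space C program. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <scsi/sg.h>

int main(void)
{
	unsigned char cdb[6] = { 0x00 };	/* TEST UNIT READY */
	static unsigned char buf[524288];	/* dxfer_len from the test */
	unsigned char sense[32];
	struct sg_io_hdr hdr;
	int fd = open("/dev/sg0", O_RDWR);	/* illustrative device */

	if (fd < 0)
		return 1;
	memset(buf, 0, sizeof(buf));		/* zero-init, checked below */
	memset(&hdr, 0, sizeof(hdr));
	hdr.interface_id = 'S';
	hdr.cmdp = cdb;
	hdr.cmd_len = sizeof(cdb);
	hdr.dxfer_direction = SG_DXFER_FROM_DEV;
	hdr.dxferp = buf;
	hdr.dxfer_len = sizeof(buf);
	hdr.sbp = sense;
	hdr.mx_sb_len = sizeof(sense);
	hdr.timeout = 5000;			/* milliseconds */
	if (ioctl(fd, SG_IO, &hdr) < 0)
		return 1;
	for (size_t i = 0; i < sizeof(buf); i++)
		if (buf[i])			/* leak detected */
			return 2;
	return 0;
}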
One can argue that this is a swiotlb problem, because without swiotlb we leak all zeros, and the swiotlb should be transparent in the sense that it does not affect the outcome (provided all other participants are well behaved).
Copying the content of the original buffer into the swiotlb buffer is the only way I can think of to make swiotlb transparent in such scenarios. So let's do just that if in doubt, but allow the driver to tell us that the whole mapped buffer is going to be overwritten, in which case we can preserve the old behavior and avoid the performance impact of the extra bounce.
Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: stable@vger.kernel.org
---
 Documentation/DMA-attributes.txt | 10 ++++++++++
 include/linux/dma-mapping.h      |  8 ++++++++
 kernel/dma/swiotlb.c             |  3 ++-
 3 files changed, 20 insertions(+), 1 deletion(-)
diff --git a/Documentation/DMA-attributes.txt b/Documentation/DMA-attributes.txt
index 8f8d97f65d73..7193505a98ca 100644
--- a/Documentation/DMA-attributes.txt
+++ b/Documentation/DMA-attributes.txt
@@ -156,3 +156,13 @@ accesses to DMA buffers in both privileged "supervisor" and unprivileged
 subsystem that the buffer is fully accessible at the elevated privilege
 level (and ideally inaccessible or at least read-only at the
 lesser-privileged levels).
+
+DMA_ATTR_OVERWRITE
+------------------
+
+This is a hint to the DMA-mapping subsystem that the
+device is expected to overwrite the entire mapped size,
+thus the caller does not require any of the previous
+buffer contents to be preserved. This allows
+bounce-buffering implementations to optimise
+DMA_FROM_DEVICE transfers.

diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 4d450672b7d6..da90f20e11c1 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -70,6 +70,14 @@
  */
 #define DMA_ATTR_PRIVILEGED	(1UL << 9)
 
+/*
+ * This is a hint to the DMA-mapping subsystem that the device is expected
+ * to overwrite the entire mapped size, thus the caller does not require any
+ * of the previous buffer contents to be preserved. This allows
+ * bounce-buffering implementations to optimise DMA_FROM_DEVICE transfers.
+ */
+#define DMA_ATTR_OVERWRITE	(1UL << 10)
+
 /*
  * A dma_addr_t can hold any valid DMA or bus address for the platform.
  * It can be given to a device to use as a DMA source or target. A CPU cannot

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index f99b79d7e123..f17b771856d1 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -572,7 +572,8 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
 	for (i = 0; i < nslots; i++)
 		io_tlb_orig_addr[index+i] = orig_addr + (i << IO_TLB_SHIFT);
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
+	    (!(attrs & DMA_ATTR_OVERWRITE) || dir == DMA_TO_DEVICE ||
+	    dir == DMA_BIDIRECTIONAL))
 		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
 
 	return tlb_addr;
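To illustrate how a driver would have used the new attribute, here is a hypothetical fragment; everything except the DMA API call is made up, and note that the follow-up patch below removes DMA_ATTR_OVERWRITE again until there are real users.

#include <linux/dma-mapping.h>

/* The device is known to overwrite the whole receive buffer, so the
 * mapping may opt out of the initial bounce. Without the attribute,
 * swiotlb now copies buf into the bounce slot even for
 * DMA_FROM_DEVICE, so stale swiotlb contents cannot leak back. */
static dma_addr_t map_rx_buffer(struct device *dev, void *buf, size_t len)
{
	return dma_map_single_attrs(dev, buf, len, DMA_FROM_DEVICE,
				    DMA_ATTR_OVERWRITE);
}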
commit aa6f8dcbab473f3a3c7454b74caa46d36cdc5d13 upstream.
Unfortunately, we ended up merging an old version of the patch "fix info leak with DMA_FROM_DEVICE" instead of the latest one. After I pointed out the mix-up and asked for guidance, Christoph (the swiotlb maintainer) asked me to create an incremental fix. So here we go.
The main differences between what we got and what was agreed are:

* swiotlb_sync_single_for_device is also required to do an extra bounce
* We decided not to introduce DMA_ATTR_OVERWRITE until we have users
* The implementation of DMA_ATTR_OVERWRITE is flawed: DMA_ATTR_OVERWRITE
  must take precedence over DMA_ATTR_SKIP_CPU_SYNC (see the sketch below)
Thus this patch removes DMA_ATTR_OVERWRITE, and makes swiotlb_sync_single_for_device() bounce unconditionally (that is, also when dir == DMA_FROM_DEVICE) in order to avoid synchronising back stale data from the swiotlb buffer.
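To make the third point concrete, here is a small stand-alone sketch of the condition as merged; the attribute bit values are copied from the headers above, everything else is illustrative.

#include <stdbool.h>
#include <stdio.h>

#define DMA_ATTR_SKIP_CPU_SYNC	(1UL << 5)
#define DMA_ATTR_OVERWRITE	(1UL << 10)

enum dma_data_direction { DMA_BIDIRECTIONAL, DMA_TO_DEVICE, DMA_FROM_DEVICE };

/* The bounce condition exactly as merged in swiotlb_tbl_map_single(). */
static bool bounces(unsigned long attrs, enum dma_data_direction dir)
{
	return !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
	       (!(attrs & DMA_ATTR_OVERWRITE) || dir == DMA_TO_DEVICE ||
		dir == DMA_BIDIRECTIONAL);
}

int main(void)
{
	/* Prints 0: DMA_ATTR_SKIP_CPU_SYNC alone suppresses the
	 * leak-preventing bounce, although the caller never gave an
	 * overwrite guarantee. */
	printf("%d\n", bounces(DMA_ATTR_SKIP_CPU_SYNC, DMA_FROM_DEVICE));
	return 0;
}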
Let me note that if the size used with the dma_sync_* API is less than the size used with dma_[un]map_*, under certain circumstances we may still end up with swiotlb not being transparent. In that sense, this is no perfect fix either.
To make this bulletproof, we would have to bounce the entire mapping/bounce buffer. For that we would have to figure out the starting address and the size of the mapping in swiotlb_sync_single_for_device(). While this does seem possible, there seems to be no firm consensus on how things are supposed to work. A sketch of the remaining gap follows.
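The sketch below is a hypothetical driver fragment; the function, buffer, and sizes are made up, only the DMA API calls are real.

#include <linux/dma-mapping.h>

static void partial_sync(struct device *dev, dma_addr_t dma)
{
	/* The buffer was mapped earlier with
	 * dma_map_single(dev, buf, 4096, DMA_FROM_DEVICE). */

	/* Only the first 512 bytes of the bounce slot are refreshed
	 * from the original buffer here... */
	dma_sync_single_for_device(dev, dma, 512, DMA_FROM_DEVICE);

	/* ...so a later dma_sync_single_for_cpu() or dma_unmap_single()
	 * over the full 4096 bytes can copy back stale bytes 512..4095
	 * from the bounce slot, clobbering anything the CPU wrote to
	 * the original buffer since the mapping: swiotlb is then not
	 * transparent. */
}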
Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
Fixes: ddbd89deb7d3 ("swiotlb: fix info leak with DMA_FROM_DEVICE")
Cc: stable@vger.kernel.org
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
---
 Documentation/DMA-attributes.txt | 10 ----------
 include/linux/dma-mapping.h      |  8 --------
 kernel/dma/swiotlb.c             | 25 ++++++++++++++++---------
 3 files changed, 16 insertions(+), 27 deletions(-)
diff --git a/Documentation/DMA-attributes.txt b/Documentation/DMA-attributes.txt
index 7193505a98ca..8f8d97f65d73 100644
--- a/Documentation/DMA-attributes.txt
+++ b/Documentation/DMA-attributes.txt
@@ -156,13 +156,3 @@ accesses to DMA buffers in both privileged "supervisor" and unprivileged
 subsystem that the buffer is fully accessible at the elevated privilege
 level (and ideally inaccessible or at least read-only at the
 lesser-privileged levels).
-
-DMA_ATTR_OVERWRITE
-------------------
-
-This is a hint to the DMA-mapping subsystem that the
-device is expected to overwrite the entire mapped size,
-thus the caller does not require any of the previous
-buffer contents to be preserved. This allows
-bounce-buffering implementations to optimise
-DMA_FROM_DEVICE transfers.

diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index da90f20e11c1..4d450672b7d6 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -70,14 +70,6 @@
  */
 #define DMA_ATTR_PRIVILEGED	(1UL << 9)
 
-/*
- * This is a hint to the DMA-mapping subsystem that the device is expected
- * to overwrite the entire mapped size, thus the caller does not require any
- * of the previous buffer contents to be preserved. This allows
- * bounce-buffering implementations to optimise DMA_FROM_DEVICE transfers.
- */
-#define DMA_ATTR_OVERWRITE	(1UL << 10)
-
 /*
  * A dma_addr_t can hold any valid DMA or bus address for the platform.
  * It can be given to a device to use as a DMA source or target. A CPU cannot

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index f17b771856d1..76fb001dddd4 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -571,10 +571,14 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
 	 */
 	for (i = 0; i < nslots; i++)
 		io_tlb_orig_addr[index+i] = orig_addr + (i << IO_TLB_SHIFT);
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    (!(attrs & DMA_ATTR_OVERWRITE) || dir == DMA_TO_DEVICE ||
-	    dir == DMA_BIDIRECTIONAL))
-		swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
+	/*
+	 * When dir == DMA_FROM_DEVICE we could omit the copy from the orig
+	 * to the tlb buffer, if we knew for sure the device will
+	 * overwirte the entire current content. But we don't. Thus
+	 * unconditional bounce may prevent leaking swiotlb content (i.e.
+	 * kernel memory) to user-space.
+	 */
+	swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
 
 	return tlb_addr;
 }
@@ -649,11 +653,14 @@ void swiotlb_tbl_sync_single(struct device *hwdev, phys_addr_t tlb_addr,
 		BUG_ON(dir != DMA_TO_DEVICE);
 		break;
 	case SYNC_FOR_DEVICE:
-		if (likely(dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
-			swiotlb_bounce(orig_addr, tlb_addr,
-				       size, DMA_TO_DEVICE);
-		else
-			BUG_ON(dir != DMA_FROM_DEVICE);
+		/*
+		 * Unconditional bounce is necessary to avoid corruption on
+		 * sync_*_for_cpu or dma_ummap_* when the device didn't
+		 * overwrite the whole lengt of the bounce buffer.
+		 */
+		swiotlb_bounce(orig_addr, tlb_addr,
+			       size, DMA_TO_DEVICE);
+		BUG_ON(!valid_dma_direction(dir));
 		break;
 	default:
 		BUG();
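For orientation: the last argument of swiotlb_bounce() selects the copy direction, which is why both hunks pass DMA_TO_DEVICE to copy from the original buffer into the bounce buffer. A simplified sketch of the 5.4 helper follows (the highmem/page-crossing path is elided):

/* Simplified from kernel/dma/swiotlb.c in 5.4. DMA_TO_DEVICE means
 * "copy original buffer -> bounce buffer"; anything else copies the
 * bounce buffer back to the original. */
static void swiotlb_bounce(phys_addr_t orig_addr, phys_addr_t tlb_addr,
			   size_t size, enum dma_data_direction dir)
{
	unsigned char *vaddr = phys_to_virt(tlb_addr);

	if (dir == DMA_TO_DEVICE)
		memcpy(vaddr, phys_to_virt(orig_addr), size);
	else
		memcpy(phys_to_virt(orig_addr), vaddr, size);
}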
On Tue, Mar 22, 2022 at 01:41:26PM +0100, Halil Pasic wrote:
> This is a backport of ddbd89deb7d3 ("swiotlb: fix info leak with
> DMA_FROM_DEVICE") and aa6f8dcbab47 ("swiotlb: rework "fix info leak
> with DMA_FROM_DEVICE"") for 5.4.y.
> 
> I had to resolve some merge conflicts: at this point 5.4.y still has
> swiotlb_tbl_sync_single() rather than swiotlb_sync_single_for_device(),
> and Documentation/DMA-attributes.txt has not yet been renamed to
> Documentation/core-api/dma-attributes.rst.
> 
> Halil Pasic (2):
>   swiotlb: fix info leak with DMA_FROM_DEVICE
>   swiotlb: rework "fix info leak with DMA_FROM_DEVICE"
> 
>  kernel/dma/swiotlb.c | 24 ++++++++++++++++--------
>  1 file changed, 16 insertions(+), 8 deletions(-)
> 
> base-commit: 7f44fdc1563d6bca95ee9fb4414e4b8286bccb0c
> -- 
> 2.32.0
All now queued up, thanks!
greg k-h