This is a note to let you know that I've just added the patch titled
dmaengine: virt-dma: Support for race free transfer termination
to the 4.14-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=su...
The filename of the patch is:
     dmaengine-virt-dma-support-for-race-free-transfer-termination.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree, please let stable@vger.kernel.org know about it.
From 1c7f072d94e8b697fd9b70cdb268622a18faf522 Mon Sep 17 00:00:00 2001
From: Peter Ujfalusi <peter.ujfalusi@ti.com>
Date: Tue, 14 Nov 2017 16:32:04 +0200
Subject: dmaengine: virt-dma: Support for race free transfer termination

From: Peter Ujfalusi <peter.ujfalusi@ti.com>
commit 1c7f072d94e8b697fd9b70cdb268622a18faf522 upstream.
Even with the introduced vchan_synchronize() we can face a race when terminating a cyclic transfer.

If terminate_all is called after the interrupt handler has called vchan_cyclic_callback(), but before the vchan_complete tasklet has run, vc->cyclic still points to the cyclic descriptor while the descriptor itself has already been freed in the driver's terminate_all() callback. When vchan_complete() executes, it fetches the vc->cyclic vdesc, but the pointer now refers to memory that was already freed, leading to a (hard to reproduce) kernel crash.
In order to fix this, drivers should:
- call vchan_terminate_vdesc() from their terminate_all callback instead of calling their free_desc function to free up the descriptor,
- implement the device_synchronize callback and call vchan_synchronize() from it.

This way we can make sure that the descriptor is only freed after the vchan callback has been executed in a safe manner (see the driver-side sketch after the patch below).
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/dma/virt-dma.h |   30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)
--- a/drivers/dma/virt-dma.h
+++ b/drivers/dma/virt-dma.h
@@ -35,6 +35,7 @@ struct virt_dma_chan {
 	struct list_head desc_completed;
 
 	struct virt_dma_desc *cyclic;
+	struct virt_dma_desc *vd_terminated;
 };
 
 static inline struct virt_dma_chan *to_virt_chan(struct dma_chan *chan)
@@ -116,6 +117,25 @@ static inline void vchan_cyclic_callback
 }
 
 /**
+ * vchan_terminate_vdesc - Disable pending cyclic callback
+ * @vd: virtual descriptor to be terminated
+ *
+ * vc.lock must be held by caller
+ */
+static inline void vchan_terminate_vdesc(struct virt_dma_desc *vd)
+{
+	struct virt_dma_chan *vc = to_virt_chan(vd->tx.chan);
+
+	/* free up stuck descriptor */
+	if (vc->vd_terminated)
+		vchan_vdesc_fini(vc->vd_terminated);
+
+	vc->vd_terminated = vd;
+	if (vc->cyclic == vd)
+		vc->cyclic = NULL;
+}
+
+/**
  * vchan_next_desc - peek at the next descriptor to be processed
  * @vc: virtual channel to obtain descriptor from
  *
@@ -168,10 +188,20 @@ static inline void vchan_free_chan_resou
  * Makes sure that all scheduled or active callbacks have finished running. For
  * proper operation the caller has to ensure that no new callbacks are scheduled
  * after the invocation of this function started.
+ * Free up the terminated cyclic descriptor to prevent memory leakage.
  */
 static inline void vchan_synchronize(struct virt_dma_chan *vc)
 {
+	unsigned long flags;
+
 	tasklet_kill(&vc->task);
+
+	spin_lock_irqsave(&vc->lock, flags);
+	if (vc->vd_terminated) {
+		vchan_vdesc_fini(vc->vd_terminated);
+		vc->vd_terminated = NULL;
+	}
+	spin_unlock_irqrestore(&vc->lock, flags);
 }
 
 #endif
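
For readers converting a driver, here is a minimal sketch (not part of the patch) of how the new helpers are meant to be used from a driver's terminate_all and device_synchronize callbacks. The foo_* names, the to_foo_chan() helper and the c->desc field are hypothetical placeholders; only the vchan_* calls come from virt-dma.h:

#include <linux/dmaengine.h>
#include <linux/spinlock.h>

#include "virt-dma.h"

static int foo_dma_terminate_all(struct dma_chan *chan)
{
	struct foo_dma_chan *c = to_foo_chan(chan);
	unsigned long flags;
	LIST_HEAD(head);

	spin_lock_irqsave(&c->vc.lock, flags);

	if (c->desc) {
		/* stop the hardware channel here, then ... */

		/*
		 * Hand the in-flight descriptor over to virt-dma instead of
		 * freeing it: vchan_terminate_vdesc() parks it in
		 * vc->vd_terminated (and clears vc->cyclic), so the
		 * vchan_complete tasklet can no longer dereference freed
		 * memory. vchan_synchronize() frees it later.
		 */
		vchan_terminate_vdesc(&c->desc->vd);
		c->desc = NULL;
	}

	vchan_get_all_descriptors(&c->vc, &head);
	spin_unlock_irqrestore(&c->vc.lock, flags);
	vchan_dma_desc_free_list(&c->vc, &head);

	return 0;
}

static void foo_dma_synchronize(struct dma_chan *chan)
{
	struct foo_dma_chan *c = to_foo_chan(chan);

	/* Kills the tasklet and frees the descriptor parked above. */
	vchan_synchronize(&c->vc);
}

This is the same pattern the queued bcm2835-dma and amba-pl08x conversions listed below follow.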
Patches currently in stable-queue which might be from peter.ujfalusi@ti.com are
queue-4.14/dmaengine-bcm2835-dma-use-vchan_terminate_vdesc-instead-of-desc_free.patch
queue-4.14/dmaengine-virt-dma-support-for-race-free-transfer-termination.patch
queue-4.14/dmaengine-amba-pl08x-use-vchan_terminate_vdesc-instead-of-desc_free.patch