The current driver has some issues; this series fixes them.
PATCH 1 fixes a wrong offset computation in the DMA descriptor addresses.
PATCH 2 fixes the xdma_synchronize callback, which was not properly waiting for the last transfer.
PATCH 3 clarifies the documentation for xdma_fill_descs().
---
Louis Chauvet (1):
      dmaengine: xilinx: xdma: Fix synchronization issue

Miquel Raynal (2):
      dmaengine: xilinx: xdma: Fix wrong offsets in the buffers addresses in dma descriptor
      dmaengine: xilinx: xdma: Clarify kdoc in XDMA driver

 drivers/dma/xilinx/xdma-regs.h |  3 +++
 drivers/dma/xilinx/xdma.c      | 42 +++++++++++++++++++++++++++---------------
 2 files changed, 30 insertions(+), 15 deletions(-)
---
base-commit: 8e938e39866920ddc266898e6ae1fffc5c8f51aa
change-id: 20240322-digigram-xdma-fixes-aa13451b7474
Best regards,
From: Miquel Raynal <miquel.raynal@bootlin.com>
The addition of interleaved transfers slightly changed the way addresses inside DMA descriptors are derived, breaking cyclic transfers.
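For illustration only (a standalone sketch, not part of the patch), the effect of accumulating i * period_size instead of period_size can be reproduced in plain C; the names loosely mirror those used in xdma_prep_dma_cyclic():

/* Standalone sketch: per-period buffer offsets, buggy vs. fixed */
#include <stdio.h>

int main(void)
{
	unsigned int period_size = 0x1000, periods = 4, i;
	unsigned long long addr_buggy = 0, addr_fixed = 0;

	for (i = 0; i < periods; i++) {
		/* buggy prints 0x0, 0x0, 0x1000, 0x3000 */
		/* fixed prints 0x0, 0x1000, 0x2000, 0x3000 */
		printf("period %u: buggy 0x%llx, fixed 0x%llx\n",
		       i, addr_buggy, addr_fixed);
		addr_buggy += i * period_size;	/* wrong: drifts off the period grid */
		addr_fixed += period_size;	/* right: walks the buffer linearly */
	}
	return 0;
}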
Fixes: 3e184e64c2e5 ("dmaengine: xilinx: xdma: Prepare the introduction of interleaved DMA transfers")
Cc: stable@vger.kernel.org
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Signed-off-by: Louis Chauvet <louis.chauvet@bootlin.com>
---
 drivers/dma/xilinx/xdma.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/dma/xilinx/xdma.c b/drivers/dma/xilinx/xdma.c
index 170017ff2aad..b9788aa8f6b7 100644
--- a/drivers/dma/xilinx/xdma.c
+++ b/drivers/dma/xilinx/xdma.c
@@ -704,7 +704,7 @@ xdma_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t address,
 	desc_num = 0;
 	for (i = 0; i < periods; i++) {
 		desc_num += xdma_fill_descs(sw_desc, *src, *dst, period_size, desc_num);
-		addr += i * period_size;
+		addr += period_size;
 	}
 
 	tx_desc = vchan_tx_prep(&xdma_chan->vchan, &sw_desc->vdesc, flags);
The current xdma_synchronize method does not properly wait for the last transfer to be done. Due to limitations of the XDMA engine, it is not possible to stop a transfer in the middle of a descriptor. In other words, if a stop is requested at the end of descriptor "N" and the OS is fast enough, the DMA controller will effectively stop immediately. However, if the OS is slightly too slow to request the stop and the DMA engine starts descriptor "N+1", the "N+1" transfer will be performed until its end. This means that after a terminate_all, the last descriptor must remain valid and the synchronization must wait for this last descriptor to be terminated.
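For context, here is a minimal sketch of the consumer-side sequence this fix protects, using the generic dmaengine API (not code from this series): memory attached to the descriptors may only be reused once dmaengine_synchronize() has returned, which is why the driver callback must wait for a possibly still-running descriptor "N+1".

/* Sketch of the dmaengine consumer contract motivating this fix */
#include <linux/dmaengine.h>

static void stop_cyclic_stream(struct dma_chan *chan)
{
	/* Request termination; the engine may still finish descriptor N+1 */
	dmaengine_terminate_async(chan);

	/*
	 * Block until the channel is quiescent; this lands in the driver's
	 * .device_synchronize callback (xdma_synchronize here).
	 */
	dmaengine_synchronize(chan);

	/* Only now is it safe to free buffers or reconfigure the channel */
}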
Fixes: 855c2e1d1842 ("dmaengine: xilinx: xdma: Rework xdma_terminate_all()")
Fixes: f5c392d106e7 ("dmaengine: xilinx: xdma: Add terminate_all/synchronize callbacks")
Cc: stable@vger.kernel.org
Suggested-by: Miquel Raynal <miquel.raynal@bootlin.com>
Signed-off-by: Louis Chauvet <louis.chauvet@bootlin.com>
---
 drivers/dma/xilinx/xdma-regs.h |  3 +++
 drivers/dma/xilinx/xdma.c      | 26 ++++++++++++++++++--------
 2 files changed, 21 insertions(+), 8 deletions(-)
diff --git a/drivers/dma/xilinx/xdma-regs.h b/drivers/dma/xilinx/xdma-regs.h
index 98f5f6fb9ff9..6ad08878e938 100644
--- a/drivers/dma/xilinx/xdma-regs.h
+++ b/drivers/dma/xilinx/xdma-regs.h
@@ -117,6 +117,9 @@ struct xdma_hw_desc {
 			 CHAN_CTRL_IE_WRITE_ERROR |		\
 			 CHAN_CTRL_IE_DESC_ERROR)
 
+/* bits of the channel status register */
+#define XDMA_CHAN_STATUS_BUSY			BIT(0)
+
 #define XDMA_CHAN_STATUS_MASK CHAN_CTRL_START
 
 #define XDMA_CHAN_ERROR_MASK (CHAN_CTRL_IE_DESC_ALIGN_MISMATCH |	\
diff --git a/drivers/dma/xilinx/xdma.c b/drivers/dma/xilinx/xdma.c
index b9788aa8f6b7..5a3a3293b21b 100644
--- a/drivers/dma/xilinx/xdma.c
+++ b/drivers/dma/xilinx/xdma.c
@@ -71,6 +71,8 @@ struct xdma_chan {
 	enum dma_transfer_direction	dir;
 	struct dma_slave_config		cfg;
 	u32				irq;
+	struct completion		last_interrupt;
+	bool				stop_requested;
 };
 
 /**
@@ -376,6 +378,8 @@ static int xdma_xfer_start(struct xdma_chan *xchan)
 		return ret;
 
 	xchan->busy = true;
+	xchan->stop_requested = false;
+	reinit_completion(&xchan->last_interrupt);
 
 	return 0;
 }
@@ -387,7 +391,6 @@ static int xdma_xfer_start(struct xdma_chan *xchan)
 static int xdma_xfer_stop(struct xdma_chan *xchan)
 {
 	int ret;
-	u32 val;
 	struct xdma_device *xdev = xchan->xdev_hdl;
 
 	/* clear run stop bit to prevent any further auto-triggering */
@@ -395,13 +398,7 @@ static int xdma_xfer_stop(struct xdma_chan *xchan)
 			   CHAN_CTRL_RUN_STOP);
 	if (ret)
 		return ret;
-
-	/* Clear the channel status register */
-	ret = regmap_read(xdev->rmap, xchan->base + XDMA_CHAN_STATUS_RC, &val);
-	if (ret)
-		return ret;
-
-	return 0;
+	return ret;
 }
 
 /**
@@ -474,6 +471,8 @@ static int xdma_alloc_channels(struct xdma_device *xdev,
 		xchan->xdev_hdl = xdev;
 		xchan->base = base + i * XDMA_CHAN_STRIDE;
 		xchan->dir = dir;
+		xchan->stop_requested = false;
+		init_completion(&xchan->last_interrupt);
 
 		ret = xdma_channel_init(xchan);
 		if (ret)
@@ -521,6 +520,7 @@ static int xdma_terminate_all(struct dma_chan *chan)
 	spin_lock_irqsave(&xdma_chan->vchan.lock, flags);
 
 	xdma_chan->busy = false;
+	xdma_chan->stop_requested = true;
 	vd = vchan_next_desc(&xdma_chan->vchan);
 	if (vd) {
 		list_del(&vd->node);
@@ -542,6 +542,13 @@ static int xdma_terminate_all(struct dma_chan *chan)
 static void xdma_synchronize(struct dma_chan *chan)
 {
 	struct xdma_chan *xdma_chan = to_xdma_chan(chan);
+	struct xdma_device *xdev = xdma_chan->xdev_hdl;
+	int st = 0;
+
+	/* If the engine continues running, wait for the last interrupt */
+	regmap_read(xdev->rmap, xdma_chan->base + XDMA_CHAN_STATUS, &st);
+	if (st & XDMA_CHAN_STATUS_BUSY)
+		wait_for_completion_timeout(&xdma_chan->last_interrupt, msecs_to_jiffies(1000));
 
 	vchan_synchronize(&xdma_chan->vchan);
 }
@@ -876,6 +883,9 @@ static irqreturn_t xdma_channel_isr(int irq, void *dev_id)
 	u32 st;
 	bool repeat_tx;
 
+	if (xchan->stop_requested)
+		complete(&xchan->last_interrupt);
+
 	spin_lock(&xchan->vchan.lock);
 
 	/* get submitted request */
On 3/27/24 02:58, Louis Chauvet wrote:
> The current xdma_synchronize method does not properly wait for the last transfer to be done. Due to limitations of the XDMA engine, it is not possible to stop a transfer in the middle of a descriptor. In other words, if a stop is requested at the end of descriptor "N" and the OS is fast enough, the DMA controller will effectively stop immediately. However, if the OS is slightly too slow to request the stop and the DMA engine starts descriptor "N+1", the "N+1" transfer will be performed until its end. This means that after a terminate_all, the last descriptor must remain valid and the synchronization must wait for this last descriptor to be terminated.
> 
> Fixes: 855c2e1d1842 ("dmaengine: xilinx: xdma: Rework xdma_terminate_all()")
> Fixes: f5c392d106e7 ("dmaengine: xilinx: xdma: Add terminate_all/synchronize callbacks")
> Cc: stable@vger.kernel.org
> Suggested-by: Miquel Raynal <miquel.raynal@bootlin.com>
> Signed-off-by: Louis Chauvet <louis.chauvet@bootlin.com>
> ---
>  drivers/dma/xilinx/xdma-regs.h |  3 +++
>  drivers/dma/xilinx/xdma.c      | 26 ++++++++++++++++++--------
>  2 files changed, 21 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/dma/xilinx/xdma-regs.h b/drivers/dma/xilinx/xdma-regs.h
> index 98f5f6fb9ff9..6ad08878e938 100644
> --- a/drivers/dma/xilinx/xdma-regs.h
> +++ b/drivers/dma/xilinx/xdma-regs.h
> @@ -117,6 +117,9 @@ struct xdma_hw_desc {
>  			 CHAN_CTRL_IE_WRITE_ERROR |		\
>  			 CHAN_CTRL_IE_DESC_ERROR)
>  
> +/* bits of the channel status register */
> +#define XDMA_CHAN_STATUS_BUSY			BIT(0)
> +
>  #define XDMA_CHAN_STATUS_MASK CHAN_CTRL_START
>  
>  #define XDMA_CHAN_ERROR_MASK (CHAN_CTRL_IE_DESC_ALIGN_MISMATCH |	\
> diff --git a/drivers/dma/xilinx/xdma.c b/drivers/dma/xilinx/xdma.c
> index b9788aa8f6b7..5a3a3293b21b 100644
> --- a/drivers/dma/xilinx/xdma.c
> +++ b/drivers/dma/xilinx/xdma.c
> @@ -71,6 +71,8 @@ struct xdma_chan {
>  	enum dma_transfer_direction	dir;
>  	struct dma_slave_config		cfg;
>  	u32				irq;
> +	struct completion		last_interrupt;
> +	bool				stop_requested;
>  };
>  
>  /**
> @@ -376,6 +378,8 @@ static int xdma_xfer_start(struct xdma_chan *xchan)
>  		return ret;
>  
>  	xchan->busy = true;
> +	xchan->stop_requested = false;
> +	reinit_completion(&xchan->last_interrupt);
If stop_requested is true, it should not start another transfer. So I
would suggest to add

	if (xchan->stop_requested)
		return -ENODEV;

at the beginning of xdma_xfer_start(). xdma_xfer_start() is protected
by chan lock.
>  	return 0;
>  }
>  
> @@ -387,7 +391,6 @@ static int xdma_xfer_start(struct xdma_chan *xchan)
>  static int xdma_xfer_stop(struct xdma_chan *xchan)
>  {
>  	int ret;
> -	u32 val;
>  	struct xdma_device *xdev = xchan->xdev_hdl;
>  
>  	/* clear run stop bit to prevent any further auto-triggering */
> @@ -395,13 +398,7 @@ static int xdma_xfer_stop(struct xdma_chan *xchan)
>  			   CHAN_CTRL_RUN_STOP);
>  	if (ret)
>  		return ret;
Above two lines can be removed with your change.
> -
> -	/* Clear the channel status register */
> -	ret = regmap_read(xdev->rmap, xchan->base + XDMA_CHAN_STATUS_RC, &val);
> -	if (ret)
> -		return ret;
> -
> -	return 0;
> +	return ret;
>  }
>  
>  /**
> @@ -474,6 +471,8 @@ static int xdma_alloc_channels(struct xdma_device *xdev,
>  		xchan->xdev_hdl = xdev;
>  		xchan->base = base + i * XDMA_CHAN_STRIDE;
>  		xchan->dir = dir;
> +		xchan->stop_requested = false;
> +		init_completion(&xchan->last_interrupt);
>  
>  		ret = xdma_channel_init(xchan);
>  		if (ret)
> @@ -521,6 +520,7 @@ static int xdma_terminate_all(struct dma_chan *chan)
>  	spin_lock_irqsave(&xdma_chan->vchan.lock, flags);
>  
>  	xdma_chan->busy = false;
> +	xdma_chan->stop_requested = true;
>  	vd = vchan_next_desc(&xdma_chan->vchan);
>  	if (vd) {
>  		list_del(&vd->node);
> @@ -542,6 +542,13 @@ static int xdma_terminate_all(struct dma_chan *chan)
>  static void xdma_synchronize(struct dma_chan *chan)
>  {
>  	struct xdma_chan *xdma_chan = to_xdma_chan(chan);
> +	struct xdma_device *xdev = xdma_chan->xdev_hdl;
> +	int st = 0;
> +
> +	/* If the engine continues running, wait for the last interrupt */
> +	regmap_read(xdev->rmap, xdma_chan->base + XDMA_CHAN_STATUS, &st);
> +	if (st & XDMA_CHAN_STATUS_BUSY)
> +		wait_for_completion_timeout(&xdma_chan->last_interrupt, msecs_to_jiffies(1000));
I suggest to add error message for timeout case.
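For instance (just a sketch; it assumes the driver's existing xdma_err()
helper and keeps the 1 s timeout): wait_for_completion_timeout() returns 0
when it times out, so the call could become:

	if (st & XDMA_CHAN_STATUS_BUSY) {
		/* returns 0 on timeout, remaining jiffies otherwise */
		if (!wait_for_completion_timeout(&xdma_chan->last_interrupt,
						 msecs_to_jiffies(1000)))
			xdma_err(xdev, "timeout waiting for the last interrupt");
	}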
>  
>  	vchan_synchronize(&xdma_chan->vchan);
>  }
> @@ -876,6 +883,9 @@ static irqreturn_t xdma_channel_isr(int irq, void *dev_id)
>  	u32 st;
>  	bool repeat_tx;
>  
> +	if (xchan->stop_requested)
> +		complete(&xchan->last_interrupt);
This should be moved to the end of the function to make sure processing
of the previous transfer is completed.

out:
	if (xchan->stop_requested) {
		xchan->busy = false;
		complete(&xchan->last_interrupt);
	}

	spin_unlock(&xchan->vchan.lock);

	return IRQ_HANDLED;
Thanks,
Lizhi
> +
>  	spin_lock(&xchan->vchan.lock);
>  
>  	/* get submitted request */
Hi Lizhi,
> > @@ -376,6 +378,8 @@ static int xdma_xfer_start(struct xdma_chan *xchan)
> >  		return ret;
> >  
> >  	xchan->busy = true;
> > +	xchan->stop_requested = false;
> > +	reinit_completion(&xchan->last_interrupt);
> 
> If stop_requested is true, it should not start another transfer. So I
> would suggest to add
> 
> 	if (xchan->stop_requested)
> 		return -ENODEV;
Maybe -EBUSY in this case?
I thought synchronize() was mandatory in-between. If that's not the case then indeed we need to block or error-out if a new transfer gets started.
> at the beginning of xdma_xfer_start(). xdma_xfer_start() is protected
> by chan lock.
> 
> >  	return 0;
> >  }
Thanks,
Miquèl
On 3/27/24 17:23, Miquel Raynal wrote:
> Hi Lizhi,
> 
> > > @@ -376,6 +378,8 @@ static int xdma_xfer_start(struct xdma_chan *xchan)
> > >  		return ret;
> > >  
> > >  	xchan->busy = true;
> > > +	xchan->stop_requested = false;
> > > +	reinit_completion(&xchan->last_interrupt);
> > 
> > If stop_requested is true, it should not start another transfer. So I
> > would suggest to add
> > 
> > 	if (xchan->stop_requested)
> > 		return -ENODEV;
> 
> Maybe -EBUSY in this case?
> 
> I thought synchronize() was mandatory in-between. If that's not the
> case then indeed we need to block or error-out if a new transfer gets
> started.
Okay. It looks like issue_pending is not expected between
terminate_all() and synchronize(). This check is not needed.
Thanks,
Lizhi
> > at the beginning of xdma_xfer_start(). xdma_xfer_start() is protected
> > by chan lock.
> > 
> > >  	return 0;
> > >  }
> 
> Thanks,
> Miquèl
Hi Louis,
kernel test robot noticed the following build warnings:
[auto build test WARNING on 8e938e39866920ddc266898e6ae1fffc5c8f51aa]
url:    https://github.com/intel-lab-lkp/linux/commits/Louis-Chauvet/dmaengine-xilin...
base:   8e938e39866920ddc266898e6ae1fffc5c8f51aa
patch link:    https://lore.kernel.org/r/20240327-digigram-xdma-fixes-v1-2-45f4a52c0283%40b...
patch subject: [PATCH 2/3] dmaengine: xilinx: xdma: Fix synchronization issue
config: sparc-allyesconfig (https://download.01.org/0day-ci/archive/20240328/202403280803.IAFl90ZE-lkp@i...)
compiler: sparc64-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240328/202403280803.IAFl90ZE-lkp@i...)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202403280803.IAFl90ZE-lkp@intel.com/
All warnings (new ones prefixed by >>):
>> drivers/dma/xilinx/xdma.c:76: warning: Function parameter or struct member 'last_interrupt' not described in 'xdma_chan'
>> drivers/dma/xilinx/xdma.c:76: warning: Function parameter or struct member 'stop_requested' not described in 'xdma_chan'
vim +76 drivers/dma/xilinx/xdma.c
17ce252266c7f0 Lizhi Hou      2023-01-19   53  
17ce252266c7f0 Lizhi Hou      2023-01-19   54  /**
17ce252266c7f0 Lizhi Hou      2023-01-19   55   * struct xdma_chan - Driver specific DMA channel structure
17ce252266c7f0 Lizhi Hou      2023-01-19   56   * @vchan: Virtual channel
17ce252266c7f0 Lizhi Hou      2023-01-19   57   * @xdev_hdl: Pointer to DMA device structure
17ce252266c7f0 Lizhi Hou      2023-01-19   58   * @base: Offset of channel registers
17ce252266c7f0 Lizhi Hou      2023-01-19   59   * @desc_pool: Descriptor pool
17ce252266c7f0 Lizhi Hou      2023-01-19   60   * @busy: Busy flag of the channel
17ce252266c7f0 Lizhi Hou      2023-01-19   61   * @dir: Transferring direction of the channel
17ce252266c7f0 Lizhi Hou      2023-01-19   62   * @cfg: Transferring config of the channel
17ce252266c7f0 Lizhi Hou      2023-01-19   63   * @irq: IRQ assigned to the channel
17ce252266c7f0 Lizhi Hou      2023-01-19   64   */
17ce252266c7f0 Lizhi Hou      2023-01-19   65  struct xdma_chan {
17ce252266c7f0 Lizhi Hou      2023-01-19   66  	struct virt_dma_chan		vchan;
17ce252266c7f0 Lizhi Hou      2023-01-19   67  	void				*xdev_hdl;
17ce252266c7f0 Lizhi Hou      2023-01-19   68  	u32				base;
17ce252266c7f0 Lizhi Hou      2023-01-19   69  	struct dma_pool			*desc_pool;
17ce252266c7f0 Lizhi Hou      2023-01-19   70  	bool				busy;
17ce252266c7f0 Lizhi Hou      2023-01-19   71  	enum dma_transfer_direction	dir;
17ce252266c7f0 Lizhi Hou      2023-01-19   72  	struct dma_slave_config		cfg;
17ce252266c7f0 Lizhi Hou      2023-01-19   73  	u32				irq;
70e8496bf693e1 Louis Chauvet  2024-03-27   74  	struct completion		last_interrupt;
70e8496bf693e1 Louis Chauvet  2024-03-27   75  	bool				stop_requested;
17ce252266c7f0 Lizhi Hou      2023-01-19  @76  };
17ce252266c7f0 Lizhi Hou      2023-01-19   77  
From: Miquel Raynal <miquel.raynal@bootlin.com>
Clarify the kernel doc of xdma_fill_descs(), especially how big chunks will be handled.
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Signed-off-by: Louis Chauvet <louis.chauvet@bootlin.com>
---
 drivers/dma/xilinx/xdma.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/drivers/dma/xilinx/xdma.c b/drivers/dma/xilinx/xdma.c
index 5a3a3293b21b..313b217388fe 100644
--- a/drivers/dma/xilinx/xdma.c
+++ b/drivers/dma/xilinx/xdma.c
@@ -554,12 +554,14 @@ static void xdma_synchronize(struct dma_chan *chan)
 }
 
 /**
- * xdma_fill_descs - Fill hardware descriptors with contiguous memory block addresses
- * @sw_desc: tx descriptor state container
- * @src_addr: Value for a ->src_addr field of a first descriptor
- * @dst_addr: Value for a ->dst_addr field of a first descriptor
- * @size: Total size of a contiguous memory block
- * @filled_descs_num: Number of filled hardware descriptors for corresponding sw_desc
+ * xdma_fill_descs() - Fill hardware descriptors for one contiguous memory chunk.
+ *		       More than one descriptor will be used if the size is bigger
+ *		       than XDMA_DESC_BLEN_MAX.
+ * @sw_desc: Descriptor container
+ * @src_addr: First value for the ->src_addr field
+ * @dst_addr: First value for the ->dst_addr field
+ * @size: Size of the contiguous memory block
+ * @filled_descs_num: Index of the first descriptor to take care of in @sw_desc
  */
 static inline u32 xdma_fill_descs(struct xdma_desc *sw_desc, u64 src_addr,
 				  u64 dst_addr, u32 size, u32 filled_descs_num)
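As a back-of-the-envelope check of the behaviour the clarified kdoc describes
(an illustrative sketch, not driver code; the BIT(28) - 1 limit is the
XDMA_DESC_BLEN_MAX value from xdma-regs.h):

/* Standalone sketch: hardware descriptors consumed by one contiguous chunk */
#include <stdio.h>

#define XDMA_DESC_BLEN_MAX	((1ULL << 28) - 1)	/* per-descriptor byte limit */

static unsigned int descs_needed(unsigned long long size)
{
	/* equivalent to DIV_ROUND_UP(size, XDMA_DESC_BLEN_MAX) */
	return (unsigned int)((size + XDMA_DESC_BLEN_MAX - 1) / XDMA_DESC_BLEN_MAX);
}

int main(void)
{
	printf("%u\n", descs_needed(0x1000));			/* 1 */
	printf("%u\n", descs_needed(XDMA_DESC_BLEN_MAX));	/* 1 */
	printf("%u\n", descs_needed(XDMA_DESC_BLEN_MAX + 1));	/* 2 */
	return 0;
}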
Hi,
Thanks for your patch.
FYI: kernel test robot noticed the stable kernel rule is not satisfied.
The check is based on https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html#opti...
Rule: add the tag "Cc: stable@vger.kernel.org" in the sign-off area to have the
patch automatically included in the stable tree.
Subject: [PATCH 3/3] dmaengine: xilinx: xdma: Clarify kdoc in XDMA driver
Link: https://lore.kernel.org/stable/20240327-digigram-xdma-fixes-v1-3-45f4a52c028...
On Wed, 27 Mar 2024 10:58:47 +0100, Louis Chauvet wrote:
> The current driver has some issues; this series fixes them.
> 
> PATCH 1 fixes a wrong offset computation in the DMA descriptor addresses.
> PATCH 2 fixes the xdma_synchronize callback, which was not properly waiting for the last transfer.
> PATCH 3 clarifies the documentation for xdma_fill_descs().
[...]
Applied, thanks!
[1/3] dmaengine: xilinx: xdma: Fix wrong offsets in the buffers addresses in dma descriptor
      commit: 5b9706bfc094314c600ab810a61208a7cbaa4cb3
[2/3] dmaengine: xilinx: xdma: Fix synchronization issue
      commit: 6a40fb8245965b481b4dcce011cd63f20bf91ee0
[3/3] dmaengine: xilinx: xdma: Clarify kdoc in XDMA driver
      commit: 7a71c6dc21d5ae83ab27c39a67845d6d23ac271f
Best regards,