From: Devendra K Verma <devverma@amd.com>
[ Upstream commit 5e742de97c806a4048418237ef1283e7d71eaf4b ]
The DMA engine supports callback_result, which provides the status of the
request and its residue. This helps in determining the correct status of the
request and in managing its resources efficiently. The 'callback_result'
method is preferred over the deprecated 'callback' method.
Signed-off-by: Devendra K Verma <devverma@amd.com>
Link: https://lore.kernel.org/r/20250821121505.318179-1-devverma@amd.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
LLM Generated explanations, may be completely bogus:
YES
- What it fixes
  - Before this change, virt-dma initializes every descriptor's result to "no error, residue 0", and drivers that do not overwrite it always report success with no remaining bytes, even on an abort or partial transfer. See the default init in vchan_tx_prep: drivers/dma/virt-dma.h:66 and drivers/dma/virt-dma.h:67 (a condensed paraphrase follows this list).
  - This patch correctly sets both the transaction status and the residue in the dw-edma driver when a transfer completes or aborts, so clients using callback_result get accurate results rather than the misleading defaults.
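For reference, a condensed paraphrase (not a verbatim quote) of the vchan_tx_prep() default initialization referred to above; only the two tx_result assignments matter here, and the rest of the real helper is elided:

/* Condensed paraphrase of vchan_tx_prep() in drivers/dma/virt-dma.h: every
 * descriptor starts out as "no error, 0 residue", so a driver that never
 * overwrites vd->tx_result reports success to callback_result users. */
static inline struct dma_async_tx_descriptor *
vchan_tx_prep(struct virt_dma_chan *vc, struct virt_dma_desc *vd,
	      unsigned long tx_flags)
{
	dma_async_tx_descriptor_init(&vd->tx, &vc->chan);
	vd->tx.flags = tx_flags;

	/* Default result: "no error, nothing left" (virt-dma.h:66-67) */
	vd->tx_result.result = DMA_TRANS_NOERROR;
	vd->tx_result.residue = 0;

	/* (tx_submit/desc_free hookup and descriptor-list locking elided) */
	return &vd->tx;
}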
- What changed, precisely
  - Adds a helper that computes and sets the result if a `callback_result` was registered:
    - Helper introduction: drivers/dma/dw-edma/dw-edma-core.c:587
    - Guard against legacy callbacks (no change if `tx.callback_result` is NULL): drivers/dma/dw-edma/dw-edma-core.c:594
    - Residue computed as the bytes left in the descriptor, `desc->alloc_sz - desc->xfer_sz`: drivers/dma/dw-edma/dw-edma-core.c:599
  - Sets the result to NOERROR on successful completion (no remaining chunks), then completes the cookie:
    - Call site in the done IRQ: drivers/dma/dw-edma/dw-edma-core.c:619
  - Sets the result to ABORTED on abort, then completes the cookie:
    - Call site in the abort IRQ: drivers/dma/dw-edma/dw-edma-core.c:657
  - The result struct is then propagated by virt-dma when invoking the client's callback_result:
    - vchan_complete uses `vd->tx_result` for the invocation: drivers/dma/virt-dma.c:101
  - A client-side sketch of how these results are consumed follows this list.
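To show how a dw-edma client would actually observe the results being filled in, here is a hedged client-side sketch (not part of the patch); my_dma_cb, my_submit, my_transfer_done and struct my_transfer are invented for illustration, while the callback_result signature and struct dmaengine_result fields are the standard dmaengine ones:

#include <linux/dmaengine.h>

struct my_transfer;					/* hypothetical client context */
void my_transfer_done(struct my_transfer *t, u32 residue, bool ok);

/* Matches dma_async_tx_callback_result: called by virt-dma with the
 * result/residue that dw-edma now fills in on completion or abort. */
static void my_dma_cb(void *param, const struct dmaengine_result *result)
{
	struct my_transfer *t = param;

	/* DMA_TRANS_NOERROR on clean completion, DMA_TRANS_ABORTED on abort;
	 * residue is the number of bytes that were not transferred. */
	my_transfer_done(t, result->residue,
			 result->result == DMA_TRANS_NOERROR);
}

static int my_submit(struct dma_async_tx_descriptor *desc, struct my_transfer *t)
{
	desc->callback_result = my_dma_cb;	/* preferred over ->callback */
	desc->callback_param = t;

	return dma_submit_error(dmaengine_submit(desc));
}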
- Consistency with existing semantics
  - The residue computation matches what `tx_status` reports (the same `alloc_sz - xfer_sz` basis), so callback_result and tx_status agree:
    - Residue for tx_status: drivers/dma/dw-edma/dw-edma-core.c:340 (a condensed paraphrase of that path follows this list)
  - The driver already advertises `DMA_RESIDUE_GRANULARITY_DESCRIPTOR`, so per-descriptor residue reporting is expected and appropriate: drivers/dma/dw-edma/dw-edma-core.c:813
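A condensed paraphrase of that tx_status residue path, wrapped in an invented helper name for the sketch (the real computation sits inline in dw_edma_device_tx_status()):

/* Sketch only: dw_edma_residue_sketch() is an invented name.  Same
 * alloc_sz - xfer_sz basis and same chan->vc.lock protection as the new
 * callback_result helper. */
static u32 dw_edma_residue_sketch(struct dw_edma_chan *chan, dma_cookie_t cookie)
{
	struct virt_dma_desc *vd;
	struct dw_edma_desc *desc;
	unsigned long flags;
	u32 residue = 0;

	spin_lock_irqsave(&chan->vc.lock, flags);
	vd = vchan_find_desc(&chan->vc, cookie);
	if (vd) {
		desc = vd2dw_edma_desc(vd);
		if (desc)
			residue = desc->alloc_sz - desc->xfer_sz;
	}
	spin_unlock_irqrestore(&chan->vc.lock, flags);

	return residue;
}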
- Risk assessment
  - Scope is minimal and self-contained to the dw-edma driver: one new helper and two guarded call sites in IRQ paths. No architectural changes.
  - Safe for legacy users: if a client still uses the deprecated `callback` (no `callback_result`), the helper is a no-op (drivers/dma/dw-edma/dw-edma-core.c:594), and virt-dma continues to invoke the legacy callback path unchanged (see the paraphrased invoke helper after this list).
  - Concurrency is respected: both call sites run under `chan->vc.lock`, and the fields used for the residue (`alloc_sz`, `xfer_sz`) are already updated under the same lock in the transfer path.
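For completeness, a paraphrase (not a verbatim copy, renamed with a _sketch suffix) of the dmaengine_desc_callback_invoke() helper from the internal drivers/dma/dmaengine.h header that virt-dma uses to deliver completions; it shows why legacy-callback users see no behavioural change:

static inline void
dmaengine_desc_callback_invoke_sketch(struct dmaengine_desc_callback *cb,
				      const struct dmaengine_result *result)
{
	/* Paraphrase: callback_result is used when the client set it (getting
	 * the real result, or a benign default when none was provided);
	 * otherwise the legacy callback is invoked exactly as before. */
	struct dmaengine_result dummy = {
		.result  = DMA_TRANS_NOERROR,
		.residue = 0,
	};

	if (cb->callback_result)
		cb->callback_result(cb->callback_param,
				    result ? result : &dummy);
	else if (cb->callback)
		cb->callback(cb->callback_param);
}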
- Why this fits stable backport criteria
  - Correctness bugfix: ensures that accurate completion status and residue are delivered to clients using the standard DMAEngine `callback_result`, so aborts are no longer reported as successful completions and a zero residue is no longer reported when bytes remain.
  - Small, contained change in a single driver with low regression risk, no new features or API changes, and it aligns with established DMAEngine usage.
- Notes
  - One follow-up improvement (not required for this backport) would be to also set an explicit status when handling `EDMA_REQ_STOP` in the done interrupt path (drivers/dma/dw-edma/dw-edma-core.c:630); its absence does not detract from the clear correctness win provided here. A purely illustrative sketch follows below.
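Purely illustrative and not part of this backport: a sketch of what such a follow-up could look like in the done-interrupt switch. The case body is paraphrased from dw_edma_done_interrupt(), and DMA_TRANS_ABORTED is only an example value; the right result for a requested stop is a separate design decision.

	case EDMA_REQ_STOP:
		/* Hypothetical follow-up, NOT in this patch: also report an
		 * explicit result when a transfer completes because a stop
		 * was requested. */
		dw_hdma_set_callback_result(vd, DMA_TRANS_ABORTED);
		list_del(&vd->node);
		vchan_cookie_complete(vd);
		chan->request = EDMA_REQ_NONE;
		break;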
 drivers/dma/dw-edma/dw-edma-core.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)
diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c
index b43255f914f33..8e5f7defa6b67 100644
--- a/drivers/dma/dw-edma/dw-edma-core.c
+++ b/drivers/dma/dw-edma/dw-edma-core.c
@@ -584,6 +584,25 @@ dw_edma_device_prep_interleaved_dma(struct dma_chan *dchan,
 	return dw_edma_device_transfer(&xfer);
 }
 
+static void dw_hdma_set_callback_result(struct virt_dma_desc *vd,
+					enum dmaengine_tx_result result)
+{
+	u32 residue = 0;
+	struct dw_edma_desc *desc;
+	struct dmaengine_result *res;
+
+	if (!vd->tx.callback_result)
+		return;
+
+	desc = vd2dw_edma_desc(vd);
+	if (desc)
+		residue = desc->alloc_sz - desc->xfer_sz;
+
+	res = &vd->tx_result;
+	res->result = result;
+	res->residue = residue;
+}
+
 static void dw_edma_done_interrupt(struct dw_edma_chan *chan)
 {
 	struct dw_edma_desc *desc;
@@ -597,6 +616,8 @@ static void dw_edma_done_interrupt(struct dw_edma_chan *chan)
 		case EDMA_REQ_NONE:
 			desc = vd2dw_edma_desc(vd);
 			if (!desc->chunks_alloc) {
+				dw_hdma_set_callback_result(vd,
+							    DMA_TRANS_NOERROR);
 				list_del(&vd->node);
 				vchan_cookie_complete(vd);
 			}
@@ -633,6 +654,7 @@ static void dw_edma_abort_interrupt(struct dw_edma_chan *chan)
 	spin_lock_irqsave(&chan->vc.lock, flags);
 	vd = vchan_next_desc(&chan->vc);
 	if (vd) {
+		dw_hdma_set_callback_result(vd, DMA_TRANS_ABORTED);
 		list_del(&vd->node);
 		vchan_cookie_complete(vd);
 	}