From: Niravkumar L Rabara <niravkumar.l.rabara@intel.com>
This patchset introduces improvements and fixes for the Cadence NAND driver. The changes include:

1. Support the deferred probe mechanism when the DMA driver is not probed yet.
2. Map the slave DMA address using dma_map_resource. When ARM SMMU is enabled, using a direct physical address of SDMA results in DMA transaction failure.
3. Fix the incorrect device context used for dma_unmap_single.
v2 changes:
- Added the missing Fixes and Cc: stable tags to the patches.
Niravkumar L Rabara (3):
  mtd: rawnand: cadence: support deferred prob when DMA is not ready
  mtd: rawnand: cadence: use dma_map_resource for sdma address
  mtd: rawnand: cadence: fix incorrect dev context in dma_unmap_single
 .../mtd/nand/raw/cadence-nand-controller.c | 35 +++++++++++++++----
 1 file changed, 28 insertions(+), 7 deletions(-)
From: Niravkumar L Rabara <niravkumar.l.rabara@intel.com>
Use deferred driver probe in case the DMA driver is not probed. When ARM SMMU is enabled, all peripheral device drivers, including NAND, are probed earlier than the DMA driver.
Fixes: ec4ba01e894d ("mtd: rawnand: Add new Cadence NAND driver to MTD subsystem")
Cc: stable@vger.kernel.org
Signed-off-by: Niravkumar L Rabara <niravkumar.l.rabara@intel.com>
---
 drivers/mtd/nand/raw/cadence-nand-controller.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/mtd/nand/raw/cadence-nand-controller.c b/drivers/mtd/nand/raw/cadence-nand-controller.c
index 8d1d710e439d..5e27f5546f1b 100644
--- a/drivers/mtd/nand/raw/cadence-nand-controller.c
+++ b/drivers/mtd/nand/raw/cadence-nand-controller.c
@@ -2908,7 +2908,7 @@ static int cadence_nand_init(struct cdns_nand_ctrl *cdns_ctrl)
 		if (!cdns_ctrl->dmac) {
 			dev_err(cdns_ctrl->dev,
 				"Unable to get a DMA channel\n");
-			ret = -EBUSY;
+			ret = -EPROBE_DEFER;
 			goto disable_irq;
 		}
 	}
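For context, a rough sketch of why this one-liner is enough (the probe call site is paraphrased here, it is not part of the hunk above): cadence_nand_dt_probe() returns the error from cadence_nand_init() unchanged, so -EPROBE_DEFER makes the driver core put the device on the deferred-probe list and retry once more drivers, such as the DMA provider, have registered.

	/* Paraphrased probe path, not part of this patch. */
	ret = cadence_nand_init(cdns_ctrl);
	if (ret)
		return ret;	/* -EPROBE_DEFER => device is re-probed later */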
Hello,
On 16/01/2025 at 11:21:52 +08, niravkumar.l.rabara@intel.com wrote:
Typo (prob) in the title.
> From: Niravkumar L Rabara <niravkumar.l.rabara@intel.com>
>
> Use deferred driver probe in case the DMA driver is not probed.
Only devices are probed, not drivers.
> When ARM SMMU is enabled, all peripheral device drivers, including
> NAND, are probed earlier than the DMA driver.
>
> Fixes: ec4ba01e894d ("mtd: rawnand: Add new Cadence NAND driver to MTD subsystem")
> Cc: stable@vger.kernel.org
> Signed-off-by: Niravkumar L Rabara <niravkumar.l.rabara@intel.com>
>
>  drivers/mtd/nand/raw/cadence-nand-controller.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/mtd/nand/raw/cadence-nand-controller.c b/drivers/mtd/nand/raw/cadence-nand-controller.c
> index 8d1d710e439d..5e27f5546f1b 100644
> --- a/drivers/mtd/nand/raw/cadence-nand-controller.c
> +++ b/drivers/mtd/nand/raw/cadence-nand-controller.c
> @@ -2908,7 +2908,7 @@ static int cadence_nand_init(struct cdns_nand_ctrl *cdns_ctrl)
>  		if (!cdns_ctrl->dmac) {
>  			dev_err(cdns_ctrl->dev,
>  				"Unable to get a DMA channel\n");
> -			ret = -EBUSY;
> +			ret = -EPROBE_DEFER;
Does it work if there is no DMA channel provided? The bindings do not mention DMA channels as mandatory.
Also, wouldn't it be more pleasant to use another helper from the DMA core that returns a proper return code, so we know which one among -EBUSY, -ENODEV, or -EPROBE_DEFER we get?
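For illustration, a minimal sketch of that suggestion, assuming a switch to dma_request_chan(), which returns an ERR_PTR() carrying the real failure reason instead of NULL. The "rx-tx" channel name and the surrounding error handling are made up for this sketch, not taken from the driver:

	cdns_ctrl->dmac = dma_request_chan(cdns_ctrl->dev, "rx-tx");
	if (IS_ERR(cdns_ctrl->dmac)) {
		/* Propagates -EBUSY, -ENODEV or -EPROBE_DEFER as appropriate,
		 * and dev_err_probe() stays quiet on probe deferral. */
		ret = dev_err_probe(cdns_ctrl->dev, PTR_ERR(cdns_ctrl->dmac),
				    "Unable to get a DMA channel\n");
		cdns_ctrl->dmac = NULL;
		goto disable_irq;
	}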
Thanks,
Miquèl
From: Niravkumar L Rabara <niravkumar.l.rabara@intel.com>
Map the slave DMA I/O address using dma_map_resource. When ARM SMMU is enabled, using a direct physical address of SDMA results in DMA transaction failure.
Fixes: ec4ba01e894d ("mtd: rawnand: Add new Cadence NAND driver to MTD subsystem")
Cc: stable@vger.kernel.org
Signed-off-by: Niravkumar L Rabara <niravkumar.l.rabara@intel.com>
---
 .../mtd/nand/raw/cadence-nand-controller.c | 29 ++++++++++++++++---
 1 file changed, 25 insertions(+), 4 deletions(-)
diff --git a/drivers/mtd/nand/raw/cadence-nand-controller.c b/drivers/mtd/nand/raw/cadence-nand-controller.c
index 5e27f5546f1b..8281151cf869 100644
--- a/drivers/mtd/nand/raw/cadence-nand-controller.c
+++ b/drivers/mtd/nand/raw/cadence-nand-controller.c
@@ -471,6 +471,8 @@ struct cdns_nand_ctrl {
 	struct {
 		void __iomem *virt;
 		dma_addr_t dma;
+		dma_addr_t iova_dma;
+		u32 size;
 	} io;
 
 	int irq;
@@ -1835,11 +1837,11 @@ static int cadence_nand_slave_dma_transfer(struct cdns_nand_ctrl *cdns_ctrl,
 	}
 
 	if (dir == DMA_FROM_DEVICE) {
-		src_dma = cdns_ctrl->io.dma;
+		src_dma = cdns_ctrl->io.iova_dma;
 		dst_dma = buf_dma;
 	} else {
 		src_dma = buf_dma;
-		dst_dma = cdns_ctrl->io.dma;
+		dst_dma = cdns_ctrl->io.iova_dma;
 	}
 
 	tx = dmaengine_prep_dma_memcpy(cdns_ctrl->dmac, dst_dma, src_dma, len,
@@ -2869,6 +2871,7 @@ cadence_nand_irq_cleanup(int irqnum, struct cdns_nand_ctrl *cdns_ctrl)
 static int cadence_nand_init(struct cdns_nand_ctrl *cdns_ctrl)
 {
 	dma_cap_mask_t mask;
+	struct dma_device *dma_dev = cdns_ctrl->dmac->device;
 	int ret;
 
 	cdns_ctrl->cdma_desc = dma_alloc_coherent(cdns_ctrl->dev,
@@ -2913,6 +2916,16 @@ static int cadence_nand_init(struct cdns_nand_ctrl *cdns_ctrl)
 		}
 	}
 
+	cdns_ctrl->io.iova_dma = dma_map_resource(dma_dev->dev, cdns_ctrl->io.dma,
+						  cdns_ctrl->io.size,
+						  DMA_BIDIRECTIONAL, 0);
+
+	ret = dma_mapping_error(dma_dev->dev, cdns_ctrl->io.iova_dma);
+	if (ret) {
+		dev_err(cdns_ctrl->dev, "Failed to map I/O resource to DMA\n");
+		goto dma_release_chnl;
+	}
+
 	nand_controller_init(&cdns_ctrl->controller);
 	INIT_LIST_HEAD(&cdns_ctrl->chips);
 
@@ -2923,18 +2936,22 @@ static int cadence_nand_init(struct cdns_nand_ctrl *cdns_ctrl)
 	if (ret) {
 		dev_err(cdns_ctrl->dev, "Failed to register MTD: %d\n", ret);
-		goto dma_release_chnl;
+		goto unmap_dma_resource;
 	}
 
 	kfree(cdns_ctrl->buf);
 	cdns_ctrl->buf = kzalloc(cdns_ctrl->buf_size, GFP_KERNEL);
 	if (!cdns_ctrl->buf) {
 		ret = -ENOMEM;
-		goto dma_release_chnl;
+		goto unmap_dma_resource;
 	}
 
 	return 0;
 
+unmap_dma_resource:
+	dma_unmap_resource(dma_dev->dev, cdns_ctrl->io.iova_dma,
+			   cdns_ctrl->io.size, DMA_BIDIRECTIONAL, 0);
+
 dma_release_chnl:
 	if (cdns_ctrl->dmac)
 		dma_release_channel(cdns_ctrl->dmac);
@@ -2956,6 +2973,8 @@ static int cadence_nand_init(struct cdns_nand_ctrl *cdns_ctrl)
 static void cadence_nand_remove(struct cdns_nand_ctrl *cdns_ctrl)
 {
 	cadence_nand_chips_cleanup(cdns_ctrl);
+	dma_unmap_resource(cdns_ctrl->dmac->device->dev, cdns_ctrl->io.iova_dma,
+			   cdns_ctrl->io.size, DMA_BIDIRECTIONAL, 0);
 	cadence_nand_irq_cleanup(cdns_ctrl->irq, cdns_ctrl);
 	kfree(cdns_ctrl->buf);
 	dma_free_coherent(cdns_ctrl->dev, sizeof(struct cadence_nand_cdma_desc),
@@ -3020,7 +3039,9 @@ static int cadence_nand_dt_probe(struct platform_device *ofdev)
 	cdns_ctrl->io.virt = devm_platform_get_and_ioremap_resource(ofdev, 1, &res);
 	if (IS_ERR(cdns_ctrl->io.virt))
 		return PTR_ERR(cdns_ctrl->io.virt);
+
 	cdns_ctrl->io.dma = res->start;
+	cdns_ctrl->io.size = resource_size(res);
 
 	dt->clk = devm_clk_get(cdns_ctrl->dev, "nf_clk");
 	if (IS_ERR(dt->clk))
Hello,
On 16/01/2025 at 11:21:53 +08, niravkumar.l.rabara@intel.com wrote:
> From: Niravkumar L Rabara <niravkumar.l.rabara@intel.com>
>
> Map the slave DMA I/O address using dma_map_resource. When ARM SMMU is
> enabled, using a direct physical address of SDMA results in DMA
> transaction failure.
It is in general a better practice anyway. Drivers should be portable and always remap resources.
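Roughly, the portable shape looks like the following sketch. map_sdma_window() is a hypothetical helper name used only for illustration; the point is that res->start, a CPU physical address, goes through the DMA API so the same code works with or without an IOMMU in front of the DMA master:

	#include <linux/dma-mapping.h>
	#include <linux/ioport.h>

	static dma_addr_t map_sdma_window(struct device *dma_dev,
					  struct resource *res)
	{
		dma_addr_t iova;

		/* With an IOMMU this returns an IOVA; without one it is
		 * typically the physical address itself. */
		iova = dma_map_resource(dma_dev, res->start, resource_size(res),
					DMA_BIDIRECTIONAL, 0);
		if (dma_mapping_error(dma_dev, iova))
			return DMA_MAPPING_ERROR;

		/* Use as dmaengine src/dst; dma_unmap_resource() on teardown. */
		return iova;
	}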
> Fixes: ec4ba01e894d ("mtd: rawnand: Add new Cadence NAND driver to MTD subsystem")
> Cc: stable@vger.kernel.org
> Signed-off-by: Niravkumar L Rabara <niravkumar.l.rabara@intel.com>
Thanks,
Miquèl
From: Niravkumar L Rabara <niravkumar.l.rabara@intel.com>
dma_map_single() uses dma_dev->dev, but dma_unmap_single() uses cdns_ctrl->dev, which is incorrect. Use the correct device context, dma_dev->dev, for dma_unmap_single().
Fixes: ec4ba01e894d ("mtd: rawnand: Add new Cadence NAND driver to MTD subsystem")
Cc: stable@vger.kernel.org
Signed-off-by: Niravkumar L Rabara <niravkumar.l.rabara@intel.com>
---
 drivers/mtd/nand/raw/cadence-nand-controller.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/mtd/nand/raw/cadence-nand-controller.c b/drivers/mtd/nand/raw/cadence-nand-controller.c
index 8281151cf869..2d50eeb902ac 100644
--- a/drivers/mtd/nand/raw/cadence-nand-controller.c
+++ b/drivers/mtd/nand/raw/cadence-nand-controller.c
@@ -1863,12 +1863,12 @@ static int cadence_nand_slave_dma_transfer(struct cdns_nand_ctrl *cdns_ctrl,
 	dma_async_issue_pending(cdns_ctrl->dmac);
 	wait_for_completion(&finished);
 
-	dma_unmap_single(cdns_ctrl->dev, buf_dma, len, dir);
+	dma_unmap_single(dma_dev->dev, buf_dma, len, dir);
 
 	return 0;
 
 err_unmap:
-	dma_unmap_single(cdns_ctrl->dev, buf_dma, len, dir);
+	dma_unmap_single(dma_dev->dev, buf_dma, len, dir);
 
 err:
 	dev_dbg(cdns_ctrl->dev, "Fall back to CPU I/O\n");
Hello,
On 16/01/2025 at 11:21:54 +08, niravkumar.l.rabara@intel.com wrote:
> From: Niravkumar L Rabara <niravkumar.l.rabara@intel.com>
>
> dma_map_single() uses dma_dev->dev, but dma_unmap_single() uses
> cdns_ctrl->dev, which is incorrect. Use the correct device context,
> dma_dev->dev, for dma_unmap_single().
I guess one is the physical/bus device and the other the framework device? It would be nice to clarify this in the commit log.
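For what it is worth, the underlying rule on the DMA-mapping side is that dma_map_single() and dma_unmap_single() must be given the same struct device, and it must be the device that actually masters the transfer (here the dmaengine's device, not the NAND platform device). A sketch of the correct pairing, condensed from the function touched above:

	struct dma_device *dma_dev = cdns_ctrl->dmac->device;

	/* Map against the device performing the DMA... */
	buf_dma = dma_map_single(dma_dev->dev, buf, len, dir);
	if (dma_mapping_error(dma_dev->dev, buf_dma))
		goto err;

	/* ...and unmap with the very same device, never cdns_ctrl->dev. */
	dma_unmap_single(dma_dev->dev, buf_dma, len, dir);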
> Fixes: ec4ba01e894d ("mtd: rawnand: Add new Cadence NAND driver to MTD subsystem")
> Cc: stable@vger.kernel.org
> Signed-off-by: Niravkumar L Rabara <niravkumar.l.rabara@intel.com>
Thanks,
Miquèl