Intel integrated DMA 32-bit supports multi-block transfers. Add the
missed setting to the platform data.
Fixes: f7c799e950f9 ("dmaengine: dw: we do support Merrifield SoC in PCI mode")
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: stable@vger.kernel.org
---
 drivers/dma/dw/pci.c | 1 +
 1 file changed, 1 insertion(+)
diff --git a/drivers/dma/dw/pci.c b/drivers/dma/dw/pci.c
index 7778ed705a1a..313ba10c6224 100644
--- a/drivers/dma/dw/pci.c
+++ b/drivers/dma/dw/pci.c
@@ -25,6 +25,7 @@ static struct dw_dma_platform_data mrfld_pdata = {
 	.block_size = 131071,
 	.nr_masters = 1,
 	.data_width = {4},
+	.multi_block = {1, 1, 1, 1, 1, 1, 1, 1},
 };
 
 static int dw_pci_probe(struct pci_dev *pdev, const struct pci_device_id *pid)
On 05-12-18, 18:28, Andy Shevchenko wrote:
> Intel integrated DMA 32-bit supports multi-block transfers. Add the
> missed setting to the platform data.
> 
> Fixes: f7c799e950f9 ("dmaengine: dw: we do support Merrifield SoC in PCI mode")
> Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
> Cc: stable@vger.kernel.org
How is this stable material? It would improve performance by using
multi-block transfers, but given that this is used for slow peripherals,
do you really see a user impact?
> ---
>  drivers/dma/dw/pci.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/drivers/dma/dw/pci.c b/drivers/dma/dw/pci.c
> index 7778ed705a1a..313ba10c6224 100644
> --- a/drivers/dma/dw/pci.c
> +++ b/drivers/dma/dw/pci.c
> @@ -25,6 +25,7 @@ static struct dw_dma_platform_data mrfld_pdata = {
>  	.block_size = 131071,
>  	.nr_masters = 1,
>  	.data_width = {4},
> +	.multi_block = {1, 1, 1, 1, 1, 1, 1, 1},
>  };
> 
>  static int dw_pci_probe(struct pci_dev *pdev, const struct pci_device_id *pid)
> -- 
> 2.19.2
On Wed, Dec 05, 2018 at 11:42:02PM +0530, Vinod Koul wrote:
> On 05-12-18, 18:28, Andy Shevchenko wrote:
> > Intel integrated DMA 32-bit supports multi-block transfers. Add the
> > missed setting to the platform data.
> > 
> > Fixes: f7c799e950f9 ("dmaengine: dw: we do support Merrifield SoC in PCI mode")
> > Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
> > Cc: stable@vger.kernel.org
> 
> How is this stable material? It would improve performance by using
> multi-block transfers, but given that this is used for slow peripherals,
> do you really see a user impact?
My last testing was done with an SPI VGA panel connected, and I ran a few
FB-based tests with more or less good FPS numbers. So I can't tell whether
there is a notable user impact. Feel free not to add it for stable.

Btw, I have also tested via dmatest, where the difference is indeed at
least visible, but memory-to-memory is still not what this DMA is used
for.
> > ---
> >  drivers/dma/dw/pci.c | 1 +
> >  1 file changed, 1 insertion(+)
> > 
> > diff --git a/drivers/dma/dw/pci.c b/drivers/dma/dw/pci.c
> > index 7778ed705a1a..313ba10c6224 100644
> > --- a/drivers/dma/dw/pci.c
> > +++ b/drivers/dma/dw/pci.c
> > @@ -25,6 +25,7 @@ static struct dw_dma_platform_data mrfld_pdata = {
> >  	.block_size = 131071,
> >  	.nr_masters = 1,
> >  	.data_width = {4},
> > +	.multi_block = {1, 1, 1, 1, 1, 1, 1, 1},
> >  };
> > 
> >  static int dw_pci_probe(struct pci_dev *pdev, const struct pci_device_id *pid)
> > -- 
> > 2.19.2
> 
> -- 
> ~Vinod
linux-stable-mirror@lists.linaro.org