4.17-stable review patch. If anyone has any objections, please let me know.
------------------
From: Takashi Iwai <tiwai@suse.de>
[ Upstream commit de7eab301de78869322104ea13e124c936ad3e1f ]
As the recent swiotlb bug revealed, we seem to give up the direct DMA allocation too early and fall back to swiotlb allocation. The reason is that the swiotlb allocator expected that dma_direct_alloc() would try harder to get pages even below the 64bit DMA mask with GFP_DMA32, but the function doesn't do that and only deals with the GFP_DMA case.
This patch adds a fallback reallocation with GFP_DMA32, similar to what is already done for GFP_DMA. The condition is that the coherent mask is smaller than 64bit (i.e. there is some address limitation) and neither GFP_DMA nor GFP_DMA32 was set beforehand.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 lib/dma-direct.c | 7 +++++++
 1 file changed, 7 insertions(+)
--- a/lib/dma-direct.c
+++ b/lib/dma-direct.c
@@ -84,6 +84,13 @@ again:
 		__free_pages(page, page_order);
 		page = NULL;
 
+		if (IS_ENABLED(CONFIG_ZONE_DMA32) &&
+		    dev->coherent_dma_mask < DMA_BIT_MASK(64) &&
+		    !(gfp & (GFP_DMA32 | GFP_DMA))) {
+			gfp |= GFP_DMA32;
+			goto again;
+		}
+
 		if (IS_ENABLED(CONFIG_ZONE_DMA) &&
 		    dev->coherent_dma_mask < DMA_BIT_MASK(32) &&
 		    !(gfp & GFP_DMA)) {
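
For anyone who wants to poke at the fallback logic outside the kernel tree, below is a minimal userspace sketch of the same retry pattern. Everything in it (fake_alloc(), coherent_ok(), the FLAG_* bits and the sample addresses) is invented for illustration; only the order and shape of the mask checks mirror the patch above.

/* Illustrative userspace sketch, not part of the patch. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BIT_MASK(n)  (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))
#define FLAG_DMA     0x1u   /* stand-in for GFP_DMA   */
#define FLAG_DMA32   0x2u   /* stand-in for GFP_DMA32 */

/* Pretend allocator: returns an address whose range depends on the flags. */
static uint64_t fake_alloc(unsigned int flags)
{
	if (flags & FLAG_DMA)
		return 0x0000000000ffffffULL;  /* below 16MB */
	if (flags & FLAG_DMA32)
		return 0x00000000fff00000ULL;  /* below 4GB  */
	return 0x000000fff0000000ULL;          /* above 4GB  */
}

static bool coherent_ok(uint64_t addr, uint64_t mask)
{
	return addr <= mask;
}

int main(void)
{
	uint64_t coherent_dma_mask = BIT_MASK(32); /* device limited to 32-bit */
	unsigned int flags = 0;
	uint64_t addr;

again:
	addr = fake_alloc(flags);
	if (!coherent_ok(addr, coherent_dma_mask)) {
		/* Same order as the patch: retry with DMA32 first, then DMA. */
		if (coherent_dma_mask < BIT_MASK(64) &&
		    !(flags & (FLAG_DMA32 | FLAG_DMA))) {
			flags |= FLAG_DMA32;
			goto again;
		}
		if (coherent_dma_mask < BIT_MASK(32) &&
		    !(flags & FLAG_DMA)) {
			flags |= FLAG_DMA;
			goto again;
		}
		printf("allocation failed for mask %#llx\n",
		       (unsigned long long)coherent_dma_mask);
		return 1;
	}
	printf("got %#llx with flags %#x\n", (unsigned long long)addr, flags);
	return 0;
}

With a 32-bit coherent mask the first attempt lands above 4GB, the DMA32 retry succeeds, and the DMA retry is never needed; this is the gap the real patch closes before swiotlb is used.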