On Wed, 12 Jun 2024 16:12:16 +0800 "zhai.he" <zhai.he@nxp.com> wrote:
From: He Zhai <zhai.he@nxp.com>
(cc Barry & Christoph)
What was your reason for adding cc:stable to the email headers? Does this address some serious problem? If so, please fully describe that problem.
In the current code, if allocation from the device-specified CMA area fails, memory is not allocated from the default CMA area. This patch falls back to the default CMA area when the device-specified area does not have enough space.
In addition, the log level of the allocation-failure message is lowered to debug. That message is printed whenever allocation from the device-specified CMA area fails, but with this patch the allocation then falls back to the default CMA area, so an error-level message can easily mislead developers.
...
--- a/kernel/dma/contiguous.c
+++ b/kernel/dma/contiguous.c
@@ -357,8 +357,13 @@ struct page *dma_alloc_contiguous(struct device *dev, size_t size, gfp_t gfp)
 	/* CMA can be used only in the context which permits sleeping */
 	if (!gfpflags_allow_blocking(gfp))
 		return NULL;
-	if (dev->cma_area)
-		return cma_alloc_aligned(dev->cma_area, size, gfp);
+	if (dev->cma_area) {
+		struct page *page = NULL;
+
+		page = cma_alloc_aligned(dev->cma_area, size, gfp);
+		if (page)
+			return page;
+	}
 	if (size <= PAGE_SIZE)
 		return NULL;
The dma_alloc_contiguous() kerneldoc should be updated for this.
The patch prompts the question "why does the device-specified CMA area exist?". Why not always allocate from the global pool? If the device-specified area exists to prevent one device from going crazy and consuming too much contiguous memory, doesn't this patch violate that intent?
@@ -406,6 +411,8 @@ void dma_free_contiguous(struct device *dev, struct page *page, size_t size)
 	if (dev->cma_area) {
 		if (cma_release(dev->cma_area, page, count))
 			return;
+		if (cma_release(dma_contiguous_default_area, page, count))
+			return;
 	} else {
 		/*
 		 * otherwise, page is from either per-numa cma or default cma
diff --git a/mm/cma.c b/mm/cma.c
index 3e9724716bad..6e12faf1bea7 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -495,8 +495,8 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
 	}
 
 	if (ret && !no_warn) {
-		pr_err_ratelimited("%s: %s: alloc failed, req-size: %lu pages, ret: %d\n",
-				   __func__, cma->name, count, ret);
+		pr_debug("%s: alloc failed, req-size: %lu pages, ret: %d, try to use default cma\n",
+			 cma->name, count, ret);
 		cma_debug_show_areas(cma);
 	}