On 03.09.25 18:12, Thierry Reding wrote:
> On Tue, Sep 02, 2025 at 09:04:24PM +0200, David Hildenbrand wrote:
>>>> +struct cma *__init cma_create(phys_addr_t base, phys_addr_t size,
>>>> +			      unsigned int order_per_bit, const char *name)
>>>> +{
>>>> +	struct cma *cma;
>>>> +	int ret;
>>>> +
>>>> +	ret = cma_check_memory(base, size);
>>>> +	if (ret < 0)
>>>> +		return ERR_PTR(ret);
>>>> +
>>>> +	cma = kzalloc(sizeof(*cma), GFP_KERNEL);
>>>> +	if (!cma)
>>>> +		return ERR_PTR(-ENOMEM);
>>>> +
>>>> +	cma_init_area(cma, name, size, order_per_bit);
>>>> +	cma->ranges[0].base_pfn = PFN_DOWN(base);
>>>> +	cma->ranges[0].early_pfn = PFN_DOWN(base);
>>>> +	cma->ranges[0].count = cma->count;
>>>> +	cma->nranges = 1;
>>>> +
>>>> +	cma_activate_area(cma);
>>>> +
>>>> +	return cma;
>>>> +}
>>>> +
>>>> +void cma_free(struct cma *cma)
>>>> +{
>>>> +	kfree(cma);
>>>> +}
>>> I agree that supporting dynamic CMA areas would be good. However, by
>>> doing it like this, these CMA areas are invisible to the rest of the
>>> system. E.g. cma_for_each_area() does not know about them. It seems a
>>> bit inconsistent that there will now be some areas that are globally
>>> known, and some that are not.
>> Yeah, I'm not a fan of that.
>>
>> What is the big problem we are trying to solve here? Why do they have
>> to be dynamic, and why do they even have to support freeing?
> Freeing isn't necessarily something that I've needed; it just seemed
> like there wasn't really a good reason not to support it. The current
> implementation here is not sufficient, though, because we'd need to
> properly undo everything that cma_activate_area() does. I think
> mirroring the cleanup: block in cma_activate_area() is probably
> sufficient.
> The problem that I'm trying to solve is that currently, depending on
> the use-case, the kernel configuration needs to be changed and the
> kernel rebuilt in order to support it. However, there doesn't seem to
> be a good technical reason for that limitation. The only reason it is
> this way seems to be that, well, it's always been this way.
Right, and we can just dynamically grow the array, keep them in a list, etc.