On Fri 16-08-24 19:46:26, Hailong Liu wrote:
On Fri, 16. Aug 12:13, Uladzislau Rezki wrote:
On Fri, Aug 16, 2024 at 05:12:32PM +0800, Hailong Liu wrote:
On Thu, 15. Aug 22:07, Andrew Morton wrote:
On Fri, 9 Aug 2024 11:41:42 +0200 Uladzislau Rezki <urezki@gmail.com> wrote:
> Acked-by: Barry Song <baohua@kernel.org>
>
> because we already have a fallback here:
>
> void *__vmalloc_node_range_noprof:
>
> fail:
>         if (shift > PAGE_SHIFT) {
>                 shift = PAGE_SHIFT;
>                 align = real_align;
>                 size = real_size;
>                 goto again;
>         }
This really deserves a comment, because it is not clear at all. The code is also fragile and would benefit from some re-org.
Thanks for the fix.
Acked-by: Michal Hocko <mhocko@suse.com>
I agree. This is only clear for people who know the code. A "fallback" to order-0 should be commented.
It's been a week. Could someone please propose a fixup patch to add this comment?
Hi Andrew:
Do you mean that I need to send a v2 patch with the comments included?
It is better to post a v2.
Got it.
But before that, could you please comment on the following:
In case of order-0, the bulk path may easily fail and fall back to the single page allocator. If a request is marked as __GFP_NOFAIL (I am talking about an order-0 request here), your change breaks GFP_NOFAIL for !order.
Am I missing something obvious?
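For reference, a condensed sketch of the order-0 path being discussed (simplified from vm_area_alloc_pages(); the bulk loop, the mempolicy branch and the high-order handling are omitted, so this is an illustration rather than the exact upstream code):

	unsigned int nr_allocated = 0;
	gfp_t alloc_gfp = gfp;		/* still carries __GFP_NOFAIL */
	struct page *page;

	if (!order) {
		/* bulk allocator doesn't support nofail req. officially */
		gfp_t bulk_gfp = gfp & ~__GFP_NOFAIL;

		/* try to fill the page array via the bulk allocator first */
		nr_allocated = alloc_pages_bulk_array_node_noprof(bulk_gfp, nid,
						nr_pages, pages);
	}

	/*
	 * Fallback to the single page allocator; for an order-0 request
	 * alloc_gfp still has __GFP_NOFAIL set at this point.
	 */
	while (nr_allocated < nr_pages) {
		page = alloc_pages_node_noprof(nid, alloc_gfp, order);
		if (!page)
			break;
		pages[nr_allocated++] = page;
	}

The concern above is whether __GFP_NOFAIL still reaches this single-page fallback for order-0 requests.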
For order-0, i.e. alloc_pages(GFP_X | __GFP_NOFAIL, 0), the buddy allocator will handle the flag correctly. IMO we don't need to handle the flag here.
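A minimal illustration of that point (GFP_KERNEL stands in for whatever base flags the caller uses):

	/*
	 * An order-0 request with __GFP_NOFAIL: the page allocator itself
	 * keeps retrying, so this call is not expected to return NULL.
	 */
	struct page *page = alloc_pages(GFP_KERNEL | __GFP_NOFAIL, 0);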
Let me clarify the point I would like to have addressed:
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 6b783baf12a1..fea90a39f5c5 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3510,13 +3510,13 @@ void *vmap_pfn(unsigned long *pfns, unsigned int count, pgprot_t prot)
 EXPORT_SYMBOL_GPL(vmap_pfn);
 #endif /* CONFIG_VMAP_PFN */
 
+/* GFP_NOFAIL semantic is implemented by __vmalloc_node_range_noprof */
 static inline unsigned int
 vm_area_alloc_pages(gfp_t gfp, int nid,
 		unsigned int order, unsigned int nr_pages, struct page **pages)
 {
 	unsigned int nr_allocated = 0;
-	gfp_t alloc_gfp = gfp;
-	bool nofail = gfp & __GFP_NOFAIL;
+	gfp_t alloc_gfp = gfp & ~__GFP_NOFAIL;
 	struct page *page;
 	int i;
 
@@ -3527,9 +3527,6 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 	 * more permissive.
 	 */
 	if (!order) {
-		/* bulk allocator doesn't support nofail req. officially */
-		gfp_t bulk_gfp = gfp & ~__GFP_NOFAIL;
-
 		while (nr_allocated < nr_pages) {
 			unsigned int nr, nr_pages_request;
 
@@ -3547,12 +3544,12 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 			 * but mempolicy wants to alloc memory by interleaving.
 			 */
 			if (IS_ENABLED(CONFIG_NUMA) && nid == NUMA_NO_NODE)
-				nr = alloc_pages_bulk_array_mempolicy_noprof(bulk_gfp,
+				nr = alloc_pages_bulk_array_mempolicy_noprof(alloc_gfp,
 					nr_pages_request,
 					pages + nr_allocated);
 			else
-				nr = alloc_pages_bulk_array_node_noprof(bulk_gfp, nid,
+				nr = alloc_pages_bulk_array_node_noprof(alloc_gfp, nid,
 					nr_pages_request,
 					pages + nr_allocated);
 
@@ -3566,13 +3563,6 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 			if (nr != nr_pages_request)
 				break;
 		}
-	} else if (gfp & __GFP_NOFAIL) {
-		/*
-		 * Higher order nofail allocations are really expensive and
-		 * potentially dangerous (pre-mature OOM, disruptive reclaim
-		 * and compaction etc.
-		 */
-		alloc_gfp &= ~__GFP_NOFAIL;
 	}
 
 	/* High-order pages or fallback path if "bulk" fails. */