On Sun, Dec 21, 2025 at 5:49 PM David Hildenbrand (Red Hat) <david@kernel.org> wrote:
On 12/21/25 10:44, Li Wang wrote:
David Hildenbrand (Red Hat) <david@kernel.org> wrote:
On 12/21/25 09:58, Li Wang wrote:
charge_reserved_hugetlb.sh mounts a hugetlbfs instance at /mnt/huge with a fixed size of 256M. On systems with large base hugepages (e.g. 512MB), this is smaller than a single hugepage, so the hugetlbfs mount ends up with effectively zero capacity (often visible as size=0 in mount output).
As a result, write_to_hugetlbfs fails with ENOMEM and the test can hang waiting for progress.
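For illustration, a minimal sketch of the failure mode (command output is approximate, and assumes a system whose default hugepage size is 512MB):

  mkdir -p /mnt/huge
  mount -t hugetlbfs -o size=256M none /mnt/huge
  # size=256M is rounded down to a whole number of 512MB hugepages, i.e. zero:
  mount | grep /mnt/huge
  # hugetlbfs on /mnt/huge type hugetlbfs (rw,relatime,pagesize=512M,size=0)
  umount /mnt/huge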
I'm curious, what's the history of using "256MB" in the first place (or of specifying any size at all)?
It seems the script has initialized it with "256MB" since:
commit 29750f71a9b4cfae57cdddfbd8ca287eddca5503
Author: Mina Almasry <almasrymina@google.com>
Date:   Wed Apr 1 21:11:38 2020 -0700

    hugetlb_cgroup: add hugetlb_cgroup reservation tests

What would happen if we don't specify a size at all?
It still works well. I have gone through the whole file, and there is no subtest that relies on the 256M capacity.
So we could just:
  mount -t hugetlbfs -o pagesize=${MB}M none /mnt/huge
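As a quick sanity check (output approximate, again assuming 512MB base hugepages): with no size= option the field simply disappears from the mount output, and the mount is bounded only by the configured hugepage pool:

  mount -t hugetlbfs -o pagesize=512M none /mnt/huge
  mount | grep /mnt/huge
  # hugetlbfs on /mnt/huge type hugetlbfs (rw,relatime,pagesize=512M)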