On 12/21/25 09:58, Li Wang wrote:
> charge_reserved_hugetlb.sh mounts a hugetlbfs instance at /mnt/huge
> with a fixed size of 256M. On systems with large base hugepages
> (e.g. 512MB), this is smaller than a single hugepage, so the
> hugetlbfs mount ends up with effectively zero capacity (often
> visible as size=0 in mount output).
>
> As a result, write_to_hugetlbfs fails with ENOMEM and the test can
> hang waiting for progress.
I'm curious, what's the history of using "256MB" in the first place
(or of specifying any size at all)?
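
For what it's worth, something like the sketch below would derive the
mount size from the system's default hugepage size instead of
hard-coding 256M. The /proc/meminfo lookup and the "4 pages" of
headroom are just an illustration, not what the test does today:

  # Illustration only: size the mount from the default hugepage size
  # so it always holds at least a few pages, whatever their size.
  hpage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
  mount_size_kb=$((hpage_kb * 4))
  mkdir -p /mnt/huge
  mount -t hugetlbfs -o size=${mount_size_kb}K none /mnt/huge

Dropping the size= option entirely would also sidestep this, since
the mount would then be bounded only by the hugetlb pool.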