On 1/19/24 9:09 PM, Ryan Roberts wrote:
Hi Muhammad,
Afraid this patch is causing a regression on our CI system now that it has turned up in linux-next today. Additionally, 2 of the tests you have added are failing because the scripts are not exported correctly...
Andrew has dropped this patch for now.
On 16/01/2024 09:06, Muhammad Usama Anjum wrote:
Add missing tests to run_vmtests.sh. The mm kselftests are run through run_vmtests.sh. If a test isn't present in this script, it won't run with run_tests or `make -C tools/testing/selftests/mm run_tests`.
Signed-off-by: Muhammad Usama Anjum <usama.anjum@collabora.com>
 tools/testing/selftests/mm/run_vmtests.sh | 3 +++
 1 file changed, 3 insertions(+)
diff --git a/tools/testing/selftests/mm/run_vmtests.sh b/tools/testing/selftests/mm/run_vmtests.sh
index 246d53a5d7f2..a5e6ba8d3579 100755
--- a/tools/testing/selftests/mm/run_vmtests.sh
+++ b/tools/testing/selftests/mm/run_vmtests.sh
@@ -248,6 +248,9 @@ CATEGORY="hugetlb" run_test ./map_hugetlb
 CATEGORY="hugetlb" run_test ./hugepage-mremap
 CATEGORY="hugetlb" run_test ./hugepage-vmemmap
 CATEGORY="hugetlb" run_test ./hugetlb-madvise
+CATEGORY="hugetlb" run_test ./charge_reserved_hugetlb.sh
+CATEGORY="hugetlb" run_test ./hugetlb_reparenting_test.sh
These 2 tests are failing because the test scripts are not exported. You will need to add them to the TEST_FILES variable in the Makefile.
This must be done. After adding them, I'll also investigate whether these scripts are robust enough to pass reliably.
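For reference, a minimal sketch of that Makefile change might look like the following (the context lines are illustrative; the exact neighbouring TEST_FILES entries in tools/testing/selftests/mm/Makefile may differ):

--- a/tools/testing/selftests/mm/Makefile
+++ b/tools/testing/selftests/mm/Makefile
@@ ... @@
 TEST_FILES := test_vmalloc.sh
 TEST_FILES += test_hmm.sh
+TEST_FILES += charge_reserved_hugetlb.sh
+TEST_FILES += hugetlb_reparenting_test.sh

The kselftest framework only copies files listed in TEST_FILES (and the TEST_GEN_* variables) into the installed/exported test directory, which is why the scripts are currently missing at run time.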
+CATEGORY="hugetlb" run_test ./hugetlb-read-hwpoison
The addition of this test causes 2 later tests to fail with ENOMEM. I suspect it's a side-effect of marking the hugetlb pages as hwpoisoned? (just a guess based on the test name!). Once a page is marked poisoned, is there a way to un-poison it? If not, I suspect that's why it wasn't part of the standard test script in the first place.
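(For what it's worth: with CONFIG_HWPOISON_INJECT=y the kernel exposes a debugfs knob that attempts to un-poison a page by PFN. Whether it can actually recover a poisoned hugetlb page depends on the kernel version, so treat this as a sketch rather than a guaranteed fix:)

# Sketch, assuming CONFIG_HWPOISON_INJECT=y and debugfs mounted at
# /sys/kernel/debug; $pfn stands in for the poisoned page frame number.
echo $pfn > /sys/kernel/debug/hwpoison/unpoison-pfn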
hugetlb-read-hwpoison probably failed because the kernel fix for this test hasn't been merged yet. The other tests (uffd-stress) aren't failing on my end or on CI [1][2].
[1] https://lava.collabora.dev/scheduler/job/12577207#L3677
[2] https://lava.collabora.dev/scheduler/job/12577229#L4027
Maybe it's a configuration issue which is being exposed now; I'm not sure. Maybe hugetlb-read-hwpoison is changing some configuration and not restoring it. Or maybe your system has a smaller number of hugetlb pages.
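One quick check would be to snapshot the hugetlb pool before and after running hugetlb-read-hwpoison (a sketch using the standard procfs interfaces):

# Compare these before and after the test; a drop in free/total pages
# would explain the later ENOMEM failures in uffd-stress.
grep -i huge /proc/meminfo
cat /proc/sys/vm/nr_hugepages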
These are the tests that start failing:
# # ------------------------------------
# # running ./uffd-stress hugetlb 128 32
# # ------------------------------------
# # nr_pages: 64, nr_pages_per_cpu: 8
# # ERROR: context init failed (errno=12, @uffd-stress.c:254)
# # [FAIL]
# not ok 18 uffd-stress hugetlb 128 32 # exit=1
# # --------------------------------------------
# # running ./uffd-stress hugetlb-private 128 32
# # --------------------------------------------
# # nr_pages: 64, nr_pages_per_cpu: 8
# # bounces: 31, mode: rnd racing ver poll, ERROR: UFFDIO_COPY error: -12ERROR: UFFDIO_COPY error: -12 (errno=12, @uffd-common.c:614)
# # (errno=12, @uffd-common.c:614)
# # [FAIL]
Quickest way to repro is:
$ sudo ./run_vmtests.sh -t "userfaultfd hugetlb"
Thanks, Ryan
nr_hugepages_tmp=$(cat /proc/sys/vm/nr_hugepages)
# For this test, we need one and just one huge page
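The surrounding save/restore pattern in run_vmtests.sh looks roughly like this (a sketch in the script's existing style; the test invocation shown is illustrative):

nr_hugepages_tmp=$(cat /proc/sys/vm/nr_hugepages)
# For this test, we need one and just one huge page
echo 1 > /proc/sys/vm/nr_hugepages
CATEGORY="hugetlb" run_test ./hugetlb_fault_after_madv
# Restore the previous pool size so later tests aren't starved of pages
echo "$nr_hugepages_tmp" > /proc/sys/vm/nr_hugepages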