Add new basic kselftest that checks if the available rust sample modules can be added and removed correctly.
Signed-off-by: Laura Nao <laura.nao@collabora.com>
Reviewed-by: Sergio Gonzalez Collado <sergio.collado@gmail.com>
Reviewed-by: Muhammad Usama Anjum <usama.anjum@collabora.com>
---
Depends on:
- https://lore.kernel.org/all/20240102141528.169947-1-laura.nao@collabora.com/...
- https://lore.kernel.org/all/20240131-ktap-sh-helpers-extend-v1-0-98ffb468712...

Changes in v5:
- Skip the test gracefully when the ktap helpers file is missing
- Skip the test when the sample modules are missing

Changes in v4:
- Added config file

Changes in v3:
- Removed useless KSFT_PASS, KSFT_FAIL, KSFT_SKIP constants
- Used ktap_finished to print the results summary and handle the return code

Changes in v2:
- Added missing SPDX line
- Edited test_probe_samples.sh script to use the common KTAP helpers file
---
 MAINTAINERS                                    |  1 +
 tools/testing/selftests/Makefile               |  1 +
 tools/testing/selftests/rust/Makefile          |  4 ++
 tools/testing/selftests/rust/config            |  5 +++
 .../selftests/rust/test_probe_samples.sh       | 41 +++++++++++++++++++
 5 files changed, 52 insertions(+)
 create mode 100644 tools/testing/selftests/rust/Makefile
 create mode 100644 tools/testing/selftests/rust/config
 create mode 100755 tools/testing/selftests/rust/test_probe_samples.sh
diff --git a/MAINTAINERS b/MAINTAINERS
index e1475ca38ff2..94e31dac6d2c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19231,6 +19231,7 @@ F:	Documentation/rust/
 F:	rust/
 F:	samples/rust/
 F:	scripts/*rust*
+F:	tools/testing/selftests/rust/
 K:	\b(?i:rust)\b

 RXRPC SOCKETS (AF_RXRPC)
diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile
index f7255969b695..e1504833654d 100644
--- a/tools/testing/selftests/Makefile
+++ b/tools/testing/selftests/Makefile
@@ -80,6 +80,7 @@ TARGETS += riscv
 TARGETS += rlimits
 TARGETS += rseq
 TARGETS += rtc
+TARGETS += rust
 TARGETS += seccomp
 TARGETS += sgx
 TARGETS += sigaltstack
diff --git a/tools/testing/selftests/rust/Makefile b/tools/testing/selftests/rust/Makefile
new file mode 100644
index 000000000000..fce1584d3bc0
--- /dev/null
+++ b/tools/testing/selftests/rust/Makefile
@@ -0,0 +1,4 @@
+# SPDX-License-Identifier: GPL-2.0
+TEST_PROGS += test_probe_samples.sh
+
+include ../lib.mk
diff --git a/tools/testing/selftests/rust/config b/tools/testing/selftests/rust/config
new file mode 100644
index 000000000000..b4002acd40bc
--- /dev/null
+++ b/tools/testing/selftests/rust/config
@@ -0,0 +1,5 @@
+CONFIG_RUST=y
+CONFIG_SAMPLES=y
+CONFIG_SAMPLES_RUST=y
+CONFIG_SAMPLE_RUST_MINIMAL=m
+CONFIG_SAMPLE_RUST_PRINT=m
\ No newline at end of file
diff --git a/tools/testing/selftests/rust/test_probe_samples.sh b/tools/testing/selftests/rust/test_probe_samples.sh
new file mode 100755
index 000000000000..ad0397e4986f
--- /dev/null
+++ b/tools/testing/selftests/rust/test_probe_samples.sh
@@ -0,0 +1,41 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright (c) 2023 Collabora Ltd
+#
+# This script tests whether the rust sample modules can
+# be added and removed correctly.
+#
+DIR="$(dirname "$(readlink -f "$0")")"
+
+KTAP_HELPERS="${DIR}/../kselftest/ktap_helpers.sh"
+if [ -e "$KTAP_HELPERS" ]; then
+	source "$KTAP_HELPERS"
+else
+	echo "$KTAP_HELPERS file not found [SKIP]"
+	exit 4
+fi
+
+rust_sample_modules=("rust_minimal" "rust_print")
+
+ktap_print_header
+
+for sample in "${rust_sample_modules[@]}"; do
+	if ! /sbin/modprobe -n -q "$sample"; then
+		ktap_skip_all "module $sample is not found in /lib/modules/$(uname -r)"
+		exit "$KSFT_SKIP"
+	fi
+done
+
+ktap_set_plan "${#rust_sample_modules[@]}"
+
+for sample in "${rust_sample_modules[@]}"; do
+	if /sbin/modprobe -q "$sample"; then
+		/sbin/modprobe -q -r "$sample"
+		ktap_test_pass "$sample"
+	else
+		ktap_test_fail "$sample"
+	fi
+done
+
+ktap_finished
On Thu, Feb 29, 2024 at 4:53 PM Laura Nao laura.nao@collabora.com wrote:
Add new basic kselftest that checks if the available rust sample modules can be added and removed correctly.
Signed-off-by: Laura Nao <laura.nao@collabora.com>
Reviewed-by: Sergio Gonzalez Collado <sergio.collado@gmail.com>
Reviewed-by: Muhammad Usama Anjum <usama.anjum@collabora.com>
Thanks for this Laura!
Replying here to what you wrote in v4:
At first, I hadn't planned for the kselftest to skip entirely if only one of the two sample modules was missing. However, considering that this kselftest is designed to test all available sample modules, and given that both are enabled with the provided configuration file, I believe it's more logical to verify the presence of both modules before running the test. If either of them is missing, then we exit the test with a skip code. This also covers the case where rust is not available.
I guess it depends on what is the expected behavior in kselftests in general and whether the user is expected to have merged the provided `config` or not.
Also, what about modules being built-in / `--first-run` in `modprobe`? `modprobe` by default may return successfully even if no module was loaded (or even present, if it was builtin). In that case, is a kselftest script supposed to succeed, skip or fail? I would say at the least it should be "skip" (like it is done in the case where the module is not found), and I wouldn't mind "fail" either (i.e. running `modprobe` with `--first-run`).
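To make the concern concrete, a rough sketch of what I mean (the module name is just a placeholder here, and the exact behavior/exit codes may depend on the kmod version):

    # Default behavior: a dry-run probe of a module that is already loaded
    # (or built into the kernel) may be reported as success.
    /sbin/modprobe -n -q rust_minimal; echo $?

    # With --first-time, modprobe is asked to fail when the module would
    # not actually be inserted, so the same probe should return non-zero.
    /sbin/modprobe -n -q --first-time rust_minimal; echo $?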
In addition, what about module removal failures? Are they ignored on purpose, e.g. because the kernel might not be configured with module unloading? If it is possible to check whether `MODULE_UNLOAD` is supported in the current config, it would be nice to check the removal also worked. And if it is not supported, skipping the removal entirely.
Finally, what about the case where `RUST` isn't enabled? I think Shuah mentioned it in a previous version.
+KTAP_HELPERS="${DIR}/../kselftest/ktap_helpers.sh"
+if [ -e "$KTAP_HELPERS" ]; then
+	source "$KTAP_HELPERS"
+else
+	echo "$KTAP_HELPERS file not found [SKIP]"
+	exit 4
+fi
I am not sure I understand this. In which situation could this happen? The helpers should always be there, no? I tested this with `make -C...../selftests install TARGETS=rust INSTALL_PATH=...` and it seems to work in that case too.
To be clear, I agree with Shuah that we should test that everything is working as expected. In fact, I would prefer to run with `-e` or, much better, use something else than bash :) But if something should never happen, should it be a skip? Shouldn't we just fail because the test infrastructure is somehow missing?
Orthogonally, if we want the test, shouldn't this just test the `source` command directly rather than a proxy (file existing)?
Thanks!
Cheers, Miguel
Hi Miguel,
On 2/29/24 17:44, Miguel Ojeda wrote:
On Thu, Feb 29, 2024 at 4:53 PM Laura Nao laura.nao@collabora.com wrote:
Add new basic kselftest that checks if the available rust sample modules can be added and removed correctly.
Signed-off-by: Laura Nao <laura.nao@collabora.com>
Reviewed-by: Sergio Gonzalez Collado <sergio.collado@gmail.com>
Reviewed-by: Muhammad Usama Anjum <usama.anjum@collabora.com>
Thanks for this Laura!
Replying here to what you wrote in v4:
At first, I hadn't planned for the kselftest to skip entirely if only one of the two sample modules was missing. However, considering that this kselftest is designed to test all available sample modules, and given that both are enabled with the provided configuration file, I believe it's more logical to verify the presence of both modules before running the test. If either of them is missing, then we exit the test with a skip code. This also covers the case where rust is not available.
I guess it depends on what is the expected behavior in kselftests in general and whether the user is expected to have merged the provided `config` or not.
It's my understanding (and please correct if I'm wrong) that when a kselftest is shipped with a config file, that config file should be treated as a requirement for the test and the user is expected to use it (running make kselftest-merge). I agree the script shouldn't blow up if the user doesn't though, so it still makes sense to gracefully skip the test when the requirements are not met.
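For completeness, that workflow is roughly the following, run from the top of the kernel tree (assuming a .config already exists; only a sketch):

    # Merge every selftest config fragment, including
    # tools/testing/selftests/rust/config, into the current .config
    make kselftest-merge

    # Rebuild the kernel and modules so the rust sample modules are available
    make -j"$(nproc)" && make modules_install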
Also, what about modules being built-in / `--first-run` in `modprobe`? `modprobe` by default may return successfully even if no module was loaded (or even present, if it was builtin). In that case, is a kselftest script supposed to succeed, skip or fail? I would say at the least it should be "skip" (like it is done in the case where the module is not found), and I wouldn't mind "fail" either (i.e. running `modprobe` with `--first-run`).
This makes me realize that I should probably put these in the config too:
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
Adding --first-time (you meant --first-time, right?) definitely makes sense, thanks for the pointer. I think having the modules being built-in should be treated as a skip, same as when they are not there at all.
So something like this:
 for sample in "${rust_sample_modules[@]}"; do
-	if ! /sbin/modprobe -n -q "$sample"; then
+	if ! /sbin/modprobe -n -q --first-time "$sample"; then
 		ktap_skip_all "module $sample is not found in /lib/modules/$(uname -r)"
 		exit "$KSFT_SKIP"
 	fi
will cover both cases.
In addition, what about module removal failures? Are they ignored on purpose, e.g. because the kernel might not be configured with module unloading? If it is possible to check whether `MODULE_UNLOAD` is supported in the current config, it would be nice to check the removal also worked. And if it is not supported, skipping the removal entirely.
I think it's safe to assume no other module will depend on the sample rust modules, so is there any other reason unloading the modules might fail apart from MODULE_UNLOAD not being enabled? If not, then I think we should just check if the removal worked and continue/skip the test accordingly.
I can't just simply skip all tests like this though:
 for sample in "${rust_sample_modules[@]}"; do
 	if /sbin/modprobe -q "$sample"; then
-		/sbin/modprobe -q -r "$sample"
+		if ! /sbin/modprobe -q -r "$sample"; then
+			ktap_skip_all "Failed to unload module $sample, please enable CONFIG_MODULE_UNLOAD"
+			exit "$KSFT_SKIP"
+		fi
 		ktap_test_pass "$sample"
 	else
 		ktap_test_fail "$sample"
as the test plan has already been printed by then. I'll need to rework the script a bit to skip the test upon errors on module removal.
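One possible shape for that rework, just as a sketch (this assumes a per-test skip helper such as ktap_test_skip is available in ktap_helpers.sh; if it is not, reporting the failure with a descriptive message would be the fallback), which keeps one result per module so the plan stays valid:

    for sample in "${rust_sample_modules[@]}"; do
    	if ! /sbin/modprobe -q "$sample"; then
    		ktap_test_fail "$sample"
    		continue
    	fi
    	if /sbin/modprobe -q -r "$sample"; then
    		ktap_test_pass "$sample"
    	else
    		ktap_test_skip "$sample (could not unload, CONFIG_MODULE_UNLOAD may be disabled)"
    	fi
    done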
Finally, what about the case where `RUST` isn't enabled? I think Shuah mentioned it in a previous version.
When rust is not enabled, no sample module is enabled either so the test would still catch this in the first `if ! /sbin/modprobe -n -q --first-time "$sample"` block and exit with the skip code.
If we need more granularity on the feedback provided to the user (i.e. indication on what particular options are missing), then I guess we could check the current kernel config (/proc/config.gz) and skip the entire test if any required config is missing. However, this adds an extra dependency on CONFIG_IKCONFIG=y and CONFIG_IKCONFIG_PROC=y.
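As a rough sketch of that approach (reading /proc/config.gz is exactly what adds the CONFIG_IKCONFIG_PROC dependency mentioned above, and the option list here is only illustrative):

    required_configs=("CONFIG_RUST" "CONFIG_MODULES" "CONFIG_MODULE_UNLOAD")

    if [ -r /proc/config.gz ]; then
    	for opt in "${required_configs[@]}"; do
    		if ! zcat /proc/config.gz | grep -q "^${opt}=y"; then
    			ktap_skip_all "$opt is not enabled in the running kernel"
    			exit "$KSFT_SKIP"
    		fi
    	done
    fi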
Any advice on the best approach here?
+KTAP_HELPERS="${DIR}/../kselftest/ktap_helpers.sh"
+if [ -e "$KTAP_HELPERS" ]; then
+	source "$KTAP_HELPERS"
+else
+	echo "$KTAP_HELPERS file not found [SKIP]"
+	exit 4
+fi
I am not sure I understand this. In which situation could this happen? The helpers should always be there, no? I tested this with `make -C...../selftests install TARGETS=rust INSTALL_PATH=...` and it seems to work in that case too.
To be clear, I agree with Shuah that we should test that everything is working as expected. In fact, I would prefer to run with `-e` or, much better, use something else than bash :) But if something should never happen, should it be a skip? Shouldn't we just fail because the test infrastructure is somehow missing?
Kselftest exit codes are predefined (https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tool...), so if we use `set -e` and source a missing file we end up returning "1" as if the test was run and failed. With this check we're sure to return a value that makes sense in the event the helpers file ever gets moved.
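For reference, these are the conventional values (mirroring tools/testing/selftests/kselftest.h, shown here as shell assignments purely for illustration):

    KSFT_PASS=0    # test ran and passed
    KSFT_FAIL=1    # test ran and failed
    KSFT_XFAIL=2   # expected failure
    KSFT_XPASS=3   # unexpected pass
    KSFT_SKIP=4    # test was skipped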
Orthogonally, if we want the test, shouldn't this just test the `source` command directly rather than a proxy (file existing)?
Sure, checking the return value for source also makes sense.
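Something along these lines, as a sketch (keeping the hardcoded exit 4, since KSFT_SKIP itself is only defined once the helpers have been sourced):

    source "${DIR}/../kselftest/ktap_helpers.sh" || {
    	echo "ktap_helpers.sh could not be sourced [SKIP]"
    	exit 4  # KSFT_SKIP, hardcoded because the helpers are unavailable
    }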
Thanks!
Best, Laura
On Fri, Mar 1, 2024 at 4:22 PM Laura Nao laura.nao@collabora.com wrote:
Adding --first-time (you meant --first-time, right?) definitely makes sense, thanks for the pointer. I think having the modules being built-in should be treated as a skip, same as when they are not there at all.
Yeah, I meant `--first-time`, sorry.
I didn't see other tests using it, so I am not sure if there is a reason not to do that (ditto for adding `MODULES` etc. to `config` and whether we should fail/skip in certain cases) -- I guess Shuah will let us know.
So something like this:
 for sample in "${rust_sample_modules[@]}"; do
-	if ! /sbin/modprobe -n -q "$sample"; then
+	if ! /sbin/modprobe -n -q --first-time "$sample"; then
 		ktap_skip_all "module $sample is not found in /lib/modules/$(uname -r)"
 		exit "$KSFT_SKIP"
 	fi
will cover both cases.
What about the other calls to `modprobe`?
I think it's safe to assume no other module will depend on the sample rust modules, so is there any other reason unloading the modules might fail apart from MODULE_UNLOAD not being enabled? If not, then I
I was thinking more in general terms: that we would like to catch if unloading does not work as expected.
Especially since these "simple samples" are, in part, testing that the basic infrastructure for Rust modules works. So I would say it is important to check whether module unloading failed.
For instance, if something is very broken, a Rust module could in principle fail unloading even if `MODULE_UNLOAD=y` and even if C modules unload without issue.
I can't just simply skip all tests like this though:
 for sample in "${rust_sample_modules[@]}"; do
 	if /sbin/modprobe -q "$sample"; then
-		/sbin/modprobe -q -r "$sample"
+		if ! /sbin/modprobe -q -r "$sample"; then
+			ktap_skip_all "Failed to unload module $sample, please enable CONFIG_MODULE_UNLOAD"
+			exit "$KSFT_SKIP"
+		fi
 		ktap_test_pass "$sample"
 	else
 		ktap_test_fail "$sample"
as the test plan has already been printed by then. I'll need to rework the script a bit to skip the test upon errors on module removal.
Perhaps Shuah prefers to merge this before and then improve it instead -- I don't know. I didn't mean to trigger a rework :)
Especially since it is unclear what is the "pattern" to follow here -- perhaps this is another case of a wider cleanup for more tests, like the ktap helpers I suggested (thanks for implementing those by the way!).
If we need more granularity on the feedback provided to the user (i.e. indication on what particular options are missing), then I guess we could check the current kernel config (/proc/config.gz) and skip the entire test if any required config is missing. However, this adds an extra dependency on CONFIG_IKCONFIG=y and CONFIG_IKCONFIG_PROC=y.
Any advice on the best approach here?
I guess this also depends on what tests are supposed to do etc., so let's see what Shuah says.
Kselftest exit codes are predefined (https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tool...), so if we use `set -e` and source a missing file we end up returning "1" as if the test was run and failed. With this check we're sure to return a value that makes sense in the event the helpers file ever gets moved.
Yeah, definitely. I was thinking here about just failing if something does not work as expected, i.e. speaking more generally (that is why I also mentioned even other languages).
By "failing" here I didn't mean reporting the test as failing; I see it as something in the layer above. That is, if the helpers file is ever moved or is not installed for whatever reason, then it is the test infrastructure that failed. So I would have expected that "skip" is due to a reason related to the test itself rather than something unexpected related to the infrastructure, but I guess it may be part of the "skip" meaning in kselftests. So it depends on what is supposed to mean in kselftests, which I don't know.
Thanks!
My pleasure!
Cheers, Miguel
On 3/1/24 09:28, Miguel Ojeda wrote:
On Fri, Mar 1, 2024 at 4:22 PM Laura Nao laura.nao@collabora.com wrote:
Adding --first-time (you meant --first-time, right?) definitely makes sense, thanks for the pointer. I think having the modules being built-in should be treated as a skip, same as when they are not there at all.
Yeah, I meant `--first-time`, sorry.
I didn't see other tests using it, so I am not sure if there is a reason not to do that (ditto for adding `MODULES` etc. to `config` and whether we should fail/skip in certain cases) -- I guess Shuah will let us know.
So something like this:
 for sample in "${rust_sample_modules[@]}"; do
-	if ! /sbin/modprobe -n -q "$sample"; then
+	if ! /sbin/modprobe -n -q --first-time "$sample"; then
 		ktap_skip_all "module $sample is not found in /lib/modules/$(uname -r)"
 		exit "$KSFT_SKIP"
 	fi
will cover both cases.
What about the other calls to `modprobe`?
I think it's safe to assume no other module will depend on the sample rust modules, so is there any other reason unloading the modules might fail apart from MODULE_UNLOAD not being enabled? If not, then I
I was thinking more in general terms: that we would like to catch if unloading does not work as expected.
Especially since these "simple samples" are, in part, testing that the basic infrastructure for Rust modules works. So I would say it is important to check whether module unloading failed.
For instance, if something is very broken, a Rust module could in principle fail unloading even if `MODULE_UNLOAD=y` and even if C modules unload without issue.
I can't just simply skip all tests like this though:
 for sample in "${rust_sample_modules[@]}"; do
 	if /sbin/modprobe -q "$sample"; then
-		/sbin/modprobe -q -r "$sample"
+		if ! /sbin/modprobe -q -r "$sample"; then
+			ktap_skip_all "Failed to unload module $sample, please enable CONFIG_MODULE_UNLOAD"
+			exit "$KSFT_SKIP"
+		fi
 		ktap_test_pass "$sample"
 	else
 		ktap_test_fail "$sample"
as the test plan has already been printed by then. I'll need to rework the script a bit to skip the test upon errors on module removal.
Perhaps Shuah prefers to merge this before and then improve it instead -- I don't know. I didn't mean to trigger a rework :)
Especially since it is unclear what is the "pattern" to follow here -- perhaps this is another case of a wider cleanup for more tests, like the ktap helpers I suggested (thanks for implementing those by the way!).
If we need more granularity on the feedback provided to the user (i.e. indication on what particular options are missing), then I guess we could check the current kernel config (/proc/config.gz) and skip the entire test if any required config is missing. However, this adds an extra dependency on CONFIG_IKCONFIG=y and CONFIG_IKCONFIG_PROC=y.
Any advice on the best approach here?
I guess this also depends on what tests are supposed to do etc., so let's see what Shuah says.
Kselftest should run all the tests it can and skip only the ones it can't run when configuration and other dependencies aren't met. The reason is to avoid false failures.
Kselftest exit codes are predefined (https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tool...), so if we use `set -e` and source a missing file we end up returning "1" as if the test was run and failed. With this check we're sure to return a value that makes sense in the event the helpers file ever gets moved.
Yeah, definitely. I was thinking here about just failing if something does not work as expected, i.e. speaking more generally (that is why I also mentioned even other languages).
By "failing" here I didn't mean reporting the test as failing; I see it as something in the layer above. That is, if the helpers file is ever moved or is not installed for whatever reason, then it is the test infrastructure that failed. So I would have expected that "skip" is due to a reason related to the test itself rather than something unexpected related to the infrastructure, but I guess it may be part of the "skip" meaning in kselftests. So it depends on what is supposed to mean in kselftests, which I don't know.
I applied this to linux-kselftest next for Linux 6.9-rc1.
We can make improvements if any on top of this.
thanks, -- Shuah
Perhaps we can also add a section about these tests to the new `Documentation/rust/testing.rst` [1]. We could mention that they exist, how to run them, and link to the kselftest documentation for further information.
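For the "how to run them" part, I imagine it would just be the standard kselftest invocations, roughly (a sketch, run from the top of the kernel tree):

    # Build and run only the Rust kselftests:
    make -C tools/testing/selftests TARGETS=rust run_tests

    # Or, after merging the selftest config fragments:
    make kselftest-merge
    make TARGETS=rust kselftest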
Not necessary to resend, perhaps we can do that in a separate patch when this patch and the one by Dirk are merged.
- Best
Valentin
Cc: Dirk Behme <dirk.behme@de.bosch.com>
Link: https://lore.kernel.org/r/20240130075117.4137360-1-dirk.behme@de.bosch.com [1]
On 03.03.2024 15:48, Valentin Obst wrote:
Perhaps we can also add a section about these tests to the new `Documentation/rust/testing.rst` [1]. We could mention that they exist, how to run them, and link to the kselftest documentation for further information.
Not necessary to resend, perhaps we can do that in a separate patch when this patch and the one by Dirk are merged.
Yes, that sounds good :)
Laura: If you like to write some basic intro as proposed by Valentin please do it against
https://github.com/Rust-for-Linux/linux/blob/rust-next/Documentation/rust/te...
in a separate patch.
Best regards
Dirk