Add specification for declaring test metadata to the KTAP v2 spec.
The purpose of test metadata is to allow for the declaration of essential
testing information in KTAP output. This information includes test
names, test configuration info, test attributes, and test files.
There have been similar ideas around test metadata before, such as test
prefixes and test name lines. However, I propose this specification as an
overall solution to those issues.
These test metadata lines are a form of diagnostic line with the
format: "# <metadata_type>: <data>". As a type of diagnostic line, test
metadata lines are compliant with KTAP v1, which helps avoid interfering
with current parsers.
Specifically the "# Subtest:" line is derived from the TAP 14 spec:
https://testanything.org/tap-version-14-specification.html.
The proposed location for test metadata is in the test header, between the
version line and the test plan line. Note that including diagnostic lines in
the test header is a departure from KTAP v1.
This location provides two main benefits:
First, metadata is printed before the subtests run. So if a test fails or
crashes, the test metadata can help discern which test is causing the issue
and potentially why.
Second, this location ensures that the lines will not be accidentally
parsed as a subtest's diagnostic lines, because the lines are bordered by
the version line and the plan line.
Here is an example of test metadata:
KTAP version 2
# Config: CONFIG_TEST=y
1..1
KTAP version 2
# Subtest: test_suite
# File: /sys/kernel/...
# Attributes: slow
# Other: example_test
1..2
ok 1 test_1
ok 2 test_2
ok 1 test_suite
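For illustration, here is a minimal sketch of how a consumer could split such
a metadata line into its type and data (a hypothetical C helper; this is not
the linked KUnit parser, which is written in Python):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/*
 * Hypothetical helper: split a KTAP metadata line of the form
 * "# <metadata_type>: <data>" into its two parts.  Returns false if the
 * line does not look like a metadata line.
 */
static bool parse_metadata_line(const char *line, char *type, size_t tlen,
                                char *data, size_t dlen)
{
        const char *sep;

        if (strncmp(line, "# ", 2) != 0)
                return false;
        line += 2;

        sep = strstr(line, ": ");
        if (!sep)
                return false;

        snprintf(type, tlen, "%.*s", (int)(sep - line), line);
        snprintf(data, dlen, "%s", sep + 2);
        return true;
}

int main(void)
{
        char type[64], data[256];

        if (parse_metadata_line("# Subtest: test_suite",
                                type, sizeof(type), data, sizeof(data)))
                printf("type=%s data=%s\n", type, data);
        return 0;
}

A KTAP v1 parser that does not know about metadata can keep treating these
lines as ordinary diagnostic lines.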
Here is a link to a version of the KUnit parser that is able to parse test
metadata lines for KTAP version 2. Note this includes support for test
metadata lines at the main level of KTAP.
Link: https://kunit-review.googlesource.com/c/linux/+/5809
Signed-off-by: Rae Moar <rmoar(a)google.com>
---
Hi everyone,
I would like to treat this proposal much like an RFC, to gather ideas on the
topic of test metadata. Let me know what you think.
I am also interested in brainstorming a list of recognized metadata types.
Providing recognized metadata types would be helpful in parsing and
displaying test metadata in a useful way.
Current ideas:
- "# Subtest: <test_name>" to indicate test name (name must match
corresponding result line)
- "# Attributes: <attributes list>" to indicate test attributes (list
separated by commas)
- "# File: <file_path>" to indicate file used in testing
Any other ideas?
Note this proposal replaces two of my previous proposals: "ktap_v2: add
recognized test name line" and "ktap_v2: allow prefix to KTAP lines."
Thanks!
-Rae
Note: this patch is based on Frank's ktap_spec_version_2 branch.
Documentation/dev-tools/ktap.rst | 51 ++++++++++++++++++++++++++++++--
1 file changed, 48 insertions(+), 3 deletions(-)
diff --git a/Documentation/dev-tools/ktap.rst b/Documentation/dev-tools/ktap.rst
index ff77f4aaa6ef..a2d0a196c115 100644
--- a/Documentation/dev-tools/ktap.rst
+++ b/Documentation/dev-tools/ktap.rst
@@ -17,7 +17,9 @@ KTAP test results describe a series of tests (which may be nested: i.e., test
can have subtests), each of which can contain both diagnostic data -- e.g., log
lines -- and a final result. The test structure and results are
machine-readable, whereas the diagnostic data is unstructured and is there to
-aid human debugging.
+aid human debugging. One exception to this is test metadata lines, a type
+of diagnostic line. Test metadata is located between the version line and
+the plan line of a test and can be machine-readable.
KTAP output is built from four different types of lines:
- Version lines
@@ -28,8 +30,7 @@ KTAP output is built from four different types of lines:
In general, valid KTAP output should also form valid TAP output, but some
information, in particular nested test results, may be lost. Also note that
there is a stagnant draft specification for TAP14, KTAP diverges from this in
-a couple of places (notably the "Subtest" header), which are described where
-relevant later in this document.
+a couple of places, which are described where relevant later in this document.
Version lines
-------------
@@ -166,6 +167,45 @@ even if they do not start with a "#": this is to capture any other useful
kernel output which may help debug the test. It is nevertheless recommended
that tests always prefix any diagnostic output they have with a "#" character.
+Test metadata lines
+-------------------
+
+Test metadata lines are a type of diagnostic line used to declare the
+name of a test and other helpful testing information in the test header.
+These lines are often helpful for parsing and for providing context during
+crashes.
+
+Test metadata lines must follow the format: "# <metadata_type>: <data>".
+These lines must be located between the version line and the plan line
+within a test header.
+
+There are a few currently recognized metadata types:
+- "# Subtest: <test_name>" to indicate test name (name must match
+ corresponding result line)
+- "# Attributes: <attributes list>" to indicate test attributes (list
+ separated by commas)
+- "# File: <file_path>" to indicate file used in testing
+
+The "# Subtest:" line generally comes first, in order to declare the test
+name. Note that metadata lines do not necessarily need to use a
+recognized metadata type.
+
+An example of using metadata lines:
+
+::
+
+ KTAP version 2
+ 1..1
+ # File: /sys/kernel/...
+ KTAP version 2
+ # Subtest: example
+ # Attributes: slow, example_test
+ 1..1
+ ok 1 test_1
+ # example passed
+ ok 1 example
+
+
Unknown lines
-------------
@@ -206,6 +246,7 @@ An example of a test with two nested subtests:
KTAP version 2
1..1
KTAP version 2
+ # Subtest: example
1..2
ok 1 test_1
not ok 2 test_2
@@ -219,6 +260,7 @@ An example format with multiple levels of nested testing:
KTAP version 2
1..2
KTAP version 2
+ # Subtest: example_test_1
1..2
KTAP version 2
1..2
@@ -254,6 +296,7 @@ Example KTAP output
KTAP version 2
1..1
KTAP version 2
+ # Subtest: main_test
1..3
KTAP version 2
1..1
@@ -261,11 +304,13 @@ Example KTAP output
ok 1 test_1
ok 1 example_test_1
KTAP version 2
+ # Attributes: slow
1..2
ok 1 test_1 # SKIP test_1 skipped
ok 2 test_2
ok 2 example_test_2
KTAP version 2
+ # Subtest: example_test_3
1..3
ok 1 test_1
# test_2: FAIL
base-commit: 906f02e42adfbd5ae70d328ee71656ecb602aaf5
--
2.40.0.396.gfff15efe05-goog
The lwt xmit hook does not expect positive return values in the functions
ip_finish_output2 and ip6_finish_output2. However, BPF redirect programs
can return positive values such as NET_XMIT_DROP and NET_RX_DROP as
errors. Such return values can panic the kernel unexpectedly:
https://gist.github.com/zhaiyan920/8fbac245b261fe316a7ef04c9b1eba48
This patch fixes the return values from BPF redirect so that the error
handling at the xmit hook is consistent. It also adds a few test cases
to prevent future regressions.
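To picture the conversion described above, here is a minimal userspace sketch
(hypothetical helper name and an assumed -EINVAL mapping; this is not the
actual net/core/filter.c change):

#include <errno.h>
#include <stdio.h>

/* Illustrative stand-in for the kernel's positive drop code. */
#define NET_XMIT_DROP 0x01

/*
 * Hypothetical helper, not the real skb_do_redirect() change: callers such
 * as ip_finish_output2() treat any non-zero return as a negative errno, so
 * a positive code like NET_XMIT_DROP must not leak out of the redirect path.
 */
static int sanitize_redirect_ret(int ret)
{
        /* Assumption for this sketch: collapse positive codes to -EINVAL. */
        return ret > 0 ? -EINVAL : ret;
}

int main(void)
{
        printf("NET_XMIT_DROP becomes %d\n",
               sanitize_redirect_ret(NET_XMIT_DROP));
        return 0;
}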
v2: https://lore.kernel.org/netdev/ZLdY6JkWRccunvu0@debian.debian/
v1: https://lore.kernel.org/bpf/ZLbYdpWC8zt9EJtq@debian.debian/
changes since v2:
* subject name changed
* also covered redirect to ingress case
* added selftests
changes since v1:
* minor code style changes
Yan Zhai (2):
bpf: fix skb_do_redirect return values
bpf: selftests: add lwt redirect regression test cases
net/core/filter.c | 12 +-
tools/testing/selftests/bpf/Makefile | 1 +
.../selftests/bpf/progs/test_lwt_redirect.c | 67 +++++++
.../selftests/bpf/test_lwt_redirect.sh | 165 ++++++++++++++++++
4 files changed, 244 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/bpf/progs/test_lwt_redirect.c
create mode 100755 tools/testing/selftests/bpf/test_lwt_redirect.sh
--
2.30.2
Remove the clean target in the Makefile to fix the following warning,
and use the one in the common lib.mk:
Makefile:14: warning: overriding recipe for target 'clean'
../lib.mk:160: warning: ignoring old recipe for target 'clean'
Signed-off-by: Shuah Khan <skhan(a)linuxfoundation.org>
---
tools/testing/selftests/prctl/Makefile | 2 --
1 file changed, 2 deletions(-)
diff --git a/tools/testing/selftests/prctl/Makefile b/tools/testing/selftests/prctl/Makefile
index cfc35d29fc2e..01dc90fbb509 100644
--- a/tools/testing/selftests/prctl/Makefile
+++ b/tools/testing/selftests/prctl/Makefile
@@ -10,7 +10,5 @@ all: $(TEST_PROGS)
include ../lib.mk
-clean:
- rm -fr $(TEST_PROGS)
endif
endif
--
2.39.2
On Tue, Jul 25, 2023 at 12:11 AM Markus Elfring <Markus.Elfring(a)web.de> wrote:
>
> > … unexpected problems. This change
> > converts the positive status code to proper error code.
>
> Please choose a corresponding imperative change suggestion.
>
> See also:
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Doc…
>
>
> Did you provide sufficient justification for a possible addition of the tag “Fixes”?
>
>
> …
> > v2: code style change suggested by Stanislav Fomichev
> > ---
> > net/core/filter.c | 12 +++++++++++-
> …
>
> How do you think about to replace this marker by a line break?
>
> See also:
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Doc…
>
> Regards,
> Markus
Hi Markus,
Thanks for the suggestions; those are the areas where I could use more help.
Will address these in the next version.
Yan
Hello everyone,
This patch series adds a test attributes framework to KUnit.
There has been interest in filtering out "slow" KUnit tests. Most notably,
a new config, CONFIG_MEMCPY_SLOW_KUNIT_TEST, has been added to exclude a
particularly slow memcpy test
(https://lore.kernel.org/all/20230118200653.give.574-kees@kernel.org/).
This attributes framework can be used to save and access test-associated
data, including whether a test is slow. These attributes are reportable
(via KTAP and command-line output) and are also filterable.
This framework is designed to allow for the addition of other attributes in
the future. These attributes could include whether the test can be run
concurrently, test file path, etc.
To try out the framework I suggest running:
"./tools/testing/kunit/kunit.py run --filter speed!=slow"
This patch series was originally sent out as an RFC. Here is a link to the
RFC v2:
https://lore.kernel.org/all/20230707210947.1208717-1-rmoar@google.com/
Thanks!
Rae
Rae Moar (9):
kunit: Add test attributes API structure
kunit: Add speed attribute
kunit: Add module attribute
kunit: Add ability to filter attributes
kunit: tool: Add command line interface to filter and report
attributes
kunit: memcpy: Mark tests as slow using test attributes
kunit: time: Mark test as slow using test attributes
kunit: add tests for filtering attributes
kunit: Add documentation of KUnit test attributes
.../dev-tools/kunit/running_tips.rst | 166 +++++++
include/kunit/attributes.h | 50 +++
include/kunit/test.h | 70 ++-
kernel/time/time_test.c | 2 +-
lib/Kconfig.debug | 3 +
lib/kunit/Makefile | 3 +-
lib/kunit/attributes.c | 418 ++++++++++++++++++
lib/kunit/executor.c | 115 ++++-
lib/kunit/executor_test.c | 128 +++++-
lib/kunit/kunit-example-test.c | 9 +
lib/kunit/test.c | 27 +-
lib/memcpy_kunit.c | 8 +-
tools/testing/kunit/kunit.py | 70 ++-
tools/testing/kunit/kunit_kernel.py | 8 +-
tools/testing/kunit/kunit_parser.py | 11 +-
tools/testing/kunit/kunit_tool_test.py | 39 +-
16 files changed, 1051 insertions(+), 76 deletions(-)
create mode 100644 include/kunit/attributes.h
create mode 100644 lib/kunit/attributes.c
base-commit: 64bd4641310c41a1ecf07c13c67bc0ed61045dfd
--
2.41.0.487.g6d72f3e995-goog
Hi All,
This is v3 of my series to clean up mm selftests so that they run correctly on
arm64. See [1] for full explanation.
Only patch 6 has changed vs v2. The rest are the same and already carry
reviewed/acked-by tags, so I'm hoping I can get the final patch reviewed and
that the series will then be good enough to merge.
Changes Since v2 [2]
--------------------
- Patch 6: Change approach to cleaning up child processes; use the "parent
  death signal", as suggested by David (see the illustrative sketch after the
  links below).
- Added Reviewed-by/Acked-by tags: thanks to David, Mark and Peter!
Changes Since v1 [1]
--------------------
- Patch 1: Explicitly set line buffer mode in ksft_print_header()
- Dropped v1 patch 2 (set execute permissions): Andrew has taken this into his
branch separately.
- Patch 2: Don't compile `soft-dirty` suite for arm64 instead of skipping it
at runtime.
- Patch 2: Declare fewer tests and skip all of test_softdirty() if soft-dirty
is not supported, rather than conditionally marking each check as skipped.
- Added Reviewed-by tags: thanks DavidH!
- Patch 8: Clarified commit message.
[1] https://lore.kernel.org/linux-mm/20230713135440.3651409-1-ryan.roberts@arm.…
[2] https://lore.kernel.org/linux-mm/20230717103152.202078-1-ryan.roberts@arm.c…
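For reviewers of patch 6, the "parent death signal" approach boils down to
something like the following userspace sketch (illustrative only, not the
exact diff):

#include <signal.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/*
 * Illustrative sketch of the "parent death signal" idea: the child asks the
 * kernel to SIGKILL it if its parent dies, so helper children cannot be left
 * running when the test program exits unexpectedly.  Not the exact code from
 * patch 6.
 */
static int child_main(void)
{
        if (prctl(PR_SET_PDEATHSIG, SIGKILL) == -1) {
                perror("prctl(PR_SET_PDEATHSIG)");
                return 1;
        }
        /* Guard against the race where the parent already died. */
        if (getppid() == 1)
                return 1;

        /* ... do the child's share of the test here ... */
        return 0;
}

int main(void)
{
        pid_t pid = fork();

        if (pid == 0)
                return child_main();

        /* Parent: wait for the child and collect its result. */
        waitpid(pid, NULL, 0);
        return 0;
}

Patch 1 is similarly small in spirit: it forces line-buffered output (e.g.
via setvbuf(stdout, NULL, _IOLBF, 0)) so ksft output is not reordered when
redirected.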
Thanks,
Ryan
Ryan Roberts (8):
selftests: Line buffer test program's stdout
selftests/mm: Skip soft-dirty tests on arm64
selftests/mm: Enable mrelease_test for arm64
selftests/mm: Fix thuge-gen test bugs
selftests/mm: va_high_addr_switch should skip unsupported arm64
configs
selftests/mm: Make migration test robust to failure
selftests/mm: Optionally pass duration to transhuge-stress
selftests/mm: Run all tests from run_vmtests.sh
tools/testing/selftests/kselftest.h | 9 ++
tools/testing/selftests/kselftest/runner.sh | 7 +-
tools/testing/selftests/mm/Makefile | 82 ++++++++++---------
tools/testing/selftests/mm/madv_populate.c | 26 +++++-
tools/testing/selftests/mm/migration.c | 12 ++-
tools/testing/selftests/mm/mrelease_test.c | 1 +
tools/testing/selftests/mm/run_vmtests.sh | 28 ++++++-
tools/testing/selftests/mm/settings | 2 +-
tools/testing/selftests/mm/thuge-gen.c | 4 +-
tools/testing/selftests/mm/transhuge-stress.c | 12 ++-
.../selftests/mm/va_high_addr_switch.c | 2 +-
11 files changed, 132 insertions(+), 53 deletions(-)
--
2.25.1