On Wed, Nov 23, 2022 at 11:28 PM Maxime Ripard <maxime(a)cerno.tech> wrote:
>
> Hi,
>
> This series introduces KUnit tests to the vc4 KMS driver, but unlike what we
> have been doing so far in KMS, it actually tests the atomic modesetting code.
>
> In order to do so, I've had to improve a fair bit on the KUnit helpers already
> found in the tree, so that we can register a full-blown and somewhat functional
> KMS driver.
>
> It's of course relying on a mock so that we can test it anywhere. The mocking
> approach created a number of issues, the main one being that we need to create
> a decent mock in the first place; see patch 22. The basic idea is that I
> created some structures that provide a decent approximation of the actual
> hardware and that support both major architectures handled by vc4.
>
> This is of course meant to evolve over time and support more tests, but I've
> focused on testing the HVS FIFO assignment code, which is fairly tricky (and the
> tests have actually revealed one more bug with our current implementation). I
> used to have a userspace implementation of those tests, where I would copy and
> paste the kernel code and run the tests on a regular basis. It was obviously
> fairly suboptimal, so it seemed like the perfect testbed for that series.
>
> Let me know what you think,
> Maxime
>
> To: David Airlie <airlied(a)gmail.com>
> To: Daniel Vetter <daniel(a)ffwll.ch>
> To: Maarten Lankhorst <maarten.lankhorst(a)linux.intel.com>
> To: Maxime Ripard <mripard(a)kernel.org>
> To: Thomas Zimmermann <tzimmermann(a)suse.de>
> Cc: Dave Stevenson <dave.stevenson(a)raspberrypi.com>
> Cc: Javier Martinez Canillas <javierm(a)redhat.com>
> Cc: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
> Cc: Maíra Canal <mairacanal(a)riseup.net>
> Cc: Brendan Higgins <brendan.higgins(a)linux.dev>
> Cc: David Gow <davidgow(a)google.com>
> Cc: linux-kselftest(a)vger.kernel.org
> Cc: kunit-dev(a)googlegroups.com
> Cc: dri-devel(a)lists.freedesktop.org
> Cc: linux-kernel(a)vger.kernel.org
> Cc: linux-media(a)vger.kernel.org
> Cc: linaro-mm-sig(a)lists.linaro.org
> Signed-off-by: Maxime Ripard <maxime(a)cerno.tech>
>
> ---
Hi Maxime,
Thanks very much for this! I'm really excited to see these sorts of
tests being written.
I was able to successfully run these under qemu with:
./tools/testing/kunit/kunit.py run --kunitconfig
drivers/gpu/drm/vc4/tests --arch arm64
--cross_compile=aarch64-linux-gnu-
(and also with clang, using --make_options LLVM=1 instead of the
--cross_compile flag)
On the other hand, they don't compile as a module:
ERROR: modpost: missing MODULE_LICENSE() in drivers/gpu/drm/vc4/tests/vc4_mock.o
ERROR: modpost: missing MODULE_LICENSE() in
drivers/gpu/drm/vc4/tests/vc4_mock_crtc.o
ERROR: modpost: missing MODULE_LICENSE() in
drivers/gpu/drm/vc4/tests/vc4_mock_output.o
ERROR: modpost: missing MODULE_LICENSE() in
drivers/gpu/drm/vc4/tests/vc4_mock_plane.o
ERROR: modpost: missing MODULE_LICENSE() in
drivers/gpu/drm/vc4/tests/vc4_test_pv_muxing.o
ERROR: modpost: missing MODULE_LICENSE() in
drivers/gpu/drm/tests/drm_managed_test.o
ERROR: modpost: "vc4_drm_driver"
[drivers/gpu/drm/vc4/tests/vc4_mock.ko] undefined!
ERROR: modpost: "vc5_drm_driver"
[drivers/gpu/drm/vc4/tests/vc4_mock.ko] undefined!
ERROR: modpost: "drm_kunit_helper_alloc_device"
[drivers/gpu/drm/vc4/tests/vc4_mock.ko] undefined!
ERROR: modpost: "__drm_kunit_helper_alloc_drm_device_with_driver"
[drivers/gpu/drm/vc4/tests/vc4_mock.ko] undefined!
ERROR: modpost: "__vc4_hvs_alloc"
[drivers/gpu/drm/vc4/tests/vc4_mock.ko] undefined!
ERROR: modpost: "vc4_dummy_plane"
[drivers/gpu/drm/vc4/tests/vc4_mock.ko] undefined!
ERROR: modpost: "vc4_mock_pv" [drivers/gpu/drm/vc4/tests/vc4_mock.ko] undefined!
ERROR: modpost: "vc4_dummy_output"
[drivers/gpu/drm/vc4/tests/vc4_mock.ko] undefined!
ERROR: modpost: "vc4_kms_load" [drivers/gpu/drm/vc4/tests/vc4_mock.ko]
undefined!
ERROR: modpost: "vc4_txp_crtc_data"
[drivers/gpu/drm/vc4/tests/vc4_mock.ko] undefined!
WARNING: modpost: suppressed 17 unresolved symbol warnings because
there were too many)
Most of those just come down to the need to export some symbols. There's some
work underway to support conditionally exporting symbols only if KUnit
is enabled, which may help:
https://lore.kernel.org/linux-kselftest/20221102175959.2921063-1-rmoar@goog…
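If that lands, the mock helpers could be exported only for tests,
roughly like this (a sketch based on the macros proposed in that
series; the helper name and signature here are made up for
illustration, and the details may well change before it's merged):

#include <kunit/test.h>
#include <kunit/visibility.h>

/* Non-static only when CONFIG_KUNIT is enabled. */
VISIBLE_IF_KUNIT
struct vc4_dev *vc4_mock_device(struct kunit *test)
{
	/* Allocate a mocked device with test-managed lifetime. */
	return kunit_kzalloc(test, sizeof(struct vc4_dev), GFP_KERNEL);
}
/* Exported into a KUnit-only namespace when CONFIG_KUNIT is set;
 * the test module would then MODULE_IMPORT_NS() that namespace.
 */
EXPORT_SYMBOL_IF_KUNIT(vc4_mock_device);

with a matching declaration in a header guarded by
IS_ENABLED(CONFIG_KUNIT).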
Otherwise, I suspect the better short-term solution would just be to
require that the tests are built-in (or at least compiled into
whichever of the drm/vc4 modules makes the most sense).
The only other thing which has me a little confused is the naming of
some of the functions, specifically with the __ prefix. Is it just for
internal functions (many of them aren't static, but maybe they could
use the VISIBLE_IF_KUNIT macro if that makes sense), or for versions
of functions which accept extra arguments? Not a big deal (and maybe
it's a DRM naming convention I'm ignorant of), but I couldn't quite
find a pattern on my first read through.
But on the whole, these look good from a KUnit point-of-view. It's
really good to see some solid mocking and driver testing, too. There would
be ways to avoid passing the 'struct kunit' around in more places (or
to store extra data as a kunit_resource), but I think it's better
overall to pass it around like you have in this case -- it's certainly
more compatible with things which might span threads (e.g. the
workqueues).
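For reference, the kunit_resource version would look roughly like this
(an untested sketch; the resource name, the helpers and the stashed
type are all made up for illustration):

#include <kunit/resource.h>
#include <kunit/test.h>

static struct kunit_resource mock_res;

/* Stash the mocked device; NULL init/free callbacks, since the
 * resource just wraps the pointer.
 */
static void stash_mock(struct kunit *test, struct vc4_dev *vc4)
{
	kunit_add_named_resource(test, NULL, NULL, &mock_res,
				 "vc4-mock", vc4);
}

/* Retrieve it from any helper that has the test. */
static struct vc4_dev *find_mock(struct kunit *test)
{
	struct kunit_resource *res;
	struct vc4_dev *vc4;

	res = kunit_find_named_resource(test, "vc4-mock");
	if (!res)
		return NULL;
	vc4 = res->data;
	kunit_put_resource(res); /* drop the lookup's reference */
	return vc4;
}

(Both calls still need the 'struct kunit' anyway, so it doesn't buy
much in this case.)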
Thanks a bunch,
-- David
Change the KUnit parser to be able to parse test output that complies with
the KTAP version 1 specification format found here:
https://kernel.org/doc/html/latest/dev-tools/ktap.html. Ensure the parser
is able to parse tests with the original KUnit test output format as
well.
KUnit parser now accepts any of the following test output formats:
Original KUnit test output format:
TAP version 14
1..1
# Subtest: kunit-test-suite
1..3
ok 1 - kunit_test_1
ok 2 - kunit_test_2
ok 3 - kunit_test_3
# kunit-test-suite: pass:3 fail:0 skip:0 total:3
# Totals: pass:3 fail:0 skip:0 total:3
ok 1 - kunit-test-suite
KTAP version 1 test output format:
KTAP version 1
1..1
KTAP version 1
1..3
ok 1 kunit_test_1
ok 2 kunit_test_2
ok 3 kunit_test_3
ok 1 kunit-test-suite
New KUnit test output format (changes made in the next patch of
this series):
KTAP version 1
1..1
KTAP version 1
# Subtest: kunit-test-suite
1..3
ok 1 kunit_test_1
ok 2 kunit_test_2
ok 3 kunit_test_3
# kunit-test-suite: pass:3 fail:0 skip:0 total:3
# Totals: pass:3 fail:0 skip:0 total:3
ok 1 kunit-test-suite
Signed-off-by: Rae Moar <rmoar(a)google.com>
Reviewed-by: Daniel Latypov <dlatypov(a)google.com>
Reviewed-by: David Gow <davidgow(a)google.com>
---
Changes since v2:
https://lore.kernel.org/all/CA+GJov4QZ8yrD8sgGeMYJ4zYkg2CEUX8owqzPFE0BQGe_f…
- Rebased onto linux-kselftest/kunit to resolve a merge conflict with a
recently accepted patch
- Fixed typo
- Added test_parse_subtest_header to test whether the “# Subtest:”
line is being parsed correctly when using the new test format
Changes since v1:
https://lore.kernel.org/all/20221104194705.3245738-2-rmoar@google.com/
- Switch order of patches to make changes to the parser before making
changes to the test output
- Change placeholder label for test header from “Test suite” to empty
string
- Change parser to accept the new KTAP version line in the subtest header
before the subtest header line rather than after it.
- Note: Considered changing parser to allow for the top-level of testing
to have a '# Subtest' line as discussed in v1 but this breaks the missing
test plan test. So I think it would be best to add this ability at a later
time or after top-level test name and result lines are discussed for
KTAP v2.
tools/testing/kunit/kunit_parser.py | 79 ++++++++++++-------
tools/testing/kunit/kunit_tool_test.py | 14 ++++
.../test_data/test_parse_ktap_output.log | 8 ++
.../test_data/test_parse_subtest_header.log | 7 ++
4 files changed, 80 insertions(+), 28 deletions(-)
create mode 100644 tools/testing/kunit/test_data/test_parse_ktap_output.log
create mode 100644 tools/testing/kunit/test_data/test_parse_subtest_header.log
diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
index d0ed5dd5cfc4..4cc2f8b7ecd0 100644
--- a/tools/testing/kunit/kunit_parser.py
+++ b/tools/testing/kunit/kunit_parser.py
@@ -441,6 +441,7 @@ def parse_diagnostic(lines: LineStream) -> List[str]:
- '# Subtest: [test name]'
- '[ok|not ok] [test number] [-] [test name] [optional skip
directive]'
+ - 'KTAP version [version number]'
Parameters:
lines - LineStream of KTAP output to parse
@@ -449,8 +450,9 @@ def parse_diagnostic(lines: LineStream) -> List[str]:
Log of diagnostic lines
"""
log = [] # type: List[str]
- while lines and not TEST_RESULT.match(lines.peek()) and not \
- TEST_HEADER.match(lines.peek()):
+ non_diagnostic_lines = [TEST_RESULT, TEST_HEADER, KTAP_START]
+ while lines and not any(re.match(lines.peek())
+ for re in non_diagnostic_lines):
log.append(lines.pop())
return log
@@ -496,11 +498,15 @@ def print_test_header(test: Test) -> None:
test - Test object representing current test being printed
"""
message = test.name
+ if message != "":
+ # Add a leading space before the subtest counts only if a test name
+ # is provided using a "# Subtest" header line.
+ message += " "
if test.expected_count:
if test.expected_count == 1:
- message += ' (1 subtest)'
+ message += '(1 subtest)'
else:
- message += f' ({test.expected_count} subtests)'
+ message += f'({test.expected_count} subtests)'
stdout.print_with_timestamp(format_test_divider(message, len(message)))
def print_log(log: Iterable[str]) -> None:
@@ -647,7 +653,7 @@ def bubble_up_test_results(test: Test) -> None:
elif test.counts.get_status() == TestStatus.TEST_CRASHED:
test.status = TestStatus.TEST_CRASHED
-def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
+def parse_test(lines: LineStream, expected_num: int, log: List[str], is_subtest: bool) -> Test:
"""
Finds next test to parse in LineStream, creates new Test object,
parses any subtests of the test, populates Test object with all
@@ -665,15 +671,32 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
1..4
[subtests]
- - Subtest header line
+ - Subtest header (must include either the KTAP version line or
+ "# Subtest" header line)
- Example:
+ Example (preferred format with both KTAP version line and
+ "# Subtest" line):
+
+ KTAP version 1
+ # Subtest: name
+ 1..3
+ [subtests]
+ ok 1 name
+
+ Example (only "# Subtest" line):
# Subtest: name
1..3
[subtests]
ok 1 name
+ Example (only KTAP version line, compliant with KTAP v1 spec):
+
+ KTAP version 1
+ 1..3
+ [subtests]
+ ok 1 name
+
- Test result line
Example:
@@ -685,28 +708,29 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
expected_num - expected test number for test to be parsed
log - list of strings containing any preceding diagnostic lines
corresponding to the current test
+ is_subtest - boolean indicating whether test is a subtest
Return:
Test object populated with characteristics and any subtests
"""
test = Test()
test.log.extend(log)
- parent_test = False
- main = parse_ktap_header(lines, test)
- if main:
- # If KTAP/TAP header is found, attempt to parse
+ if not is_subtest:
+ # If parsing the main/top-level test, parse KTAP version line and
# test plan
test.name = "main"
+ ktap_line = parse_ktap_header(lines, test)
parse_test_plan(lines, test)
parent_test = True
else:
- # If KTAP/TAP header is not found, test must be subtest
- # header or test result line so parse attempt to parser
- # subtest header
- parent_test = parse_test_header(lines, test)
+ # If not the main test, attempt to parse a test header containing
+ # the KTAP version line and/or subtest header line
+ ktap_line = parse_ktap_header(lines, test)
+ subtest_line = parse_test_header(lines, test)
+ parent_test = (ktap_line or subtest_line)
if parent_test:
- # If subtest header is found, attempt to parse
- # test plan and print header
+ # If KTAP version line and/or subtest header is found, attempt
+ # to parse test plan and print test header
parse_test_plan(lines, test)
print_test_header(test)
expected_count = test.expected_count
@@ -721,7 +745,7 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
sub_log = parse_diagnostic(lines)
sub_test = Test()
if not lines or (peek_test_name_match(lines, test) and
- not main):
+ is_subtest):
if expected_count and test_num <= expected_count:
# If parser reaches end of test before
# parsing expected number of subtests, print
@@ -735,20 +759,19 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
test.log.extend(sub_log)
break
else:
- sub_test = parse_test(lines, test_num, sub_log)
+ sub_test = parse_test(lines, test_num, sub_log, True)
subtests.append(sub_test)
test_num += 1
test.subtests = subtests
- if not main:
+ if is_subtest:
# If not main test, look for test result line
test.log.extend(parse_diagnostic(lines))
- if (parent_test and peek_test_name_match(lines, test)) or \
- not parent_test:
- parse_test_result(lines, test, expected_num)
- else:
+ if test.name != "" and not peek_test_name_match(lines, test):
test.add_error('missing subtest result line!')
+ else:
+ parse_test_result(lines, test, expected_num)
- # Check for there being no tests
+ # Check for there being no subtests within parent test
if parent_test and len(subtests) == 0:
# Don't override a bad status if this test had one reported.
# Assumption: no subtests means CRASHED is from Test.__init__()
@@ -758,11 +781,11 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
# Add statuses to TestCounts attribute in Test object
bubble_up_test_results(test)
- if parent_test and not main:
+ if parent_test and is_subtest:
# If test has subtests and is not the main test object, print
# footer.
print_test_footer(test)
- elif not main:
+ elif is_subtest:
print_test_result(test)
return test
@@ -785,7 +808,7 @@ def parse_run_tests(kernel_output: Iterable[str]) -> Test:
test.add_error('Could not find any KTAP output. Did any KUnit tests run?')
test.status = TestStatus.FAILURE_TO_PARSE_TESTS
else:
- test = parse_test(lines, 0, [])
+ test = parse_test(lines, 0, [], False)
if test.status != TestStatus.NO_TESTS:
test.status = test.counts.get_status()
stdout.print_with_timestamp(DIVIDER)
diff --git a/tools/testing/kunit/kunit_tool_test.py b/tools/testing/kunit/kunit_tool_test.py
index 84a08cf07242..d7f669cbf2a8 100755
--- a/tools/testing/kunit/kunit_tool_test.py
+++ b/tools/testing/kunit/kunit_tool_test.py
@@ -312,6 +312,20 @@ class KUnitParserTest(unittest.TestCase):
self.assertEqual(kunit_parser._summarize_failed_tests(result),
'Failures: all_failed_suite, some_failed_suite.test2')
+ def test_ktap_format(self):
+ ktap_log = test_data_path('test_parse_ktap_output.log')
+ with open(ktap_log) as file:
+ result = kunit_parser.parse_run_tests(file.readlines())
+ self.assertEqual(result.counts, kunit_parser.TestCounts(passed=3))
+ self.assertEqual('suite', result.subtests[0].name)
+ self.assertEqual('case_1', result.subtests[0].subtests[0].name)
+ self.assertEqual('case_2', result.subtests[0].subtests[1].name)
+
+ def test_parse_subtest_header(self):
+ ktap_log = test_data_path('test_parse_subtest_header.log')
+ with open(ktap_log) as file:
+ result = kunit_parser.parse_run_tests(file.readlines())
+ self.print_mock.assert_any_call(StrContains('suite (1 subtest)'))
def line_stream_from_strs(strs: Iterable[str]) -> kunit_parser.LineStream:
return kunit_parser.LineStream(enumerate(strs, start=1))
diff --git a/tools/testing/kunit/test_data/test_parse_ktap_output.log b/tools/testing/kunit/test_data/test_parse_ktap_output.log
new file mode 100644
index 000000000000..ccdf244e5303
--- /dev/null
+++ b/tools/testing/kunit/test_data/test_parse_ktap_output.log
@@ -0,0 +1,8 @@
+KTAP version 1
+1..1
+ KTAP version 1
+ 1..3
+ ok 1 case_1
+ ok 2 case_2
+ ok 3 case_3
+ok 1 suite
diff --git a/tools/testing/kunit/test_data/test_parse_subtest_header.log b/tools/testing/kunit/test_data/test_parse_subtest_header.log
new file mode 100644
index 000000000000..216631092e7b
--- /dev/null
+++ b/tools/testing/kunit/test_data/test_parse_subtest_header.log
@@ -0,0 +1,7 @@
+KTAP version 1
+1..1
+ KTAP version 1
+ # Subtest: suite
+ 1..1
+ ok 1 test
+ok 1 suite
\ No newline at end of file
base-commit: 99c8c9276be71e6bc98979e95d56cdcbe0c2454e
--
2.38.1.584.g0f3c55d4c2-goog
Signal that a test run is complete through perf_test_args instead of
having tests open code a similar solution. Ensure that the field resets
to false at the beginning of a test run as the structure is reused
between test runs, eliminating a couple of bugs:
access_tracking_perf_test hangs indefinitely on a subsequent test run,
as 'done' remains true. The bug doesn't amount to much right now, as x86
supports a single guest mode. However, fixing it is a precondition for
enabling the test for other architectures with >1 guest mode, like
arm64.
memslot_modification_stress_test has the exact opposite problem, where
subsequent test runs complete immediately as 'run_vcpus' remains false.
Co-developed-by: Sean Christopherson <seanjc(a)google.com>
Signed-off-by: Sean Christopherson <seanjc(a)google.com>
[oliver: added commit message, preserve spin_wait_for_next_iteration()]
Signed-off-by: Oliver Upton <oliver.upton(a)linux.dev>
---
tools/testing/selftests/kvm/access_tracking_perf_test.c | 8 +-------
tools/testing/selftests/kvm/include/perf_test_util.h | 3 +++
tools/testing/selftests/kvm/lib/perf_test_util.c | 3 +++
.../selftests/kvm/memslot_modification_stress_test.c | 6 +-----
4 files changed, 8 insertions(+), 12 deletions(-)
diff --git a/tools/testing/selftests/kvm/access_tracking_perf_test.c b/tools/testing/selftests/kvm/access_tracking_perf_test.c
index 76c583a07ea2..942370d57392 100644
--- a/tools/testing/selftests/kvm/access_tracking_perf_test.c
+++ b/tools/testing/selftests/kvm/access_tracking_perf_test.c
@@ -58,9 +58,6 @@ static enum {
ITERATION_MARK_IDLE,
} iteration_work;
-/* Set to true when vCPU threads should exit. */
-static bool done;
-
/* The iteration that was last completed by each vCPU. */
static int vcpu_last_completed_iteration[KVM_MAX_VCPUS];
@@ -211,7 +208,7 @@ static bool spin_wait_for_next_iteration(int *current_iteration)
int last_iteration = *current_iteration;
do {
- if (READ_ONCE(done))
+ if (READ_ONCE(perf_test_args.stop_vcpus))
return false;
*current_iteration = READ_ONCE(iteration);
@@ -321,9 +318,6 @@ static void run_test(enum vm_guest_mode mode, void *arg)
mark_memory_idle(vm, nr_vcpus);
access_memory(vm, nr_vcpus, ACCESS_READ, "Reading from idle memory");
- /* Set done to signal the vCPU threads to exit */
- done = true;
-
perf_test_join_vcpu_threads(nr_vcpus);
perf_test_destroy_vm(vm);
}
diff --git a/tools/testing/selftests/kvm/include/perf_test_util.h b/tools/testing/selftests/kvm/include/perf_test_util.h
index eaa88df0555a..536d7c3c3f14 100644
--- a/tools/testing/selftests/kvm/include/perf_test_util.h
+++ b/tools/testing/selftests/kvm/include/perf_test_util.h
@@ -40,6 +40,9 @@ struct perf_test_args {
/* Run vCPUs in L2 instead of L1, if the architecture supports it. */
bool nested;
+ /* Test is done, stop running vCPUs. */
+ bool stop_vcpus;
+
struct perf_test_vcpu_args vcpu_args[KVM_MAX_VCPUS];
};
diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
index 9618b37c66f7..ee3f499ccbd2 100644
--- a/tools/testing/selftests/kvm/lib/perf_test_util.c
+++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
@@ -267,6 +267,7 @@ void perf_test_start_vcpu_threads(int nr_vcpus,
vcpu_thread_fn = vcpu_fn;
WRITE_ONCE(all_vcpu_threads_running, false);
+ WRITE_ONCE(perf_test_args.stop_vcpus, false);
for (i = 0; i < nr_vcpus; i++) {
struct vcpu_thread *vcpu = &vcpu_threads[i];
@@ -289,6 +290,8 @@ void perf_test_join_vcpu_threads(int nr_vcpus)
{
int i;
+ WRITE_ONCE(perf_test_args.stop_vcpus, true);
+
for (i = 0; i < nr_vcpus; i++)
pthread_join(vcpu_threads[i].thread, NULL);
}
diff --git a/tools/testing/selftests/kvm/memslot_modification_stress_test.c b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
index bb1d17a1171b..3a5e4518307c 100644
--- a/tools/testing/selftests/kvm/memslot_modification_stress_test.c
+++ b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
@@ -34,8 +34,6 @@
static int nr_vcpus = 1;
static uint64_t guest_percpu_mem_size = DEFAULT_PER_VCPU_MEM_SIZE;
-static bool run_vcpus = true;
-
static void vcpu_worker(struct perf_test_vcpu_args *vcpu_args)
{
struct kvm_vcpu *vcpu = vcpu_args->vcpu;
@@ -45,7 +43,7 @@ static void vcpu_worker(struct perf_test_vcpu_args *vcpu_args)
run = vcpu->run;
/* Let the guest access its memory until a stop signal is received */
- while (READ_ONCE(run_vcpus)) {
+ while (!READ_ONCE(perf_test_args.stop_vcpus)) {
ret = _vcpu_run(vcpu);
TEST_ASSERT(ret == 0, "vcpu_run failed: %d\n", ret);
@@ -110,8 +108,6 @@ static void run_test(enum vm_guest_mode mode, void *arg)
add_remove_memslot(vm, p->memslot_modification_delay,
p->nr_memslot_modifications);
- run_vcpus = false;
-
perf_test_join_vcpu_threads(nr_vcpus);
pr_info("All vCPU threads joined\n");
--
2.38.1.584.g0f3c55d4c2-goog
Change the KUnit parser to be able to parse test output that complies with
the KTAP version 1 specification format found here:
https://kernel.org/doc/html/latest/dev-tools/ktap.html. Ensure the parser
is able to parse tests with the original KUnit test output format as
well.
KUnit parser now accepts any of the following test output formats:
Original KUnit test output format:
TAP version 14
1..1
# Subtest: kunit-test-suite
1..3
ok 1 - kunit_test_1
ok 2 - kunit_test_2
ok 3 - kunit_test_3
# kunit-test-suite: pass:3 fail:0 skip:0 total:3
# Totals: pass:3 fail:0 skip:0 total:3
ok 1 - kunit-test-suite
KTAP version 1 test output format:
KTAP version 1
1..1
KTAP version 1
1..3
ok 1 kunit_test_1
ok 2 kunit_test_2
ok 3 kunit_test_3
ok 1 kunit-test-suite
New KUnit test output format (changes made in the next patch of
this series):
KTAP version 1
1..1
KTAP version 1
# Subtest: kunit-test-suite
1..3
ok 1 kunit_test_1
ok 2 kunit_test_2
ok 3 kunit_test_3
# kunit-test-suite: pass:3 fail:0 skip:0 total:3
# Totals: pass:3 fail:0 skip:0 total:3
ok 1 kunit-test-suite
Signed-off-by: Rae Moar <rmoar(a)google.com>
Reviewed-by: Daniel Latypov <dlatypov(a)google.com>
Reviewed-by: David Gow <davidgow(a)google.com>
---
Changes since v1:
https://lore.kernel.org/all/20221104194705.3245738-2-rmoar@google.com/
- Switch order of patches to make changes to the parser before making
changes to the test output
- Change placeholder label for test header from “Test suite” to empty
string
- Change parser to accept the new KTAP version line in the subtest header
before the subtest header line rather than after it.
- Note: Considered changing parser to allow for the top-level of testing
to have a '# Subtest' line as discussed in v1 but this breaks the
missing test plan test. So I think it would be best to add this ability
at a later time or after top-level test name and result lines are
discussed for KTAP v2.
tools/testing/kunit/kunit_parser.py | 79 ++++++++++++-------
tools/testing/kunit/kunit_tool_test.py | 8 ++
.../test_data/test_parse_ktap_output.log | 8 ++
3 files changed, 67 insertions(+), 28 deletions(-)
create mode 100644 tools/testing/kunit/test_data/test_parse_ktap_output.log
diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
index a56c75a973b5..ed752d53d6a8 100644
--- a/tools/testing/kunit/kunit_parser.py
+++ b/tools/testing/kunit/kunit_parser.py
@@ -441,6 +441,7 @@ def parse_diagnostic(lines: LineStream) -> List[str]:
- '# Subtest: [test name]'
- '[ok|not ok] [test number] [-] [test name] [optional skip
directive]'
+ - 'KTAP version [version number]'
Parameters:
lines - LineStream of KTAP output to parse
@@ -449,8 +450,9 @@ def parse_diagnostic(lines: LineStream) -> List[str]:
Log of diagnostic lines
"""
log = [] # type: List[str]
- while lines and not TEST_RESULT.match(lines.peek()) and not \
- TEST_HEADER.match(lines.peek()):
+ non_diagnostic_lines = [TEST_RESULT, TEST_HEADER, KTAP_START]
+ while lines and not any(re.match(lines.peek())
+ for re in non_diagnostic_lines):
log.append(lines.pop())
return log
@@ -496,11 +498,15 @@ def print_test_header(test: Test) -> None:
test - Test object representing current test being printed
"""
message = test.name
+ if message != "":
+ # Add a leading space before the subtest counts only if a test name
+ # is provided using a "# Subtest" header line.
+ message += " "
if test.expected_count:
if test.expected_count == 1:
- message += ' (1 subtest)'
+ message += '(1 subtest)'
else:
- message += f' ({test.expected_count} subtests)'
+ message += f'({test.expected_count} subtests)'
stdout.print_with_timestamp(format_test_divider(message, len(message)))
def print_log(log: Iterable[str]) -> None:
@@ -647,7 +653,7 @@ def bubble_up_test_results(test: Test) -> None:
elif test.counts.get_status() == TestStatus.TEST_CRASHED:
test.status = TestStatus.TEST_CRASHED
-def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
+def parse_test(lines: LineStream, expected_num: int, log: List[str], is_subtest: bool) -> Test:
"""
Finds next test to parse in LineStream, creates new Test object,
parses any subtests of the test, populates Test object with all
@@ -665,15 +671,32 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
1..4
[subtests]
- - Subtest header line
+ - Subtest header (must include either the KTAP version line or
+ "# Subtest" header line)
- Example:
+ Example (preferred format with both KTAP version line and
+ "# Subtest" line):
+
+ KTAP version 1
+ # Subtest: name
+ 1..3
+ [subtests]
+ ok 1 name
+
+ Example (only "# Subtest" line):
# Subtest: name
1..3
[subtests]
ok 1 name
+ Example (only KTAP version line, compliant with KTAP v1 spec):
+
+ KTAP version 1
+ 1..3
+ [subtests]
+ ok 1 name
+
- Test result line
Example:
@@ -685,28 +708,29 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
expected_num - expected test number for test to be parsed
log - list of strings containing any preceding diagnostic lines
corresponding to the current test
+ is_subtest - boolean indicating whether test is a subtest
Return:
Test object populated with characteristics and any subtests
"""
test = Test()
test.log.extend(log)
- parent_test = False
- main = parse_ktap_header(lines, test)
- if main:
- # If KTAP/TAP header is found, attempt to parse
+ if not is_subtest:
+ # If parsing the main/top-level test, parse KTAP version line and
# test plan
test.name = "main"
+ ktap_line = parse_ktap_header(lines, test)
parse_test_plan(lines, test)
parent_test = True
else:
- # If KTAP/TAP header is not found, test must be subtest
- # header or test result line so parse attempt to parser
- # subtest header
- parent_test = parse_test_header(lines, test)
+ # If not the main test, attempt to parse a test header containing
+ # the KTAP version line and/or subtest header line
+ ktap_line = parse_ktap_header(lines, test)
+ subtest_line = parse_test_header(lines, test)
+ parent_test = (ktap_line or subtest_line)
if parent_test:
- # If subtest header is found, attempt to parse
- # test plan and print header
+ # If KTAP version line and/or subtest header is found, attempt
+ # to parse test plan and print test header
parse_test_plan(lines, test)
print_test_header(test)
expected_count = test.expected_count
@@ -721,7 +745,7 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
sub_log = parse_diagnostic(lines)
sub_test = Test()
if not lines or (peek_test_name_match(lines, test) and
- not main):
+ is_subtest):
if expected_count and test_num <= expected_count:
# If parser reaches end of test before
# parsing expected number of subtests, print
@@ -735,20 +759,19 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
test.log.extend(sub_log)
break
else:
- sub_test = parse_test(lines, test_num, sub_log)
+ sub_test = parse_test(lines, test_num, sub_log, True)
subtests.append(sub_test)
test_num += 1
test.subtests = subtests
- if not main:
+ if is_subtest:
# If not main test, look for test result line
test.log.extend(parse_diagnostic(lines))
- if (parent_test and peek_test_name_match(lines, test)) or \
- not parent_test:
- parse_test_result(lines, test, expected_num)
- else:
+ if test.name != "" and not peek_test_name_match(lines, test):
test.add_error('missing subtest result line!')
+ else:
+ parse_test_result(lines, test, expected_num)
- # Check for there being no tests
+ # Check for there being no subtests within parent test
if parent_test and len(subtests) == 0:
# Don't override a bad status if this test had one reported.
# Assumption: no subtests means CRASHED is from Test.__init__()
@@ -758,11 +781,11 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
# Add statuses to TestCounts attribute in Test object
bubble_up_test_results(test)
- if parent_test and not main:
+ if parent_test and is_subtest:
# If test has subtests and is not the main test object, print
# footer.
print_test_footer(test)
- elif not main:
+ elif is_subtest:
print_test_result(test)
return test
@@ -785,7 +808,7 @@ def parse_run_tests(kernel_output: Iterable[str]) -> Test:
test.add_error('could not find any KTAP output!')
test.status = TestStatus.FAILURE_TO_PARSE_TESTS
else:
- test = parse_test(lines, 0, [])
+ test = parse_test(lines, 0, [], False)
if test.status != TestStatus.NO_TESTS:
test.status = test.counts.get_status()
stdout.print_with_timestamp(DIVIDER)
diff --git a/tools/testing/kunit/kunit_tool_test.py b/tools/testing/kunit/kunit_tool_test.py
index 90c65b072be9..7c2e2a45f330 100755
--- a/tools/testing/kunit/kunit_tool_test.py
+++ b/tools/testing/kunit/kunit_tool_test.py
@@ -312,6 +312,14 @@ class KUnitParserTest(unittest.TestCase):
self.assertEqual(kunit_parser._summarize_failed_tests(result),
'Failures: all_failed_suite, some_failed_suite.test2')
+ def test_ktap_format(self):
+ ktap_log = test_data_path('test_parse_ktap_output.log')
+ with open(ktap_log) as file:
+ result = kunit_parser.parse_run_tests(file.readlines())
+ self.assertEqual(result.counts, kunit_parser.TestCounts(passed=3))
+ self.assertEqual('suite', result.subtests[0].name)
+ self.assertEqual('case_1', result.subtests[0].subtests[0].name)
+ self.assertEqual('case_2', result.subtests[0].subtests[1].name)
def line_stream_from_strs(strs: Iterable[str]) -> kunit_parser.LineStream:
return kunit_parser.LineStream(enumerate(strs, start=1))
diff --git a/tools/testing/kunit/test_data/test_parse_ktap_output.log b/tools/testing/kunit/test_data/test_parse_ktap_output.log
new file mode 100644
index 000000000000..ccdf244e5303
--- /dev/null
+++ b/tools/testing/kunit/test_data/test_parse_ktap_output.log
@@ -0,0 +1,8 @@
+KTAP version 1
+1..1
+ KTAP version 1
+ 1..3
+ ok 1 case_1
+ ok 2 case_2
+ ok 3 case_3
+ok 1 suite
base-commit: 6fe1ad4a156095859721fef85073df3ed43081d4
--
2.38.1.584.g0f3c55d4c2-goog
Install a cleanup function using the trap command for signals EXIT,
SIGINT, SIGQUIT and SIGABRT (0, 2, 3 and 6 in trap's numeric form, as
used in the script). The cleanup function will perform:
1. Online the CPUs that were made offline during the test.
2. Removing the cgroups created.
3. Restoring the original /sys/kernel/debug/sched/verbose value;
currently it is left turned on regardless of the original
configuration value.
Currently the test performs steps 1 and 2 on successful runs, but not
during all of the failed runs. With cleanup() in place, all three steps
are performed on both passing and failing test runs.
Signed-off-by: Kamalesh Babulal <kamalesh.babulal(a)oracle.com>
---
.../testing/selftests/cgroup/test_cpuset_prs.sh | 17 ++++++++++++++++-
1 file changed, 16 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/cgroup/test_cpuset_prs.sh b/tools/testing/selftests/cgroup/test_cpuset_prs.sh
index 526d2c42d870..b8ed82b55b1d 100755
--- a/tools/testing/selftests/cgroup/test_cpuset_prs.sh
+++ b/tools/testing/selftests/cgroup/test_cpuset_prs.sh
@@ -16,7 +16,12 @@ skip_test() {
[[ $(id -u) -eq 0 ]] || skip_test "Test must be run as root!"
# Set sched verbose flag, if available
-[[ -d /sys/kernel/debug/sched ]] && echo Y > /sys/kernel/debug/sched/verbose
+if [[ -d /sys/kernel/debug/sched ]]
+then
+ # Used to restore the original setting during cleanup
+ SCHED_DEBUG=$(cat /sys/kernel/debug/sched/verbose)
+ echo Y > /sys/kernel/debug/sched/verbose
+fi
# Get wait_inotify location
WAIT_INOTIFY=$(cd $(dirname $0); pwd)/wait_inotify
@@ -54,6 +59,15 @@ echo +cpuset > cgroup.subtree_control
[[ -d test ]] || mkdir test
cd test
+cleanup()
+{
+ online_cpus
+ rmdir A1/A2/A3 A1/A2 A1 B1 > /dev/null 2>&1
+ cd ..
+ rmdir test > /dev/null 2>&1
+ echo "$SCHED_DEBUG" > /sys/kernel/debug/sched/verbose
+}
+
# Pause in ms
pause()
{
@@ -666,6 +680,7 @@ test_inotify()
fi
}
+trap cleanup 0 2 3 6
run_state_test TEST_MATRIX
test_isolated
test_inotify
--
2.34.3
Verify that when a bond is configured with {up,down}delay and the link
state of slave members flaps, the bond immediately selects a member to
bring up if no members remain up (from bonding.txt section 13.1
paragraph 4).
Suggested-by: Liang Li <liali(a)redhat.com>
Signed-off-by: Jonathan Toppins <jtoppins(a)redhat.com>
---
Notes:
v2:
* reduced the number of icmp echos to two (-c 2) and removed the
unneeded `return 0` from function `test_bond_recovery`.
.../selftests/drivers/net/bonding/Makefile | 4 +-
.../selftests/drivers/net/bonding/lag_lib.sh | 106 ++++++++++++++++++
.../net/bonding/mode-1-recovery-updelay.sh | 45 ++++++++
.../net/bonding/mode-2-recovery-updelay.sh | 45 ++++++++
.../selftests/drivers/net/bonding/settings | 2 +-
5 files changed, 200 insertions(+), 2 deletions(-)
create mode 100755 tools/testing/selftests/drivers/net/bonding/mode-1-recovery-updelay.sh
create mode 100755 tools/testing/selftests/drivers/net/bonding/mode-2-recovery-updelay.sh
diff --git a/tools/testing/selftests/drivers/net/bonding/Makefile b/tools/testing/selftests/drivers/net/bonding/Makefile
index 6b8d2e2f23c2..0f3921908b07 100644
--- a/tools/testing/selftests/drivers/net/bonding/Makefile
+++ b/tools/testing/selftests/drivers/net/bonding/Makefile
@@ -5,7 +5,9 @@ TEST_PROGS := \
bond-arp-interval-causes-panic.sh \
bond-break-lacpdu-tx.sh \
bond-lladdr-target.sh \
- dev_addr_lists.sh
+ dev_addr_lists.sh \
+ mode-1-recovery-updelay.sh \
+ mode-2-recovery-updelay.sh
TEST_FILES := \
lag_lib.sh \
diff --git a/tools/testing/selftests/drivers/net/bonding/lag_lib.sh b/tools/testing/selftests/drivers/net/bonding/lag_lib.sh
index 16c7fb858ac1..2a268b17b61f 100644
--- a/tools/testing/selftests/drivers/net/bonding/lag_lib.sh
+++ b/tools/testing/selftests/drivers/net/bonding/lag_lib.sh
@@ -1,6 +1,8 @@
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0
+NAMESPACES=""
+
# Test that a link aggregation device (bonding, team) removes the hardware
# addresses that it adds on its underlying devices.
test_LAG_cleanup()
@@ -59,3 +61,107 @@ test_LAG_cleanup()
log_test "$driver cleanup mode $mode"
}
+
+# Build a generic 2 node net namespace with 2 connections
+# between the namespaces
+#
+# +-----------+ +-----------+
+# | node1 | | node2 |
+# | | | |
+# | | | |
+# | eth0 +-------+ eth0 |
+# | | | |
+# | eth1 +-------+ eth1 |
+# | | | |
+# +-----------+ +-----------+
+lag_setup2x2()
+{
+ local state=${1:-down}
+ local namespaces="lag_node1 lag_node2"
+
+ # create namespaces
+ for n in ${namespaces}; do
+ ip netns add ${n}
+ done
+
+ # wire up namespaces
+ ip link add name lag1 type veth peer name lag1-end
+ ip link set dev lag1 netns lag_node1 $state name eth0
+ ip link set dev lag1-end netns lag_node2 $state name eth0
+
+ ip link add name lag1 type veth peer name lag1-end
+ ip link set dev lag1 netns lag_node1 $state name eth1
+ ip link set dev lag1-end netns lag_node2 $state name eth1
+
+ NAMESPACES="${namespaces}"
+}
+
+# cleanup all lag related namespaces and remove the bonding module
+lag_cleanup()
+{
+ for n in ${NAMESPACES}; do
+ ip netns delete ${n} >/dev/null 2>&1 || true
+ done
+ modprobe -r bonding
+}
+
+SWITCH="lag_node1"
+CLIENT="lag_node2"
+CLIENTIP="172.20.2.1"
+SWITCHIP="172.20.2.2"
+
+lag_setup_network()
+{
+ lag_setup2x2 "down"
+
+ # create switch
+ ip netns exec ${SWITCH} ip link add br0 up type bridge
+ ip netns exec ${SWITCH} ip link set eth0 master br0 up
+ ip netns exec ${SWITCH} ip link set eth1 master br0 up
+ ip netns exec ${SWITCH} ip addr add ${SWITCHIP}/24 dev br0
+}
+
+lag_reset_network()
+{
+ ip netns exec ${CLIENT} ip link del bond0
+ ip netns exec ${SWITCH} ip link set eth0 up
+ ip netns exec ${SWITCH} ip link set eth1 up
+}
+
+create_bond()
+{
+ # create client
+ ip netns exec ${CLIENT} ip link set eth0 down
+ ip netns exec ${CLIENT} ip link set eth1 down
+
+ ip netns exec ${CLIENT} ip link add bond0 type bond $@
+ ip netns exec ${CLIENT} ip link set eth0 master bond0
+ ip netns exec ${CLIENT} ip link set eth1 master bond0
+ ip netns exec ${CLIENT} ip link set bond0 up
+ ip netns exec ${CLIENT} ip addr add ${CLIENTIP}/24 dev bond0
+}
+
+test_bond_recovery()
+{
+ RET=0
+
+ create_bond $@
+
+ # verify connectivity
+ ip netns exec ${CLIENT} ping ${SWITCHIP} -c 2 >/dev/null 2>&1
+ check_err $? "No connectivity"
+
+ # force the links of the bond down
+ ip netns exec ${SWITCH} ip link set eth0 down
+ sleep 2
+ ip netns exec ${SWITCH} ip link set eth0 up
+ ip netns exec ${SWITCH} ip link set eth1 down
+
+ # re-verify connectivity
+ ip netns exec ${CLIENT} ping ${SWITCHIP} -c 2 >/dev/null 2>&1
+
+ local rc=$?
+ check_err $rc "Bond failed to recover"
+ log_test "$1 ($2) bond recovery"
+ lag_reset_network
+}
diff --git a/tools/testing/selftests/drivers/net/bonding/mode-1-recovery-updelay.sh b/tools/testing/selftests/drivers/net/bonding/mode-1-recovery-updelay.sh
new file mode 100755
index 000000000000..ad4c845a4ac7
--- /dev/null
+++ b/tools/testing/selftests/drivers/net/bonding/mode-1-recovery-updelay.sh
@@ -0,0 +1,45 @@
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+
+# Regression Test:
+# When the bond is configured with down/updelay and the link state of
+# slave members flaps, if there are no remaining members up, the bond
+# should immediately select a member to bring up (from bonding.txt
+# section 13.1 paragraph 4).
+#
+# +-------------+ +-----------+
+# | client | | switch |
+# | | | |
+# | +--------| link1 |-----+ |
+# | | +-------+ | |
+# | | | | | |
+# | | +-------+ | |
+# | | bond | link2 | Br0 | |
+# +-------------+ +-----------+
+# 172.20.2.1 172.20.2.2
+
+
+REQUIRE_MZ=no
+REQUIRE_JQ=no
+NUM_NETIFS=0
+lib_dir=$(dirname "$0")
+source "$lib_dir"/net_forwarding_lib.sh
+source "$lib_dir"/lag_lib.sh
+
+cleanup()
+{
+ lag_cleanup
+}
+
+trap cleanup 0 1 2
+
+lag_setup_network
+test_bond_recovery mode 1 miimon 100 updelay 0
+test_bond_recovery mode 1 miimon 100 updelay 200
+test_bond_recovery mode 1 miimon 100 updelay 500
+test_bond_recovery mode 1 miimon 100 updelay 1000
+test_bond_recovery mode 1 miimon 100 updelay 2000
+test_bond_recovery mode 1 miimon 100 updelay 5000
+test_bond_recovery mode 1 miimon 100 updelay 10000
+
+exit "$EXIT_STATUS"
diff --git a/tools/testing/selftests/drivers/net/bonding/mode-2-recovery-updelay.sh b/tools/testing/selftests/drivers/net/bonding/mode-2-recovery-updelay.sh
new file mode 100755
index 000000000000..2330d37453f9
--- /dev/null
+++ b/tools/testing/selftests/drivers/net/bonding/mode-2-recovery-updelay.sh
@@ -0,0 +1,45 @@
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+
+# Regression Test:
+# When the bond is configured with down/updelay and the link state of
+# slave members flaps, if there are no remaining members up, the bond
+# should immediately select a member to bring up (from bonding.txt
+# section 13.1 paragraph 4).
+#
+# +-------------+ +-----------+
+# | client | | switch |
+# | | | |
+# | +--------| link1 |-----+ |
+# | | +-------+ | |
+# | | | | | |
+# | | +-------+ | |
+# | | bond | link2 | Br0 | |
+# +-------------+ +-----------+
+# 172.20.2.1 172.20.2.2
+
+
+REQUIRE_MZ=no
+REQUIRE_JQ=no
+NUM_NETIFS=0
+lib_dir=$(dirname "$0")
+source "$lib_dir"/net_forwarding_lib.sh
+source "$lib_dir"/lag_lib.sh
+
+cleanup()
+{
+ lag_cleanup
+}
+
+trap cleanup 0 1 2
+
+lag_setup_network
+test_bond_recovery mode 2 miimon 100 updelay 0
+test_bond_recovery mode 2 miimon 100 updelay 200
+test_bond_recovery mode 2 miimon 100 updelay 500
+test_bond_recovery mode 2 miimon 100 updelay 1000
+test_bond_recovery mode 2 miimon 100 updelay 2000
+test_bond_recovery mode 2 miimon 100 updelay 5000
+test_bond_recovery mode 2 miimon 100 updelay 10000
+
+exit "$EXIT_STATUS"
diff --git a/tools/testing/selftests/drivers/net/bonding/settings b/tools/testing/selftests/drivers/net/bonding/settings
index 867e118223cd..6091b45d226b 100644
--- a/tools/testing/selftests/drivers/net/bonding/settings
+++ b/tools/testing/selftests/drivers/net/bonding/settings
@@ -1 +1 @@
-timeout=60
+timeout=120
--
2.31.1