Hello,
We ran automated tests on a recent commit from this kernel tree:
Kernel repo: git://git.kernel.org/pub/scm/linux/kernel/git/sashal/linux-stable.git
Commit: 4b17a56708d9 - kcov: remote coverage support
The results of these automated tests are provided below.
Overall result: FAILED (see details below)
Merge: OK
Compile: OK
Tests: FAILED
All kernel binaries, config files, and logs are available for download here:
https://artifacts.cki-project.org/pipelines/296781
One or more kernel tests failed:
ppc64le:
   ❌ LTP lite
   ❌ xfstests: ext4
We hope that these logs can help you find the problem quickly. For full details on our testing procedures, please scroll to the bottom of this message.
Please reply to this email if you have any questions about the tests that we ran or if you have any suggestions on how to make future tests more effective.
            ,-.   ,-.
           ( C ) ( K )  Continuous
            `-',-.`-'   Kernel
             ( I )      Integration
              `-'
______________________________________________________________________________
Compile testing
---------------
We compiled the kernel for 3 architectures:
aarch64: make options: -j30 INSTALL_MOD_STRIP=1 targz-pkg
ppc64le: make options: -j30 INSTALL_MOD_STRIP=1 targz-pkg
x86_64: make options: -j30 INSTALL_MOD_STRIP=1 targz-pkg
Hardware testing
----------------

We booted each kernel and ran the following tests:
aarch64:
   Host 1:
      ✅ Boot test
      ✅ Podman system integration test (as root)
      ✅ Podman system integration test (as user)
      ✅ LTP lite
      ✅ Loopdev Sanity
      ✅ jvm test suite
      ✅ Memory function: memfd_create
      ✅ Memory function: kaslr
      ✅ AMTU (Abstract Machine Test Utility)
      ✅ LTP: openposix test suite
      ✅ Networking bridge: sanity
      ✅ Ethernet drivers sanity
      ✅ Networking MACsec: sanity
      ✅ Networking socket: fuzz
      ✅ Networking sctp-auth: sockopts test
      ✅ Networking: igmp conformance test
      ✅ Networking route: pmtu
      ✅ Networking route_func: local
      ✅ Networking route_func: forward
      ✅ Networking TCP: keepalive test
      ✅ Networking UDP: socket
      ✅ Networking tunnel: geneve basic test
      ✅ Networking tunnel: gre basic
      ✅ L2TP basic test
      ✅ Networking tunnel: vxlan basic
      ✅ Networking ipsec: basic netns transport
      ✅ Networking ipsec: basic netns tunnel
      ✅ audit: audit testsuite test
      ✅ httpd: mod_ssl smoke sanity
      ✅ iotop: sanity
      ✅ tuned: tune-processes-through-perf
      ✅ ALSA PCM loopback test
      ✅ ALSA Control (mixer) Userspace Element test
      ✅ Usex - version 1.9-29
      ✅ storage: SCSI VPD
      ✅ stress: stress-ng
      ✅ trace: ftrace/tracer
      🚧 ✅ CIFS Connectathon
      🚧 ✅ POSIX pjd-fstest suites
      🚧 ✅ Networking vnic: ipvlan/basic
      🚧 ✅ storage: dm/common
   Host 2:
      ✅ Boot test
      ✅ xfstests: ext4
      ✅ xfstests: xfs
      ✅ lvm thinp sanity
      ✅ storage: software RAID testing
      🚧 ✅ selinux-policy: serge-testsuite
      🚧 ✅ Storage blktests
ppc64le:
   Host 1:
      ✅ Boot test
      ✅ Podman system integration test (as root)
      ✅ Podman system integration test (as user)
      ❌ LTP lite
      ✅ Loopdev Sanity
      ✅ jvm test suite
      ✅ Memory function: memfd_create
      ✅ Memory function: kaslr
      ✅ AMTU (Abstract Machine Test Utility)
      ✅ LTP: openposix test suite
      ✅ Networking bridge: sanity
      ✅ Ethernet drivers sanity
      ✅ Networking MACsec: sanity
      ✅ Networking socket: fuzz
      ✅ Networking sctp-auth: sockopts test
      ✅ Networking route: pmtu
      ✅ Networking route_func: local
      ✅ Networking route_func: forward
      ✅ Networking TCP: keepalive test
      ✅ Networking UDP: socket
      ✅ Networking tunnel: geneve basic test
      ✅ Networking tunnel: gre basic
      ✅ L2TP basic test
      ✅ Networking tunnel: vxlan basic
      ✅ Networking ipsec: basic netns tunnel
      ✅ audit: audit testsuite test
      ✅ httpd: mod_ssl smoke sanity
      ✅ iotop: sanity
      ✅ tuned: tune-processes-through-perf
      ✅ ALSA PCM loopback test
      ✅ ALSA Control (mixer) Userspace Element test
      ✅ Usex - version 1.9-29
      ✅ trace: ftrace/tracer
      🚧 ✅ CIFS Connectathon
      🚧 ✅ POSIX pjd-fstest suites
      🚧 ✅ Networking vnic: ipvlan/basic
      🚧 ✅ storage: dm/common
   Host 2:
      ✅ Boot test
      ❌ xfstests: ext4
      ✅ xfstests: xfs
      ✅ lvm thinp sanity
      ✅ storage: software RAID testing
      🚧 ✅ selinux-policy: serge-testsuite
      🚧 ✅ Storage blktests
x86_64:
   Host 1:
      ✅ Boot test
      ✅ Podman system integration test (as root)
      ✅ Podman system integration test (as user)
      ✅ LTP lite
      ✅ Loopdev Sanity
      ✅ jvm test suite
      ✅ Memory function: memfd_create
      ✅ Memory function: kaslr
      ✅ AMTU (Abstract Machine Test Utility)
      ✅ LTP: openposix test suite
      ✅ Networking bridge: sanity
      ✅ Ethernet drivers sanity
      ✅ Networking MACsec: sanity
      ✅ Networking socket: fuzz
      ✅ Networking sctp-auth: sockopts test
      ✅ Networking: igmp conformance test
      ✅ Networking route: pmtu
      ✅ Networking route_func: local
      ✅ Networking route_func: forward
      ✅ Networking TCP: keepalive test
      ✅ Networking UDP: socket
      ✅ Networking tunnel: geneve basic test
      ✅ Networking tunnel: gre basic
      ✅ L2TP basic test
      ✅ Networking tunnel: vxlan basic
      ✅ Networking ipsec: basic netns transport
      ✅ Networking ipsec: basic netns tunnel
      ✅ audit: audit testsuite test
      ✅ httpd: mod_ssl smoke sanity
      ✅ iotop: sanity
      ✅ tuned: tune-processes-through-perf
      ✅ pciutils: sanity smoke test
      ✅ ALSA PCM loopback test
      ✅ ALSA Control (mixer) Userspace Element test
      ✅ Usex - version 1.9-29
      ✅ storage: SCSI VPD
      ✅ stress: stress-ng
      ✅ trace: ftrace/tracer
      🚧 ✅ CIFS Connectathon
      🚧 ✅ POSIX pjd-fstest suites
      🚧 ✅ Networking vnic: ipvlan/basic
      🚧 ✅ storage: dm/common
   Host 2:
      ✅ Boot test
      ✅ Storage SAN device stress - mpt3sas driver
   Host 3:
      ✅ Boot test
      🚧 ✅ IPMI driver test
      🚧 ✅ IPMItool loop stress test
   Host 4:
      ✅ Boot test
      ✅ xfstests: ext4
      ✅ xfstests: xfs
      ✅ lvm thinp sanity
      ✅ storage: software RAID testing
      🚧 ✅ IOMMU boot test
      🚧 ✅ selinux-policy: serge-testsuite
      🚧 ✅ Storage blktests
   Host 5:
      ✅ Boot test
      ✅ Storage SAN device stress - megaraid_sas
Test sources: https://github.com/CKI-project/tests-beaker
💚 Pull requests are welcome for new tests or improvements to existing tests!
Waived tests
------------

If the test run included waived tests, they are marked with 🚧. Such tests are executed but their results are not taken into account. Tests are waived when their results are not reliable enough, e.g. when they're just introduced or are being fixed.
Testing timeout
---------------

We aim to provide a report within a reasonable timeframe. Tests that haven't finished running are marked with ⏱. Reports for non-upstream kernels have a Beaker recipe linked next to each host.
Hi!
One or more kernel tests failed:
ppc64le: ❌ LTP lite ❌ xfstests: ext4
Both logs show missing files; that may be an infrastructure problem as well.
Also, can we include links to the logfiles here? Bonus points for showing the snippet with the actual failure in the email as well. It takes a fair amount of time to locate them manually in the pipeline repository; it would be much, much easier with links to the right logfile...
On 11/20/19 6:35 AM, Cyril Hrubis wrote:
Hi!
One or more kernel tests failed:
ppc64le: ❌ LTP lite ❌ xfstests: ext4
Both logs show missing files; that may be an infrastructure problem as well.
Also, can we include links to the logfiles here? Bonus points for showing the snippet with the actual failure in the email as well. It takes a fair amount of time to locate them manually in the pipeline repository; it would be much, much easier with links to the right logfile...
Thanks for the feedback, Cyril. We did have links to each failure listed before, but we were told it made the email look cluttered, especially if there are multiple failures.
The test logs are sorted by arch|host|TC. Is there something we can do to make it easier to find the related logs? https://artifacts.cki-project.org/pipelines/296781/logs/
Maybe we can look into adding the linked logs to the bottom of the email with a reference id next to the failures in the summary, so for example:
ppc64le: ❌ LTP lite [1] ❌ xfstests: ext4 [2]
We could also look into merging the LTP run logs into a single file.
-Rachel
Hi!
One or more kernel tests failed:
ppc64le: ❌ LTP lite ❌ xfstests: ext4
Both logs show missing files; that may be an infrastructure problem as well.
Also, can we include links to the logfiles here? Bonus points for showing the snippet with the actual failure in the email as well. It takes a fair amount of time to locate them manually in the pipeline repository; it would be much, much easier with links to the right logfile...
Thanks for the feedback, Cyril. We did have links to each failure listed before, but we were told it made the email look cluttered, especially if there are multiple failures.
So it's exactly as Dmitry described it: you can't please everyone...
The test logs are sorted by arch|host|TC. Is there something we can do to make it easier to find the related logs? https://artifacts.cki-project.org/pipelines/296781/logs/
Maybe we can look into adding the linked logs to the bottom of the email with a reference id next to the failures in the summary, so for example:
ppc64le: ❌ LTP lite [1] ❌ xfstests: ext4 [2]
That would work for me.
We could also look into merging the LTP run logs into a single file.
That would make it too big, I guess. Actually, the only part I'm interested in most of the time is the part of the log with the failing test. I would be quite happy if we had a logs/failures file on the pipelines server that contained only the failures extracted from the different logfiles. The question is whether that's feasible with your framework.
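To make the request concrete, something along these lines would cover my use case. This is only a rough sketch: the failure markers (TFAIL/FAILED/FAIL) and the flat logs/*.log layout are my assumptions, not necessarily how CKI actually stores things.

```python
import glob
import re

# Patterns assumed to mark a failure in the various test logs
# (TFAIL for LTP, FAILED/FAIL elsewhere); real logs may need
# a different set of markers.
FAIL_RE = re.compile(r"TFAIL|FAILED|\bFAIL\b")

def collect_failures(log_dir, out_path, context=3):
    """Scan every *.log under log_dir and write only the failing
    lines, plus a few lines of leading context, into one file."""
    with open(out_path, "w") as out:
        for path in sorted(glob.glob(log_dir + "/*.log")):
            with open(path) as f:
                lines = f.read().splitlines()
            hits = [i for i, line in enumerate(lines) if FAIL_RE.search(line)]
            if not hits:
                continue
            # Header naming the source logfile, similar to tail -v.
            out.write("==> %s <==\n" % path)
            for i in hits:
                start = max(0, i - context)
                out.write("\n".join(lines[start:i + 1]) + "\n")
```

Each failure would then appear under a header naming the logfile it came from, so the single failures file stays small even when the full logs are large.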
-----Original Message-----
From: Cyril Hrubis
Hi!
One or more kernel tests failed:
ppc64le: ❌ LTP lite ❌ xfstests: ext4
Both logs show missing files; that may be an infrastructure problem as well.
Also, can we include links to the logfiles here? Bonus points for showing the snippet with the actual failure in the email as well. It takes a fair amount of time to locate them manually in the pipeline repository; it would be much, much easier with links to the right logfile...
My preference would be to include the failure snippet somewhere in the e-mail as well (as opposed to just a link).
Thanks for the feedback, Cyril. We did have links to each failure listed before, but we were told it made the email look cluttered, especially if there are multiple failures.
So it's exactly as Dmitry described it: you can't please everyone...
The test logs are sorted by arch|host|TC. Is there something we can do to make it easier to find the related logs? https://artifacts.cki-project.org/pipelines/296781/logs/
Maybe we can look into adding the linked logs to the bottom of the email with a reference id next to the failures in the summary, so for example:
ppc64le: ❌ LTP lite [1] ❌ xfstests: ext4 [2]
That would work for me.
Maybe combine the 'footnote' idea with the 'inline' idea, and have the footnote include a link to the full log plus a snippet, taken from the full log, with just the output from the failing testcase?
We could also look into merging the LTP run logs into a single file.
That would make it too big, I guess. Actually, the only part I'm interested in most of the time is the part of the log with the failing test. I would be quite happy if we had a logs/failures file on the pipelines server that contained only the failures extracted from the different logfiles. The question is whether that's feasible with your framework.
Fuego has an LTP log-splitter and link generator. It's Fuego-specific and generates files referred to by links in the result tables that Fuego shows to users.
I don't know how CKI is generating or storing its data, but I can take a look and see if it could be applied to their use case. It's a fairly small Python program.
See here: https://bitbucket.org/fuegotest/fuego-core/src/master/tests/Functional.LTP/p...
It might not be applicable, depending on whether CKI stores their LTP output similarly to how Fuego does, but IMHO it's worth taking a look. If there is sufficient interest, maybe this could be generalized and submitted to upstream LTP. The Fuego log-splitter produces individual files.
Another idea would be to write a program that takes an LTP log, and the name of a failing testcase, and outputs (on stdout) the snippet from the log for that testcase. I think this would be very easy to do, and might be suitable to use in multiple contexts: on the command line, in a report generator, or as a CGI script for a results server. -- Tim
On 11/21/19 5:58 AM, Tim.Bird@sony.com wrote:
-----Original Message-----
From: Cyril Hrubis
Hi!
One or more kernel tests failed:
ppc64le: ❌ LTP lite ❌ xfstests: ext4
Both logs show missing files; that may be an infrastructure problem as well.
Also, can we include links to the logfiles here? Bonus points for showing the snippet with the actual failure in the email as well. It takes a fair amount of time to locate them manually in the pipeline repository; it would be much, much easier with links to the right logfile...
My preference would be to include the failure snippet somewhere in the e-mail as well (as opposed to just a link).
Thanks for the feedback, Cyril. We did have links to each failure listed before, but we were told it made the email look cluttered, especially if there are multiple failures.
So it's exactly as Dmitry described it: you can't please everyone...
The test logs are sorted by arch|host|TC. Is there something we can do to make it easier to find the related logs? https://artifacts.cki-project.org/pipelines/296781/logs/
Maybe we can look into adding the linked logs to the bottom of the email with a reference id next to the failures in the summary, so for example:
ppc64le: ❌ LTP lite [1] ❌ xfstests: ext4 [2]
That would work for me.
Maybe combine the 'footnote' idea with the 'inline' idea, and have the footnote include a link to the full log plus a snippet, taken from the full log, with just the output from the failing testcase?
We could also look into merging the LTP run logs into a single file.
That would make it too big, I guess. Actually, the only part I'm interested in most of the time is the part of the log with the failing test. I would be quite happy if we had a logs/failures file on the pipelines server that contained only the failures extracted from the different logfiles. The question is whether that's feasible with your framework.
Fuego has an LTP log-splitter and link generator. It's Fuego-specific and generates files referred to by links in the result tables that Fuego shows to users.
I don't know how CKI is generating or storing its data, but I can take a look and see if it could be applied to their use case. It's a fairly small Python program.
There is a summary log which captures overall results: https://artifacts.cki-project.org/pipelines/296781/logs/aarch64_host_1_LTP_l...
Then an individual log file for each LTP testsuite, e.g: https://artifacts.cki-project.org/pipelines/296781/logs/aarch64_host_1_LTP_l...
See here: https://bitbucket.org/fuegotest/fuego-core/src/master/tests/Functional.LTP/p...
Thanks!
It might not be applicable, depending on whether CKI stores their LTP output similarly to how Fuego does, but IMHO it's worth taking a look. If there is sufficient interest, maybe this could be generalized and submitted to upstream LTP. The Fuego log-splitter produces individual files.
I think it's a good idea, as long as it can be made generic enough that someone could, for example, modify a config file to indicate the log path and naming convention.
Another idea would be to write a program that takes an LTP log, and the name of a failing testcase, and outputs (on stdout) the snippet from the log for that testcase. I think this would be very easy to do, and might be suitable to use in multiple contexts: on the command line, in a report generator, or as a CGI script for a results server.
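A rough sketch of what such an extractor could look like. Note the <<<test_start>>>/<<<test_end>>> markers and tag= lines here are assumptions based on runltp-style logs, and may not match the logs our runner actually produces:

```python
import re
import sys

def extract_snippet(log_text, testcase):
    """Return only the portion of an LTP log that belongs to one
    testcase, assuming each test's output is bracketed by
    <<<test_start>>> / <<<test_end>>> markers and named by a tag= line."""
    snippet, current, wanted = [], [], False
    for line in log_text.splitlines():
        if "<<<test_start>>>" in line:
            # A new test section begins; start collecting fresh.
            current, wanted = [line], False
            continue
        current.append(line)
        if re.match(r"tag=%s\b" % re.escape(testcase), line.strip()):
            wanted = True
        if "<<<test_end>>>" in line:
            # Section finished; keep it only if it named our testcase.
            if wanted:
                snippet.extend(current)
            current, wanted = [], False
    return "\n".join(snippet)

# Command-line use: extract_snippet.py <testcase> < ltp.log
if __name__ == "__main__" and len(sys.argv) > 1:
    sys.stdout.write(extract_snippet(sys.stdin.read(), sys.argv[1]))
```

Since the core is just a pure function over the log text, the same code could plausibly serve all three contexts mentioned: command line, report generator, or CGI script.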
I logged a few tickets so our team can take a closer look and discuss both failure snippets and linking LTP logs directly. I'm also checking to see if we have anything internally.
Thanks for all the feedback.
-- Tim
linux-stable-mirror@lists.linaro.org