Don't print that 88 sub-tests are going to be executed and then skip all of
them. This isn't TAP compliant. Instead, check the prerequisites first,
before printing the total number of tests.
Old, non-TAP-compliant output:
TAP version 13
1..88
ok 2 # SKIP all tests require euid == 0
# Planned tests != run tests (88 != 1)
# Totals: pass:0 fail:0 xfail:0 xpass:0 skip:1 error:0
New and correct output:
TAP version 13
1..0 # SKIP all tests require euid == 0
Signed-off-by: Muhammad Usama Anjum <usama.anjum(a)collabora.com>
---
Changes since v1:
- Remove simplifying if condition lines
- Update the patch message
---
tools/testing/selftests/openat2/resolve_test.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/openat2/resolve_test.c b/tools/testing/selftests/openat2/resolve_test.c
index bbafad440893c..85a4c64ee950d 100644
--- a/tools/testing/selftests/openat2/resolve_test.c
+++ b/tools/testing/selftests/openat2/resolve_test.c
@@ -508,12 +508,13 @@ void test_openat2_opath_tests(void)
int main(int argc, char **argv)
{
ksft_print_header();
- ksft_set_plan(NUM_TESTS);
/* NOTE: We should be checking for CAP_SYS_ADMIN here... */
if (geteuid() != 0)
ksft_exit_skip("all tests require euid == 0\n");
+ ksft_set_plan(NUM_TESTS);
+
test_openat2_opath_tests();
if (ksft_get_fail_cnt() + ksft_get_error_cnt() > 0)
--
2.39.2
A small bugfix for "run-user XARCH=ppc64le" and run-user support for
run-tests.sh.
Signed-off-by: Thomas Weißschuh <linux(a)weissschuh.net>
---
Thomas Weißschuh (2):
selftests/nolibc: introduce QEMU_ARCH_USER
selftests/nolibc: run-tests.sh: enable testing via qemu-user
tools/testing/selftests/nolibc/Makefile | 5 ++++-
tools/testing/selftests/nolibc/run-tests.sh | 22 +++++++++++++++++++---
2 files changed, 23 insertions(+), 4 deletions(-)
---
base-commit: ba335752620565c25c3028fff9496bb8ef373602
change-id: 20770915-nolibc-run-user-845375a3ec4f
Best regards,
--
Thomas Weißschuh <linux(a)weissschuh.net>
Extend pmu_counters_test to AMD CPUs.
As the AMD PMU is quite different from Intel's, with different events and
feature sets, this series introduces a new code path to test it,
specifically focusing on the core counters, including the
PerfCtrExtCore and PerfMonV2 features. Northbridge counters and cache
counters exist, but are not as important and can be deferred to a
later series.
The first patch is a bug fix that could be submitted separately.
The series has been tested on both Intel and AMD machines, but I have
not found an AMD machine old enough to lack PerfCtrExtCore. I have
taken care to ensure that no part of the code depends on its presence.
I am aware of similar work in this direction done by Jinrong Liang
[1]. He told me he is not currently working on it, so I am not
intruding by making my own submission.
[1] https://lore.kernel.org/kvm/20231121115457.76269-1-cloudliang@tencent.com/
Colton Lewis (6):
KVM: x86: selftests: Fix typos in macro variable use
KVM: x86: selftests: Define AMD PMU CPUID leaves
KVM: x86: selftests: Set up AMD VM in pmu_counters_test
KVM: x86: selftests: Test read/write core counters
KVM: x86: selftests: Test core events
KVM: x86: selftests: Test PerfMonV2
.../selftests/kvm/include/x86_64/processor.h | 7 +
.../selftests/kvm/x86_64/pmu_counters_test.c | 267 ++++++++++++++++--
2 files changed, 249 insertions(+), 25 deletions(-)
--
2.46.0.rc2.264.g509ed76dc8-goog
Introduce a new test to identify regressions causing devices to go
missing on the system.
For each bus and class on the system, the test checks the number of
devices present against a reference file, which must have been
generated by the program at an earlier point on a known-good kernel.
Any missing devices are reported.
Signed-off-by: Nícolas F. R. A. Prado <nfraprado(a)collabora.com>
---
Hi,
Key points about this test:
* Goal: Identify regressions causing devices to go missing on the system
* Focus:
* Ease of maintenance: the reference file is generated programmatically
* Minimum of false positives: the script makes as few assumptions as possible
about the stability of device identifiers to ensure renames/refactors don't
trigger false positives
* How it works: For each bus and class on the system, the test checks the number
of devices present against a reference file, which must have been generated by
the program at an earlier point on a known-good kernel. Any missing devices are
reported (a short usage sketch follows below).
* Comparison to other tests: It might be possible(*) to replace the discoverable
devices test [1] with this. The benefits of this test are that it's easier
to set up and maintain and has wider coverage of devices.
Additional detail:
* Having more devices on the running system than the reference does not cause a
failure, but a warning is printed in that case to suggest that the reference
be updated.
* Missing devices are detected per bus/class based on the number of devices.
When the test fails, the known metadata for each of the expected and detected
devices is printed and a simple similarity comparison is done to suggest
the devices that are the most likely to be missing.
* The proposed place to store the generated reference files is the
'platform-test-parameters' repository in KernelCI [2].
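As a usage sketch (the directory and board names below are examples only; the
test looks for '<devicetree compatible>.yaml', or '<DMI vendor>,<DMI
product>.yaml', inside the reference directory):
  # On a known-good kernel, generate the reference (printed to stdout):
  ./exist.py --generate-reference > refs/mediatek,mt8195-cherry-tomato-r2.yaml
  # On the kernel under test, run the test against that directory:
  ./exist.py --reference-dir refs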
Example output: This is an example of a failing test case when one of the two
devices in the nvmem bus went missing:
# Missing devices for subsystem 'nvmem': 1 (Expected 2, found 1)
# =================
# Devices expected:
#
# uevent:
# OF_NAME=efuse
# OF_FULLNAME=/soc/efuse@11c10000
# OF_COMPATIBLE_0=mediatek,mt8195-efuse
# OF_COMPATIBLE_1=mediatek,efuse
# OF_COMPATIBLE_N=2
#
# uevent:
# OF_NAME=flash
# OF_FULLNAME=/soc/spi@1132c000/flash@0
# OF_COMPATIBLE_0=jedec,spi-nor
# OF_COMPATIBLE_N=1
#
# -----------------
# Devices found:
#
# uevent:
# OF_NAME=efuse
# OF_FULLNAME=/soc/efuse@11c10000
# OF_COMPATIBLE_0=mediatek,mt8195-efuse
# OF_COMPATIBLE_1=mediatek,efuse
# OF_COMPATIBLE_N=2
#
# -----------------
# Devices missing (best guess):
#
# uevent:
# OF_NAME=flash
# OF_FULLNAME=/soc/spi@1132c000/flash@0
# OF_COMPATIBLE_0=jedec,spi-nor
# OF_COMPATIBLE_N=1
#
# =================
not ok 19 bus.nvmem
Example of how the data for these devices is encoded in the reference file:
bus:
...
nvmem:
count: 2
devices:
- info:
uevent: 'OF_NAME=efuse
OF_FULLNAME=/soc/efuse@11c10000
OF_COMPATIBLE_0=mediatek,mt8195-efuse
OF_COMPATIBLE_1=mediatek,efuse
OF_COMPATIBLE_N=2
'
- info:
uevent: 'OF_NAME=flash
OF_FULLNAME=/soc/spi@1132c000/flash@0
OF_COMPATIBLE_0=jedec,spi-nor
OF_COMPATIBLE_N=1
'
(Full reference file: http://0x0.st/Xp60.yaml)
Caveat: Relying only on the count of devices in a subsystem makes the test
susceptible to false negatives, e.g. if a device goes missing and another in the
same subsystem is added, the count stays the same, so that regression won't be
reported. To avoid this we could also include properties that must match
individual devices, but we would have to be very careful (which is why I haven't
done it), since matching against properties that aren't guaranteed to be stable
would introduce false positives (i.e. detecting false regressions) due to
eventual renames.
Some things to improve in the near future / gather feedback on:
* (*): Currently this test only checks for the existence of devices. We could
extend it to also encode into the reference which devices are bound to drivers
to be able to completely replace the discoverable devices probe kselftest [1].
* Expanding identifying properties: Currently the properties that are stored
(when present) in the reference for each device to be used for identification
in the result output are uevent, device/uevent, firmware_node/uevent and name.
Suggestions of other properties to add are welcome.
* Adding more filtering to reduce noise:
* Ignoring buses/classes: Currently the devlink class is ignored by the test
since it seems like a kernel internal detail that userspace doesn't actually
care about. We should add others that are similar.
* Ignoring non-devices: There can be entries in /sys/class/ that aren't
devices. For now we're filtering down to only symlinks, but there might be a
better way.
* As mentioned in the caveat section above we may want to add actual matching
of devices based on properties to avoid false-negatives if we identify
suitable properties.
* It would be nice to have an option in the program to compare a newer reference
to an older one to make it easier for the user to see the differences and
decide if the new reference is ok.
* Since the reference file is not supposed to be manually edited, JSON might be
a better choice than YAML, since it is included in the Python standard library.
Let me know your thoughts.
Thanks,
Nícolas
[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/too…
[2] https://github.com/kernelci/platform-test-parameters
---
tools/testing/selftests/Makefile | 1 +
tools/testing/selftests/devices/exist/Makefile | 3 +
tools/testing/selftests/devices/exist/exist.py | 268 +++++++++++++++++++++++++
3 files changed, 272 insertions(+)
diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile
index bc8fe9e8f7f2..9c49b5ec5bef 100644
--- a/tools/testing/selftests/Makefile
+++ b/tools/testing/selftests/Makefile
@@ -14,6 +14,7 @@ TARGETS += cpufreq
TARGETS += cpu-hotplug
TARGETS += damon
TARGETS += devices/error_logs
+TARGETS += devices/exist
TARGETS += devices/probe
TARGETS += dmabuf-heaps
TARGETS += drivers/dma-buf
diff --git a/tools/testing/selftests/devices/exist/Makefile b/tools/testing/selftests/devices/exist/Makefile
new file mode 100644
index 000000000000..3075cac32092
--- /dev/null
+++ b/tools/testing/selftests/devices/exist/Makefile
@@ -0,0 +1,3 @@
+TEST_PROGS := exist.py
+
+include ../../lib.mk
diff --git a/tools/testing/selftests/devices/exist/exist.py b/tools/testing/selftests/devices/exist/exist.py
new file mode 100755
index 000000000000..8241b2fabc8e
--- /dev/null
+++ b/tools/testing/selftests/devices/exist/exist.py
@@ -0,0 +1,268 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (c) 2024 Collabora Ltd
+
+# * Goal: Identify regressions causing devices to go missing on the system
+# * Focus:
+# * Ease of maintenance: the reference file is generated programmatically
+# * Minimum of false-positives: the script makes as few assumptions as
+# possible about the stability of device identifiers to ensure
+# renames/refactors don't trigger false-positives
+# * How it works: For each bus and class on the system the test checks the
+# number of devices present against a reference file, which needs to have been
+# generated by the program at a previous point on a known-good kernel, and if
+# there are missing devices they are reported.
+
+import os
+import sys
+import argparse
+
+import yaml
+
+# Allow ksft module to be imported from different directory
+this_dir = os.path.dirname(os.path.realpath(__file__))
+sys.path.append(os.path.join(this_dir, "../../kselftest/"))
+
+import ksft
+
+
+def generate_devs_obj():
+ obj = {}
+
+ device_sources = [
+ {
+ "base_dir": "/sys/class",
+ "add_path": "",
+ "key_name": "class",
+ "ignored": ["devlink"],
+ },
+ {
+ "base_dir": "/sys/bus",
+ "add_path": "devices",
+ "key_name": "bus",
+ "ignored": [],
+ },
+ ]
+
+ properties = sorted(["uevent", "device/uevent", "firmware_node/uevent", "name"])
+
+ for source in device_sources:
+ source_subsystems = {}
+ for subsystem in sorted(os.listdir(source["base_dir"])):
+ if subsystem in source["ignored"]:
+ continue
+
+ devs_path = os.path.join(source["base_dir"], subsystem, source["add_path"])
+ dev_dirs = [dev for dev in os.scandir(devs_path) if dev.is_symlink()]
+ devs_data = []
+ for dev_dir in dev_dirs:
+ dev_path = os.path.join(devs_path, dev_dir)
+ dev_data = {"info": {}}
+ for prop in properties:
+ if os.path.isfile(os.path.join(dev_path, prop)):
+ with open(os.path.join(dev_path, prop)) as f:
+ dev_data["info"][prop] = f.read()
+ devs_data.append(dev_data)
+ if len(dev_dirs):
+ source_subsystems[subsystem] = {
+ "count": len(dev_dirs),
+ "devices": devs_data,
+ }
+ obj[source["key_name"]] = source_subsystems
+
+ return obj
+
+
+def commented(s):
+ return s.replace("\n", "\n# ")
+
+
+def indented(s, n):
+ return " " * n + s.replace("\n", "\n" + " " * n)
+
+
+def stripped(s):
+ return s.strip("\n")
+
+
+def devices_difference(dev1, dev2):
+ difference = 0
+
+ for prop in dev1["info"].keys():
+ for l1, l2 in zip(
+ dev1["info"].get(prop, "").split("\n"),
+ dev2["info"].get(prop, "").split("\n"),
+ ):
+ if l1 != l2:
+ difference += 1
+ return difference
+
+
+def guess_missing_devices(cur_devs_subsystem, ref_devs_subsystem):
+ # Detect what devices on the current system are the most similar to devices
+ # on the reference one by one until the leftovers are the most dissimilar
+ # devices and therefore most likely the missing ones.
+ found_count = cur_devs_subsystem["count"]
+ expected_count = ref_devs_subsystem["count"]
+ missing_count = found_count - expected_count
+
+ diffs = []
+ for cur_d in cur_devs_subsystem["devices"]:
+ for ref_d in ref_devs_subsystem["devices"]:
+ diffs.append((devices_difference(cur_d, ref_d), cur_d, ref_d))
+
+ diffs.sort(key=lambda x: x[0])
+
+ assigned_ref_devs = []
+ assigned_cur_devs = []
+ for diff in diffs:
+ if len(assigned_ref_devs) >= expected_count - missing_count:
+ break
+ if diff[1] in assigned_cur_devs or diff[2] in assigned_ref_devs:
+ continue
+ assigned_cur_devs.append(diff[1])
+ assigned_ref_devs.append(diff[2])
+
+ missing_devices = []
+ for d in ref_devs_subsystem["devices"]:
+ if d not in assigned_ref_devs:
+ missing_devices.append(d)
+
+ return missing_devices
+
+
+def dump_devices_info(cur_devs_subsystem, ref_devs_subsystem):
+ def dump_device_info(dev):
+ for name, val in dev["info"].items():
+ ksft.print_msg(indented(name + ":", 2))
+ val = stripped(val)
+ if val:
+ ksft.print_msg(commented(indented(val, 4)))
+ ksft.print_msg("")
+
+ ksft.print_msg("=================")
+ ksft.print_msg("Devices expected:")
+ ksft.print_msg("")
+ for d in ref_devs_subsystem["devices"]:
+ dump_device_info(d)
+ ksft.print_msg("-----------------")
+ ksft.print_msg("Devices found:")
+ ksft.print_msg("")
+ for d in cur_devs_subsystem["devices"]:
+ dump_device_info(d)
+ ksft.print_msg("-----------------")
+ ksft.print_msg("Devices missing (best guess):")
+ ksft.print_msg("")
+ missing_devices = guess_missing_devices(cur_devs_subsystem, ref_devs_subsystem)
+ for d in missing_devices:
+ dump_device_info(d)
+ ksft.print_msg("=================")
+
+
+def run_test(ref_filename):
+ ksft.print_msg(f"Using reference file: {ref_filename}")
+
+ with open(ref_filename) as f:
+ ref_devs_obj = yaml.safe_load(f)
+
+ num_tests = 0
+ for dev_source in ref_devs_obj.values():
+ num_tests += len(dev_source)
+ ksft.set_plan(num_tests)
+
+ cur_devs_obj = generate_devs_obj()
+
+ reference_outdated = False
+
+ for source, ref_devs_source_obj in ref_devs_obj.items():
+ for subsystem, ref_devs_subsystem_obj in ref_devs_source_obj.items():
+ test_name = f"{source}.{subsystem}"
+ if not (
+ cur_devs_obj.get(source) and cur_devs_obj.get(source).get(subsystem)
+ ):
+ ksft.print_msg(f"Device subsystem '{subsystem}' missing")
+ ksft.test_result_fail(test_name)
+ continue
+ cur_devs_subsystem_obj = cur_devs_obj[source][subsystem]
+
+ found_count = cur_devs_subsystem_obj["count"]
+ expected_count = ref_devs_subsystem_obj["count"]
+ if found_count < expected_count:
+ ksft.print_msg(
+ f"Missing devices for subsystem '{subsystem}': {expected_count - found_count} (Expected {expected_count}, found {found_count})"
+ )
+ dump_devices_info(cur_devs_subsystem_obj, ref_devs_subsystem_obj)
+ ksft.test_result_fail(test_name)
+ else:
+ ksft.test_result_pass(test_name)
+ if found_count > expected_count:
+ reference_outdated = True
+
+ if len(cur_devs_obj[source]) > len(ref_devs_source_obj):
+ reference_outdated = True
+
+ if reference_outdated:
+ ksft.print_msg(
+ "Warning: The current system contains more devices and/or subsystems than the reference. Updating the reference is recommended."
+ )
+
+
+def get_possible_ref_filenames():
+ filenames = []
+
+ dt_board_compatible_file = "/proc/device-tree/compatible"
+ if os.path.exists(dt_board_compatible_file):
+ with open(dt_board_compatible_file) as f:
+ for line in f:
+ compatibles = [compat for compat in line.split("\0") if compat]
+ filenames.extend(compatibles)
+ else:
+ dmi_id_dir = "/sys/devices/virtual/dmi/id"
+ vendor_dmi_file = os.path.join(dmi_id_dir, "sys_vendor")
+ product_dmi_file = os.path.join(dmi_id_dir, "product_name")
+
+ with open(vendor_dmi_file) as f:
+ vendor = f.read().replace("\n", "")
+ with open(product_dmi_file) as f:
+ product = f.read().replace("\n", "")
+
+ filenames = [vendor + "," + product]
+
+ return filenames
+
+
+def get_ref_filename(ref_dir):
+ chosen_ref_filename = ""
+ full_ref_paths = [os.path.join(ref_dir, f + ".yaml") for f in get_possible_ref_filenames()]
+ for path in full_ref_paths:
+ if os.path.exists(path):
+ chosen_ref_filename = path
+ break
+
+ if not chosen_ref_filename:
+ tried_paths = ",".join(["'" + p + "'" for p in full_ref_paths])
+ ksft.print_msg(f"No matching reference file found (tried {tried_paths})")
+ ksft.exit_fail()
+
+ return chosen_ref_filename
+
+
+parser = argparse.ArgumentParser()
+parser.add_argument(
+ "--reference-dir", default=".", help="Directory containing the reference files"
+)
+parser.add_argument("--generate-reference", action="store_true", help="Generate a reference file with the devices on the running system")
+args = parser.parse_args()
+
+if args.generate_reference:
+ print(f"# Kernel version: {os.uname().release}")
+ print(yaml.dump(generate_devs_obj()))
+ sys.exit(0)
+
+ksft.print_header()
+
+ref_filename = get_ref_filename(args.reference_dir)
+
+run_test(ref_filename)
+
+ksft.finished()
---
base-commit: 73399b58e5e5a1b28a04baf42e321cfcfc663c2f
change-id: 20240724-kselftest-dev-exist-bb1bcf884654
Best regards,
--
Nícolas F. R. A. Prado <nfraprado(a)collabora.com>
Hello Davide,
In the following commit:
commit ca22da2fbd693b54dc8e3b7b54ccc9f7e9ba3640
Author: Davide Caratti <dcaratti(a)redhat.com>
Date: Fri Jan 20 18:01:40 2023 +0100
act_mirred: use the backlog for nested calls to mirred ingress
you added the mirred_egress_to_ingress_tcp_test kselftest which, I noticed, hangs with "Ncat: TIMEOUT." if the openvswitch module is loaded.
Is this the intended behaviour, or was this configuration simply not considered?
Regards,
Lenar Khannanov
Add a selftest that creates two virtual interfaces, assigns one to a
new namespace, and assigns IP addresses to both.
It listens on the destination interface using netcat and configures a
dynamic target on netconsole, pointing to the destination IP address.
The test then checks whether the message was received properly on the
destination interface.
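For context, configuring the dynamic target amounts to a handful of configfs
writes, roughly what create_dynamic_target() in the script does (the addresses
are the script's defaults and the target name is arbitrary):
  mkdir /sys/kernel/config/netconsole/netcons_test
  cd /sys/kernel/config/netconsole/netcons_test
  echo 192.168.1.2 > remote_ip   # destination (listening) address
  echo 192.168.1.1 > local_ip    # source address
  echo "${DSTMAC}" > remote_mac  # MAC address of the destination interface
  echo veth0 > dev_name          # interface to send from
  echo 1 > enabled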
Signed-off-by: Breno Leitao <leitao(a)debian.org>
---
MAINTAINERS | 1 +
.../net/netconsole/basic_integration_test.sh | 153 ++++++++++++++++++
2 files changed, 154 insertions(+)
create mode 100755 tools/testing/selftests/net/netconsole/basic_integration_test.sh
diff --git a/MAINTAINERS b/MAINTAINERS
index c0a3d9e93689..59207365c9f5 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -15768,6 +15768,7 @@ M: Breno Leitao <leitao(a)debian.org>
S: Maintained
F: Documentation/networking/netconsole.rst
F: drivers/net/netconsole.c
+F: tools/testing/selftests/net/netconsole/
NETDEVSIM
M: Jakub Kicinski <kuba(a)kernel.org>
diff --git a/tools/testing/selftests/net/netconsole/basic_integration_test.sh b/tools/testing/selftests/net/netconsole/basic_integration_test.sh
new file mode 100755
index 000000000000..fbabbc633451
--- /dev/null
+++ b/tools/testing/selftests/net/netconsole/basic_integration_test.sh
@@ -0,0 +1,153 @@
+#!/usr/bin/env bash
+# SPDX-License-Identifier: GPL-2.0
+
+# This test creates two virtual interfaces, assigns one of them (the "destination
+# interface") to a new namespace, and assigns IP addresses to both interfaces.
+#
+# It listens on the destination interface using netcat (nc) and configures a
+# dynamic target on netconsole, pointing to the destination IP address.
+#
+# Finally, it checks whether the message was received properly on the
+# destination interface. Note that this test may pollute the kernel log buffer
+# (dmesg) and relies on dynamic configuration and namespaces being configured.
+#
+# Author: Breno Leitao <leitao(a)debian.org>
+
+SCRIPTDIR=$(dirname "$(readlink -e "${BASH_SOURCE[0]}")")
+
+# Simple script to test dynamic targets in netconsole
+SRCIF="veth0"
+SRCIP=192.168.1.1
+
+DSTIF="veth1"
+DSTIP=192.168.1.2
+
+PORT="6666"
+MSG="netconsole selftest"
+TARGET=$(mktemp -u netcons_XXXXX)
+NETCONS_CONFIGFS="/sys/kernel/config/netconsole"
+FULLPATH="${NETCONS_CONFIGFS}"/"${TARGET}"
+# This will have some tmp values appended to it in set_network()
+NAMESPACE="netconsns"
+
+# Used to create and delete namespaces
+source "${SCRIPTDIR}"/../lib.sh
+
+function set_network() {
+ # this is coming from lib.sh
+ setup_ns "${NAMESPACE}"
+ NAMESPACE=${NS_LIST[0]}
+ ip link add "${SRCIF}" type veth peer name "${DSTIF}"
+
+ # "${DSTIF}"
+ ip link set "${DSTIF}" netns "${NAMESPACE}"
+ ip netns exec "${NAMESPACE}" ip addr add "${DSTIP}"/24 dev "${DSTIF}"
+ ip netns exec "${NAMESPACE}" ip link set "${DSTIF}" up
+
+ # later, configure "${SRCIF}"
+ ip addr add "${SRCIP}"/24 dev "${SRCIF}"
+ ip link set "${SRCIF}" up
+}
+
+function create_dynamic_target() {
+ DSTMAC=$(ip netns exec "${NAMESPACE}" ip link show "${DSTIF}" | awk '/ether/ {print $2}')
+
+ # Create a dynamic target
+ mkdir ${FULLPATH}
+
+ echo ${DSTIP} > ${FULLPATH}/remote_ip
+ echo ${SRCIP} > ${FULLPATH}/local_ip
+ echo ${DSTMAC} > ${FULLPATH}/remote_mac
+ echo "${SRCIF}" > ${FULLPATH}/dev_name
+
+ echo 1 > ${FULLPATH}/enabled
+}
+
+function cleanup() {
+ echo 0 > "${FULLPATH}"/enabled
+ rmdir "${FULLPATH}"
+ # This will delete DSTIF also
+ ip link del "${SRCIF}"
+ # this is coming from lib.sh
+ cleanup_all_ns
+}
+
+function listen_port() {
+ OUTPUT=${1}
+ echo "Saving content in ${OUTPUT}"
+ timeout 2 ip netns exec "${NAMESPACE}" nc -u -l "${PORT}" | sed '/^$/q' > ${OUTPUT}
+}
+
+function validate_result() {
+ TMPFILENAME=/tmp/"${TARGET}"
+
+ # sleep until the file is created
+ sleep 1
+ # Check if the file exists
+ if [ ! -f "$TMPFILENAME" ]; then
+ echo "FAIL: File was not generate." >&2
+ return ${ksft_fail}
+ fi
+
+ if ! grep -q "${MSG}" "${TMPFILENAME}"; then
+ echo "FAIL: ${MSG} not found in ${TMPFILENAME}" >&2
+ cat ${TMPFILENAME} >&2
+ return ${ksft_fail}
+ fi
+
+ rm ${TMPFILENAME}
+ return ${ksft_pass}
+}
+
+function check_for_dependencies() {
+ if [ "$(id -u)" -ne 0 ]; then
+ echo "This script must be run as root" >&2
+ exit 1
+ fi
+
+ if ! which nc > /dev/null ; then
+ echo "SKIP: nc(1) is not available" >&2
+ exit ${ksft_skip}
+ fi
+
+ if ! which ip > /dev/null ; then
+ echo "SKIP: ip(1) is not available" >&2
+ exit ${ksft_skip}
+ fi
+
+ if [ ! -d "${NETCONS_CONFIGFS}" ]; then
+ echo "SKIP: directory ${NETCONS_CONFIGFS} does not exist. Check if NETCONSOLE_DYNAMIC is enabled" >&2
+ exit ${ksft_skip}
+ fi
+
+ if ip link show veth0 2> /dev/null; then
+ echo "SKIP: interface veth0 exists in the system. Not overwriting it."
+ exit ${ksft_skip}
+ fi
+}
+
+# ========== #
+# Start here #
+# ========== #
+
+# Check for basic system dependency and exit if not found
+check_for_dependencies
+# Create one namespace and two interfaces
+set_network
+# Create a dynamic target for netconsole
+create_dynamic_target
+# Listen on the netconsole port inside the namespace and destination interface
+listen_port /tmp/"${TARGET}" &
+# Wait for nc to start and listen to the port.
+sleep 1
+
+# Send the message
+echo "${MSG}: ${TARGET}" > /dev/kmsg
+
+# Make sure the message was received in the dst part
+validate_result
+ret=$?
+# Remove the namespace, interfaces and netconsole target
+cleanup
+
+exit ${ret}
--
2.43.0