While implementing FEAT_LPA2 in the arm64 KVM port, I found a couple of
issues in the KVM selftest code which I thought were worth posting
independently. The LPA2 patches, for which I will post v2 in the next
few days, depend on these fixes for their testing.
Ryan Roberts (2):
KVM: selftests: Fixup config fragment for access_tracking_perf_test
KVM: selftests: arm64: Fix pte encode/decode for PA bits > 48
tools/testing/selftests/kvm/config | 1 +
.../selftests/kvm/lib/aarch64/processor.c | 32 ++++++++++++++-----
2 files changed, 25 insertions(+), 8 deletions(-)
--
2.25.1
This patchset updates the __reg_combine_64_into_32 function to set the 32-bit
bounds when the lower 32-bit value is not wrapping, and adds test cases for it.
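As a rough sketch of the underlying idea (hypothetical helper names below,
not the actual verifier code): the low 32 bits of an unsigned 64-bit range
cannot wrap when the upper 32 bits of the minimum and maximum bounds are
equal, in which case the 32-bit bounds can be taken directly from the low
halves.

#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical illustration, not the kernel implementation: if the upper
 * 32 bits of umin and umax match, the low 32 bits cannot wrap anywhere in
 * the range, so the subregister bounds follow directly.
 */
static bool lower32_not_wrapping(uint64_t umin, uint64_t umax)
{
	return (umin >> 32) == (umax >> 32);
}

static void derive_u32_bounds(uint64_t umin, uint64_t umax,
			      uint32_t *u32_min, uint32_t *u32_max)
{
	if (lower32_not_wrapping(umin, umax)) {
		*u32_min = (uint32_t)umin;
		*u32_max = (uint32_t)umax;
	} else {
		/* Otherwise the 32-bit bounds stay fully unknown. */
		*u32_min = 0;
		*u32_max = UINT32_MAX;
	}
}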
Xu Kuohai (2):
bpf: update 32-bit bounds when the lower 32-bit value is not wrapping
selftests/bpf: check bounds not in the 32-bit range
kernel/bpf/verifier.c | 27 ++--
tools/testing/selftests/bpf/verifier/bounds.c | 121 ++++++++++++++++++
2 files changed, 132 insertions(+), 16 deletions(-)
--
2.30.2
These patches help make the logs a bit friendlier to work with by
adding human-readable names for cards and controls alongside the numbers
assigned to them, even when things are working well.
Signed-off-by: Mark Brown <broonie(a)kernel.org>
---
Mark Brown (2):
kselftest/alsa - mixer: Always log control names
kselftest/alsa: Log card names during startup
tools/testing/selftests/alsa/mixer-test.c | 13 +++++++++++++
tools/testing/selftests/alsa/pcm-test.c | 10 ++++++++++
2 files changed, 23 insertions(+)
---
base-commit: fe15c26ee26efa11741a7b632e9f23b01aca4cc6
change-id: 20230223-alsa-log-ctl-name-fb07f30d7217
Best regards,
--
Mark Brown <broonie(a)kernel.org>
If a control has an invalid default value then we might fail to set it
when restoring the default value after our write tests, for example due to
correctly implemented range checks in put() operations. Currently this
causes us to report the tests we were running as failed even when the
operation we were trying to test was successful, making it look like there
are problems where none really exist. Stop doing this and only report
issues during the actual test.
We already have validation that the initial readback is in spec and that the
default value can be written back, so failed tests will still be reported for
these controls. We also log an error on the write operation that failed, so
there is a diagnostic warning the user that there is a problem.
Signed-off-by: Mark Brown <broonie(a)kernel.org>
---
tools/testing/selftests/alsa/mixer-test.c | 9 ++-------
1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/tools/testing/selftests/alsa/mixer-test.c b/tools/testing/selftests/alsa/mixer-test.c
index 05f1749ae19d..ac5efa42d488 100644
--- a/tools/testing/selftests/alsa/mixer-test.c
+++ b/tools/testing/selftests/alsa/mixer-test.c
@@ -755,7 +755,6 @@ static bool test_ctl_write_valid_enumerated(struct ctl_data *ctl)
static void test_ctl_write_valid(struct ctl_data *ctl)
{
bool pass;
- int err;
/* If the control is turned off let's be polite */
if (snd_ctl_elem_info_is_inactive(ctl->info)) {
@@ -797,9 +796,7 @@ static void test_ctl_write_valid(struct ctl_data *ctl)
}
/* Restore the default value to minimise disruption */
- err = write_and_verify(ctl, ctl->def_val, NULL);
- if (err < 0)
- pass = false;
+ write_and_verify(ctl, ctl->def_val, NULL);
ksft_test_result(pass, "write_valid.%d.%d\n",
ctl->card->card, ctl->elem);
@@ -1015,9 +1012,7 @@ static void test_ctl_write_invalid(struct ctl_data *ctl)
}
/* Restore the default value to minimise disruption */
- err = write_and_verify(ctl, ctl->def_val, NULL);
- if (err < 0)
- pass = false;
+ write_and_verify(ctl, ctl->def_val, NULL);
ksft_test_result(pass, "write_invalid.%d.%d\n",
ctl->card->card, ctl->elem);
---
base-commit: fe15c26ee26efa11741a7b632e9f23b01aca4cc6
change-id: 20230224-alsa-mixer-test-restore-invalid-2a57b98aeb7f
Best regards,
--
Mark Brown <broonie(a)kernel.org>
Fix a bug in the debugfs logs that causes individual parameterized results
not to appear because the log is reinitialized (cleared) when each parameter
is run.
Ensure these results appear in the debugfs logs and increase the log size to
allow for the parameterized results. Because of the larger log size, append
lines directly to the log rather than through an intermediate variable, which
would otherwise cause stack size warnings.
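The direct append relies on the standard two-pass vsnprintf() pattern: a
first call with a NULL buffer measures the formatted length, then a second
call prints straight into the remaining space of the log. A minimal
standalone sketch of the pattern (illustrative names such as log_append and
LOG_SIZE, not the kernel code itself):

#include <stdarg.h>
#include <stdio.h>
#include <string.h>

#define LOG_SIZE 1500	/* illustrative, mirrors KUNIT_LOG_SIZE */

static void log_append(char *log, const char *fmt, ...)
{
	va_list args;
	int len, log_len, len_left;

	log_len = strlen(log);
	len_left = LOG_SIZE - log_len - 1;
	if (len_left <= 0)
		return;

	/* First pass: measure the formatted line without writing it. */
	va_start(args, fmt);
	len = vsnprintf(NULL, 0, fmt, args) + 1;
	va_end(args);

	/* Second pass: print directly into the log, no stack buffer. */
	va_start(args, fmt);
	vsnprintf(log + log_len, len < len_left ? len : len_left, fmt, args);
	va_end(args);
}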
Here is the debugfs log of ext4_inode_test, which uses parameterized tests,
before the fix:
KTAP version 1
# Subtest: ext4_inode_test
1..1
# Totals: pass:16 fail:0 skip:0 total:16
ok 1 ext4_inode_test
As you can see, this log does not include any of the individual
parameterized results.
After (in combination with the next two fixes, which remove an extra empty
line and ensure valid KTAP format):
KTAP version 1
1..1
KTAP version 1
# Subtest: ext4_inode_test
1..1
KTAP version 1
# Subtest: inode_test_xtimestamp_decoding
ok 1 1901-12-13 Lower bound of 32bit < 0 timestamp, no extra bits
... (the rest of the individual parameterized tests)
ok 16 2446-05-10 Upper bound of 32bit >=0 timestamp. All extra
# inode_test_xtimestamp_decoding: pass:16 fail:0 skip:0 total:16
ok 1 inode_test_xtimestamp_decoding
# Totals: pass:16 fail:0 skip:0 total:16
ok 1 ext4_inode_test
Signed-off-by: Rae Moar <rmoar(a)google.com>
Reviewed-by: David Gow <davidgow(a)google.com>
---
Changes from v2 -> v3:
- Fix an off-by-one bug in the kunit_log_append method.
Changes from v1 -> v2:
- Remove the use of the line variable in kunit_log_append that was causing
stack size warnings.
- Add before and after to the commit message.
include/kunit/test.h | 2 +-
lib/kunit/test.c | 18 ++++++++++++------
2 files changed, 13 insertions(+), 7 deletions(-)
diff --git a/include/kunit/test.h b/include/kunit/test.h
index 08d3559dd703..0668d29f3453 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -34,7 +34,7 @@ DECLARE_STATIC_KEY_FALSE(kunit_running);
struct kunit;
/* Size of log associated with test. */
-#define KUNIT_LOG_SIZE 512
+#define KUNIT_LOG_SIZE 1500
/* Maximum size of parameter description string. */
#define KUNIT_PARAM_DESC_SIZE 128
diff --git a/lib/kunit/test.c b/lib/kunit/test.c
index c9e15bb60058..c4d6304edd61 100644
--- a/lib/kunit/test.c
+++ b/lib/kunit/test.c
@@ -114,22 +114,27 @@ static void kunit_print_test_stats(struct kunit *test,
*/
void kunit_log_append(char *log, const char *fmt, ...)
{
- char line[KUNIT_LOG_SIZE];
va_list args;
- int len_left;
+ int len, log_len, len_left;
if (!log)
return;
- len_left = KUNIT_LOG_SIZE - strlen(log) - 1;
+ log_len = strlen(log);
+ len_left = KUNIT_LOG_SIZE - log_len - 1;
if (len_left <= 0)
return;
+ /* Evaluate length of line to add to log */
va_start(args, fmt);
- vsnprintf(line, sizeof(line), fmt, args);
+ len = vsnprintf(NULL, 0, fmt, args) + 1;
+ va_end(args);
+
+ /* Print formatted line to the log */
+ va_start(args, fmt);
+ vsnprintf(log + log_len, min(len, len_left), fmt, args);
va_end(args);
- strncat(log, line, len_left);
}
EXPORT_SYMBOL_GPL(kunit_log_append);
@@ -437,7 +442,6 @@ static void kunit_run_case_catch_errors(struct kunit_suite *suite,
struct kunit_try_catch_context context;
struct kunit_try_catch *try_catch;
- kunit_init_test(test, test_case->name, test_case->log);
try_catch = &test->try_catch;
kunit_try_catch_init(try_catch,
@@ -533,6 +537,8 @@ int kunit_run_tests(struct kunit_suite *suite)
struct kunit_result_stats param_stats = { 0 };
test_case->status = KUNIT_SKIPPED;
+ kunit_init_test(&test, test_case->name, test_case->log);
+
if (!test_case->generate_params) {
/* Non-parameterised test. */
kunit_run_case_catch_errors(suite, test_case, &test);
base-commit: 60684c2bd35064043360e6f716d1b7c20e967b7d
--
2.40.0.rc0.216.gc4246ad0f0-goog
On Mon, Feb 27, 2023 at 5:57 PM Daniel Xu <dxu(a)dxuuu.xyz> wrote:
>
> Hi Alexei,
>
> On Mon, Feb 27, 2023 at 03:03:38PM -0800, Alexei Starovoitov wrote:
> > On Mon, Feb 27, 2023 at 12:51:02PM -0700, Daniel Xu wrote:
> > > === Context ===
> > >
> > > In the context of a middlebox, fragmented packets are tricky to handle.
> > > The full 5-tuple of a packet is often only available in the first
> > > fragment which makes enforcing consistent policy difficult. There are
> > > really only two stateless options, neither of which are very nice:
> > >
> > > 1. Enforce policy on first fragment and accept all subsequent fragments.
> > > This works but may let in certain attacks or allow data exfiltration.
> > >
> > > 2. Enforce policy on first fragment and drop all subsequent fragments.
> > > This does not really work b/c some protocols may rely on
> > > fragmentation. For example, DNS may rely on oversized UDP packets for
> > > large responses.
> > >
> > > So stateful tracking is the only sane option. RFC 8900 [0] calls this
> > > out as well in section 6.3:
> > >
> > > Middleboxes [...] should process IP fragments in a manner that is
> > > consistent with [RFC0791] and [RFC8200]. In many cases, middleboxes
> > > must maintain state in order to achieve this goal.
> > >
> > > === BPF related bits ===
> > >
> > > However, when policy is enforced through BPF, the prog is run before the
> > > kernel reassembles fragmented packets. This leaves BPF developers in an
> > > awkward place: implement reassembly (possibly poorly) or use a stateless
> > > method as described above.
> > >
> > > Fortunately, the kernel has robust support for fragmented IP packets.
> > > This patchset wraps the existing defragmentation facilities in kfuncs so
> > > that BPF progs running on middleboxes can reassemble fragmented packets
> > > before applying policy.
> > >
> > > === Patchset details ===
> > >
> > > This patchset is (hopefully) relatively straightforward from a BPF perspective.
> > > One thing I'd like to call out is the skb_copy()ing of the prog skb. I
> > > did this to maintain the invariant that the ctx remains valid after prog
> > > has run. This is relevant b/c ip_defrag() and ip_check_defrag() may
> > > consume the skb if the skb is a fragment.
> >
> > Instead of doing all that with extra skb copy can you hook bpf prog after
> > the networking stack already handled ip defrag?
> > What kind of middle box are you doing? Why does it have to run at TC layer?
>
> Unless I'm missing something, the only other relevant hooks would be
> socket hooks, right?
>
> Unfortunately I don't think my use case can do that. We are running the
> kernel as a router, so no sockets are involved.
Are you using bpf_fib_lookup and populating the kernel routing
table and doing everything on your own, including neigh?
Have you considered redirecting the skb to another netdev that does ip defrag?
Like macvlan does it under some conditions. This can be generalized.
Recently Florian proposed to allow calling bpf progs from all existing
netfilter hooks.
You can pretend it is local delivery and hook in NF_INET_LOCAL_IN?
I feel it would be so much cleaner if the stack does ip_defrag normally.
The general issue of skb ownership between bpf prog and defrag logic
isn't really solved with skb_copy. It's still an issue.