As a part of the effort to start running KVM selftests nested, this patch
series contains several fixes to the dirty_log_test, which allow this test
to run nested reliably.
I also included a mostly-NOP change to KVM that reverses the order in which
the PML log is read, to align more closely with the hardware. It should
not affect regular users of dirty logging, but it fixes a test-specific
assumption in the dirty_log_test dirty-ring mode.
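For context, the CPU fills the PML buffer from the last entry downwards, so
reading the log "in the same order as it was written" roughly means walking
from the top of the buffer down to the current index. A simplified sketch of
that loop (illustrative only; it ignores the empty/full corner cases the real
code has to handle):

	u16 pml_idx = vmcs_read16(GUEST_PML_INDEX);
	u64 *pml_buf = page_address(vmx->pml_pg);
	int i;

	/* Entries above GUEST_PML_INDEX are valid; the oldest one sits at
	 * the top of the buffer, so walk downwards to process the GPAs in
	 * the order the CPU logged them. */
	for (i = PML_ENTITY_NUM - 1; i > pml_idx; i--) {
		u64 gpa = pml_buf[i];

		kvm_vcpu_mark_page_dirty(vcpu, gpa >> PAGE_SHIFT);
	}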
Patch 4 fixes a very rare problem which is hard to reproduce with standard
test parameters, but due to some weird timing it actually happened a few
times on my machine, which prompted me to investigate it.
The issue can be reproduced well by running the test nested
(without patch 4 applied) with a very short iteration time and a
few iterations, in a loop like this:
while ./dirty_log_test -i 10 -I 1 -M dirty-ring ; do true ; done
Or even better, it's possible to manually patch the test not to wait at all
(effectively setting the iteration time to 0); then it fails pretty fast.
Best regards,
Maxim Levitsky
Maxim Levitsky (4):
KVM: VMX: read the PML log in the same order as it was written
KVM: selftests: dirty_log_test: Limit s390x workaround to s390x
KVM: selftests: dirty_log_test: run the guest until some dirty ring
entries were harvested
KVM: selftests: dirty_log_test: support multiple write retries
arch/x86/kvm/vmx/vmx.c | 32 +++++---
arch/x86/kvm/vmx/vmx.h | 1 +
tools/testing/selftests/kvm/dirty_log_test.c | 79 +++++++++++++++++---
3 files changed, 91 insertions(+), 21 deletions(-)
--
2.26.3
This series:
1. makes the behavior of of_find_device_by_node(),
bus_find_device_by_of_node(), bus_find_device_by_fwnode(), etc., more
consistent when provided with a NULL node/handle;
2. adds kunit tests to validate the new NULL-argument behavior; and
3. makes some related improvements and refactoring for the drivers/base/
kunit tests.
This series aims to prevent problems like the ones resolved in commit
5c8418cf4025 ("PCI/pwrctrl: Unregister platform device only if one
actually exists").
Changes in v2:
* Add Rob's Reviewed-by
* CC LKML (oops!)
* Keep "devm" and "match" tests in separate suites
Brian Norris (3):
drivers: base: Don't match devices with NULL of_node/fwnode/etc
drivers: base: test: Enable device model tests with KUNIT_ALL_TESTS
drivers: base: test: Add ...find_device_by...(... NULL) tests
drivers/base/core.c | 8 ++---
drivers/base/test/Kconfig | 1 +
drivers/base/test/platform-device-test.c | 42 +++++++++++++++++++++++-
3 files changed, 46 insertions(+), 5 deletions(-)
--
2.47.0.338.g60cca15819-goog
Currently, the situation when the guest accesses MMIO during vectoring is
handled differently on VMX and SVM: on VMX, KVM returns an internal error,
while on SVM, KVM goes into an infinite loop trying to deliver the event
again and again.
This patch series eliminates this difference by returning a KVM internal
error when the guest performs MMIO during vectoring, on both VMX and SVM.
It also introduces a selftest test case which covers the error handling
mentioned above.
V1 -> V2:
- Make commit messages more brief, avoid using pronouns
- Extract SVM error handling into a separate commit
- Introduce a new X86EMUL_ return type and detect the unhandleable
vectoring error in the vendor-specific check_emulate_instruction() instead
of handling it in the common MMU code (which is specific to cached MMIO)
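As a rough illustration of that last point, the vendor-side check ends up
looking something like the sketch below; the status name
X86EMUL_UNHANDLEABLE_VECTORING and the exact condition are assumptions based
on this cover letter, not the final code:

	static int vmx_check_emulate_instruction(struct kvm_vcpu *vcpu,
						 int emul_type, void *insn,
						 int insn_len)
	{
		/* Sketch: if an event was being vectored when the MMIO
		 * access that triggered emulation occurred, report it as
		 * unhandleable so that common code exits to userspace with
		 * an internal error instead of looping or emulating with
		 * stale state. */
		if (to_vmx(vcpu)->idt_vectoring_info & VECTORING_INFO_VALID_MASK)
			return X86EMUL_UNHANDLEABLE_VECTORING;

		return X86EMUL_CONTINUE;
	}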
Ivan Orlov (6):
KVM: x86: Add function for vectoring error generation
KVM: x86: Add emulation status for vectoring during MMIO
KVM: VMX: Handle vectoring error in check_emulate_instruction
KVM: SVM: Handle MMIO during vectoring error
selftests: KVM: extract lidt into helper function
selftests: KVM: Add test case for MMIO during vectoring
arch/x86/include/asm/kvm_host.h | 12 ++++-
arch/x86/kvm/kvm_emulate.h | 2 +
arch/x86/kvm/svm/svm.c | 9 +++-
arch/x86/kvm/vmx/vmx.c | 33 +++++-------
arch/x86/kvm/x86.c | 27 ++++++++++
.../selftests/kvm/include/x86_64/processor.h | 7 +++
.../selftests/kvm/set_memory_region_test.c | 53 ++++++++++++++++++-
.../selftests/kvm/x86_64/sev_smoke_test.c | 2 +-
8 files changed, 119 insertions(+), 26 deletions(-)
--
2.43.0
This patch series implements a new char misc driver, /dev/ntsync, which is used
to implement Windows NT synchronization primitives.
NT synchronization primitives are unique in that the wait functions are
vectored, operate on multiple types of object with different behaviour (mutex,
semaphore, event), and affect the state of the objects they wait on. This model
is not compatible with existing kernel synchronization objects or interfaces,
and therefore the ntsync driver implements its own wait queues and locking.
This patch series is rebased against the "char-misc-next" branch of
gregkh/char-misc.git.
== Background ==
The Wine project emulates the Windows API in user space. One particular part of
that API, namely the NT synchronization primitives, has historically been
implemented via RPC to a dedicated "kernel" process. However, more recent
applications use these APIs more strenuously, and the overhead of RPC has become
a bottleneck.
The NT synchronization APIs are too complex to implement on top of existing
primitives without sacrificing correctness. Certain operations, such as
NtPulseEvent() or the "wait-for-all" mode of NtWaitForMultipleObjects(), require
direct control over the underlying wait queue, and implementing a wait queue
sufficiently robust for Wine in user space is not possible. This proposed
driver, therefore, implements the problematic interfaces directly in the Linux
kernel.
This driver was presented at Linux Plumbers Conference 2023. For those further
interested in the history of synchronization in Wine and past attempts to solve
this problem in user space, a recording of the presentation can be viewed here:
https://www.youtube.com/watch?v=NjU4nyWyhU8
== Performance ==
The performance measurements described below are copied from earlier versions of
the patch set. While some of the code has changed, I do not currently anticipate
that it has changed drastically enough to affect those measurements.
The gain in performance varies wildly depending on the application in question
and the user's hardware. For some games NT synchronization is not a bottleneck
and no change can be observed, but for others frame rate improvements of 50 to
150 percent are not atypical. The following table lists frame rate measurements
from a variety of games on a variety of hardware, taken by users Dmitry
Skvortsov, FuzzyQuils, OnMars, and myself:
Game                             Upstream    ntsync    improvement
===========================================================================
Anger Foot                         69          99          43%
Call of Juarez                     99.8       224.1       125%
Dirt 3                            110.6       860.7       678%
Forza Horizon 5                   108         160          48%
Lara Croft: Temple of Osiris      141         326         131%
Metro 2033                        164.4       199.2        21%
Resident Evil 2                    26          77         196%
The Crew                           26          51          96%
Tiny Tina's Wonderlands           130         360         177%
Total War Saga: Troy              109         146          34%
===========================================================================
== Patches ==
The semantics of the patches are broadly intended to match those of the
corresponding Windows functions. For those not already familiar with the Windows
functions (or their undocumented behaviour), patch 27/28 provides a detailed
specification, and individual patches also include a brief description of the
API they are implementing.
The patches making use of this driver in Wine can be retrieved or browsed here:
https://repo.or.cz/wine/zf.git/shortlog/refs/heads/ntsync7
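For a feel of the uAPI shape, here is a small user-space sketch that creates a
semaphore and releases it once. It is not taken from the patches; the field
names and the "fd returned directly" convention follow this cover letter's
description and should be checked against include/uapi/linux/ntsync.h:

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/types.h>
	#include <linux/ntsync.h>

	int main(void)
	{
		/* Assumed layout: a semaphore with a current and a maximum count. */
		struct ntsync_sem_args sem = { .count = 0, .max = 2 };
		__u32 count = 1;	/* in: amount to release; out: previous count */
		int dev, sem_fd;

		dev = open("/dev/ntsync", O_RDWR);
		if (dev < 0)
			return 1;

		/* Object-creation ioctls return the new object's fd directly (v7). */
		sem_fd = ioctl(dev, NTSYNC_IOC_CREATE_SEM, &sem);
		if (sem_fd < 0)
			return 1;

		if (ioctl(sem_fd, NTSYNC_IOC_SEM_RELEASE, &count) < 0)
			return 1;

		printf("previous semaphore count: %u\n", count);
		return 0;
	}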
== Previous versions ==
Changes from v6:
* rename NTSYNC_IOC_SEM_POST to NTSYNC_IOC_SEM_RELEASE (matching the NT
terminology instead of POSIX),
* change object creation ioctls to return the fds directly in the return value
instead of through the args struct, which simplifies the API a bit.
* Link to v6: https://lore.kernel.org/lkml/20241209185904.507350-1-zfigura@codeweavers.co…
* Link to v5: https://lore.kernel.org/lkml/20240519202454.1192826-1-zfigura@codeweavers.c…
* Link to v4: https://lore.kernel.org/lkml/20240416010837.333694-1-zfigura@codeweavers.co…
* Link to v3: https://lore.kernel.org/lkml/20240329000621.148791-1-zfigura@codeweavers.co…
* Link to v2: https://lore.kernel.org/lkml/20240219223833.95710-1-zfigura@codeweavers.com/
* Link to v1: https://lore.kernel.org/lkml/20240214233645.9273-1-zfigura@codeweavers.com/
* Link to RFC v2: https://lore.kernel.org/lkml/20240131021356.10322-1-zfigura@codeweavers.com/
* Link to RFC v1: https://lore.kernel.org/lkml/20240124004028.16826-1-zfigura@codeweavers.com/
Elizabeth Figura (30):
ntsync: Return the fd from NTSYNC_IOC_CREATE_SEM.
ntsync: Rename NTSYNC_IOC_SEM_POST to NTSYNC_IOC_SEM_RELEASE.
ntsync: Introduce NTSYNC_IOC_WAIT_ANY.
ntsync: Introduce NTSYNC_IOC_WAIT_ALL.
ntsync: Introduce NTSYNC_IOC_CREATE_MUTEX.
ntsync: Introduce NTSYNC_IOC_MUTEX_UNLOCK.
ntsync: Introduce NTSYNC_IOC_MUTEX_KILL.
ntsync: Introduce NTSYNC_IOC_CREATE_EVENT.
ntsync: Introduce NTSYNC_IOC_EVENT_SET.
ntsync: Introduce NTSYNC_IOC_EVENT_RESET.
ntsync: Introduce NTSYNC_IOC_EVENT_PULSE.
ntsync: Introduce NTSYNC_IOC_SEM_READ.
ntsync: Introduce NTSYNC_IOC_MUTEX_READ.
ntsync: Introduce NTSYNC_IOC_EVENT_READ.
ntsync: Introduce alertable waits.
selftests: ntsync: Add some tests for semaphore state.
selftests: ntsync: Add some tests for mutex state.
selftests: ntsync: Add some tests for NTSYNC_IOC_WAIT_ANY.
selftests: ntsync: Add some tests for NTSYNC_IOC_WAIT_ALL.
selftests: ntsync: Add some tests for wakeup signaling with
WINESYNC_IOC_WAIT_ANY.
selftests: ntsync: Add some tests for wakeup signaling with
WINESYNC_IOC_WAIT_ALL.
selftests: ntsync: Add some tests for manual-reset event state.
selftests: ntsync: Add some tests for auto-reset event state.
selftests: ntsync: Add some tests for wakeup signaling with events.
selftests: ntsync: Add tests for alertable waits.
selftests: ntsync: Add some tests for wakeup signaling via alerts.
selftests: ntsync: Add a stress test for contended waits.
maintainers: Add an entry for ntsync.
docs: ntsync: Add documentation for the ntsync uAPI.
ntsync: No longer depend on BROKEN.
Documentation/userspace-api/index.rst | 1 +
Documentation/userspace-api/ntsync.rst | 385 +++++
MAINTAINERS | 9 +
drivers/misc/Kconfig | 1 -
drivers/misc/ntsync.c | 992 +++++++++++-
include/uapi/linux/ntsync.h | 42 +-
tools/testing/selftests/Makefile | 1 +
.../selftests/drivers/ntsync/.gitignore | 1 +
.../testing/selftests/drivers/ntsync/Makefile | 7 +
tools/testing/selftests/drivers/ntsync/config | 1 +
.../testing/selftests/drivers/ntsync/ntsync.c | 1343 +++++++++++++++++
11 files changed, 2767 insertions(+), 16 deletions(-)
create mode 100644 Documentation/userspace-api/ntsync.rst
create mode 100644 tools/testing/selftests/drivers/ntsync/.gitignore
create mode 100644 tools/testing/selftests/drivers/ntsync/Makefile
create mode 100644 tools/testing/selftests/drivers/ntsync/config
create mode 100644 tools/testing/selftests/drivers/ntsync/ntsync.c
base-commit: cdd30ebb1b9f36159d66f088b61aee264e649d7a
--
2.45.2
This patchset moves task_mm_cid_work to a preemptible and migratable
context. This reduces the impact of this work on the scheduling latency
of real-time tasks.
The change makes the recurrence of the task a bit more predictable.
We also add optimisations and fixes to make sure task_mm_cid_work
works as intended.
Patch 1 contains the main changes, removing the task_work on the
scheduler tick and using a delayed_work instead.
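Roughly, the per-mm work simply re-arms itself at every invocation; a minimal
sketch of the shape of patch 1 (identifiers such as mm->cid_work and
MM_CID_SCAN_DELAY are illustrative assumptions, not the exact names used in
the patches):

	static void task_mm_cid_work(struct work_struct *work)
	{
		struct mm_struct *mm = container_of(to_delayed_work(work),
						    struct mm_struct, cid_work);

		/* ... scan and compact the mm_cids for @mm ... */

		/* Re-arm unconditionally; the work is cancelled
		 * synchronously at mmdrop(). */
		schedule_delayed_work(&mm->cid_work,
				      msecs_to_jiffies(MM_CID_SCAN_DELAY));
	}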
Patch 2 adds some optimisations to the approach: since we rely
on a delayed_work, it is no longer required to check that the minimum
interval has passed since the last execution. However, we now terminate the
call immediately if we see that no mm_cid is actually active, which could
happen for processes sleeping for a long time or ones that exited but whose
mm has not been freed yet.
Patch 3 allows the mm_cids to actually be compacted when a process
reduces its number of threads; this was not happening before because the same
mm_cids were reused to improve cache locality, see [1] for more details.
Patch 4 adds a selftest to validate the functionality of
task_mm_cid_work (i.e. that the mm_cids are compacted); this test requires
patch 3 to be applied.
Changes since V1 [1]:
* Re-arm the delayed_work at each invocation
* Cancel the work synchronously at mmdrop
* Remove next scan fields and completely rely on the delayed_work
* Shrink mm_cid allocation with nr thread/affinity (Mathieu Desnoyers)
* Add self test
OVERHEAD COMPARISON
In this section, I'm going to refer to head as the current upstream state
without my patches applied; patch is the same head with these
patches applied. Likewise, I'm going to refer to task_mm_cid_work as
either the task or the function. The experiments were run on an aarch64
machine with 128 cores. The kernel has a bare configuration with
PREEMPT_RT enabled.
- Memory
The patch introduces some memory overhead:
* head uses a callback_head per thread (16 bytes)
* patch relies on a delayed work per mm but drops a long (80 bytes net)
Tasks with 5 or fewer threads have a lower memory footprint with the
current approach.
Considering a task_struct can be 7-13 kB and an mm_struct is about 1.4
kB, the overhead should be acceptable.
- Boot time
I tested the patch by booting a virtual machine with vng [2]; both head and
patch get similar boot times (around 8s).
- Runtime
I ran some rather demanding tests to show what could possibly be a worst
case for the approach introduced by this patch. The following tests were
again run in vng to have a plain system, running mostly the
stressors (where present). Unless specified otherwise, time is in us. All
tests run for 30s.
The stress-ng tests were run with 128 stressors, which I omit from the
table for clarity.
No load                          head            patch
running processes(threads):      12(12)          12(12)
duration(avg,max,sum):           75,426,987      2,42,45ms
invocations:                     13              20k

stress-ng --cpu-load 80          head            patch
running processes(threads):      129(129)        129(129)
duration(avg,max,sum):           20,2ms,740ms    7,774,280ms
invocations:                     36k             39k

stress-ng --fork                 head            patch
running processes(threads):      3.6k(3.6k)      4k(4k)
duration(avg,max,sum):           34,41,720       19,457,880ms
invocations:                     21              46k

stress-ng --pthread-max 4        head            patch
running processes(threads):      129(4k)         129(4k)
duration(avg,max,sum):           31,195,41ms     21,1ms,830ms
invocations:                     1290            38k
It is important to note that some of these stressors run only for a very
short time, just to fork/create a thread; this heavily favours head since
the task simply won't run as often.
Moreover, the duration needs to be read carefully: since the task
can now be preempted by threads, I tried to exclude that from the
computation, but to keep the probes simple I didn't exclude
interference caused by interrupts.
On the same system, while isolated, the task runs in about 30-35ms; it is
hence highly likely that much larger values are only due to
interruptions, rather than the function actually running that long.
I will post another email with the scripts used to retrieve the data and
more details about the runtime distribution.
[1] - https://lore.kernel.org/linux-kernel/20241205083110.180134-2-gmonaco@redhat…
[2] - https://github.com/arighi/virtme-ng
Gabriele Monaco (3):
sched: Move task_mm_cid_work to mm delayed work
sched: Remove mm_cid_next_scan as obsolete
rseq/selftests: Add test for mm_cid compaction
Mathieu Desnoyers (1):
sched: Compact RSEQ concurrency IDs with reduced threads and affinity
include/linux/mm_types.h | 23 ++-
include/linux/sched.h | 1 -
kernel/sched/core.c | 66 +-------
kernel/sched/sched.h | 32 ++--
tools/testing/selftests/rseq/.gitignore | 1 +
tools/testing/selftests/rseq/Makefile | 2 +-
.../selftests/rseq/mm_cid_compaction_test.c | 157 ++++++++++++++++++
7 files changed, 203 insertions(+), 79 deletions(-)
create mode 100644 tools/testing/selftests/rseq/mm_cid_compaction_test.c
base-commit: 231825b2e1ff6ba799c5eaf396d3ab2354e37c6b
--
2.47.1
Hi all,
v5 here is a small set of fixes and a rebase of the previous versions.
If there are no major issues, I'd like to land this soon so it can be
used and tested ready for 6.14.
This series was originally written by José Expósito, and has been
modified and updated by Matt Gilbride and myself. The original version
can be found here:
https://github.com/Rust-for-Linux/linux/pull/950
Add support for writing KUnit tests in Rust. While Rust doctests are
already converted to KUnit tests and run, they're really better suited
for examples, rather than as first-class unit tests.
This series implements direct Rust bindings for KUnit tests,
as well as a new macro which allows KUnit tests to be written using a
close variant of normal Rust unit test syntax. The only change required
is replacing '#[cfg(test)]' with '#[kunit_tests(kunit_test_suite_name)]'.
An example test would look like:
#[kunit_tests(rust_kernel_hid_driver)]
mod tests {
    use super::*;
    use crate::{c_str, driver, hid, prelude::*};
    use core::ptr;

    struct SimpleTestDriver;
    impl Driver for SimpleTestDriver {
        type Data = ();
    }

    #[test]
    fn rust_test_hid_driver_adapter() {
        let mut hid = bindings::hid_driver::default();
        let name = c_str!("SimpleTestDriver");
        static MODULE: ThisModule = unsafe { ThisModule::from_ptr(ptr::null_mut()) };
        let res = unsafe {
            <hid::Adapter<SimpleTestDriver> as driver::DriverOps>::register(&mut hid, name, &MODULE)
        };
        assert_eq!(res, Err(ENODEV)); // The mock returns -19
    }
}
Please give this a go, and make sure I haven't broken it! There's almost
certainly a lot of improvements which can be made -- and there's a fair
case to be made for replacing some of this with generated C code which
can use the C macros -- but this is hopefully an adequate implementation
for now, and the interface can (with luck) remain the same even if the
implementation changes.
A few small notable missing features:
- Attributes (like the speed of a test) are hardcoded to the default
value.
- Similarly, the module name attribute is hardcoded to NULL. In C, we
use the KBUILD_MODNAME macro, but I couldn't find a way to use this
from Rust which wasn't more ugly than just disabling it.
- Assertions are not automatically rewritten to use KUnit assertions.
---
Changes since v4:
https://lore.kernel.org/linux-kselftest/20241101064505.3820737-1-davidgow@g…
- Rebased against 6.13-rc1
- Allowed an unused_unsafe warning after the behaviour of addr_of_mut!()
changed in Rust 1.82. (Thanks Boqun, Miguel)
- "Expect" that the sample assert_eq!(1+1, 2) produces a clippy warning
due to a redundant assertion. (Thanks Boqun, Miguel)
- Fix some missing safety comments, and remove some unneeded 'unsafe'
blocks. (Thanks Boqun)
- Fix a couple of minor rustfmt issues which were triggering checkpatch
warnings.
Changes since v3:
https://lore.kernel.org/linux-kselftest/20241030045719.3085147-2-davidgow@g…
- The kunit_unsafe_test_suite!() macro now panic!s if the suite name is
too long, triggering a compile error. (Thanks, Alice!)
- The #[kunit_tests()] macro now preserves span information, so
errors can be better reported. (Thanks, Boqun!)
- The example tests have been updated to no longer use assert_eq!() with
a constant bool argument (which triggered a clippy warning now we
have the span info).
Changes since v2:
https://lore.kernel.org/linux-kselftest/20241029092422.2884505-1-davidgow@g…
- Include missing rust/macros/kunit.rs file from v2. (Thanks Boqun!)
- The kunit_unsafe_test_suite!() macro will truncate the name of the
suite if it is too long. (Thanks Alice!)
- The proc macro now emits an error if the suite name is too long.
- We no longer needlessly use UnsafeCell<> in
kunit_unsafe_test_suite!(). (Thanks Alice!)
Changes since v1:
https://lore.kernel.org/lkml/20230720-rustbind-v1-0-c80db349e3b5@google.com…
- Rebase on top of the latest rust-next (commit 718c4069896c)
- Make kunit_case a const fn, rather than a macro (Thanks Boqun)
- As a result, the null terminator is now created with
kernel::kunit::kunit_case_null()
- Use the C kunit_get_current_test() function to implement
in_kunit_test(), rather than re-implementing it (less efficiently)
ourselves.
Changes since the GitHub PR:
- Rebased on top of kselftest/kunit
- Add const_mut_refs feature
This may conflict with https://lore.kernel.org/lkml/20230503090708.2524310-6-nmi@metaspace.dk/
- Add rust/macros/kunit.rs to the KUnit MAINTAINERS entry
---
José Expósito (3):
rust: kunit: add KUnit case and suite macros
rust: macros: add macro to easily run KUnit tests
rust: kunit: allow to know if we are in a test
MAINTAINERS | 1 +
rust/kernel/kunit.rs | 207 +++++++++++++++++++++++++++++++++++++++++++
rust/kernel/lib.rs | 1 +
rust/macros/kunit.rs | 168 +++++++++++++++++++++++++++++++++++
rust/macros/lib.rs | 29 ++++++
5 files changed, 406 insertions(+)
create mode 100644 rust/macros/kunit.rs
--
2.47.1.613.gc27f4b7a9f-goog