On Thu, 2024-12-12 at 22:19 -0800, Sean Christopherson wrote:
On Thu, Dec 12, 2024, Maxim Levitsky wrote:
On Wed, 2024-12-11 at 16:44 -0800, Sean Christopherson wrote:
But, I can't help but wonder why KVM bothers emulating PML. I can appreciate that avoiding exits to L1 would be beneficial, but what use case actually cares about dirty logging performance in L1?
It does help performance a lot, and the emulated implementation is simple.
Yeah, it's not a lot of complexity, but it's architecturally flawed. And I get that it helps with performance, I'm just stumped as to the use case for dirty logging in a nested VM in the first place.
Do you have any comments for the rest of the patch series? If not then I'll send v2 of the patch series.
*sigh*
I do. Through no fault of your own. I was trying to figure out a way to ensure the vCPU made meaningful progress, versus just guaranteeing at least one write, and stumbled onto a plethora of flaws and unnecessary complexity in the test.
Can you post this patch as a standalone v2? I'd like to do a more aggressive cleanup of the selftest, but I don't want to hold this up, and there's no hard dependency.
As for the issues I encountered with the selftest:
Tracking how many pages have been written in the current iteration with a guest-side counter doesn't work without more fixes, because the test doesn't collect all dirty entries for the current iteration. For the dirty ring, this results in a vCPU *starting* an iteration with a full dirty ring, and the test hangs because the guest can't make forward progress until log_mode_collect_dirty_pages() is called.
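For reference, the rough shape of the counter I was playing with (a sketch only; all of the TEST_* symbols, guest_writes_this_iter, and the sequential write pattern are made up for illustration, and GUEST_SYNC() assumes the selftest's ucall plumbing):

#define TEST_NR_PAGES	4096		/* placeholder values, not the test's */
#define TEST_PAGE_SIZE	4096
#define TEST_BASE_GVA	(1ul << 30)

static volatile uint64_t guest_writes_this_iter;

static void guest_dirty_loop(void)
{
	uint64_t i;

	while (1) {
		/* Publish per-iteration progress so the host can assert real
		 * forward progress instead of "at least one write". */
		guest_writes_this_iter = 0;
		for (i = 0; i < TEST_NR_PAGES; i++) {
			*(uint64_t *)(TEST_BASE_GVA + i * TEST_PAGE_SIZE) = i;
			guest_writes_this_iter++;
		}
		GUEST_SYNC(1);	/* host samples the counter at the sync point */
	}
}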
The test presumably doesn't collect all dirty entries because of the weird and unnecessary kick in dirty_ring_collect_dirty_pages(), and all the synchronization that comes with it. The kick is "justified" with a comment saying "This makes sure that hardware PML cache flushed", but there's no reason to do the kick *if* the test collects dirty pages *after* stopping the vCPU. Which is easy to do while also collecting while the vCPU is running, if the kick+synchronization is eliminated (i.e. it's a self-inflicted wound of sorts).
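To illustrate why the kick is unnecessary: harvesting the ring is just walking the entries and flagging them for reset, and a final pass after the vCPU has stopped (i.e. has exited to userspace, at which point any PML buffer has already been flushed) picks up everything for the iteration. Rough sketch against the uapi entry layout, assuming a single test memslot and with the index bookkeeping simplified:

#include <stdint.h>
#include <linux/kvm.h>

/*
 * Sketch only: collect dirty ring entries into a bitmap and flag them for
 * reset.  The caller invokes KVM_RESET_DIRTY_RINGS afterwards, and the real
 * code needs a read barrier between checking the DIRTY flag and consuming
 * the entry.
 */
static uint32_t harvest_dirty_ring(struct kvm_dirty_gfn *ring, uint32_t ring_size,
				   uint32_t *fetch_index, uint64_t *bmap)
{
	struct kvm_dirty_gfn *e;
	uint32_t collected = 0;

	for (;;) {
		e = &ring[*fetch_index % ring_size];
		if (!(e->flags & KVM_DIRTY_GFN_F_DIRTY))
			break;

		/* e->offset is the gfn offset within the (single) test slot. */
		bmap[e->offset / 64] |= 1ull << (e->offset % 64);

		/* Flag the entry harvested so KVM_RESET_DIRTY_RINGS recycles it. */
		e->flags = KVM_DIRTY_GFN_F_RESET;
		(*fetch_index)++;
		collected++;
	}

	return collected;
}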
dirty_ring_after_vcpu_run() doesn't honor vcpu_sync_stop_requested, and so every other iteration runs until the ring is full. Testing the "soft full" logic is interesting, but not _that_ interesting, and filling the dirty ring basically ignores the "interval". Fixing this reduces the runtime by a significant amount, especially on nested, at the cost of providing less coverage for the dirty ring with the default interval in a nested VM (but if someone cares about testing the dirty ring soft full in a nested VM, they can darn well bump the interval).
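I.e. at every sync point the vCPU worker should park itself if the main thread asked it to stop, instead of running on until the ring fills. Roughly (sketch; the semaphores below are stand-ins for the test's stop/continue handshake, not its actual declarations):

#include <semaphore.h>
#include <stdbool.h>

static sem_t sem_vcpu_stop, sem_vcpu_cont;	/* stand-in names */
static bool vcpu_sync_stop_requested;

static void handle_sync_stop_request(void)
{
	if (!__atomic_load_n(&vcpu_sync_stop_requested, __ATOMIC_ACQUIRE))
		return;

	__atomic_store_n(&vcpu_sync_stop_requested, false, __ATOMIC_RELEASE);
	sem_post(&sem_vcpu_stop);	/* tell the main thread we've stopped */
	sem_wait(&sem_vcpu_cont);	/* and wait until it lets us run again */
}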
Fixing the test to collect all dirty entries for the current iteration exposes another flaw. The bitmaps (not dirty ring) start with all bits set. And so the first iteration can see "dirty" pages that have never been written, but only when applying your fix to limit the hack to s390.
"iteration" is synched to the guest *after* the vCPU is restarted, i.e. the guest could see a stale iteration if the main thread is delayed.
host_bmap_track and all of the weird exemptions for writes from previous iterations go away if all entries are collected for the current iteration (though a second bitmap is needed to handle the second collection; KVM's "get" of the bitmap clobbers the previous value).
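Something along these lines (sketch; names other than kvm_vm_get_dirty_log() are illustrative, and the bitmap math assumes 64-bit longs):

/*
 * Sketch: KVM_GET_DIRTY_LOG overwrites the destination buffer, so the second
 * collection needs a scratch bitmap that is then OR'd into the first.
 */
static void collect_all_dirty_pages(struct kvm_vm *vm, int slot,
				    unsigned long *bmap, unsigned long *scratch,
				    uint64_t nr_pages)
{
	uint64_t i, nr_longs = (nr_pages + 63) / 64;

	kvm_vm_get_dirty_log(vm, slot, bmap);		/* while the vCPU runs */

	/* (the vCPU is stopped between the two collections) */

	kvm_vm_get_dirty_log(vm, slot, scratch);	/* final, complete pass */
	for (i = 0; i < nr_longs; i++)
		bmap[i] |= scratch[i];
}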
I have everything more or less coded up, but I need to split it into patches, write changelogs, and interleave it with your fixes. Hopefully I'll get to that tomorrow.
Hi!
I will take a look at your patch series once you post it. I also think that the logic in the test is somewhat broken, but then again, the test also serves as a way to cause as much havoc as possible.
The fact that not all dirty pages are collected is because the ring harvest happens while the guest continues dirtying pages and adding more entries to the ring, simulating what would happen during a real-life migration.
Kicking the guest just before the ring harvest is also, IMHO, a good thing, as it simulates the IRQ load that would happen during migration.
We could avoid kicking the guest if it is already stopped due to a full dirty ring. In fact, because we still kick it, the kick is delayed to the point where we resume the guest and wait for it to stop again before we do the verify step, which often makes the guest exit for a reason other than a ring-full event.
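(What I tried was roughly the below; the flag and thread variables are stand-ins for the test's actual names.)

#include <pthread.h>
#include <signal.h>
#include <stdbool.h>

static bool vcpu_ring_full;	/* set when the vCPU stopped on ring full */
static pthread_t vcpu_thread;

static void maybe_kick_vcpu(void)
{
	/* Skip the signal if the vCPU is already parked on a ring-full exit. */
	if (!__atomic_load_n(&vcpu_ring_full, __ATOMIC_ACQUIRE))
		pthread_kill(vcpu_thread, SIGUSR1);
}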
I tried this, but it makes the test much less random, and the whole point of this test is to cause as much havoc as possible.
I do think that we don't need to stop the guest during verify for the dirty-ring case; that is probably something only the dirty-bitmap part of the test needs.
I added Peter Xu to CC to hear his opinion about this as well.
Best regards, Maxim Levitsky