On Tue, Feb 28, 2023 at 05:21:20PM +0000, Nadav Amit wrote:
On Feb 28, 2023, at 7:55 AM, Peter Xu <peterx@redhat.com> wrote:
On Mon, Feb 27, 2023 at 11:09:12PM +0000, Nadav Amit wrote:
On Feb 27, 2023, at 1:18 PM, Peter Xu <peterx@redhat.com> wrote:
On Thu, Feb 23, 2023 at 05:11:11PM +0000, Nadav Amit wrote:
From my experience with UFFD, proper ordering of events is crucial, although it is not always done well. Therefore, we should aim for improvement, not regression. I believe that utilizing the pagemap-based mechanism for WP'ing might be a step in the wrong direction. I think it would have been better to emit a 'UFFD_FEATURE_WP_ASYNC' WP-log, ordered together with UFFD #PF and other events. The 'UFFD_FEATURE_WP_ASYNC' log may not need to wake waiters on the file descriptor unless the log is full.
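Just to illustrate the shape I have in mind (purely hypothetical -- no such ABI exists today, and the names are made up), a log record could be as small as:

    /* Hypothetical WP-log entry, read from the uffd file descriptor
     * in order with the other uffd events. Nothing like this exists
     * in the kernel today. */
    struct uffd_wp_log_entry {
            __u64 start;    /* first byte of the range that got written */
            __u64 len;      /* length of the range */
            __u64 flags;    /* reserved, e.g. for coalescing hints */
    };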
Yes this is an interesting question to think about..
Keeping the data in the pgtable has one advantage: it doesn't need any complexity for maintaining a log, and there's no possibility of "log full".
I understand your concern, but I think that eventually it might be simpler to maintain, since the logic of how to process the log is moved to userspace.
At the same time, handling inputs from pagemap and uffd handlers and sync’ing them would not be too easy for userspace.
I do not expect a common uffd-wp async user to provide a fault handler at all. In my imagination it's in most cases used standalone from other uffd modes, which means all the faults will still be resolved by the kernel. Here we only leverage the accuracy of userfaultfd compared to soft-dirty, so these are not really "user" faults.
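As a minimal sketch (assuming the UFFD_FEATURE_WP_ASYNC feature proposed in this series; 'area'/'area_len' stand for the tracked region, and error handling is omitted), a standalone async user would only need the setup below -- no fault-handler thread at all:

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <linux/userfaultfd.h>

    int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);

    /* Ask for async wp: the kernel resolves write faults by itself. */
    struct uffdio_api api = {
            .api = UFFD_API,
            .features = UFFD_FEATURE_WP_ASYNC,  /* proposed in this series */
    };
    ioctl(uffd, UFFDIO_API, &api);

    /* Register the region in wp mode, then write-protect it to start
     * tracking; writes clear the protection without any user events. */
    struct uffdio_register reg = {
            .range = { .start = (unsigned long)area, .len = area_len },
            .mode = UFFDIO_REGISTER_MODE_WP,
    };
    ioctl(uffd, UFFDIO_REGISTER, &reg);

    struct uffdio_writeprotect wp = {
            .range = { .start = (unsigned long)area, .len = area_len },
            .mode = UFFDIO_WRITEPROTECT_MODE_WP,
    };
    ioctl(uffd, UFFDIO_WRITEPROTECT, &wp);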
If that is the only use-case, it might make sense. But I guess most users would most likely use some library (and not syscalls directly). So slightly complicating the API for better generality may be reasonable.
But yes, allocation on the heap for userfaultfd_wait_queue-like entries would be needed, and there are some issues: ordering the events (I think all #PF and other events should be ordered regardless), and avoiding a traversal of all the async userfaultfd_wait_queue entries (except those that block when the log is full) when a wakeup is needed.
Will there be an ordering requirement for an async mode? Considering it should be asynchronous with respect to everything else, I would think it's not a problem, but maybe I missed something.
You may be right, but I am not sure. I am still not sure what use-cases are targeted in this patch-set. For the CRIU checkpoint use-case (when the app is not running), I guess the current interface makes sense. But if there are use-cases in which you do care about UFFD events, this can become an issue.
But even in some obvious use-cases, this might be the wrong interface for major performance reasons. If we think about incremental copying of modified pages (a-la pre-copy live-migration, or to create point-in-time snapshots), it seems to me much more efficient for the application to have a log than to traverse all the page-tables.
IMHO snapshots may not need a log at all - they need CoW before the write happens. Nor is a log needed for swapping with userfaults, IIUC. IOW, in those cases people don't care which page got dirtied, but care about the data not being modified until the app allows it.
But I get the point, and I agree collecting by scanning is slower.
If there's a possible "log full" condition, then the next question is whether we should let the worker wait for the monitor when the monitor is not fast enough to collect the data. It adds a slight dependency between the two threads, which I think can make the tracking harder or impossible in latency-sensitive workloads.
Again, I understand your concern. But this model that I propose is not new. It is used with PML (page-modification logging) and KVM, and IIRC there is a similar interface between KVM and QEMU to provide this information. There are endless other examples of similar producer-consumer mechanisms that might lead to stalls in extreme cases.
Yes, I'm not against thinking of using similar structures here. It's just that it's definitely more complicated as an interface; at a minimum we would need one more interface to set up the rings and define their semantics.
Note that although Muhammad is defining another new interface here too, for pagemap, I don't think it's strictly needed for uffd-wp async mode. One can use uffd-wp async mode with PM_UFFD_WP through the current pagemap interface already.
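For example (a sketch; 'addr' and 'page_size' are assumed, and the uffd-wp bit is bit 57 of each pagemap entry per Documentation/admin-guide/mm/pagemap.rst):

    #include <fcntl.h>
    #include <stdint.h>
    #include <unistd.h>

    #define PM_UFFD_WP_BIT  57

    int pm = open("/proc/self/pagemap", O_RDONLY);
    uint64_t ent;
    off_t off = ((uintptr_t)addr / page_size) * sizeof(ent);

    pread(pm, &ent, sizeof(ent), off);
    /* wp bit still set => no write since protection, i.e. clean. */
    int dirty = !((ent >> PM_UFFD_WP_BIT) & 1);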
So what Muhammad is proposing here are two things to me: (1) uffd-wp async, plus (2) a new pagemap interface (which only needs to work closely with (1) if we require atomicity between get-dirty and reprotect).
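To show why the atomicity matters (sketch only; 'uffd'/'area'/'area_len' are as in the earlier sketch, and collect_dirty_from_pagemap() is a made-up placeholder), with plain pagemap plus UFFDIO_WRITEPROTECT the two steps leave a window:

    collect_dirty_from_pagemap(area, area_len); /* step 1: read wp bits */
    /* A write landing here is never reported: this round already read
     * the page as clean, and the re-protect below re-arms the wp bit,
     * so the next round reads it as clean too. */
    struct uffdio_writeprotect wp = {
            .range = { .start = (unsigned long)area, .len = area_len },
            .mode = UFFDIO_WRITEPROTECT_MODE_WP,
    };
    ioctl(uffd, UFFDIO_WRITEPROTECT, &wp);      /* step 2: re-protect */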
Defining a new interface for uffd-wp async mode would be something extra, so IMHO, besides the heap allocations for the rings, we would also need to justify whether it is needed at all. That's why I think it's fine to go with what Muhammad proposed: it's a minimal changeset, at least for userfaultfd, to support an async mode, and anything else can be done on top if necessary.
Going back a bit to the "lead to stalls in extreme cases" above, I also want to mention that the VM use case is slightly different - dirty tracking is only heavily used during migration afaict, and that is a short period. Not a lot of people will complain about performance degrading during that period, because it's just rare. And even without the ring, the perf is really bad during migration anyway... especially when huge pages are used to back the guest RAM.
Here it's slightly different to me: it's about tracking dirty pages during any possible workload, and it can be monitored periodically and frequently. So IMHO the requirement is stricter than in the VM use case, where migration is the only period in which it is used.
I still don't get the use-cases. "Monitored periodically and frequently" is not a use-case. And as I said before, monitoring frequently is actually more performant with a log than with scanning all the page-tables.
Feel free to ignore this part if we're not talking about using a ring structure; my previous comment was mostly about that, and a bitmap won't have this issue. Here I see a bitmap as one way to implement a log, where the record is one bit per page. My comment was that we should be careful with rings.
Side note: the kvm dirty ring is actually even trickier; see the soft-full threshold (kvm_dirty_ring.soft_limit) next to the hard-full event, which makes sure hard-full never actually triggers (otherwise we'd be prone to losing dirty bits). I don't think we'd have the same issue here, so we could let hard-full trigger, but halting the threads being tracked for dirty pages is still unwanted. I don't know whether the ring would have other side effects, though..
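For reference, this is the entry format of the kvm dirty ring (from <linux/kvm.h>): userspace harvests entries with the DIRTY flag set, marks them RESET, then invokes the KVM_RESET_DIRTY_RINGS ioctl, and the soft limit kicks the vcpu out to userspace before the ring can truly fill:

    struct kvm_dirty_gfn {
            __u32 flags;    /* KVM_DIRTY_GFN_F_DIRTY / KVM_DIRTY_GFN_F_RESET */
            __u32 slot;     /* the memslot the dirty gfn belongs to */
            __u64 offset;   /* page offset within that memslot */
    };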
The other thing is we could also make the log one that can never become full, by making it a bitmap covering all registered ranges, but I don't know either whether it'll be worth the effort.
I do not see a benefit of half-log half-scan. It tries to take the data-structure of one format and combine it with another.
What I'm saying here is not half-log / half-scan, but using a single bitmap to record which pages are dirty, just like KVM_GET_DIRTY_LOG. I think it avoids the "stall" issue above.
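For completeness, that model looks like this from userspace (sketch; 'vm_fd', 'slot_id' and 'bitmap' are assumed to be set up elsewhere) -- one bit per page of a memslot, so it cannot overflow no matter how many pages get dirtied between two collections:

    struct kvm_dirty_log log = {
            .slot = slot_id,
            /* user-allocated, (npages + 7) / 8 bytes, one bit per page */
            .dirty_bitmap = bitmap,
    };
    ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);  /* fills and clears the bits */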
Oh, I never went into the KVM details before - stupid me. If that's what was eventually proven to work for KVM/QEMU, then it really sounds like the pagemap solution that Muhammad proposed.
But still, not entangling pagemap with userfaultfd (and especially uffd-wp) can be beneficial. Linus has already thrown some comments here and there about disliking uffd-wp, and I'm not sure adding uffd-wp-specific bits to pagemap would be welcomed.
Yes I also don't know.. As I mentioned I'm not super happy with the interface either, but that's the simplest I can think of so far.
IOW, from a "userfaultfd-side reviewer" POV, I'm fine if someone wants to leverage the concepts of uffd-wp and its internals using a separate but very lightweight patch just to implement the async mode of uffd-wp. But I'm always open to suggestions too. It's just that when there are multiple options and we're not confident in either, I normally prefer the simplest and cleanest (even if less efficient).
Anyhow, thanks for all the explanations. In the end, I understand that using bitmaps can be more efficient than a log if the dirty bits are dense.
Note that I think what Muhammad (sorry, Muhammad! I think I spelled your name wrongly in earlier emails..) proposed is not a bitmap, but an array of ranges that can coalesce the results into a very condensed form. There are pros and cons to each.
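As a purely hypothetical illustration of the trade-off (not the exact proposed ABI), a range-based output would be along the lines of:

    /* Coalesced output: compact when dirty pages are contiguous, but
     * it degenerates to one entry per page when they are scattered,
     * which is where a bitmap wins. */
    struct dirty_range {
            __u64 start;    /* first page of a contiguous dirty run */
            __u64 len;      /* length of the run */
    };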
Again, I can't comment much on that API, but since there're a bunch of other developers looking at that and they're also potential future users, I'll trust their judgement and just focus more on the other side of things.
Thanks,