On Fri, 17 Mar 2023 at 13:44, Muhammad Usama Anjum <usama.anjum@collabora.com> wrote:
On 3/17/23 2:28 AM, Michał Mirosław wrote:
On Thu, 16 Mar 2023 at 18:53, Muhammad Usama Anjum <usama.anjum@collabora.com> wrote:
On 3/13/23 9:02 PM, Michał Mirosław wrote:
On Thu, 9 Mar 2023 at 14:58, Muhammad Usama Anjum <usama.anjum@collabora.com> wrote:
[...]
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
[...]
+static int pagemap_scan_output(bool wt, bool file, bool pres, bool swap,
[...]
The `cur->len` test seems redundant: is it possible to have `cur->start == addr` in that case (I guess it would have to get `n_pages == 0` in an earlier invocation)?
No, neither would work. cur->len == 0 means that cur holds only garbage, so its validity has to be established via cur->len before any of the other checks make sense. Also, cur->start can never be equal to addr, since we walk the pages in increasing address order. What we want to check here is whether the current address continues the previously stored data, by computing the end address of the last stored region (cur->start + cur->len * PAGE_SIZE).
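For the record, the check being described is roughly the following (a sketch only, not the patch code; the helper name is made up, the field names follow this series):

/* A new range starting at addr can be merged into the buffered entry
 * only if that entry is valid (len != 0), ends exactly at addr, and
 * carries the same flags.
 */
static bool can_merge(const struct page_region *cur,
		      unsigned long addr, unsigned long bitmap)
{
	return cur->len &&
	       cur->start + cur->len * PAGE_SIZE == addr &&
	       cur->bitmap == bitmap;
}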
If cur->len == 0, then it doesn't matter if it gets merged or not - it can be filtered out during the flush (see below). [...]
} else if ((!p->vec_index) ||
((p->vec_index + 1) < p->vec_len)) {
Can you explain this test? Why not just `p->vec_index < p->vec_len`? Or better:
if (vec_index >= p->vec_len) return -ENOSPC;
No, that won't work. Let's leave it as it is. :)
It has gotten somewhat complex, but I don't have any other way to make it simpler that still works. First note the following points:
1) We walk over 512 pages or 1 THP at a time, so as not to over-allocate memory in the kernel (p->vec).
2) We also want to merge consecutive pages having the same flags into a single struct page_region. The p->vec of the current walk may merge with the next walk, so we cannot write to user memory until we have the results of the next walk.
So the most recent data is put into p->cur. When data that doesn't merge with it is found, we move p->cur to p->vec[p->vec_index] inside the page walk. After the page walk, p->vec[0 .. p->vec_index] is moved to arg->vec. After all the walks are over, we move p->cur to arg->vec. This completes the data transfer to the user buffer.
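Schematically, the per-range step described above looks like this (a simplified model of the flow, not the literal patch code; stage_range() is an invented name):

/* Handle one matching range found during the walk: either extend the
 * buffered entry p->cur, or flush p->cur into the per-walk staging
 * array p->vec and start a new current entry.
 */
static void stage_range(struct pagemap_scan_private *p,
			unsigned long start, unsigned long n_pages,
			unsigned long bitmap)
{
	if (p->cur.len && p->cur.bitmap == bitmap &&
	    p->cur.start + p->cur.len * PAGE_SIZE == start) {
		p->cur.len += n_pages;			/* merge */
		return;
	}
	if (p->cur.len)
		p->vec[p->vec_index++] = p->cur;	/* flush previous */
	p->cur.start = start;
	p->cur.len = n_pages;
	p->cur.bitmap = bitmap;
}

p->cur is held back across walks because the first range of the next walk may still merge into it; only the final flush moves it to arg->vec.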
[...]
I'm sorry that it has gotten this complex. It was way simpler when we were walking over all the memory in one go, but that required an unbounded amount of kernel memory, which we don't want.
[...]
I've gone through and hopefully understood the code. I'm not sure this needs to be so complicated: when traversing a single PMD you can always copy p->cur to p->vec[p->vec_index++], because you can have at most pages_per_PMD non-merges (in the worst case the last page is always left in p->cur and the whole p->vec is used). After each PMD, p->vec needs a flush if p->vec_index > 0, skipping the dummy entry at the front (len == 0; if present). (This is mostly how it is implemented now, but I propose to remove the "overflow" check and do the starting-guard removal only once per PMD.)
Sorry, I'm unable to understand: where should the guard be removed?
Instead of checking for it in pagemap_scan_output() for each page you can skip it in do_pagemap_cmd() when doing the flush.
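Something along these lines, I guess (a rough sketch of that proposal; names mirror the patch where possible, the rest is illustrative):

/* Per-PMD flush: copy the staged entries to the user buffer, skipping
 * a leading dummy entry (len == 0) here instead of testing for it on
 * every page in pagemap_scan_output().
 */
static int pagemap_scan_flush(struct pagemap_scan_private *p,
			      struct page_region __user *uvec,
			      unsigned long *uvec_index)
{
	struct page_region *first = p->vec;
	unsigned long n = p->vec_index;

	if (n && !first->len) {		/* drop the front guard */
		first++;
		n--;
	}
	if (!n)
		return 0;
	if (copy_to_user(uvec + *uvec_index, first, n * sizeof(*first)))
		return -EFAULT;
	*uvec_index += n;
	p->vec_index = 0;
	return 0;
}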
BTW#2, I think the ENOSPC return in pagemap_scan_output() should happen later - only if the pages would match and that caused the count to exceed the limit. For THP n_pages should be truncated to the limit (and ENOSPC returned right away) only after the pages were verified to match.
We have 2 counters here:
- p->max_pages can optionally be set to find only the first N pages of
interest; p->found_pages counts these. We need to return early once this
page limit is reached.
- p->vec_index tracks how much of the output buffer array (of size
p->vec_len) has been used.
I think I get how the limits are supposed to work, but I also think the implementation is not optimal. An example (assuming max_pages = 1 and vec_len = 1):
- a matching page has been found;
- a second, non-matching page is tried, but results in an immediate -ENOSPC.
-> In this case I'd expect the early return to happen just after the first page is found, so that no spurious -ENOSPC is returned.
A similar problem occurs for a hugepage: when the limit is hit (found == max_pages, n_pages possibly truncated), the scan still continues until the next page / PMD.
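In other words, I'd expect the ordering to be roughly (a sketch only; the real signature passes the wt/file/pres/swap bools, and pages_match() stands in for the required/anyof/excluded mask checks):

/* Match first, then account against the page limit; a negative return
 * stops the walk, as in the patch.
 */
static int pagemap_scan_output(unsigned long bitmap, unsigned long addr,
			       unsigned long n_pages,
			       struct pagemap_scan_private *p)
{
	int ret = 0;

	if (!pages_match(bitmap, p))
		return 0;	/* non-matching pages never cause -ENOSPC */

	if (p->max_pages && p->found_pages + n_pages >= p->max_pages) {
		n_pages = p->max_pages - p->found_pages; /* truncate THP */
		ret = -ENOSPC;	/* stop right after storing these pages */
	}
	p->found_pages += n_pages;
	/* ... merge/store [addr, addr + n_pages * PAGE_SIZE) ... */
	return ret;
}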
[...]
if (!arg->required_mask && !arg->anyof_mask &&
!arg->excluded_mask)
return false;
Is there an assumption in the code that those checks are needed? I'd expect that no selection criteria makes a valid page set?
In my view, a selection criterion must be specified for the ioctl to work. If there is no criterion, the user should go and read the pagemap file directly. So the assumption is that at least one selection criterion must be specified.
Yes. I'm not sure we need to prevent multiple ways of doing the same thing. But doesn't pagemap reading lack the range aggregation feature?
Yeah, correct. But note that we support only 4 selected flags in this ioctl, not all pagemap flags, so it is useful only for users who depend on just these 4 flags. Our pagemap_ioctl interface is not generic enough to cater to everyone; its interface is specific, and we are adding only those cases which are of interest to us. So if someone wants range aggregation from pagemap_ioctl, they'll need to add that flag to the ioctl first. Once the ioctl supports it, they can specify the selection criteria etc.
The available flag set is not a problem. An example use case: dumping the memory state for debugging. ioctl(return_mask=ALL) returns a conveniently compact vector of ranges of pages that are actually used by the process (not merely having the virtual space reserved). This actually helps when dumping processes that use tools like AddressSanitizer, which create huge sparse mappings.
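From userspace, that use case would look roughly like this (a sketch against this series' proposed uAPI, assuming the no-criteria check above is relaxed; names and return semantics may change between revisions):

struct page_region regions[1024];
struct pm_scan_arg arg = {
	.size = sizeof(arg),
	.start = start_vaddr,		/* range to scan */
	.len = scan_len,
	.vec = (unsigned long)regions,
	.vec_len = 1024,
	/* no required/anyof/excluded masks: select every page */
	.return_mask = PAGE_IS_WRITTEN | PAGE_IS_FILE |
		       PAGE_IS_PRESENT | PAGE_IS_SWAPPED,
};
/* pagemap_fd is an open fd on /proc/<pid>/pagemap */
long ret = ioctl(pagemap_fd, PAGEMAP_SCAN, &arg);
/* on success, regions[] holds merged ranges of in-use pages */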
Best Regards
Michał Mirosław