On Thu, Jul 27, 2023 at 2:37 AM Muhammad Usama Anjum <usama.anjum@collabora.com> wrote:
<snip>
+static long do_pagemap_scan(struct mm_struct *mm, unsigned long uarg)
+{
+	unsigned long walk_start, walk_end;
+	struct mmu_notifier_range range;
+	struct pagemap_scan_private p;
+	size_t n_ranges_out = 0;
+	int ret;
+
+	memset(&p, 0, sizeof(p));
+	ret = pagemap_scan_get_args(&p.arg, uarg);
+	if (ret)
+		return ret;
+
+	ret = pagemap_scan_init_bounce_buffer(&p);
+	if (ret)
+		return ret;
+
+	/* Protection change for the range is going to happen. */
+	if (p.arg.flags & PM_SCAN_WP_MATCHING) {
+		mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_VMA, 0,
+					mm, p.arg.start, p.arg.end);
+		mmu_notifier_invalidate_range_start(&range);
+	}
+
+	walk_start = walk_end = p.arg.start;
+	for (; walk_end != p.arg.end; walk_start = walk_end) {
+		int n_out;
+
+		walk_end = min_t(unsigned long,
+				 (walk_start + PAGEMAP_WALK_SIZE) & PAGEMAP_WALK_MASK,
+				 p.arg.end);
This approach has performance implications. A basic test program that scans its own address space takes around 20-30 seconds, even though it has just a few small mappings. The first optimization that comes to mind is to remove the PAGEMAP_WALK_SIZE limit and instead stop walk_page_range() when the bounce buffer is full. After the buffer has been drained, walk_page_range() can be restarted from the address where it stopped.
The test program and perf data can be found here: https://gist.github.com/avagin/c5a22f3c78f8cb34281602dfe9c43d10
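Something along these lines (just a rough, untested sketch; PM_SCAN_BUFFER_FULL
and p.walk_restart are made-up names for whatever sentinel the callbacks would
return when the buffer fills and for the address where the walk stopped):

	walk_start = p.arg.start;
	while (walk_start < p.arg.end) {
		ret = mmap_read_lock_killable(mm);
		if (ret)
			break;
		ret = walk_page_range(mm, walk_start, p.arg.end,
				      &pagemap_scan_ops, &p);
		mmap_read_unlock(mm);

		n_out = pagemap_scan_flush_buffer(&p);
		if (n_out < 0) {
			ret = n_out;
			break;
		}
		n_ranges_out += n_out;

		if (ret != PM_SCAN_BUFFER_FULL)
			break;
		/* The buffer has been drained; resume the walk. */
		walk_start = p.walk_restart;
		ret = 0;
	}

This way each walk_page_range() call covers as much of the range as the bounce
buffer allows, instead of being capped at PAGEMAP_WALK_SIZE on every iteration.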
+		ret = mmap_read_lock_killable(mm);
+		if (ret)
+			break;
+		ret = walk_page_range(mm, walk_start, walk_end,
+				      &pagemap_scan_ops, &p);
+		mmap_read_unlock(mm);
+
+		n_out = pagemap_scan_flush_buffer(&p);
+		if (n_out < 0)
+			ret = n_out;
+		else
+			n_ranges_out += n_out;
+
+		if (ret)
+			break;
+	}
Thanks,
Andrei