Reading /proc/pid/maps requires read-locking mmap_lock, which prevents any other task from concurrently modifying the address space. This guarantees coherent reporting of virtual address ranges; however, it can block important updates from happening. Often /proc/pid/maps readers are low-priority monitoring tasks, and when they block high-priority tasks the result is priority inversion.
Locking the entire address space is required to present a fully coherent picture of the address space; however, even the current implementation does not strictly guarantee that, because it outputs vmas in page-size chunks and drops mmap_lock between chunks. Address space modifications are possible while mmap_lock is dropped, and userspace reading the content is expected to deal with possible concurrent modifications. Given these relaxed rules, holding mmap_lock is not strictly needed as long as we can guarantee that a concurrently modified vma is reported either in its original form or after it was modified.
This patchset switches from holding mmap_lock while reading /proc/pid/maps to taking per-vma locks as we walk the vma tree. This reduces contention with tasks modifying the address space, because they now contend for individual vmas rather than for the entire address space. The same is done for the PROCMAP_QUERY ioctl, which locks only the vma that falls into the requested range instead of the entire address space. The previous version of this patchset [1] tried to perform /proc/pid/maps reading under RCU; however, its implementation was quite complex and its results worse than this version's, because it still relied on mmap_lock speculation, which retries if any part of the address space gets modified. The new implementation is both simpler and results in less contention. Note that a similar approach would not work for reading /proc/pid/smaps, as it also walks the page table, and that is not RCU-safe.
Paul McKenney designed a test [2] to measure mmap/munmap latencies while concurrently reading /proc/pid/maps. The test has a pair of processes scanning /proc/PID/maps, and another process unmapping and remapping 4K pages from a 128MB range of anonymous memory. At the end of each 10-second run, the latency of each mmap() or munmap() operation is measured, and for each run the maximum and mean latency is printed. The map/unmap process is started first, its PID is passed to the scanners, and then the map/unmap process waits until both scanners are running before starting its timed test. The scanners keep scanning until the specified /proc/PID/maps file disappears. This test registered close to a 10x improvement in update latencies:
Before the change:
./run-proc-vs-map.sh --nsamples 100 --rawdata -- --busyduration 2
0.011 0.008 0.455
0.011 0.008 0.472
0.011 0.008 0.535
0.011 0.009 0.545
...
0.011 0.014 2.875
0.011 0.014 2.913
0.011 0.014 3.007
0.011 0.015 3.018
After the change:
./run-proc-vs-map.sh --nsamples 100 --rawdata -- --busyduration 2
0.006 0.005 0.036
0.006 0.005 0.039
0.006 0.005 0.039
0.006 0.005 0.039
...
0.006 0.006 0.403
0.006 0.006 0.474
0.006 0.006 0.479
0.006 0.006 0.498
The patchset also adds a number of tests to check /proc/pid/maps data coherency. They are designed to detect any unexpected data tearing while performing some common address space modifications (vma split, resize and remap). Even before these changes, reading /proc/pid/maps could yield inconsistent data, because the file is read page by page with mmap_lock dropped between pages. One user-visible inconsistency is the same vma being printed twice: once before it was modified and once after. For example, if a vma was extended, it might be found and reported twice. What is not expected is a gap where there should have been a vma, both before and after modification. This patchset increases the chances of such tearing, therefore it's even more important now to test for unexpected inconsistencies.
[1] https://lore.kernel.org/all/20250418174959.1431962-1-surenb@google.com/
[2] https://github.com/paulmckrcu/proc-mmap_sem-test
Suren Baghdasaryan (7):
  selftests/proc: add /proc/pid/maps tearing from vma split test
  selftests/proc: extend /proc/pid/maps tearing test to include vma resizing
  selftests/proc: extend /proc/pid/maps tearing test to include vma remapping
  selftests/proc: test PROCMAP_QUERY ioctl while vma is concurrently modified
  selftests/proc: add verbose mode for tests to facilitate debugging
  mm/maps: read proc/pid/maps under per-vma lock
  mm/maps: execute PROCMAP_QUERY ioctl under per-vma locks
 fs/proc/internal.h                         |   6 +
 fs/proc/task_mmu.c                         | 233 +++++-
 tools/testing/selftests/proc/proc-pid-vm.c | 793 ++++++++++++++++++++-
 3 files changed, 1011 insertions(+), 21 deletions(-)
base-commit: 2d0c297637e7d59771c1533847c666cdddc19884
The /proc/pid/maps file is generated page by page, with the mmap_lock released between pages. This can lead to inconsistent reads if the underlying vmas are concurrently modified. For instance, if a vma split or merge occurs at a page boundary while /proc/pid/maps is being read, the same vma might be seen twice: once before and once after the change. This duplication is considered acceptable for userspace handling. However, observing a "hole" where a vma should be (e.g., due to a vma being replaced and the space temporarily being empty) is unacceptable.
Implement a test that:
1. Forks a child process which continuously modifies its address space, specifically targeting a vma at the boundary between two pages.
2. The parent process repeatedly reads the child's /proc/pid/maps.
3. The parent process checks the last vma of the first page and the first vma of the second page for consistency, looking for the effects of vma splits or merges.
The test duration is configurable in seconds via the -d command-line parameter, to increase the likelihood of catching the race condition. The default test duration is 5 seconds.
Example Command: proc-pid-vm -d 10
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 tools/testing/selftests/proc/proc-pid-vm.c | 430 ++++++++++++++++++++-
 1 file changed, 429 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/proc/proc-pid-vm.c b/tools/testing/selftests/proc/proc-pid-vm.c index d04685771952..6e3f06376a1f 100644 --- a/tools/testing/selftests/proc/proc-pid-vm.c +++ b/tools/testing/selftests/proc/proc-pid-vm.c @@ -27,6 +27,7 @@ #undef NDEBUG #include <assert.h> #include <errno.h> +#include <pthread.h> #include <sched.h> #include <signal.h> #include <stdbool.h> @@ -34,6 +35,7 @@ #include <stdio.h> #include <string.h> #include <stdlib.h> +#include <sys/mman.h> #include <sys/mount.h> #include <sys/types.h> #include <sys/stat.h> @@ -70,6 +72,8 @@ static void make_private_tmp(void) } }
+static unsigned long test_duration_sec = 5UL; +static int page_size; static pid_t pid = -1; static void ate(void) { @@ -281,11 +285,431 @@ static void vsyscall(void) } }
-int main(void) +/* /proc/pid/maps parsing routines */ +struct page_content { + char *data; + ssize_t size; +}; + +#define LINE_MAX_SIZE 256 + +struct line_content { + char text[LINE_MAX_SIZE]; + unsigned long start_addr; + unsigned long end_addr; +}; + +static void read_two_pages(int maps_fd, struct page_content *page1, + struct page_content *page2) +{ + ssize_t bytes_read; + + assert(lseek(maps_fd, 0, SEEK_SET) >= 0); + bytes_read = read(maps_fd, page1->data, page_size); + assert(bytes_read > 0 && bytes_read < page_size); + page1->size = bytes_read; + + bytes_read = read(maps_fd, page2->data, page_size); + assert(bytes_read > 0 && bytes_read < page_size); + page2->size = bytes_read; +} + +static void copy_first_line(struct page_content *page, char *first_line) +{ + char *pos = strchr(page->data, '\n'); + + strncpy(first_line, page->data, pos - page->data); + first_line[pos - page->data] = '\0'; +} + +static void copy_last_line(struct page_content *page, char *last_line) +{ + /* Get the last line in the first page */ + const char *end = page->data + page->size - 1; + /* skip last newline */ + const char *pos = end - 1; + + /* search previous newline */ + while (pos[-1] != '\n') + pos--; + strncpy(last_line, pos, end - pos); + last_line[end - pos] = '\0'; +} + +/* Read the last line of the first page and the first line of the second page */ +static void read_boundary_lines(int maps_fd, struct page_content *page1, + struct page_content *page2, + struct line_content *last_line, + struct line_content *first_line) +{ + read_two_pages(maps_fd, page1, page2); + + copy_last_line(page1, last_line->text); + copy_first_line(page2, first_line->text); + + assert(sscanf(last_line->text, "%lx-%lx", &last_line->start_addr, + &last_line->end_addr) == 2); + assert(sscanf(first_line->text, "%lx-%lx", &first_line->start_addr, + &first_line->end_addr) == 2); +} + +/* Thread synchronization routines */ +enum test_state { + INIT, + CHILD_READY, + PARENT_READY, + SETUP_READY, + 
SETUP_MODIFY_MAPS, + SETUP_MAPS_MODIFIED, + SETUP_RESTORE_MAPS, + SETUP_MAPS_RESTORED, + TEST_READY, + TEST_DONE, +}; + +struct vma_modifier_info; + +typedef void (*vma_modifier_op)(const struct vma_modifier_info *mod_info); +typedef void (*vma_mod_result_check_op)(struct line_content *mod_last_line, + struct line_content *mod_first_line, + struct line_content *restored_last_line, + struct line_content *restored_first_line); + +struct vma_modifier_info { + int vma_count; + void *addr; + int prot; + void *next_addr; + vma_modifier_op vma_modify; + vma_modifier_op vma_restore; + vma_mod_result_check_op vma_mod_check; + pthread_mutex_t sync_lock; + pthread_cond_t sync_cond; + enum test_state curr_state; + bool exit; + void *child_mapped_addr[]; +}; + +static void wait_for_state(struct vma_modifier_info *mod_info, enum test_state state) +{ + pthread_mutex_lock(&mod_info->sync_lock); + while (mod_info->curr_state != state) + pthread_cond_wait(&mod_info->sync_cond, &mod_info->sync_lock); + pthread_mutex_unlock(&mod_info->sync_lock); +} + +static void signal_state(struct vma_modifier_info *mod_info, enum test_state state) +{ + pthread_mutex_lock(&mod_info->sync_lock); + mod_info->curr_state = state; + pthread_cond_signal(&mod_info->sync_cond); + pthread_mutex_unlock(&mod_info->sync_lock); +} + +/* VMA modification routines */ +static void *child_vma_modifier(struct vma_modifier_info *mod_info) +{ + int prot = PROT_READ | PROT_WRITE; + int i; + + for (i = 0; i < mod_info->vma_count; i++) { + mod_info->child_mapped_addr[i] = mmap(NULL, page_size * 3, prot, + MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); + assert(mod_info->child_mapped_addr[i] != MAP_FAILED); + /* change protection in adjacent maps to prevent merging */ + prot ^= PROT_WRITE; + } + signal_state(mod_info, CHILD_READY); + wait_for_state(mod_info, PARENT_READY); + while (true) { + signal_state(mod_info, SETUP_READY); + wait_for_state(mod_info, SETUP_MODIFY_MAPS); + if (mod_info->exit) + break; + + 
mod_info->vma_modify(mod_info); + signal_state(mod_info, SETUP_MAPS_MODIFIED); + wait_for_state(mod_info, SETUP_RESTORE_MAPS); + mod_info->vma_restore(mod_info); + signal_state(mod_info, SETUP_MAPS_RESTORED); + + wait_for_state(mod_info, TEST_READY); + while (mod_info->curr_state != TEST_DONE) { + mod_info->vma_modify(mod_info); + mod_info->vma_restore(mod_info); + } + } + for (i = 0; i < mod_info->vma_count; i++) + munmap(mod_info->child_mapped_addr[i], page_size * 3); + + return NULL; +} + +static void stop_vma_modifier(struct vma_modifier_info *mod_info) +{ + wait_for_state(mod_info, SETUP_READY); + mod_info->exit = true; + signal_state(mod_info, SETUP_MODIFY_MAPS); +} + +static void capture_mod_pattern(int maps_fd, + struct vma_modifier_info *mod_info, + struct page_content *page1, + struct page_content *page2, + struct line_content *last_line, + struct line_content *first_line, + struct line_content *mod_last_line, + struct line_content *mod_first_line, + struct line_content *restored_last_line, + struct line_content *restored_first_line) +{ + signal_state(mod_info, SETUP_MODIFY_MAPS); + wait_for_state(mod_info, SETUP_MAPS_MODIFIED); + + /* Copy last line of the first page and first line of the last page */ + read_boundary_lines(maps_fd, page1, page2, mod_last_line, mod_first_line); + + signal_state(mod_info, SETUP_RESTORE_MAPS); + wait_for_state(mod_info, SETUP_MAPS_RESTORED); + + /* Copy last line of the first page and first line of the last page */ + read_boundary_lines(maps_fd, page1, page2, restored_last_line, restored_first_line); + + mod_info->vma_mod_check(mod_last_line, mod_first_line, + restored_last_line, restored_first_line); + + /* + * The content of these lines after modify+resore should be the same + * as the original. 
+ */ + assert(strcmp(restored_last_line->text, last_line->text) == 0); + assert(strcmp(restored_first_line->text, first_line->text) == 0); +} + +static inline void split_vma(const struct vma_modifier_info *mod_info) +{ + assert(mmap(mod_info->addr, page_size, mod_info->prot | PROT_EXEC, + MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, + -1, 0) != MAP_FAILED); +} + +static inline void merge_vma(const struct vma_modifier_info *mod_info) +{ + assert(mmap(mod_info->addr, page_size, mod_info->prot, + MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, + -1, 0) != MAP_FAILED); +} + +static inline void check_split_result(struct line_content *mod_last_line, + struct line_content *mod_first_line, + struct line_content *restored_last_line, + struct line_content *restored_first_line) +{ + /* Make sure vmas at the boundaries are changing */ + assert(strcmp(mod_last_line->text, restored_last_line->text) != 0); + assert(strcmp(mod_first_line->text, restored_first_line->text) != 0); +} + +static void test_maps_tearing_from_split(int maps_fd, + struct vma_modifier_info *mod_info, + struct page_content *page1, + struct page_content *page2, + struct line_content *last_line, + struct line_content *first_line) +{ + struct line_content split_last_line; + struct line_content split_first_line; + struct line_content restored_last_line; + struct line_content restored_first_line; + + wait_for_state(mod_info, SETUP_READY); + + /* re-read the file to avoid using stale data from previous test */ + read_boundary_lines(maps_fd, page1, page2, last_line, first_line); + + mod_info->vma_modify = split_vma; + mod_info->vma_restore = merge_vma; + mod_info->vma_mod_check = check_split_result; + + capture_mod_pattern(maps_fd, mod_info, page1, page2, last_line, first_line, + &split_last_line, &split_first_line, + &restored_last_line, &restored_first_line); + + /* Now start concurrent modifications for test_duration_sec */ + signal_state(mod_info, TEST_READY); + + struct line_content new_last_line; + struct line_content 
new_first_line; + struct timespec start_ts, end_ts; + + clock_gettime(CLOCK_MONOTONIC_COARSE, &start_ts); + do { + bool last_line_changed; + bool first_line_changed; + + read_boundary_lines(maps_fd, page1, page2, &new_last_line, &new_first_line); + + /* Check if we read vmas after split */ + if (!strcmp(new_last_line.text, split_last_line.text)) { + /* + * The vmas should be consistent with split results, + * however if vma was concurrently restored after a + * split, it can be reported twice (first the original + * split one, then the same vma but extended after the + * merge) because we found it as the next vma again. + * In that case new first line will be the same as the + * last restored line. + */ + assert(!strcmp(new_first_line.text, split_first_line.text) || + !strcmp(new_first_line.text, restored_last_line.text)); + } else { + /* The vmas should be consistent with merge results */ + assert(!strcmp(new_last_line.text, restored_last_line.text) && + !strcmp(new_first_line.text, restored_first_line.text)); + } + /* + * First and last lines should change in unison. If the last + * line changed then the first line should change as well and + * vice versa. + */ + last_line_changed = strcmp(new_last_line.text, last_line->text) != 0; + first_line_changed = strcmp(new_first_line.text, first_line->text) != 0; + assert(last_line_changed == first_line_changed); + + clock_gettime(CLOCK_MONOTONIC_COARSE, &end_ts); + } while (end_ts.tv_sec - start_ts.tv_sec < test_duration_sec); + + /* Signal the modifyer thread to stop and wait until it exits */ + signal_state(mod_info, TEST_DONE); +} + +static int test_maps_tearing(void) +{ + struct vma_modifier_info *mod_info; + pthread_mutexattr_t mutex_attr; + pthread_condattr_t cond_attr; + int shared_mem_size; + char fname[32]; + int vma_count; + int maps_fd; + int status; + pid_t pid; + + /* + * Have to map enough vmas for /proc/pid/maps to containt more than one + * page worth of vmas. 
Assume at least 32 bytes per line in maps output + */ + vma_count = page_size / 32 + 1; + shared_mem_size = sizeof(struct vma_modifier_info) + vma_count * sizeof(void *); + + /* map shared memory for communication with the child process */ + mod_info = (struct vma_modifier_info *)mmap(NULL, shared_mem_size, + PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS, -1, 0); + + assert(mod_info != MAP_FAILED); + + /* Initialize shared members */ + pthread_mutexattr_init(&mutex_attr); + pthread_mutexattr_setpshared(&mutex_attr, PTHREAD_PROCESS_SHARED); + assert(!pthread_mutex_init(&mod_info->sync_lock, &mutex_attr)); + pthread_condattr_init(&cond_attr); + pthread_condattr_setpshared(&cond_attr, PTHREAD_PROCESS_SHARED); + assert(!pthread_cond_init(&mod_info->sync_cond, &cond_attr)); + mod_info->vma_count = vma_count; + mod_info->curr_state = INIT; + mod_info->exit = false; + + pid = fork(); + if (!pid) { + /* Child process */ + child_vma_modifier(mod_info); + return 0; + } + + sprintf(fname, "/proc/%d/maps", pid); + maps_fd = open(fname, O_RDONLY); + assert(maps_fd != -1); + + /* Wait for the child to map the VMAs */ + wait_for_state(mod_info, CHILD_READY); + + /* Read first two pages */ + struct page_content page1; + struct page_content page2; + + page1.data = malloc(page_size); + assert(page1.data); + page2.data = malloc(page_size); + assert(page2.data); + + struct line_content last_line; + struct line_content first_line; + + read_boundary_lines(maps_fd, &page1, &page2, &last_line, &first_line); + + /* + * Find the addresses corresponding to the last line in the first page + * and the first line in the last page. 
+ */ + mod_info->addr = NULL; + mod_info->next_addr = NULL; + for (int i = 0; i < mod_info->vma_count; i++) { + if (mod_info->child_mapped_addr[i] == (void *)last_line.start_addr) { + mod_info->addr = mod_info->child_mapped_addr[i]; + mod_info->prot = PROT_READ; + /* Even VMAs have write permission */ + if ((i % 2) == 0) + mod_info->prot |= PROT_WRITE; + } else if (mod_info->child_mapped_addr[i] == (void *)first_line.start_addr) { + mod_info->next_addr = mod_info->child_mapped_addr[i]; + } + + if (mod_info->addr && mod_info->next_addr) + break; + } + assert(mod_info->addr && mod_info->next_addr); + + signal_state(mod_info, PARENT_READY); + + test_maps_tearing_from_split(maps_fd, mod_info, &page1, &page2, + &last_line, &first_line); + + stop_vma_modifier(mod_info); + + free(page2.data); + free(page1.data); + + for (int i = 0; i < vma_count; i++) + munmap(mod_info->child_mapped_addr[i], page_size); + close(maps_fd); + waitpid(pid, &status, 0); + munmap(mod_info, shared_mem_size); + + return 0; +} + +int usage(void) +{ + fprintf(stderr, "Userland /proc/pid/{s}maps test cases\n"); + fprintf(stderr, " -d: Duration for time-consuming tests\n"); + fprintf(stderr, " -h: Help screen\n"); + exit(-1); +} + +int main(int argc, char **argv) { int pipefd[2]; int exec_fd; + int opt; + + while ((opt = getopt(argc, argv, "d:h")) != -1) { + if (opt == 'd') + test_duration_sec = strtoul(optarg, NULL, 0); + else if (opt == 'h') + usage(); + }
+ page_size = sysconf(_SC_PAGESIZE); vsyscall(); switch (g_vsyscall) { case 0: @@ -578,6 +1002,10 @@ int main(void) assert(err == -ENOENT); }
+ /* Test tearing in /proc/$PID/maps */ + if (test_maps_tearing()) + return 1; + return 0; } #else
Test that /proc/pid/maps does not report unexpected holes in the address space when a vma at the edge of the page is being concurrently remapped. This remapping results in the vma shrinking and expanding from under the reader. We should always see either the shrunk or the expanded (original) version of the vma.
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 tools/testing/selftests/proc/proc-pid-vm.c | 83 ++++++++++++++++++++++
 1 file changed, 83 insertions(+)
diff --git a/tools/testing/selftests/proc/proc-pid-vm.c b/tools/testing/selftests/proc/proc-pid-vm.c index 6e3f06376a1f..39842e4ec45f 100644 --- a/tools/testing/selftests/proc/proc-pid-vm.c +++ b/tools/testing/selftests/proc/proc-pid-vm.c @@ -583,6 +583,86 @@ static void test_maps_tearing_from_split(int maps_fd, signal_state(mod_info, TEST_DONE); }
+static inline void shrink_vma(const struct vma_modifier_info *mod_info) +{ + assert(mremap(mod_info->addr, page_size * 3, page_size, 0) != MAP_FAILED); +} + +static inline void expand_vma(const struct vma_modifier_info *mod_info) +{ + assert(mremap(mod_info->addr, page_size, page_size * 3, 0) != MAP_FAILED); +} + +static inline void check_shrink_result(struct line_content *mod_last_line, + struct line_content *mod_first_line, + struct line_content *restored_last_line, + struct line_content *restored_first_line) +{ + /* Make sure only the last vma of the first page is changing */ + assert(strcmp(mod_last_line->text, restored_last_line->text) != 0); + assert(strcmp(mod_first_line->text, restored_first_line->text) == 0); +} + +static void test_maps_tearing_from_resize(int maps_fd, + struct vma_modifier_info *mod_info, + struct page_content *page1, + struct page_content *page2, + struct line_content *last_line, + struct line_content *first_line) +{ + struct line_content shrunk_last_line; + struct line_content shrunk_first_line; + struct line_content restored_last_line; + struct line_content restored_first_line; + + wait_for_state(mod_info, SETUP_READY); + + /* re-read the file to avoid using stale data from previous test */ + read_boundary_lines(maps_fd, page1, page2, last_line, first_line); + + mod_info->vma_modify = shrink_vma; + mod_info->vma_restore = expand_vma; + mod_info->vma_mod_check = check_shrink_result; + + capture_mod_pattern(maps_fd, mod_info, page1, page2, last_line, first_line, + &shrunk_last_line, &shrunk_first_line, + &restored_last_line, &restored_first_line); + + /* Now start concurrent modifications for test_duration_sec */ + signal_state(mod_info, TEST_READY); + + struct line_content new_last_line; + struct line_content new_first_line; + struct timespec start_ts, end_ts; + + clock_gettime(CLOCK_MONOTONIC_COARSE, &start_ts); + do { + read_boundary_lines(maps_fd, page1, page2, &new_last_line, &new_first_line); + + /* Check if we read vmas after 
shrinking it */ + if (!strcmp(new_last_line.text, shrunk_last_line.text)) { + /* + * The vmas should be consistent with shrunk results, + * however if the vma was concurrently restored, it + * can be reported twice (first as shrunk one, then + * as restored one) because we found it as the next vma + * again. In that case new first line will be the same + * as the last restored line. + */ + assert(!strcmp(new_first_line.text, shrunk_first_line.text) || + !strcmp(new_first_line.text, restored_last_line.text)); + } else { + /* The vmas should be consistent with the original/restored state */ + assert(!strcmp(new_last_line.text, restored_last_line.text) && + !strcmp(new_first_line.text, restored_first_line.text)); + } + clock_gettime(CLOCK_MONOTONIC_COARSE, &end_ts); + } while (end_ts.tv_sec - start_ts.tv_sec < test_duration_sec); + + /* Signal the modifier thread to stop and wait until it exits */ + signal_state(mod_info, TEST_DONE); +} + static int test_maps_tearing(void) { struct vma_modifier_info *mod_info; @@ -674,6 +754,9 @@ static int test_maps_tearing(void) test_maps_tearing_from_split(maps_fd, mod_info, &page1, &page2, &last_line, &first_line);
+ test_maps_tearing_from_resize(maps_fd, mod_info, &page1, &page2, + &last_line, &first_line); + stop_vma_modifier(mod_info);
free(page2.data);
Test that /proc/pid/maps does not report unexpected holes in the address space when we concurrently remap a part of one vma into the middle of another vma. This remapping results in the destination vma being split into three parts, with the part in the middle then being patched back, all done concurrently from under the reader. We should always see either the original vma or the split one, with no holes.
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 tools/testing/selftests/proc/proc-pid-vm.c | 92 ++++++++++++++++++++++
 1 file changed, 92 insertions(+)
diff --git a/tools/testing/selftests/proc/proc-pid-vm.c b/tools/testing/selftests/proc/proc-pid-vm.c index 39842e4ec45f..1aef2db7e893 100644 --- a/tools/testing/selftests/proc/proc-pid-vm.c +++ b/tools/testing/selftests/proc/proc-pid-vm.c @@ -663,6 +663,95 @@ static void test_maps_tearing_from_resize(int maps_fd, signal_state(mod_info, TEST_DONE); }
+static inline void remap_vma(const struct vma_modifier_info *mod_info) +{ + /* + * Remap the last page of the next vma into the middle of the vma. + * This splits the current vma and the first and middle parts (the + * parts at lower addresses) become the last vma observed in the + * first page and the first vma observed in the last page. + */ + assert(mremap(mod_info->next_addr + page_size * 2, page_size, + page_size, MREMAP_FIXED | MREMAP_MAYMOVE | MREMAP_DONTUNMAP, + mod_info->addr + page_size) != MAP_FAILED); +} + +static inline void patch_vma(const struct vma_modifier_info *mod_info) +{ + assert(!mprotect(mod_info->addr + page_size, page_size, + mod_info->prot)); +} + +static inline void check_remap_result(struct line_content *mod_last_line, + struct line_content *mod_first_line, + struct line_content *restored_last_line, + struct line_content *restored_first_line) +{ + /* Make sure vmas at the boundaries are changing */ + assert(strcmp(mod_last_line->text, restored_last_line->text) != 0); + assert(strcmp(mod_first_line->text, restored_first_line->text) != 0); +} + +static void test_maps_tearing_from_remap(int maps_fd, + struct vma_modifier_info *mod_info, + struct page_content *page1, + struct page_content *page2, + struct line_content *last_line, + struct line_content *first_line) +{ + struct line_content remapped_last_line; + struct line_content remapped_first_line; + struct line_content restored_last_line; + struct line_content restored_first_line; + + wait_for_state(mod_info, SETUP_READY); + + /* re-read the file to avoid using stale data from previous test */ + read_boundary_lines(maps_fd, page1, page2, last_line, first_line); + + mod_info->vma_modify = remap_vma; + mod_info->vma_restore = patch_vma; + mod_info->vma_mod_check = check_remap_result; + + capture_mod_pattern(maps_fd, mod_info, page1, page2, last_line, first_line, + &remapped_last_line, &remapped_first_line, + &restored_last_line, &restored_first_line); + + /* Now start concurrent 
modifications for test_duration_sec */ + signal_state(mod_info, TEST_READY); + + struct line_content new_last_line; + struct line_content new_first_line; + struct timespec start_ts, end_ts; + + clock_gettime(CLOCK_MONOTONIC_COARSE, &start_ts); + do { + read_boundary_lines(maps_fd, page1, page2, &new_last_line, &new_first_line); + + /* Check if we read vmas after remapping it */ + if (!strcmp(new_last_line.text, remapped_last_line.text)) { + /* + * The vmas should be consistent with remap results, + * however if the vma was concurrently restored, it + * can be reported twice (first as split one, then + * as restored one) because we found it as the next vma + * again. In that case new first line will be the same + * as the last restored line. + */ + assert(!strcmp(new_first_line.text, remapped_first_line.text) || + !strcmp(new_first_line.text, restored_last_line.text)); + } else { + /* The vmas should be consistent with the original/restored state */ + assert(!strcmp(new_last_line.text, restored_last_line.text) && + !strcmp(new_first_line.text, restored_first_line.text)); + } + clock_gettime(CLOCK_MONOTONIC_COARSE, &end_ts); + } while (end_ts.tv_sec - start_ts.tv_sec < test_duration_sec); + + /* Signal the modifier thread to stop and wait until it exits */ + signal_state(mod_info, TEST_DONE); +} + static int test_maps_tearing(void) { struct vma_modifier_info *mod_info; @@ -757,6 +846,9 @@ static int test_maps_tearing(void) test_maps_tearing_from_resize(maps_fd, mod_info, &page1, &page2, &last_line, &first_line);
+ test_maps_tearing_from_remap(maps_fd, mod_info, &page1, &page2, + &last_line, &first_line); + stop_vma_modifier(mod_info);
free(page2.data);
Extend /proc/pid/maps tearing test to verify PROCMAP_QUERY ioctl operation correctness while the vma is being concurrently modified.
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 tools/testing/selftests/proc/proc-pid-vm.c | 60 ++++++++++++++++++++++
 1 file changed, 60 insertions(+)
diff --git a/tools/testing/selftests/proc/proc-pid-vm.c b/tools/testing/selftests/proc/proc-pid-vm.c index 1aef2db7e893..b582f40851fb 100644 --- a/tools/testing/selftests/proc/proc-pid-vm.c +++ b/tools/testing/selftests/proc/proc-pid-vm.c @@ -486,6 +486,21 @@ static void capture_mod_pattern(int maps_fd, assert(strcmp(restored_first_line->text, first_line->text) == 0); }
+static void query_addr_at(int maps_fd, void *addr, + unsigned long *vma_start, unsigned long *vma_end) +{ + struct procmap_query q; + + memset(&q, 0, sizeof(q)); + q.size = sizeof(q); + /* Find the VMA at the split address */ + q.query_addr = (unsigned long long)addr; + q.query_flags = 0; + assert(!ioctl(maps_fd, PROCMAP_QUERY, &q)); + *vma_start = q.vma_start; + *vma_end = q.vma_end; +} + static inline void split_vma(const struct vma_modifier_info *mod_info) { assert(mmap(mod_info->addr, page_size, mod_info->prot | PROT_EXEC, @@ -546,6 +561,8 @@ static void test_maps_tearing_from_split(int maps_fd, do { bool last_line_changed; bool first_line_changed; + unsigned long vma_start; + unsigned long vma_end;
read_boundary_lines(maps_fd, page1, page2, &new_last_line, &new_first_line);
@@ -576,6 +593,19 @@ static void test_maps_tearing_from_split(int maps_fd, first_line_changed = strcmp(new_first_line.text, first_line->text) != 0; assert(last_line_changed == first_line_changed);
+ /* Check if PROCMAP_QUERY ioctl() finds the right VMA */ + query_addr_at(maps_fd, mod_info->addr + page_size, + &vma_start, &vma_end); + /* + * The vma at the split address can be either the same as + * original one (if read before the split) or the same as the + * first line in the second page (if read after the split). + */ + assert((vma_start == last_line->start_addr && + vma_end == last_line->end_addr) || + (vma_start == split_first_line.start_addr && + vma_end == split_first_line.end_addr)); + clock_gettime(CLOCK_MONOTONIC_COARSE, &end_ts); } while (end_ts.tv_sec - start_ts.tv_sec < test_duration_sec);
@@ -637,6 +667,9 @@ static void test_maps_tearing_from_resize(int maps_fd,
clock_gettime(CLOCK_MONOTONIC_COARSE, &start_ts); do { + unsigned long vma_start; + unsigned long vma_end; + read_boundary_lines(maps_fd, page1, page2, &new_last_line, &new_first_line);
/* Check if we read vmas after shrinking it */ @@ -656,6 +689,17 @@ static void test_maps_tearing_from_resize(int maps_fd, assert(!strcmp(new_last_line.text, restored_last_line.text) && !strcmp(new_first_line.text, restored_first_line.text)); } + + /* Check if PROCMAP_QUERY ioctl() finds the right VMA */ + query_addr_at(maps_fd, mod_info->addr, &vma_start, &vma_end); + /* + * The vma should stay at the same address and have either the + * original size of 3 pages or 1 page if read after shrinking. + */ + assert(vma_start == last_line->start_addr && + (vma_end - vma_start == page_size * 3 || + vma_end - vma_start == page_size)); + clock_gettime(CLOCK_MONOTONIC_COARSE, &end_ts); } while (end_ts.tv_sec - start_ts.tv_sec < test_duration_sec);
@@ -726,6 +770,9 @@ static void test_maps_tearing_from_remap(int maps_fd,
clock_gettime(CLOCK_MONOTONIC_COARSE, &start_ts); do { + unsigned long vma_start; + unsigned long vma_end; + read_boundary_lines(maps_fd, page1, page2, &new_last_line, &new_first_line);
 		/* Check if we read vmas after remapping it */
@@ -745,6 +792,19 @@ static void test_maps_tearing_from_remap(int maps_fd,
 			assert(!strcmp(new_last_line.text, restored_last_line.text) &&
 			       !strcmp(new_first_line.text, restored_first_line.text));
 		}
+
+		/* Check if PROCMAP_QUERY ioctl() finds the right VMA */
+		query_addr_at(maps_fd, mod_info->addr + page_size, &vma_start, &vma_end);
+		/*
+		 * The vma should either stay at the same address and have the
+		 * original size of 3 pages or we should find the remapped vma
+		 * at the remap destination address with size of 1 page.
+		 */
+		assert((vma_start == last_line->start_addr &&
+			vma_end - vma_start == page_size * 3) ||
+		       (vma_start == last_line->start_addr + page_size &&
+			vma_end - vma_start == page_size));
+
 		clock_gettime(CLOCK_MONOTONIC_COARSE, &end_ts);
 	} while (end_ts.tv_sec - start_ts.tv_sec < test_duration_sec);
Add verbose mode to the proc tests to print debugging information. Usage: proc-pid-vm -v
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 tools/testing/selftests/proc/proc-pid-vm.c | 154 +++++++++++++++++++--
 1 file changed, 141 insertions(+), 13 deletions(-)
diff --git a/tools/testing/selftests/proc/proc-pid-vm.c b/tools/testing/selftests/proc/proc-pid-vm.c index b582f40851fb..97017f48cd70 100644 --- a/tools/testing/selftests/proc/proc-pid-vm.c +++ b/tools/testing/selftests/proc/proc-pid-vm.c @@ -73,6 +73,7 @@ static void make_private_tmp(void) }
static unsigned long test_duration_sec = 5UL; +static bool verbose; static int page_size; static pid_t pid = -1; static void ate(void) @@ -452,6 +453,99 @@ static void stop_vma_modifier(struct vma_modifier_info *mod_info) signal_state(mod_info, SETUP_MODIFY_MAPS); }
+static void print_first_lines(char *text, int nr)
+{
+	const char *end = text;
+
+	while (nr && (end = strchr(end, '\n')) != NULL) {
+		nr--;
+		end++;
+	}
+
+	if (end) {
+		int offs = end - text;
+
+		text[offs] = '\0';
+		printf("%s", text);
+		text[offs] = '\n';
+		printf("\n");
+	} else {
+		printf("%s", text);
+	}
+}
+
+static void print_last_lines(char *text, int nr)
+{
+	const char *start = text + strlen(text);
+
+	nr++; /* to ignore the last newline */
+	while (nr) {
+		while (start > text && *start != '\n')
+			start--;
+		nr--;
+		start--;
+	}
+	printf("%s", start);
+}
+
+static void print_boundaries(const char *title,
+			     struct page_content *page1,
+			     struct page_content *page2)
+{
+	if (!verbose)
+		return;
+
+	printf("%s", title);
+	/* Print 3 boundary lines from each page */
+	print_last_lines(page1->data, 3);
+	printf("-----------------page boundary-----------------\n");
+	print_first_lines(page2->data, 3);
+}
+
+static bool print_boundaries_on(bool condition, const char *title,
+				struct page_content *page1,
+				struct page_content *page2)
+{
+	if (verbose && condition)
+		print_boundaries(title, page1, page2);
+
+	return condition;
+}
+
+static void report_test_start(const char *name)
+{
+	if (verbose)
+		printf("==== %s ====\n", name);
+}
+
+static struct timespec print_ts;
+
+static void start_test_loop(struct timespec *ts)
+{
+	if (verbose)
+		print_ts.tv_sec = ts->tv_sec;
+}
+
+static void end_test_iteration(struct timespec *ts)
+{
+	if (!verbose)
+		return;
+
+	/* Update every second */
+	if (print_ts.tv_sec == ts->tv_sec)
+		return;
+
+	printf(".");
+	fflush(stdout);
+	print_ts.tv_sec = ts->tv_sec;
+}
+
+static void end_test_loop(void)
+{
+	if (verbose)
+		printf("\n");
+}
+
 static void capture_mod_pattern(int maps_fd,
 				struct vma_modifier_info *mod_info,
 				struct page_content *page1,
@@ -463,18 +557,24 @@ static void capture_mod_pattern(int maps_fd,
 				struct line_content *restored_last_line,
 				struct line_content *restored_first_line)
 {
+	print_boundaries("Before modification", page1, page2);
+
 	signal_state(mod_info, SETUP_MODIFY_MAPS);
 	wait_for_state(mod_info, SETUP_MAPS_MODIFIED);
/* Copy last line of the first page and first line of the last page */ read_boundary_lines(maps_fd, page1, page2, mod_last_line, mod_first_line);
+ print_boundaries("After modification", page1, page2); + signal_state(mod_info, SETUP_RESTORE_MAPS); wait_for_state(mod_info, SETUP_MAPS_RESTORED);
/* Copy last line of the first page and first line of the last page */ read_boundary_lines(maps_fd, page1, page2, restored_last_line, restored_first_line);
+ print_boundaries("After restore", page1, page2); + mod_info->vma_mod_check(mod_last_line, mod_first_line, restored_last_line, restored_first_line);
@@ -546,6 +646,7 @@ static void test_maps_tearing_from_split(int maps_fd, mod_info->vma_restore = merge_vma; mod_info->vma_mod_check = check_split_result;
+ report_test_start("Tearing from split"); capture_mod_pattern(maps_fd, mod_info, page1, page2, last_line, first_line, &split_last_line, &split_first_line, &restored_last_line, &restored_first_line); @@ -558,6 +659,7 @@ static void test_maps_tearing_from_split(int maps_fd, struct timespec start_ts, end_ts;
clock_gettime(CLOCK_MONOTONIC_COARSE, &start_ts); + start_test_loop(&start_ts); do { bool last_line_changed; bool first_line_changed; @@ -577,12 +679,17 @@ static void test_maps_tearing_from_split(int maps_fd, * In that case new first line will be the same as the * last restored line. */ - assert(!strcmp(new_first_line.text, split_first_line.text) || - !strcmp(new_first_line.text, restored_last_line.text)); + assert(!print_boundaries_on( + strcmp(new_first_line.text, split_first_line.text) && + strcmp(new_first_line.text, restored_last_line.text), + "Split result invalid", page1, page2)); + } else { /* The vmas should be consistent with merge results */ - assert(!strcmp(new_last_line.text, restored_last_line.text) && - !strcmp(new_first_line.text, restored_first_line.text)); + assert(!print_boundaries_on( + strcmp(new_last_line.text, restored_last_line.text) || + strcmp(new_first_line.text, restored_first_line.text), + "Merge result invalid", page1, page2)); } /* * First and last lines should change in unison. If the last @@ -607,7 +714,9 @@ static void test_maps_tearing_from_split(int maps_fd, vma_end == split_first_line.end_addr));
clock_gettime(CLOCK_MONOTONIC_COARSE, &end_ts); + end_test_iteration(&end_ts); } while (end_ts.tv_sec - start_ts.tv_sec < test_duration_sec); + end_test_loop();
 	/* Signal the modifier thread to stop and wait until it exits */
 	signal_state(mod_info, TEST_DONE);
@@ -654,6 +763,7 @@ static void test_maps_tearing_from_resize(int maps_fd,
 	mod_info->vma_restore = expand_vma;
 	mod_info->vma_mod_check = check_shrink_result;
+ report_test_start("Tearing from resize"); capture_mod_pattern(maps_fd, mod_info, page1, page2, last_line, first_line, &shrunk_last_line, &shrunk_first_line, &restored_last_line, &restored_first_line); @@ -666,6 +776,7 @@ static void test_maps_tearing_from_resize(int maps_fd, struct timespec start_ts, end_ts;
 	clock_gettime(CLOCK_MONOTONIC_COARSE, &start_ts);
+	start_test_loop(&start_ts);
 	do {
 		unsigned long vma_start;
 		unsigned long vma_end;
@@ -682,12 +793,16 @@ static void test_maps_tearing_from_resize(int maps_fd,
 			 * again. In that case new first line will be the same
 			 * as the last restored line.
 			 */
-			assert(!strcmp(new_first_line.text, shrunk_first_line.text) ||
-			       !strcmp(new_first_line.text, restored_last_line.text));
+			assert(!print_boundaries_on(
+					strcmp(new_first_line.text, shrunk_first_line.text) &&
+					strcmp(new_first_line.text, restored_last_line.text),
+					"Shrink result invalid", page1, page2));
 		} else {
 			/* The vmas should be consistent with the original/restored state */
-			assert(!strcmp(new_last_line.text, restored_last_line.text) &&
-			       !strcmp(new_first_line.text, restored_first_line.text));
+			assert(!print_boundaries_on(
+					strcmp(new_last_line.text, restored_last_line.text) ||
+					strcmp(new_first_line.text, restored_first_line.text),
+					"Expand result invalid", page1, page2));
 		}
 		/* Check if PROCMAP_QUERY ioctl() finds the right VMA */
@@ -701,7 +816,9 @@ static void test_maps_tearing_from_resize(int maps_fd,
 				vma_end - vma_start == page_size));
clock_gettime(CLOCK_MONOTONIC_COARSE, &end_ts); + end_test_iteration(&end_ts); } while (end_ts.tv_sec - start_ts.tv_sec < test_duration_sec); + end_test_loop();
 	/* Signal the modifier thread to stop and wait until it exits */
 	signal_state(mod_info, TEST_DONE);
@@ -757,6 +874,7 @@ static void test_maps_tearing_from_remap(int maps_fd,
 	mod_info->vma_restore = patch_vma;
 	mod_info->vma_mod_check = check_remap_result;
+ report_test_start("Tearing from remap"); capture_mod_pattern(maps_fd, mod_info, page1, page2, last_line, first_line, &remapped_last_line, &remapped_first_line, &restored_last_line, &restored_first_line); @@ -769,6 +887,7 @@ static void test_maps_tearing_from_remap(int maps_fd, struct timespec start_ts, end_ts;
 	clock_gettime(CLOCK_MONOTONIC_COARSE, &start_ts);
+	start_test_loop(&start_ts);
 	do {
 		unsigned long vma_start;
 		unsigned long vma_end;
@@ -785,12 +904,16 @@ static void test_maps_tearing_from_remap(int maps_fd,
 			 * again. In that case new first line will be the same
 			 * as the last restored line.
 			 */
-			assert(!strcmp(new_first_line.text, remapped_first_line.text) ||
-			       !strcmp(new_first_line.text, restored_last_line.text));
+			assert(!print_boundaries_on(
+					strcmp(new_first_line.text, remapped_first_line.text) &&
+					strcmp(new_first_line.text, restored_last_line.text),
+					"Remap result invalid", page1, page2));
 		} else {
 			/* The vmas should be consistent with the original/restored state */
-			assert(!strcmp(new_last_line.text, restored_last_line.text) &&
-			       !strcmp(new_first_line.text, restored_first_line.text));
+			assert(!print_boundaries_on(
+					strcmp(new_last_line.text, restored_last_line.text) ||
+					strcmp(new_first_line.text, restored_first_line.text),
+					"Remap restore result invalid", page1, page2));
 		}
 		/* Check if PROCMAP_QUERY ioctl() finds the right VMA */
@@ -806,7 +929,9 @@ static void test_maps_tearing_from_remap(int maps_fd,
 			vma_end - vma_start == page_size));
clock_gettime(CLOCK_MONOTONIC_COARSE, &end_ts); + end_test_iteration(&end_ts); } while (end_ts.tv_sec - start_ts.tv_sec < test_duration_sec); + end_test_loop();
 	/* Signal the modifier thread to stop and wait until it exits */
 	signal_state(mod_info, TEST_DONE);
@@ -927,6 +1052,7 @@ int usage(void)
 {
 	fprintf(stderr, "Userland /proc/pid/{s}maps test cases\n");
 	fprintf(stderr, "  -d: Duration for time-consuming tests\n");
+	fprintf(stderr, "  -v: Verbose mode\n");
 	fprintf(stderr, "  -h: Help screen\n");
 	exit(-1);
 }
@@ -937,9 +1063,11 @@ int main(int argc, char **argv)
 	int exec_fd;
 	int opt;
- while ((opt = getopt(argc, argv, "d:h")) != -1) { + while ((opt = getopt(argc, argv, "d:vh")) != -1) { if (opt == 'd') test_duration_sec = strtoul(optarg, NULL, 0); + else if (opt == 'v') + verbose = true; else if (opt == 'h') usage(); }
With maple_tree supporting vma tree traversal under RCU and per-vma locks, /proc/pid/maps can be read while holding individual vma locks instead of locking the entire address space. A completely lockless approach would be quite complex, the main issue being that get_vma_name() uses callbacks which might not work correctly with a stable vma copy and require the original (unstable) vma. When per-vma lock acquisition fails, we take the mmap_lock for reading, lock the vma, release the mmap_lock and continue. This guarantees that the reader makes forward progress even under lock contention. It does interfere with the writer, but only for the very short time needed to acquire the per-vma lock, and only when there was contention on the specific vma the reader is interested in. One case requires special handling: the vma may change between the time it was found and the time it got locked. A problematic scenario is a vma that got shrunk so that its start address moved higher in the address space, with a new vma installed at its old beginning:
reader found:              |--------VMA A--------|
VMA is modified:           |-VMA B-|----VMA A----|
reader locks modified VMA A
reader reports VMA A:      | gap   |----VMA A----|
This would result in reporting a gap in the address space that does not exist. To prevent that, we retry the lookup after locking the vma, but only when we identify a gap and detect that the address space was modified after we found the vma. This change is designed to reduce mmap_lock contention and to prevent a process reading /proc/pid/maps files (often a low-priority task, such as a monitoring/data-collection service) from blocking address space updates. Note that this change has a userspace-visible disadvantage: it allows for sub-page data tearing, as opposed to the previous mechanism where data tearing could happen only between pages of generated output. Since current userspace already considers data tearing between pages to be acceptable, we assume it will be able to handle sub-page data tearing as well.
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 fs/proc/internal.h |   6 ++
 fs/proc/task_mmu.c | 177 +++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 175 insertions(+), 8 deletions(-)
diff --git a/fs/proc/internal.h b/fs/proc/internal.h index 96122e91c645..3728c9012687 100644 --- a/fs/proc/internal.h +++ b/fs/proc/internal.h @@ -379,6 +379,12 @@ struct proc_maps_private { struct task_struct *task; struct mm_struct *mm; struct vma_iterator iter; + loff_t last_pos; +#ifdef CONFIG_PER_VMA_LOCK + bool mmap_locked; + unsigned int mm_wr_seq; + struct vm_area_struct *locked_vma; +#endif #ifdef CONFIG_NUMA struct mempolicy *task_mempolicy; #endif diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c index 27972c0749e7..36d883c4f394 100644 --- a/fs/proc/task_mmu.c +++ b/fs/proc/task_mmu.c @@ -127,13 +127,172 @@ static void release_task_mempolicy(struct proc_maps_private *priv) } #endif
-static struct vm_area_struct *proc_get_vma(struct proc_maps_private *priv, - loff_t *ppos) +#ifdef CONFIG_PER_VMA_LOCK + +static struct vm_area_struct *trylock_vma(struct proc_maps_private *priv, + struct vm_area_struct *vma, + unsigned long last_pos, + bool mm_unstable) +{ + vma = vma_start_read(priv->mm, vma); + if (IS_ERR_OR_NULL(vma)) + return NULL; + + /* Check if the vma we locked is the right one. */ + if (unlikely(vma->vm_mm != priv->mm)) + goto err; + + /* vma should not be ahead of the last search position. */ + if (unlikely(last_pos >= vma->vm_end)) + goto err; + + /* + * vma ahead of last search position is possible but we need to + * verify that it was not shrunk after we found it, and another + * vma has not been installed ahead of it. Otherwise we might + * observe a gap that should not be there. + */ + if (mm_unstable && last_pos < vma->vm_start) { + /* Verify only if the address space changed since vma lookup. */ + if ((priv->mm_wr_seq & 1) || + mmap_lock_speculate_retry(priv->mm, priv->mm_wr_seq)) { + vma_iter_init(&priv->iter, priv->mm, last_pos); + if (vma != vma_next(&priv->iter)) + goto err; + } + } + + priv->locked_vma = vma; + + return vma; +err: + vma_end_read(vma); + return NULL; +} + + +static void unlock_vma(struct proc_maps_private *priv) +{ + if (priv->locked_vma) { + vma_end_read(priv->locked_vma); + priv->locked_vma = NULL; + } +} + +static const struct seq_operations proc_pid_maps_op; + +static inline bool lock_content(struct seq_file *m, + struct proc_maps_private *priv) +{ + /* + * smaps and numa_maps perform page table walk, therefore require + * mmap_lock but maps can be read with locked vma only. 
+ */ + if (m->op != &proc_pid_maps_op) { + if (mmap_read_lock_killable(priv->mm)) + return false; + + priv->mmap_locked = true; + } else { + rcu_read_lock(); + priv->locked_vma = NULL; + priv->mmap_locked = false; + } + + return true; +} + +static inline void unlock_content(struct proc_maps_private *priv) +{ + if (priv->mmap_locked) { + mmap_read_unlock(priv->mm); + } else { + unlock_vma(priv); + rcu_read_unlock(); + } +} + +static struct vm_area_struct *get_next_vma(struct proc_maps_private *priv, + loff_t last_pos) { - struct vm_area_struct *vma = vma_next(&priv->iter); + struct vm_area_struct *vma; + int ret; + + if (priv->mmap_locked) + return vma_next(&priv->iter); + + unlock_vma(priv); + /* + * Record sequence number ahead of vma lookup. + * Odd seqcount means address space modification is in progress. + */ + mmap_lock_speculate_try_begin(priv->mm, &priv->mm_wr_seq); + vma = vma_next(&priv->iter); + if (!vma) + return NULL; + + vma = trylock_vma(priv, vma, last_pos, true); + if (vma) + return vma; + + /* Address space got modified, vma might be stale. Re-lock and retry */ + rcu_read_unlock(); + ret = mmap_read_lock_killable(priv->mm); + rcu_read_lock(); + if (ret) + return ERR_PTR(ret); + + /* Lookup the vma at the last position again under mmap_read_lock */ + vma_iter_init(&priv->iter, priv->mm, last_pos); + vma = vma_next(&priv->iter); + if (vma) { + vma = trylock_vma(priv, vma, last_pos, false); + WARN_ON(!vma); /* mm is stable, has to succeed */ + } + mmap_read_unlock(priv->mm); + + return vma; +} + +#else /* CONFIG_PER_VMA_LOCK */
+static inline bool lock_content(struct seq_file *m, + struct proc_maps_private *priv) +{ + return mmap_read_lock_killable(priv->mm) == 0; +} + +static inline void unlock_content(struct proc_maps_private *priv) +{ + mmap_read_unlock(priv->mm); +} + +static struct vm_area_struct *get_next_vma(struct proc_maps_private *priv, + loff_t last_pos) +{ + return vma_next(&priv->iter); +} + +#endif /* CONFIG_PER_VMA_LOCK */ + +static struct vm_area_struct *proc_get_vma(struct seq_file *m, loff_t *ppos) +{ + struct proc_maps_private *priv = m->private; + struct vm_area_struct *vma; + + vma = get_next_vma(priv, *ppos); + if (IS_ERR(vma)) + return vma; + + /* Store previous position to be able to restart if needed */ + priv->last_pos = *ppos; if (vma) { - *ppos = vma->vm_start; + /* + * Track the end of the reported vma to ensure position changes + * even if previous vma was merged with the next vma and we + * found the extended vma with the same vm_start. + */ + *ppos = vma->vm_end; } else { *ppos = -2UL; vma = get_gate_vma(priv->mm); @@ -163,19 +322,21 @@ static void *m_start(struct seq_file *m, loff_t *ppos) return NULL; }
- if (mmap_read_lock_killable(mm)) { + if (!lock_content(m, priv)) { mmput(mm); put_task_struct(priv->task); priv->task = NULL; return ERR_PTR(-EINTR); }
+ if (last_addr > 0) + *ppos = last_addr = priv->last_pos; vma_iter_init(&priv->iter, mm, last_addr); hold_task_mempolicy(priv); if (last_addr == -2UL) return get_gate_vma(mm);
- return proc_get_vma(priv, ppos); + return proc_get_vma(m, ppos); }
static void *m_next(struct seq_file *m, void *v, loff_t *ppos) @@ -184,7 +345,7 @@ static void *m_next(struct seq_file *m, void *v, loff_t *ppos) *ppos = -1UL; return NULL; } - return proc_get_vma(m->private, ppos); + return proc_get_vma(m, ppos); }
static void m_stop(struct seq_file *m, void *v) @@ -196,7 +357,7 @@ static void m_stop(struct seq_file *m, void *v) return;
release_task_mempolicy(priv); - mmap_read_unlock(mm); + unlock_content(priv); mmput(mm); put_task_struct(priv->task); priv->task = NULL;
Utilize per-vma locks to stabilize the vma after lookup without taking mmap_lock during PROCMAP_QUERY ioctl execution. We might still take mmap_lock for reading under contention, but only momentarily, to lock the vma. This change is designed to reduce mmap_lock contention and to prevent PROCMAP_QUERY ioctl calls from blocking address space updates.
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 fs/proc/task_mmu.c | 56 ++++++++++++++++++++++++++++++++++++----------
 1 file changed, 44 insertions(+), 12 deletions(-)
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c index 36d883c4f394..93ba35a84975 100644 --- a/fs/proc/task_mmu.c +++ b/fs/proc/task_mmu.c @@ -550,28 +550,60 @@ static int pid_maps_open(struct inode *inode, struct file *file) PROCMAP_QUERY_VMA_FLAGS \ )
-static int query_vma_setup(struct mm_struct *mm) +#ifdef CONFIG_PER_VMA_LOCK + +static int query_vma_setup(struct proc_maps_private *priv) { - return mmap_read_lock_killable(mm); + rcu_read_lock(); + priv->locked_vma = NULL; + priv->mmap_locked = false; + + return 0; }
-static void query_vma_teardown(struct mm_struct *mm, struct vm_area_struct *vma) +static void query_vma_teardown(struct proc_maps_private *priv) { - mmap_read_unlock(mm); + unlock_vma(priv); + rcu_read_unlock(); +} + +static struct vm_area_struct *query_vma_find_by_addr(struct proc_maps_private *priv, + unsigned long addr) +{ + vma_iter_init(&priv->iter, priv->mm, addr); + return get_next_vma(priv, addr); +} + +#else /* CONFIG_PER_VMA_LOCK */ + +static int query_vma_setup(struct proc_maps_private *priv) +{ + return mmap_read_lock_killable(priv->mm); +} + +static void query_vma_teardown(struct proc_maps_private *priv) +{ + mmap_read_unlock(priv->mm); }
-static struct vm_area_struct *query_vma_find_by_addr(struct mm_struct *mm, unsigned long addr) +static struct vm_area_struct *query_vma_find_by_addr(struct proc_maps_private *priv, + unsigned long addr) { - return find_vma(mm, addr); + return find_vma(priv->mm, addr); }
-static struct vm_area_struct *query_matching_vma(struct mm_struct *mm, +#endif /* CONFIG_PER_VMA_LOCK */ + +static struct vm_area_struct *query_matching_vma(struct proc_maps_private *priv, unsigned long addr, u32 flags) { struct vm_area_struct *vma;
next_vma: - vma = query_vma_find_by_addr(mm, addr); + vma = query_vma_find_by_addr(priv, addr); + if (IS_ERR(vma)) + return vma; + if (!vma) goto no_vma;
@@ -647,13 +679,13 @@ static int do_procmap_query(struct proc_maps_private *priv, void __user *uarg) if (!mm || !mmget_not_zero(mm)) return -ESRCH;
- err = query_vma_setup(mm); + err = query_vma_setup(priv); if (err) { mmput(mm); return err; }
- vma = query_matching_vma(mm, karg.query_addr, karg.query_flags); + vma = query_matching_vma(priv, karg.query_addr, karg.query_flags); if (IS_ERR(vma)) { err = PTR_ERR(vma); vma = NULL; @@ -738,7 +770,7 @@ static int do_procmap_query(struct proc_maps_private *priv, void __user *uarg) }
/* unlock vma or mmap_lock, and put mm_struct before copying data to user */ - query_vma_teardown(mm, vma); + query_vma_teardown(priv); mmput(mm);
if (karg.vma_name_size && copy_to_user(u64_to_user_ptr(karg.vma_name_addr), @@ -758,7 +790,7 @@ static int do_procmap_query(struct proc_maps_private *priv, void __user *uarg) return 0;
out: - query_vma_teardown(mm, vma); + query_vma_teardown(priv); mmput(mm); kfree(name_buf); return err;