This patch series adds a memory.reclaim proactive reclaim interface. The rationale behind the interface and how it works are in the first patch.
---
Changes in V4:
mm/memcontrol.c:
- Return -EINTR on signal_pending().
- On the final retry, drain percpu lru caches hoping that it might introduce some evictable pages for reclaim.
- Simplified the retry loop as suggested by Dan Schatzberg.

selftests:
- Always return -errno on failure from cg_write() (whether open() or write() fails); update cg_read() and read_text() to return -errno as well for consistency. Also make sure to correctly check that the whole buffer was written in cg_write().
- Added a maximum number of retries for the reclaim selftest.

Changes in V3:
- Fix cg_write() (in patch 2) to properly return -1 if open() fails, and to not fail when len == errno.
- Remove debug printf() in patch 3.

Changes in V2:
- Add the interface to root as well.
- Added a selftest.
- Documented the interface as a nested-keyed interface, which makes adding optional arguments in the future easier (see doc updates in the first patch).
- Modified the commit message to reflect changes, and added a timeout argument as a suggested possible extension.
- Return -EAGAIN if the kernel fails to reclaim the full requested amount.
---
Shakeel Butt (1):
  memcg: introduce per-memcg reclaim interface

Yosry Ahmed (3):
  selftests: cgroup: return -errno from cg_read()/cg_write() on failure
  selftests: cgroup: fix alloc_anon_noexit() instantly freeing memory
  selftests: cgroup: add a selftest for memory.reclaim

 Documentation/admin-guide/cgroup-v2.rst      | 21 +++++
 mm/memcontrol.c                              | 44 +++++++++
 tools/testing/selftests/cgroup/cgroup_util.c | 44 ++++-----
 .../selftests/cgroup/test_memcontrol.c       | 94 ++++++++++++++++++-
 4 files changed, 176 insertions(+), 27 deletions(-)
From: Shakeel Butt <shakeelb@google.com>
Introduce a memcg interface to trigger memory reclaim on a memory cgroup.
Use case: Proactive Reclaim
---------------------------
A userspace proactive reclaimer can continuously probe the memcg to reclaim a small amount of memory. This gives more accurate and up-to-date workingset estimation as the LRUs are continuously sorted and can potentially provide more deterministic memory overcommit behavior. The memory overcommit controller can provide more proactive response to the changing behavior of the running applications instead of being reactive.
A userspace reclaimer's purpose in this case is not a complete replacement for kswapd or direct reclaim, it is to proactively identify memory savings opportunities and reclaim some amount of cold pages set by the policy to free up the memory for more demanding jobs or scheduling new jobs.
A user space proactive reclaimer is used in Google data centers. Additionally, Meta's TMO paper recently referenced a very similar interface used for user space proactive reclaim: https://dl.acm.org/doi/pdf/10.1145/3503222.3507731
Benefits of a user space reclaimer:
-----------------------------------
1) More flexibility in deciding who is charged for the CPU cost of memory reclaim. For proactive reclaim, it makes more sense for this to be centralized.

2) More flexibility in dedicating resources (like CPU). The memory overcommit controller can balance the cost between the CPU usage and the memory reclaimed.

3) Provides a way for applications to keep their LRUs sorted, so that better reclaim candidates are selected under memory pressure. This also gives a more accurate and up-to-date notion of an application's working set.
Why is memory.high not enough?
------------------------------
- memory.high can be used to trigger reclaim in a memcg and can potentially be used for proactive reclaim. However, there is a big downside to using memory.high: it can introduce high reclaim stalls in the target application, as allocations from the processes or threads of the application can hit the temporary memory.high limit.

- Userspace proactive reclaimers usually use feedback loops to decide how much memory to proactively reclaim from a workload. The metrics used for this are usually either refaults or PSI, and these metrics become noisy if the application gets throttled by hitting the high limit.

- memory.high is a stateful interface: if the userspace proactive reclaimer crashes for any reason while triggering reclaim, it can leave the application in a bad state.

- If a workload is rapidly expanding, setting memory.high to proactively reclaim memory can result in reclaiming more memory than intended.
The benefits of such interface and shortcomings of existing interface were further discussed in this RFC thread: https://lore.kernel.org/linux-mm/5df21376-7dd1-bf81-8414-32a73cea45dd@google...
Interface:
----------
Introducing a very simple memcg interface 'echo 10M > memory.reclaim' to trigger reclaim in the target memory cgroup.
The interface is introduced as a nested-keyed file to allow for future optional arguments to be easily added to configure the behavior of reclaim.
Possible Extensions:
--------------------
- This interface can be extended with an additional parameter or flags to allow specifying one or more types of memory to reclaim from (e.g. file, anon, ..).
- The interface can also be extended with a node mask to reclaim from specific nodes. This has use cases for reclaim-based demotion in memory tiering systems.
- A similar per-node interface can also be added to support proactive reclaim and reclaim-based demotion in systems without memcg.
- Add a timeout parameter to make it easier for user space to call the interface without worrying about being blocked for an undefined amount of time.
For now, let's keep things simple by adding the basic functionality.
[yosryahmed@google.com: refreshed to current master, updated commit message based on recent discussions and use cases]
Signed-off-by: Shakeel Butt <shakeelb@google.com>
Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Wei Xu <weixugc@google.com>
Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
---
 Documentation/admin-guide/cgroup-v2.rst | 21 ++++++++++++
 mm/memcontrol.c                         | 44 +++++++++++++++++++++++++
 2 files changed, 65 insertions(+)
diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index 69d7a6983f78..19bcd73cad03 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1208,6 +1208,27 @@ PAGE_SIZE multiple when read back.
 	high limit is used and monitored properly, this limit's
 	utility is limited to providing the final safety net.
 
+  memory.reclaim
+	A write-only nested-keyed file which exists for all cgroups.
+
+	This is a simple interface to trigger memory reclaim in the
+	target cgroup.
+
+	This file accepts a single key, the number of bytes to reclaim.
+	No nested keys are currently supported.
+
+	Example::
+
+	  echo "1G" > memory.reclaim
+
+	The interface can be later extended with nested keys to
+	configure the reclaim behavior. For example, specify the
+	type of memory to reclaim from (anon, file, ..).
+
+	Please note that the kernel can over or under reclaim from
+	the target cgroup. If less bytes are reclaimed than the
+	specified amount, -EAGAIN is returned.
+
   memory.oom.group
 	A read-write single value file which exists on non-root
 	cgroups.  The default value is "0".

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 725f76723220..041c17847769 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6355,6 +6355,45 @@ static ssize_t memory_oom_group_write(struct kernfs_open_file *of,
 	return nbytes;
 }
 
+static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
+			      size_t nbytes, loff_t off)
+{
+	struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
+	unsigned int nr_retries = MAX_RECLAIM_RETRIES;
+	unsigned long nr_to_reclaim, nr_reclaimed = 0;
+	int err;
+
+	buf = strstrip(buf);
+	err = page_counter_memparse(buf, "", &nr_to_reclaim);
+	if (err)
+		return err;
+
+	while (nr_reclaimed < nr_to_reclaim) {
+		unsigned long reclaimed;
+
+		if (signal_pending(current))
+			return -EINTR;
+
+		/* This is the final attempt, drain percpu lru caches in the
+		 * hope of introducing more evictable pages for
+		 * try_to_free_mem_cgroup_pages().
+		 */
+		if (!nr_retries)
+			lru_add_drain_all();
+
+		reclaimed = try_to_free_mem_cgroup_pages(memcg,
+						nr_to_reclaim - nr_reclaimed,
+						GFP_KERNEL, true);
+
+		if (!reclaimed && !nr_retries--)
+			return -EAGAIN;
+
+		nr_reclaimed += reclaimed;
+	}
+
+	return nbytes;
+}
+
 static struct cftype memory_files[] = {
 	{
 		.name = "current",
@@ -6413,6 +6452,11 @@ static struct cftype memory_files[] = {
 		.seq_show = memory_oom_group_show,
 		.write = memory_oom_group_write,
 	},
+	{
+		.name = "reclaim",
+		.flags = CFTYPE_NS_DELEGATABLE,
+		.write = memory_reclaim,
+	},
 	{ }	/* terminate */
 };
On Thu, Apr 21, 2022 at 11:44:23PM +0000, Yosry Ahmed wrote:
From: Shakeel Butt shakeelb@google.com
[...]
[yosryahmed@google.com: refreshed to current master, updated commit message based on recent discussions and use cases] Signed-off-by: Shakeel Butt shakeelb@google.com Signed-off-by: Yosry Ahmed yosryahmed@google.com
You should add "Co-developed-by" tag for yourself here.
Acked-by: Johannes Weiner hannes@cmpxchg.org Acked-by: Michal Hocko mhocko@suse.com Acked-by: Wei Xu weixugc@google.com Acked-by: Roman Gushchin roman.gushchin@linux.dev
[...]
+static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
+			      size_t nbytes, loff_t off)
+{
+	struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
+	unsigned int nr_retries = MAX_RECLAIM_RETRIES;
+	unsigned long nr_to_reclaim, nr_reclaimed = 0;
+	int err;
+
+	buf = strstrip(buf);
+	err = page_counter_memparse(buf, "", &nr_to_reclaim);
+	if (err)
+		return err;
+
+	while (nr_reclaimed < nr_to_reclaim) {
+		unsigned long reclaimed;
+
+		if (signal_pending(current))
+			return -EINTR;
+
+		/* This is the final attempt, drain percpu lru caches in the

Fix the comment format. "/*" should be on its own line.

+		 * hope of introducing more evictable pages for
+		 * try_to_free_mem_cgroup_pages().
+		 */
No need to send a new version if Andrew can fix these in the mm tree.
On Sat, Apr 23, 2022 at 6:30 AM Shakeel Butt shakeelb@google.com wrote:
On Thu, Apr 21, 2022 at 11:44:23PM +0000, Yosry Ahmed wrote:
From: Shakeel Butt shakeelb@google.com
[...]
[yosryahmed@google.com: refreshed to current master, updated commit message based on recent discussions and use cases] Signed-off-by: Shakeel Butt shakeelb@google.com Signed-off-by: Yosry Ahmed yosryahmed@google.com
You should add "Co-developed-by" tag for yourself here.
Acked-by: Johannes Weiner hannes@cmpxchg.org Acked-by: Michal Hocko mhocko@suse.com Acked-by: Wei Xu weixugc@google.com Acked-by: Roman Gushchin roman.gushchin@linux.dev
[...]
+static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
+			      size_t nbytes, loff_t off)
+{
+	struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
+	unsigned int nr_retries = MAX_RECLAIM_RETRIES;
+	unsigned long nr_to_reclaim, nr_reclaimed = 0;
+	int err;
+
+	buf = strstrip(buf);
+	err = page_counter_memparse(buf, "", &nr_to_reclaim);
+	if (err)
+		return err;
+
+	while (nr_reclaimed < nr_to_reclaim) {
+		unsigned long reclaimed;
+
+		if (signal_pending(current))
+			return -EINTR;
+
+		/* This is the final attempt, drain percpu lru caches in the

Fix the comment format. "/*" should be on its own line.

+		 * hope of introducing more evictable pages for
+		 * try_to_free_mem_cgroup_pages().
+		 */
No need to send a new version if Andrew can fix these in the mm tree.
I will be sending v5 anyway to address your review comments on the last patch. I will fix these as well. Thanks!
Currently, cg_read()/cg_write() return 0 on success and -1 on failure. Modify them to return -errno on failure.
Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
---
 tools/testing/selftests/cgroup/cgroup_util.c | 44 +++++++++----------
 1 file changed, 19 insertions(+), 25 deletions(-)
diff --git a/tools/testing/selftests/cgroup/cgroup_util.c b/tools/testing/selftests/cgroup/cgroup_util.c
index dbaa7aabbb4a..e6f3679cdcc0 100644
--- a/tools/testing/selftests/cgroup/cgroup_util.c
+++ b/tools/testing/selftests/cgroup/cgroup_util.c
@@ -19,6 +19,7 @@
 #include "cgroup_util.h"
 #include "../clone3/clone3_selftests.h"
 
+/* Returns read len on success, or -errno on failure. */
 static ssize_t read_text(const char *path, char *buf, size_t max_len)
 {
 	ssize_t len;
@@ -26,35 +27,29 @@ static ssize_t read_text(const char *path, char *buf, size_t max_len)
 
 	fd = open(path, O_RDONLY);
 	if (fd < 0)
-		return fd;
+		return -errno;
 
 	len = read(fd, buf, max_len - 1);
-	if (len < 0)
-		goto out;
 
-	buf[len] = 0;
-out:
+	if (len >= 0)
+		buf[len] = 0;
+
 	close(fd);
-	return len;
+	return len < 0 ? -errno : len;
 }
 
+/* Returns written len on success, or -errno on failure. */
 static ssize_t write_text(const char *path, char *buf, ssize_t len)
 {
 	int fd;
 
 	fd = open(path, O_WRONLY | O_APPEND);
 	if (fd < 0)
-		return fd;
+		return -errno;
 
 	len = write(fd, buf, len);
-	if (len < 0) {
-		close(fd);
-		return len;
-	}
-
 	close(fd);
-
-	return len;
+	return len < 0 ? -errno : len;
 }
 
 char *cg_name(const char *root, const char *name)
@@ -87,16 +82,16 @@ char *cg_control(const char *cgroup, const char *control)
 	return ret;
 }
 
+/* Returns 0 on success, or -errno on failure. */
 int cg_read(const char *cgroup, const char *control, char *buf, size_t len)
 {
 	char path[PATH_MAX];
+	ssize_t ret;
 
 	snprintf(path, sizeof(path), "%s/%s", cgroup, control);
 
-	if (read_text(path, buf, len) >= 0)
-		return 0;
-
-	return -1;
+	ret = read_text(path, buf, len);
+	return ret >= 0 ? 0 : ret;
 }
 
 int cg_read_strcmp(const char *cgroup, const char *control,
@@ -177,17 +172,15 @@ long cg_read_lc(const char *cgroup, const char *control)
 	return cnt;
 }
 
+/* Returns 0 on success, or -errno on failure. */
 int cg_write(const char *cgroup, const char *control, char *buf)
 {
 	char path[PATH_MAX];
-	ssize_t len = strlen(buf);
+	ssize_t len = strlen(buf), ret;
 
 	snprintf(path, sizeof(path), "%s/%s", cgroup, control);
-
-	if (write_text(path, buf, len) == len)
-		return 0;
-
-	return -1;
+	ret = write_text(path, buf, len);
+	return ret == len ? 0 : ret;
 }
 
 int cg_find_unified_root(char *root, size_t len)
@@ -545,7 +538,8 @@ ssize_t proc_read_text(int pid, bool thread, const char *item, char *buf, size_t
 	else
 		snprintf(path, sizeof(path), "/proc/%d/%s", pid, item);
 
-	return read_text(path, buf, size);
+	size = read_text(path, buf, size);
+	return size < 0 ? -1 : size;
 }
 
 int proc_read_strstr(int pid, bool thread, const char *item, const char *needle)
On Thu, Apr 21, 2022 at 11:44:24PM +0000, Yosry Ahmed wrote:
Currently, cg_read()/cg_write() return 0 on success and -1 on failure. Modify them to return -errno on failure.
Signed-off-by: Yosry Ahmed yosryahmed@google.com
Acked-by: Shakeel Butt shakeelb@google.com
Currently, alloc_anon_noexit() calls alloc_anon(), which instantly frees the allocated memory. alloc_anon_noexit() is usually used with cg_run_nowait() to run a process in the background that allocates memory. It makes sense for the background process to keep the memory allocated and not instantly free it (otherwise there is no point in running it in the background).
Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
---
 tools/testing/selftests/cgroup/test_memcontrol.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
index 36ccf2322e21..f2ffb3a30194 100644
--- a/tools/testing/selftests/cgroup/test_memcontrol.c
+++ b/tools/testing/selftests/cgroup/test_memcontrol.c
@@ -211,13 +211,17 @@ static int alloc_pagecache_50M_noexit(const char *cgroup, void *arg)
 static int alloc_anon_noexit(const char *cgroup, void *arg)
 {
 	int ppid = getppid();
+	size_t size = (unsigned long)arg;
+	char *buf, *ptr;
 
-	if (alloc_anon(cgroup, arg))
-		return -1;
+	buf = malloc(size);
+	for (ptr = buf; ptr < buf + size; ptr += PAGE_SIZE)
+		*ptr = 0;
 
 	while (getppid() == ppid)
 		sleep(1);
 
+	free(buf);
 	return 0;
 }
On Thu, Apr 21, 2022 at 11:44:25PM +0000, Yosry Ahmed wrote:
Currently, alloc_anon_noexit() calls alloc_anon() which instantly frees the allocated memory. alloc_anon_noexit() is usually used with cg_run_nowait() to run a process in the background that allocates memory. It makes sense for the background process to keep the memory allocated and not instantly free it (otherwise there is no point of running it in the background).
Signed-off-by: Yosry Ahmed yosryahmed@google.com Acked-by: Roman Gushchin roman.gushchin@linux.dev
Acked-by: Shakeel Butt shakeelb@google.com
Add a new test for memory.reclaim that verifies that the interface correctly reclaims memory as intended, from both anon and file pages.
Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
---
 .../selftests/cgroup/test_memcontrol.c | 86 +++++++++++++++++++
 1 file changed, 86 insertions(+)
diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
index f2ffb3a30194..5f7c20de2426 100644
--- a/tools/testing/selftests/cgroup/test_memcontrol.c
+++ b/tools/testing/selftests/cgroup/test_memcontrol.c
@@ -760,6 +760,91 @@ static int test_memcg_max(const char *root)
 	return ret;
 }
 
+/*
+ * This test checks that memory.reclaim reclaims the given
+ * amount of memory (from both anon and file).
+ */
+static int test_memcg_reclaim(const char *root)
+{
+	int ret = KSFT_FAIL, fd, retries;
+	char *memcg;
+	long current, to_reclaim;
+	char buf[64];
+
+	memcg = cg_name(root, "memcg_test");
+	if (!memcg)
+		goto cleanup;
+
+	if (cg_create(memcg))
+		goto cleanup;
+
+	current = cg_read_long(memcg, "memory.current");
+	if (current != 0)
+		goto cleanup;
+
+	cg_run_nowait(memcg, alloc_anon_noexit, (void *) MB(50));
+	sleep(1);
+
+	fd = get_temp_fd();
+	if (fd < 0)
+		goto cleanup;
+
+	cg_run_nowait(memcg, alloc_pagecache_50M_noexit, (void *)(long)fd);
+	sleep(1);
+
+	current = cg_read_long(memcg, "memory.current");
+	if (!values_close(current, MB(100), 10))
+		goto cleanup;
+
+	/*
+	 * Reclaim until current reaches 30M, make sure to reclaim over 50M to
+	 * hit both anon and file.
+	 */
+	retries = 5;
+	while (true) {
+		int err;
+
+		current = cg_read_long(memcg, "memory.current");
+		to_reclaim = current - MB(30);
+
+		/*
+		 * We only keep looping if we get EAGAIN, which means we could
+		 * not reclaim the full amount.
+		 */
+		if (to_reclaim <= 0)
+			goto cleanup;
+
+		snprintf(buf, sizeof(buf), "%ld", to_reclaim);
+		err = cg_write(memcg, "memory.reclaim", buf);
+		if (!err) {
+			/*
+			 * If writing succeeds, then the written amount should
+			 * have been fully reclaimed (and maybe more).
+			 */
+			current = cg_read_long(memcg, "memory.current");
+			if (!values_close(current, MB(30), 3) && current > MB(30))
+				goto cleanup;
+			break;
+		}
+
+		/* The kernel could not reclaim the full amount, try again. */
+		if (err == -EAGAIN && retries--)
+			continue;
+
+		/* We got an unexpected error or ran out of retries. */
+		goto cleanup;
+	}
+
+	ret = KSFT_PASS;
+cleanup:
+	cg_destroy(memcg);
+	free(memcg);
+	close(fd);
+
+	return ret;
+}
+
 static int alloc_anon_50M_check_swap(const char *cgroup, void *arg)
 {
 	long mem_max = (long)arg;
@@ -1263,6 +1348,7 @@ struct memcg_test {
 	T(test_memcg_high),
 	T(test_memcg_high_sync),
 	T(test_memcg_max),
+	T(test_memcg_reclaim),
 	T(test_memcg_oom_events),
 	T(test_memcg_swap_max),
 	T(test_memcg_sock),
On Thu, Apr 21, 2022 at 11:44:26PM +0000, Yosry Ahmed wrote:
Add a new test for memory.reclaim that verifies that the interface correctly reclaims memory as intended, from both anon and file pages.
Signed-off-by: Yosry Ahmed yosryahmed@google.com Acked-by: Roman Gushchin roman.gushchin@linux.dev
.../selftests/cgroup/test_memcontrol.c | 86 +++++++++++++++++++ 1 file changed, 86 insertions(+)
diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
index f2ffb3a30194..5f7c20de2426 100644
--- a/tools/testing/selftests/cgroup/test_memcontrol.c
+++ b/tools/testing/selftests/cgroup/test_memcontrol.c
@@ -760,6 +760,91 @@ static int test_memcg_max(const char *root)
 	return ret;
 }
 
+/*
+ * This test checks that memory.reclaim reclaims the given
+ * amount of memory (from both anon and file).
+ */
+static int test_memcg_reclaim(const char *root)
+{
+	int ret = KSFT_FAIL, fd, retries;
+	char *memcg;
+	long current, to_reclaim;
+	char buf[64];
+
+	memcg = cg_name(root, "memcg_test");
+	if (!memcg)
+		goto cleanup;
+
+	if (cg_create(memcg))
+		goto cleanup;
+
+	current = cg_read_long(memcg, "memory.current");
+	if (current != 0)
+		goto cleanup;
+
+	cg_run_nowait(memcg, alloc_anon_noexit, (void *) MB(50));

Don't you need is_swap_enabled() check before deciding to do the anon allocations?

+	sleep(1);
+
+	fd = get_temp_fd();
+	if (fd < 0)
+		goto cleanup;
+
+	cg_run_nowait(memcg, alloc_pagecache_50M_noexit, (void *)(long)fd);
+	sleep(1);

These sleep(1)s do not seem robust. Since kernel keeps the page cache around, you can convert anon to use tmpfs and use simple cg_run to trigger the allocations of anon (tmpfs) and file which will remain in memory even after return from cg_run.
On Sat, Apr 23, 2022 at 7:28 AM Shakeel Butt shakeelb@google.com wrote:
On Thu, Apr 21, 2022 at 11:44:26PM +0000, Yosry Ahmed wrote:
Add a new test for memory.reclaim that verifies that the interface correctly reclaims memory as intended, from both anon and file pages.
Signed-off-by: Yosry Ahmed yosryahmed@google.com Acked-by: Roman Gushchin roman.gushchin@linux.dev
.../selftests/cgroup/test_memcontrol.c | 86 +++++++++++++++++++ 1 file changed, 86 insertions(+)
diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
index f2ffb3a30194..5f7c20de2426 100644
--- a/tools/testing/selftests/cgroup/test_memcontrol.c
+++ b/tools/testing/selftests/cgroup/test_memcontrol.c
@@ -760,6 +760,91 @@ static int test_memcg_max(const char *root)
 	return ret;
 }
 
+/*
+ * This test checks that memory.reclaim reclaims the given
+ * amount of memory (from both anon and file).
+ */
+static int test_memcg_reclaim(const char *root)
+{
+	int ret = KSFT_FAIL, fd, retries;
+	char *memcg;
+	long current, to_reclaim;
+	char buf[64];
+
+	memcg = cg_name(root, "memcg_test");
+	if (!memcg)
+		goto cleanup;
+
+	if (cg_create(memcg))
+		goto cleanup;
+
+	current = cg_read_long(memcg, "memory.current");
+	if (current != 0)
+		goto cleanup;
+
+	cg_run_nowait(memcg, alloc_anon_noexit, (void *) MB(50));

Don't you need is_swap_enabled() check before deciding to do the anon allocations?

Yes, you are right. In the next version I will check whether or not swap is enabled and modify the test accordingly.

+	sleep(1);
+
+	fd = get_temp_fd();
+	if (fd < 0)
+		goto cleanup;
+
+	cg_run_nowait(memcg, alloc_pagecache_50M_noexit, (void *)(long)fd);
+	sleep(1);

These sleep(1)s do not seem robust. Since kernel keeps the page cache around, you can convert anon to use tmpfs and use simple cg_run to trigger the allocations of anon (tmpfs) and file which will remain in memory even after return from cg_run.
Other tests in the file are also using sleep approach (see test_memcg_min, although it retries for multiple times until memory.current reaches an expected amount). In my experience it hasn't been flaky running for multiple times on different machines, but I agree it can be flaky (false negative).
I am not sure about the allocating file pages with cg_run, is it guaranteed that the page cache will remain in memory until the test ends? If it doesn't, it can also flake, but it would produce false positives (the test could pass because the kernel drained page cache for some other reason although the interface is not working correctly).
In my personal opinion, false negative flakes are better than false positives. At least currently the test explicitly and clearly fails if the allocations are not successful. If we rely on the page cache remaining until the test finishes then it could silently pass if the interface is not working correctly.
There are a few ways we can go forward with this:
1) Keep everything as-is, but print a message if the test fails due to memory.current not reaching 100MB, to make it clear that it didn't fail due to a problem with the interface.
2) Add a sleep/retry loop similar to test_memcg_min instead of sleeping once.
3) Send a signal from forked children when they are done with the allocation, and wait to receive this signal in the test to make sure the allocation is completed.
In my opinion we should do (1) (and maybe (2)) for now, as (3) could be overkill if the test is normally passing. Maybe add a comment about (3) being an option in the future if the test flakes. Let me know what you think?
On Sat, Apr 23, 2022 at 02:43:13PM -0700, Yosry Ahmed wrote: [...]
cg_run_nowait(memcg, alloc_pagecache_50M_noexit, (void *)(long)fd);
sleep(1);
These sleep(1)s do not seem robust. Since kernel keeps the page cache around, you can convert anon to use tmpfs and use simple cg_run to trigger the allocations of anon (tmpfs) and file which will remain in memory even after return from cg_run.
Other tests in the file are also using sleep approach (see test_memcg_min, although it retries for multiple times until memory.current reaches an expected amount). In my experience it hasn't been flaky running for multiple times on different machines, but I agree it can be flaky (false negative).
If other tests are doing the same then ignore this comment for now. There should be a separate effort to move towards more deterministic approach for the tests instead of sleep().
I am not sure about the allocating file pages with cg_run, is it guaranteed that the page cache will remain in memory until the test ends? If it doesn't, it can also flake, but it would produce false positives (the test could pass because the kernel drained page cache for some other reason although the interface is not working correctly).
In my personal opinion, false negative flakes are better than false positives. At least currently the test explicitly and clearly fails if the allocations are not successful. If we rely on the page cache remaining until the test finishes then it could silently pass if the interface is not working correctly.
There are a few ways we can go forward with this:
1) Keep everything as-is, but print a message if the test fails due to memory.current not reaching 100MB, to make it clear that it didn't fail due to a problem with the interface.
2) Add a sleep/retry loop similar to test_memcg_min instead of sleeping once.
3) Send a signal from forked children when they are done with the allocation, and wait to receive this signal in the test to make sure the allocation is completed.
In my opinion we should do (1) (and maybe (2)) for now, as (3) could be overkill if the test is normally passing. Maybe add a comment about (3) being an option in the future if the test flakes. Let me know what you think?
I am ok with (1).