On Fri, Apr 24, 2020 at 9:14 PM Johannes Weiner <hannes@cmpxchg.org> wrote:
On Thu, Apr 23, 2020 at 02:16:29AM -0400, Yafang Shao wrote:
This patch is an improvement over a previous version [1], as the previous version was not easy to understand. The issue persists in the newest kernel, so I have to resend the fix. As the implementation has changed, I have dropped Roman's ack from the previous version.
Now that I understand the problem, I much prefer the previous version.
Great news that one more reviewer understands this issue.
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 745697906ce3..2bf91ae1e640 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6332,8 +6332,19 @@ enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *root,
 	if (!root)
 		root = root_mem_cgroup;
 
-	if (memcg == root)
+	if (memcg == root) {
+		/*
+		 * The cgroup is the reclaim root in this reclaim
+		 * cycle, and therefore not protected. But it may have
+		 * stale effective protection values from previous
+		 * cycles in which it was not the reclaim root - for
+		 * example, global reclaim followed by limit reclaim.
+		 * Reset these values for mem_cgroup_protection().
+		 */
+		memcg->memory.emin = 0;
+		memcg->memory.elow = 0;
 		return MEMCG_PROT_NONE;
+	}
 
 	usage = page_counter_read(&memcg->memory);
 	if (!usage)
Here's the explanation of this issue: memory.{low,min} won't take effect if the to-be-reclaimed memcg is the sc->target_mem_cgroup, which can also be seen from the implementation of mem_cgroup_protected(); see below:

    mem_cgroup_protected
        if (memcg == root)          [2]
            return MEMCG_PROT_NONE;
But this rule is ignored in mem_cgroup_protection(), which reads memory.{emin, elow} as the protection no matter which memcg is passed in.
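For reference, mem_cgroup_protection() at this point looks roughly like the following (abridged from include/linux/memcontrol.h of this era; details may differ slightly between versions). Note that it unconditionally reads the effective values, with no check for memcg == root:

    static inline unsigned long mem_cgroup_protection(struct mem_cgroup *memcg,
                                                      bool in_low_reclaim)
    {
            if (mem_cgroup_disabled())
                    return 0;

            /* No reclaim-root check: stale emin/elow are reported as-is. */
            if (in_low_reclaim)
                    return READ_ONCE(memcg->memory.emin);

            return max(READ_ONCE(memcg->memory.emin),
                       READ_ONCE(memcg->memory.elow));
    }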
How does this issue happen? In mem_cgroup_protected() we forget to clear memory.{emin, elow} when the memcg is the target_mem_cgroup [2].
An example to illustrate this issue.

    root_mem_cgroup
        /
       A    memory.max: 1024M
            memory.min: 512M
            memory.current: 800M ('current' must be greater than 'min')

Once kswapd starts to reclaim memcg A, it assigns 512M to memory.emin of A. Then kswapd stops. As a result, the memory values of A will be,

    root_mem_cgroup
        /
       A    memory.max: 1024M
            memory.min: 512M
            memory.current: 512M (approximately)
            memory.emin: 512M
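Where does that emin come from? For a first-level cgroup like A directly under the root, mem_cgroup_protected() copies the configured protection straight into the effective fields during global reclaim; roughly (abridged from mm/memcontrol.c of this era, exact form may vary by version):

    parent = parent_mem_cgroup(memcg);
    /* No parent means a non-hierarchical mode on v1 memcg */
    if (!parent)
            return MEMCG_PROT_NONE;

    if (parent == root) {
            /* A's configured min/low become its effective protection. */
            memcg->memory.emin = READ_ONCE(memcg->memory.min);
            memcg->memory.elow = memcg->memory.low;
            goto out;
    }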
Then a new workload starts to run in memcg A, and it will soon trigger memcg reclaim in A. As memcg A is the target_mem_cgroup of this reclaimer, mem_cgroup_protected() returns directly without touching memory.{emin, elow} [2]. The memory values of A will be,

    root_mem_cgroup
        /
       A    memory.max: 1024M
            memory.min: 512M
            memory.current: 1024M (approximately)
            memory.emin: 512M

Then this memory.emin will be used in mem_cgroup_protection() to get the scan count, which is obviously a wrong scan count.
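To make the effect concrete, here is a minimal, self-contained userspace sketch (not kernel code; the constants are the ones from the example above) of the proportional-scan arithmetic that get_scan_count() applies when a nonzero protection is reported:

    #include <stdio.h>

    int main(void)
    {
            /* Values from the example above, in MB for simplicity. */
            unsigned long lruvec_size = 1024;  /* reclaimable LRU size */
            unsigned long cgroup_size = 1024;  /* memory.current of A  */
            unsigned long protection  = 512;   /* stale memory.emin    */

            /*
             * Proportional reclaim scans only the unprotected portion:
             * scan = lruvec_size - lruvec_size * protection / cgroup_size
             */
            unsigned long scan = lruvec_size -
                                 lruvec_size * protection / cgroup_size;

            /*
             * Limit reclaim inside A has no competing siblings, so it
             * should scan all 1024, but the stale emin halves the target.
             */
            printf("scan target: %lu of %lu\n", scan, lruvec_size);
            return 0;
    }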
[1]. https://lore.kernel.org/linux-mm/20200216145249.6900-1-laoar.shao@gmail.com/
Fixes: 9783aa9917f8 ("mm, memcg: proportional memory.{low,min} reclaim")
Cc: Chris Down <chris@chrisdown.name>
Cc: Roman Gushchin <guro@fb.com>
Cc: stable@vger.kernel.org
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
As others have noted, it's fairly hard to understand the problem from the above changelog. How about the following:
A cgroup can have both memory protection and a memory limit to isolate it from its siblings in both directions - for example, to prevent it from being shrunk below 2G under high pressure from outside, but also from growing beyond 4G under low pressure.
9783aa9917f8 ("mm, memcg: proportional memory.{low,min} reclaim") implemented proportional scan pressure so that multiple siblings in excess of their protection settings don't get reclaimed equally but instead in accordance with their unprotected portion.
During limit reclaim, this proportionality shouldn't apply of course: there is no competition, all pressure is from within the cgroup and should be applied as such. Reclaim should operate at full efficiency.
However, mem_cgroup_protected() never expected anybody to look at the effective protection values when it indicated that the cgroup is above its protection. As a result, a query during limit reclaim may return stale protection values that were calculated by a previous reclaim cycle in which the cgroup did have siblings.
When this happens, reclaim is unnecessarily hesitant and potentially slow to meet the desired limit. In theory this could lead to premature OOM kills, although it's not obvious this has occurred in practice.
Thanks a lot for your improvement of the changelog.
--
Thanks
Yafang