On Tue, Dec 5, 2023 at 4:07 AM Michal Hocko <mhocko@suse.com> wrote:
> > This behavior is particularly useful for work scheduling systems that need to track memory usage of worker processes/cgroups per work item. Since memory can't be squeezed like CPU can (the OOM-killer has opinions), these systems need to track the peak memory usage to compute system/container fullness when bin-packing work items.
>
> I do not understand the OOM-killer reference here, but I do understand that your worker reuses a cgroup and you want the peak memory consumption of a single run to better profile/configure the memcg configuration for that specific worker type. Correct?
To a certain extent, yes. At the moment, we're only using the inner memcg cgroups for accounting/profiling, and using a larger (k8s container) cgroup for enforcement.
The OOM-killer is involved because we're not configuring any memory limits on these individual "worker" cgroups, so we need to provision for multiple workloads using their peak memory at the same time to minimize OOM-killing.
In case you're curious, this is the job/queue-work scheduling system we wrote in-house called Quickset that's mentioned in this blog post about our new transcoder system: https://medium.com/vimeo-engineering-blog/riding-the-dragon-e328a3dfd39d
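The accounting described above can be sketched in a few lines: read each worker cgroup's `memory.peak` (a standard cgroup v2 file) and provision the container as if every worker hit its peak at once. This is a hedged illustration, not Quickset's actual code; the cgroup paths, capacity figure, and function names are hypothetical.

```python
# Sketch of peak-based fullness accounting for bin-packing work items,
# assuming cgroup v2 with one sub-cgroup per worker under a larger
# container cgroup (no memory limit on the per-worker cgroups).
# Paths and numbers below are illustrative assumptions.
from pathlib import Path

CGROUP_ROOT = Path("/sys/fs/cgroup")


def read_peak(cgroup: str, root: Path = CGROUP_ROOT) -> int:
    """Read memory.peak (bytes) for a worker cgroup; 0 if unreadable."""
    try:
        return int((root / cgroup / "memory.peak").read_text())
    except (OSError, ValueError):
        return 0


def fullness(worker_peaks: list[int], capacity: int) -> float:
    """Fraction of container capacity needed if every worker
    reached its observed peak simultaneously."""
    return sum(worker_peaks) / capacity


def can_admit(worker_peaks: list[int], expected_peak: int,
              capacity: int) -> bool:
    """Admit a new work item only if provisioning for all peaks at
    once still fits the container, minimizing OOM-kills when the
    individual worker cgroups carry no limit of their own."""
    return sum(worker_peaks) + expected_peak <= capacity
```

Because the per-worker cgroups are unlimited, the conservative sum-of-peaks check is what keeps concurrent workloads from colliding with the container limit.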
> > Signed-off-by: David Finkel <davidf@vimeo.com>
>
> Makes sense to me.
> Acked-by: Michal Hocko <mhocko@suse.com>
> Thanks!
Thank you!