From: Rob Clark <robdclark@chromium.org>
I've been spending some time looking into how things behave under high memory pressure. The first patch is a random cleanup I noticed along the way. The second improves the situation significantly when the shrinker is called from many threads in parallel. And the last two are $debugfs/gem fixes I needed so I could monitor the state of GEM objects (i.e. how many are active/purgeable/purged) while triggering high memory pressure.
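To give an idea of the shape of the shrinker_count() change, here is a minimal sketch of the general pattern, not the actual patch: keep a counter that is updated where objects change state, so the count side never has to take a lock. All struct/field names below (example_drm_private, shrinkable_count, etc.) are made up for illustration:

#include <linux/atomic.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/shrinker.h>

struct example_drm_private {
	struct shrinker shrinker;
	struct mutex mm_lock;           /* protects inactive_list */
	struct list_head inactive_list;
	atomic_long_t shrinkable_count; /* pages reclaimable right now */
};

/*
 * Called under the driver's existing object locking whenever an object
 * becomes purgeable (delta > 0) or stops being purgeable (delta < 0),
 * so the counter tracks state transitions instead of being recomputed.
 */
static void example_update_shrinkable(struct example_drm_private *priv,
				      long delta)
{
	atomic_long_add(delta, &priv->shrinkable_count);
}

static unsigned long
example_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
{
	struct example_drm_private *priv =
		container_of(shrinker, struct example_drm_private, shrinker);

	/*
	 * A single atomic read instead of walking the object list under a
	 * mutex, so threads piling into direct reclaim don't serialize here
	 * or block behind submit/retire paths that hold the lock.
	 */
	return atomic_long_read(&priv->shrinkable_count);
}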
We could probably go a bit further with dropping the mm_lock in the shrinker->scan() loop, but this is already a pretty big improvement. The next step is probably to add support to unpin/evict inactive objects. (We are part way there, since we have already decoupled the iova lifetime from the pages lifetime, but there are a few sharp corners to work through.)
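For the scan side, the same sketch extended (still made-up names, building on the struct above): the list lock is held across the whole walk, with a per-object trylock keeping contended objects from stalling reclaim; dropping mm_lock around the purge itself would be the "go a bit further" step:

struct example_gem_object {
	struct mutex lock;
	struct list_head mm_list;  /* entry in priv->inactive_list */
	unsigned long nr_pages;
	bool purgeable;            /* userspace said it doesn't need this */
};

static unsigned long
example_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
{
	struct example_drm_private *priv =
		container_of(shrinker, struct example_drm_private, shrinker);
	struct example_gem_object *obj;
	unsigned long freed = 0;

	mutex_lock(&priv->mm_lock);
	list_for_each_entry(obj, &priv->inactive_list, mm_list) {
		if (freed >= sc->nr_to_scan)
			break;
		/* Trylock: skip a contended object rather than stalling
		 * every thread that wandered into direct reclaim. */
		if (!mutex_trylock(&obj->lock))
			continue;
		if (obj->purgeable) {
			/*
			 * The object stays on the list; only its backing
			 * pages are released (details elided here).
			 * Dropping mm_lock around this step is the
			 * further improvement mentioned above.
			 */
			freed += obj->nr_pages;
			obj->purgeable = false;
		}
		mutex_unlock(&obj->lock);
	}
	mutex_unlock(&priv->mm_lock);

	return freed ? freed : SHRINK_STOP;
}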
Rob Clark (4):
  drm/msm: Remove unused freed llist node
  drm/msm: Avoid mutex in shrinker_count()
  drm/msm: Fix debugfs deadlock
  drm/msm: Improved debugfs gem stats
 drivers/gpu/drm/msm/msm_debugfs.c      | 14 ++---
 drivers/gpu/drm/msm/msm_drv.c          |  4 ++
 drivers/gpu/drm/msm/msm_drv.h          | 15 ++++--
 drivers/gpu/drm/msm/msm_fb.c           |  3 +-
 drivers/gpu/drm/msm/msm_gem.c          | 65 ++++++++++++++++++-----
 drivers/gpu/drm/msm/msm_gem.h          | 72 +++++++++++++++++++++++---
 drivers/gpu/drm/msm/msm_gem_shrinker.c | 28 ++++------
 7 files changed, 150 insertions(+), 51 deletions(-)