The 'pss_locked' field of smaps_rollup was being calculated incorrectly: it accumulated the current pss every time a locked VMA was found.

Fix that by recording the current pss value before each VMA is walked, so that only the delta is added when the VMA turns out to be VM_LOCKED.
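For illustration only (not part of the patch), a minimal userspace sketch of the over-counting, with the kernel's mem_size_stats pared down to the two relevant fields and made-up pss numbers:

#include <stdio.h>

/* Illustrative stand-in for the kernel's mem_size_stats. */
struct mem_size_stats {
	unsigned long pss;
	unsigned long pss_locked;
};

int main(void)
{
	/* Two locked VMAs contributing pss of 10 and 20 (arbitrary units). */
	unsigned long vma_pss[] = { 10, 20 };
	struct mem_size_stats buggy = { 0, 0 }, fixed = { 0, 0 };

	for (int i = 0; i < 2; i++) {
		unsigned long before = fixed.pss;

		/* The page walk accumulates into the shared pss counter. */
		buggy.pss += vma_pss[i];
		fixed.pss += vma_pss[i];

		buggy.pss_locked += buggy.pss;          /* old: adds the running total */
		fixed.pss_locked += fixed.pss - before; /* new: adds only this VMA's share */
	}

	/* Prints: buggy pss_locked = 40, fixed pss_locked = 30 */
	printf("buggy pss_locked = %lu, fixed pss_locked = %lu\n",
	       buggy.pss_locked, fixed.pss_locked);
	return 0;
}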
Fixes: 493b0e9d945f ("mm: add /proc/pid/smaps_rollup")
Cc: stable@vger.kernel.org # 4.14.y 4.19.y
Signed-off-by: Sandeep Patil <sspatil@android.com>
---
 fs/proc/task_mmu.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index f0ec9edab2f3..51a00a2b4733 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -709,6 +709,7 @@ static void smap_gather_stats(struct vm_area_struct *vma,
 #endif
 		.mm = vma->vm_mm,
 	};
+	unsigned long pss;
 
 	smaps_walk.private = mss;
 
@@ -737,11 +738,12 @@ static void smap_gather_stats(struct vm_area_struct *vma,
 		}
 	}
 #endif
-
+	/* record current pss so we can calculate the delta after page walk */
+	pss = mss->pss;
 	/* mmap_sem is held in m_start */
 	walk_page_vma(vma, &smaps_walk);
 	if (vma->vm_flags & VM_LOCKED)
-		mss->pss_locked += mss->pss;
+		mss->pss_locked += mss->pss - pss;
 }
 
 #define SEQ_PUT_DEC(str, val)			\
On Sun, 20 Jan 2019 17:10:49 -0800 Sandeep Patil <sspatil@android.com> wrote:

> The 'pss_locked' field of smaps_rollup was being calculated incorrectly: it accumulated the current pss every time a locked VMA was found.
>
> Fix that by recording the current pss value before each VMA is walked, so that only the delta is added when the VMA turns out to be VM_LOCKED.
>
> ...
>
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -709,6 +709,7 @@ static void smap_gather_stats(struct vm_area_struct *vma,
>  #endif
>  		.mm = vma->vm_mm,
>  	};
> +	unsigned long pss;
>  
>  	smaps_walk.private = mss;
>  
> @@ -737,11 +738,12 @@ static void smap_gather_stats(struct vm_area_struct *vma,
>  		}
>  	}
>  #endif
> -
> +	/* record current pss so we can calculate the delta after page walk */
> +	pss = mss->pss;
>  	/* mmap_sem is held in m_start */
>  	walk_page_vma(vma, &smaps_walk);
>  	if (vma->vm_flags & VM_LOCKED)
> -		mss->pss_locked += mss->pss;
> +		mss->pss_locked += mss->pss - pss;
>  }
This seems to be a rather obscure way of accumulating mem_size_stats.pss_locked. Wouldn't it make more sense to do this in smaps_account(), wherever we increment mem_size_stats.pss?
It would be a tiny bit less efficient but I think that the code cleanup justifies such a cost?
On 1/29/19 1:15 AM, Andrew Morton wrote:
> On Sun, 20 Jan 2019 17:10:49 -0800 Sandeep Patil <sspatil@android.com> wrote:
>
>> The 'pss_locked' field of smaps_rollup was being calculated incorrectly: it accumulated the current pss every time a locked VMA was found.
>>
>> Fix that by recording the current pss value before each VMA is walked, so that only the delta is added when the VMA turns out to be VM_LOCKED.
>>
>> ...
>>
>> --- a/fs/proc/task_mmu.c
>> +++ b/fs/proc/task_mmu.c
>> @@ -709,6 +709,7 @@ static void smap_gather_stats(struct vm_area_struct *vma,
>>  #endif
>>  		.mm = vma->vm_mm,
>>  	};
>> +	unsigned long pss;
>>  
>>  	smaps_walk.private = mss;
>>  
>> @@ -737,11 +738,12 @@ static void smap_gather_stats(struct vm_area_struct *vma,
>>  		}
>>  	}
>>  #endif
>> -
>> +	/* record current pss so we can calculate the delta after page walk */
>> +	pss = mss->pss;
>>  	/* mmap_sem is held in m_start */
>>  	walk_page_vma(vma, &smaps_walk);
>>  	if (vma->vm_flags & VM_LOCKED)
>> -		mss->pss_locked += mss->pss;
>> +		mss->pss_locked += mss->pss - pss;
>>  }
>
> This seems to be a rather obscure way of accumulating mem_size_stats.pss_locked. Wouldn't it make more sense to do this in smaps_account(), wherever we increment mem_size_stats.pss?
>
> It would be a tiny bit less efficient but I think that the code cleanup justifies such a cost?
Yeah, Sandeep could you add 'bool locked' param to smaps_account() and check it there? We probably don't need the whole vma param yet.
Thanks.
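For illustration, a minimal sketch of the direction being suggested here: account pss_locked at the same point where pss itself is accumulated, gated by a locked flag that the caller derives from vma->vm_flags & VM_LOCKED. The helper name, its signature, and the pared-down struct below are illustrative stand-ins, not the actual smaps_account() code:

#include <stdbool.h>
#include <stdint.h>

/* Pared-down stand-in for the kernel's mem_size_stats; illustrative only. */
struct mem_size_stats {
	uint64_t pss;
	uint64_t pss_locked;
};

/*
 * Sketch: wherever the per-page pss share is added, also add it to
 * pss_locked when the VMA being walked is VM_LOCKED.  The real change
 * would thread a 'locked' argument through smaps_account() from its
 * callers; this helper only models that accounting step.
 */
void account_pss(struct mem_size_stats *mss, uint64_t pss_share, bool locked)
{
	mss->pss += pss_share;
	if (locked)
		mss->pss_locked += pss_share;
}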
On Tue, Jan 29, 2019 at 04:52:21PM +0100, Vlastimil Babka wrote:
> On 1/29/19 1:15 AM, Andrew Morton wrote:
>> On Sun, 20 Jan 2019 17:10:49 -0800 Sandeep Patil <sspatil@android.com> wrote:
>>
>>> The 'pss_locked' field of smaps_rollup was being calculated incorrectly: it accumulated the current pss every time a locked VMA was found.
>>>
>>> Fix that by recording the current pss value before each VMA is walked, so that only the delta is added when the VMA turns out to be VM_LOCKED.
>>>
>>> ...
>>>
>>> --- a/fs/proc/task_mmu.c
>>> +++ b/fs/proc/task_mmu.c
>>> @@ -709,6 +709,7 @@ static void smap_gather_stats(struct vm_area_struct *vma,
>>>  #endif
>>>  		.mm = vma->vm_mm,
>>>  	};
>>> +	unsigned long pss;
>>>  
>>>  	smaps_walk.private = mss;
>>>  
>>> @@ -737,11 +738,12 @@ static void smap_gather_stats(struct vm_area_struct *vma,
>>>  		}
>>>  	}
>>>  #endif
>>> -
>>> +	/* record current pss so we can calculate the delta after page walk */
>>> +	pss = mss->pss;
>>>  	/* mmap_sem is held in m_start */
>>>  	walk_page_vma(vma, &smaps_walk);
>>>  	if (vma->vm_flags & VM_LOCKED)
>>> -		mss->pss_locked += mss->pss;
>>> +		mss->pss_locked += mss->pss - pss;
>>>  }
>>
>> This seems to be a rather obscure way of accumulating mem_size_stats.pss_locked. Wouldn't it make more sense to do this in smaps_account(), wherever we increment mem_size_stats.pss?
>>
>> It would be a tiny bit less efficient but I think that the code cleanup justifies such a cost?
>
> Yeah, Sandeep could you add 'bool locked' param to smaps_account() and check it there? We probably don't need the whole vma param yet.
Agree, I will send -v2 shortly.
- ssp