On Thu 04-07-19 17:47:16, Kuo-Hsin Yang wrote:
On Wed, Jul 03, 2019 at 04:30:57PM +0200, Michal Hocko wrote:
How does reclaim behave with workloads whose file-backed data set does not fit into memory? Aren't we going to swap a lot - something the heuristic is protecting against?
In the common case, most pages in a large file-backed data set are non-executable. When there are many non-executable file pages, the recent_scanned / recent_rotated ratio usually causes more file pages to be scanned.
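To illustrate how that ratio steers scan pressure, here is a userspace toy model of the arithmetic (loosely paraphrasing the get_scan_count() logic in mm/vmscan.c; all counter values below are made up for illustration):

/*
 * Toy model of the recent_scanned/recent_rotated ratio - not kernel
 * code. A file LRU that is scanned a lot but rarely rotated (i.e. its
 * pages are not being re-referenced) ends up with higher scan
 * pressure than the anon LRU.
 */
#include <stdio.h>

int main(void)
{
	unsigned long swappiness = 60;            /* default vm.swappiness */
	unsigned long anon_prio = swappiness;
	unsigned long file_prio = 200 - swappiness;

	/* Hypothetical per-LRU stats for a large, mostly cold file set. */
	unsigned long anon_scanned = 1000, anon_rotated = 800;
	unsigned long file_scanned = 50000, file_rotated = 2000;

	unsigned long ap = anon_prio * (anon_scanned + 1) / (anon_rotated + 1);
	unsigned long fp = file_prio * (file_scanned + 1) / (file_rotated + 1);

	printf("anon pressure %lu, file pressure %lu -> scan mostly %s\n",
	       ap, fp, fp > ap ? "file" : "anon");
	return 0;
}

With numbers like these the file pressure dominates, so reclaim keeps scanning the cold file pages rather than swapping anonymous memory, which matches the behaviour described above.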
I modified the test program to set the accessed sizes of the executable and non-executable file pages separately. The test program runs on a 2 GB RAM VM with kernel 5.2.0-rc7 plus this patch. It allocates 2000 MB of anonymous memory, then accesses 100 MB of executable file pages and 2100 MB of non-executable file pages, 10 times. The test also prints the file and anonymous page sizes in kB from /proc/meminfo. There is not much swapping in this test case, and I got similar results without this patch.
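For reference, a minimal sketch of such a test (my reconstruction from the description above, not the original program; the file names "exec.bin" and "data.bin" are placeholders for pre-created files of at least 100 MB and 2100 MB):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define MB (1024UL * 1024UL)

/* Reference every page in the mapping so it shows up on the LRUs. */
static void touch(const char *p, size_t len)
{
	for (size_t off = 0; off < len; off += 4096)
		(void)*(volatile const char *)(p + off);
}

static char *map_file(const char *path, size_t len, int prot)
{
	int fd = open(path, O_RDONLY);
	if (fd < 0) { perror(path); exit(1); }
	char *p = mmap(NULL, len, prot, MAP_PRIVATE, fd, 0);
	if (p == MAP_FAILED) { perror("mmap"); exit(1); }
	close(fd);
	return p;
}

int main(void)
{
	size_t anon_sz = 2000 * MB, exec_sz = 100 * MB, file_sz = 2100 * MB;

	/* Anonymous working set: allocate and dirty every page. */
	char *anon = mmap(NULL, anon_sz, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (anon == MAP_FAILED) { perror("mmap anon"); exit(1); }
	memset(anon, 1, anon_sz);

	/* Hypothetical pre-created files; sizes must match the above. */
	char *exec_map = map_file("exec.bin", exec_sz, PROT_READ | PROT_EXEC);
	char *data_map = map_file("data.bin", file_sz, PROT_READ);

	for (int i = 0; i < 10; i++) {
		touch(exec_map, exec_sz);
		touch(data_map, file_sz);
	}

	/* Print the file and anonymous page sizes from /proc/meminfo. */
	FILE *f = fopen("/proc/meminfo", "r");
	char line[128];
	while (f && fgets(line, sizeof(line), f))
		if (strstr(line, "AnonPages") || strstr(line, "(file)"))
			fputs(line, stdout);
	if (f)
		fclose(f);
	return 0;
}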
Could you record swap-out stats, please? Also, what happens if you have multiple readers?
Thanks!
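As a side note on recording the swap-out stats requested above, one way is to sample the counters from /proc/vmstat before and after each run and diff them; a minimal sketch:

/* Print the pswpin/pswpout counters from /proc/vmstat. Run once
 * before and once after the test; the difference is the number of
 * pages swapped in/out during the run. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[128];
	FILE *f = fopen("/proc/vmstat", "r");
	if (!f) { perror("/proc/vmstat"); return 1; }
	while (fgets(line, sizeof(line), f))
		if (!strncmp(line, "pswpin ", 7) ||
		    !strncmp(line, "pswpout ", 8))
			fputs(line, stdout);
	fclose(f);
	return 0;
}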