On 10/18/18 6:47 PM, Andrew Morton wrote:
On Thu, 18 Oct 2018 20:46:21 -0400 Andrea Arcangeli <aarcange@redhat.com> wrote:
On Thu, Oct 18, 2018 at 04:16:40PM -0700, Mike Kravetz wrote:
I was not sure about this, and expected someone could come up with something better. It just seems there are filesystems like hugetlbfs where it makes no sense to waste cycles traversing the filesystem. So, let's not even try.
Hoping someone can come up with a better method than hard coding as I have done above.
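Concretely, the hard coding is just a filesystem-type check at the top of drop_pagecache_sb(); a minimal sketch of that shape (not the exact diff) is:

    /* fs/drop_caches.c -- sketch only, not the exact patch text */
    #include <linux/magic.h>	/* HUGETLBFS_MAGIC */

    static void drop_pagecache_sb(struct super_block *sb, void *unused)
    {
    	/*
    	 * hugetlbfs pages cannot be dropped via this path, so walking
    	 * the superblock's inode list is wasted work: bail out early.
    	 */
    	if (sb->s_magic == HUGETLBFS_MAGIC)
    		return;

    	/* ... existing per-inode invalidate_mapping_pages() loop ... */
    }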
It's not strictly required after marking the pages dirty, though. The real fix is the other one, isn't it? Could we just drop the hardcoding and let drop_caches run over hugetlbfs once the real fix is applied?
Yeah. The other part of the patch is the real fix. This drop_caches part is not necessary.
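For anyone following along, that other part simply dirties hugetlbfs pages at the point they are added to the page cache, roughly like this (the helper name below is invented purely for illustration and is not in the patch):

    /*
     * Sketch of the real fix (illustrative only): once a hugetlbfs page
     * has been populated and inserted into the page cache, mark it dirty
     * immediately instead of waiting for a later unmap, so it can never
     * be mistaken for a clean, droppable page.
     */
    static int hugetlbfs_add_and_dirty(struct page *page,
    				   struct address_space *mapping,
    				   pgoff_t idx)
    {
    	int err = huge_add_to_page_cache(page, mapping, idx);

    	if (err)
    		return err;

    	set_page_dirty(page);	/* page holds data from the moment it is cached */
    	return 0;
    }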
The performance of drop_caches doesn't seem critical, especially with gigapages. tmpfs isn't skipped by drop_caches either, and the gain from skipping would be bigger for tmpfs when THP is not enabled in the mount, so I'm not sure hugetlbfs is the filesystem to worry about first.
I guess so. I can't immediately see a clean way of expressing this, so perhaps it would need a new BDI_CAP_NO_BACKING_STORE. Such a thing hardly seems worthwhile for drop_caches.
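Roughly, that would amount to something like the following in drop_pagecache_sb() (hypothetical; neither the flag nor the test exists today):

    /*
     * Hypothetical alternative: a generic BDI capability rather than a
     * filesystem-specific check.  BDI_CAP_NO_BACKING_STORE is not a real
     * flag; this only illustrates where such a test would sit.
     */
    if (sb->s_bdi && (sb->s_bdi->capabilities & BDI_CAP_NO_BACKING_STORE))
    	return;	/* RAM-backed: nothing useful for drop_caches to do */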
And drop_caches really shouldn't be there anyway. It's a standing workaround for ongoing suckage in pagecache and metadata reclaim behaviour :(
I'm OK with dropping the drop_caches part of the patch. It just seemed like there was no real reason to try and drop_caches for hugetlbfs (and perhaps others).
Andrew, would you like another version? Or can you just drop the fs/drop_caches.c part?