On 12/21/18 12:21 PM, Kirill A. Shutemov wrote:
On Fri, Dec 21, 2018 at 10:28:25AM -0800, Mike Kravetz wrote:
On 12/21/18 2:28 AM, Kirill A. Shutemov wrote:
On Tue, Dec 18, 2018 at 02:35:57PM -0800, Mike Kravetz wrote:
Instead of writing the required complicated code for this rare occurrence, just eliminate the race. i_mmap_rwsem is now held in read mode for the duration of page fault processing. Hold i_mmap_rwsem longer in the truncation and hole punch code to cover the call to remove_inode_hugepages.
One of remove_inode_hugepages() callers is noticeably missing -- hugetlbfs_evict_inode(). Why?
It at least deserves a comment on why the lock rule doesn't apply to it.
In the case of hugetlbfs_evict_inode, the vfs layer guarantees there are no more users of the inode/file.
I'm not convinced that it is true. See documentation for ->evict_inode() in Documentation/filesystems/porting:
Caller does *not* evict the pagecache or inode-associated metadata buffers; the method has to use truncate_inode_pages_final() to get rid of those.
We may be talking about different things.
When I say there are no more users, I am talking about users via user space. We get to the hugetlbfs evict inode code via iput->iput_final->evict. In this path the count on the inode is zero, and the inode is marked I_FREEING so that nobody will start using it. As a result, there can be no additional page faults against the file. This is what we are using i_mmap_rwsem to prevent.
The Documentation above says that the ->evict_inode() method must evict from the page cache and get rid of metadata buffers. hugetlbfs_evict_inode does this: remove_inode_hugepages evicts pages from the page cache (and frees them) as well as cleaning up the hugetlbfs specific reserve map metadata.
Am I misunderstanding your question/concern?
I have decided to add the locking (although unnecessary) with something like this in hugetlbfs_evict_inode.
	/*
	 * The vfs layer guarantees that there are no other users of this
	 * inode.  Therefore, it would be safe to call remove_inode_hugepages
	 * without holding i_mmap_rwsem.  We acquire and hold here to be
	 * consistent with other callers.  Since there will be no contention
	 * on the semaphore, overhead is negligible.
	 */
	i_mmap_lock_write(mapping);
	remove_inode_hugepages(inode, 0, LLONG_MAX);
	i_mmap_unlock_write(mapping);