folio_is_secretmem() states that secretmem folios cannot be LRU folios, so we may only exit early if we find an LRU folio. Yet, we exit early if we find a folio that is not an LRU folio.
Consequently, folio_is_secretmem() fails to detect secretmem folios and, therefore, we can succeed in grabbing a secretmem folio during GUP-fast, crashing the kernel when we later try reading/writing to the folio, because the folio has been unmapped from the directmap.
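To make the failure mode concrete: GUP-fast relies on this helper to back off from secretmem mappings entirely. Below is a rough, paraphrased sketch of the relevant check in the GUP-fast PTE walk; it is not a verbatim excerpt from mm/gup.c, and the exact call sites (try_grab_folio()/gup_put_folio()) vary between kernel versions, so treat the surrounding structure as an assumption:

	/*
	 * Sketch of the GUP-fast PTE walk (paraphrased, not verbatim).
	 * If folio_is_secretmem() returns a false negative here, the pin
	 * is kept and the caller will later touch memory that secretmem
	 * has removed from the kernel direct map, crashing on the access.
	 */
	folio = try_grab_folio(page, 1, flags);
	if (!folio)
		goto pte_unmap;

	if (unlikely(folio_is_secretmem(folio))) {
		/* Never GUP-pin secretmem folios; drop the reference and bail. */
		gup_put_folio(folio, 1, flags);
		goto pte_unmap;
	}

Under the reasoning above, every secretmem folio fails the old LRU test and takes the early "return false" path, so this bail-out never triggers.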
Reported-by: xingwei lee <xrivendell7@gmail.com>
Reported-by: yue sun <samsun1006219@gmail.com>
Closes: https://lore.kernel.org/lkml/CABOYnLyevJeravW=QrH0JUPYEcDN160aZFb7kwndm-J2rm...
Debugged-by: Miklos Szeredi <miklos@szeredi.hu>
Reviewed-by: Mike Rapoport (IBM) <rppt@kernel.org>
Tested-by: Miklos Szeredi <mszeredi@redhat.com>
Fixes: 1507f51255c9 ("mm: introduce memfd_secret system call to create "secret" memory areas")
Cc: stable@vger.kernel.org
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/secretmem.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
index 35f3a4a8ceb1..6996f1f53f14 100644
--- a/include/linux/secretmem.h
+++ b/include/linux/secretmem.h
@@ -16,7 +16,7 @@ static inline bool folio_is_secretmem(struct folio *folio)
 	 * We know that secretmem pages are not compound and LRU so we can
 	 * save a couple of cycles here.
 	 */
-	if (folio_test_large(folio) || !folio_test_lru(folio))
+	if (folio_test_large(folio) || folio_test_lru(folio))
 		return false;
 
 	mapping = (struct address_space *)