The patch titled
     Subject: x86/kfence: avoid writing L1TF-vulnerable PTEs
has been added to the -mm mm-hotfixes-unstable branch.  Its filename is
     x86-kfence-avoid-writing-l1tf-vulnerable-ptes.patch
This patch will shortly appear at https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches...
This patch will later appear in the mm-hotfixes-unstable branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via various branches at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm and is updated there most days
------------------------------------------------------
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: x86/kfence: avoid writing L1TF-vulnerable PTEs
Date: Tue, 6 Jan 2026 18:04:26 +0000
kfence_protect_page() makes a page not-present by clearing _PAGE_PRESENT and leaving the rest of the PTE intact.  For native, this choice of PTE is fine: there's real memory backing the non-present PTE.  However, for Xen PV, Xen complains:
(XEN) d1 L1TF-vulnerable L1e 8010000018200066 - Shadowing
To explain, some background on XenPV pagetables:
Xen PV guests control their own pagetables; they choose the new PTE value and use hypercalls to make changes, so Xen can audit them for safety.
In addition to a regular reference count, Xen also maintains a type reference count, e.g. SegDesc (referenced by vGDT/vLDT), Writable (referenced with _PAGE_RW) or L{1..4} (referenced by vCR3 or a lower pagetable level).  This is in order to prevent, e.g., a page for which the guest has a writable mapping from being inserted into the pagetables.
For non-present mappings, all other bits become software accessible, and typically contain metadata rather than a real frame address.  There is nothing that a reference count could sensibly be tied to.  As such, even if Xen could recognise the address as currently safe, nothing would prevent that frame from changing owner to another VM in the future.
When Xen detects a PV guest writing an L1TF-vulnerable PTE, it responds by activating shadow paging.  This is normally only used for the live phase of migration, and comes with a non-trivial overhead.
KFENCE only cares about getting a #PF to catch wild accesses; it doesn't care about the value of non-present mappings.  Use a fully inverted PTE to avoid hitting the slow path when running under Xen.
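To make the difference concrete, below is a small user-space sketch (illustrative only, not part of the patch) comparing the old approach (clear _PAGE_PRESENT) with the new one (invert the whole PTE), using the PTE value behind the Xen log line above.  PAGE_PRESENT and PTE_PFN_MASK here are simplified stand-ins for the kernel's definitions and assume the usual 52-bit x86-64 physical address layout.

/* Illustrative sketch only; constants are simplified stand-ins. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_PRESENT  0x1ULL
#define PTE_PFN_MASK  0x000ffffffffff000ULL  /* frame address: bits 12-51 */

int main(void)
{
	/* A present PTE whose not-present form appears in the Xen log above. */
	uint64_t present  = 0x8010000018200067ULL;

	uint64_t cleared  = present & ~PAGE_PRESENT; /* old: clear _PAGE_PRESENT */
	uint64_t inverted = ~present;                /* new: invert the whole PTE */

	/* Not present, but the frame bits still name real memory: L1TF-vulnerable. */
	printf("cleared  %#018llx, frame bits %#llx\n",
	       (unsigned long long)cleared,
	       (unsigned long long)(cleared & PTE_PFN_MASK));

	/*
	 * Not present, and the high frame bits are all set, so the entry no
	 * longer points into populated memory: not L1TF-vulnerable.
	 */
	printf("inverted %#018llx, frame bits %#llx\n",
	       (unsigned long long)inverted,
	       (unsigned long long)(inverted & PTE_PFN_MASK));
	return 0;
}

This mirrors the PTE-inversion trick the kernel already uses for non-present swap entries as part of the L1TF mitigation.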
While adjusting the logic, take the opportunity to skip all actions if the PTE is already in the right state, halve the number of PVOps callouts, and skip TLB maintenance on a !P -> P transition, which benefits non-Xen cases too.
Link: https://lkml.kernel.org/r/20260106180426.710013-1-andrew.cooper3@citrix.com
Fixes: 1dc0da6e9ec0 ("x86, kfence: enable KFENCE for x86")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Tested-by: Marco Elver <elver@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Marco Elver <elver@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Jann Horn <jannh@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 arch/x86/include/asm/kfence.h |   29 ++++++++++++++++++++++++-----
 1 file changed, 24 insertions(+), 5 deletions(-)
--- a/arch/x86/include/asm/kfence.h~x86-kfence-avoid-writing-l1tf-vulnerable-ptes
+++ a/arch/x86/include/asm/kfence.h
@@ -42,10 +42,34 @@ static inline bool kfence_protect_page(u
 {
 	unsigned int level;
 	pte_t *pte = lookup_address(addr, &level);
+	pteval_t val;
 
 	if (WARN_ON(!pte || level != PG_LEVEL_4K))
 		return false;
 
+	val = pte_val(*pte);
+
+	/*
+	 * protect requires making the page not-present.  If the PTE is
+	 * already in the right state, there's nothing to do.
+	 */
+	if (protect != !!(val & _PAGE_PRESENT))
+		return true;
+
+	/*
+	 * Otherwise, invert the entire PTE.  This avoids writing out an
+	 * L1TF-vulnerable PTE (not present, without the high address bits
+	 * set).
+	 */
+	set_pte(pte, __pte(~val));
+
+	/*
+	 * If the page was protected (non-present) and we're making it
+	 * present, there is no need to flush the TLB at all.
+	 */
+	if (!protect)
+		return true;
+
 	/*
 	 * We need to avoid IPIs, as we may get KFENCE allocations or faults
 	 * with interrupts disabled. Therefore, the below is best-effort, and
@@ -53,11 +77,6 @@ static inline bool kfence_protect_page(u
 	 * lazy fault handling takes care of faults after the page is PRESENT.
 	 */
 
-	if (protect)
-		set_pte(pte, __pte(pte_val(*pte) & ~_PAGE_PRESENT));
-	else
-		set_pte(pte, __pte(pte_val(*pte) | _PAGE_PRESENT));
-
 	/*
 	 * Flush this CPU's TLB, assuming whoever did the allocation/free is
 	 * likely to continue running on this CPU.
_
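For reference, here is a best-effort reconstruction of how kfence_protect_page() reads with the patch applied.  The lines outside the hunk context above (the tail of the "avoid IPIs" comment and the flush_tlb_one_kernel() epilogue) are taken from the current upstream header and are assumptions here, not part of the patch itself.

static inline bool kfence_protect_page(unsigned long addr, bool protect)
{
	unsigned int level;
	pte_t *pte = lookup_address(addr, &level);
	pteval_t val;

	if (WARN_ON(!pte || level != PG_LEVEL_4K))
		return false;

	val = pte_val(*pte);

	/*
	 * protect requires making the page not-present.  If the PTE is
	 * already in the right state, there's nothing to do.
	 */
	if (protect != !!(val & _PAGE_PRESENT))
		return true;

	/*
	 * Otherwise, invert the entire PTE.  This avoids writing out an
	 * L1TF-vulnerable PTE (not present, without the high address bits
	 * set).
	 */
	set_pte(pte, __pte(~val));

	/*
	 * If the page was protected (non-present) and we're making it
	 * present, there is no need to flush the TLB at all.
	 */
	if (!protect)
		return true;

	/*
	 * We need to avoid IPIs, as we may get KFENCE allocations or faults
	 * with interrupts disabled. Therefore, the below is best-effort, and
	 * does not flush TLBs on all CPUs. We can tolerate some inaccuracy;
	 * lazy fault handling takes care of faults after the page is PRESENT.
	 */

	/*
	 * Flush this CPU's TLB, assuming whoever did the allocation/free is
	 * likely to continue running on this CPU.
	 */
	preempt_disable();
	flush_tlb_one_kernel(addr);
	preempt_enable();
	return true;
}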
Patches currently in -mm which might be from andrew.cooper3@citrix.com are
x86-kfence-avoid-writing-l1tf-vulnerable-ptes.patch