From: Ben Gardon <bgardon@google.com>
[ Upstream commit 734e45b329d626d2c14e2bcf8be3d069a33c3316 ]
The KVM MMU caches already guarantee that shadow page table memory will be zeroed, so there is no reason to re-zero the page in the TDP MMU page fault handler.
No functional change intended.
Reviewed-by: Peter Feiner <pfeiner@google.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-5-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 1 -
 1 file changed, 1 deletion(-)
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 6bd86bb4c089..4a2b8844f00f 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -708,7 +708,6 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 		sp = alloc_tdp_mmu_page(vcpu, iter.gfn, iter.level);
 		list_add(&sp->link, &vcpu->kvm->arch.tdp_mmu_pages);
 		child_pt = sp->spt;
-		clear_page(child_pt);
 		new_spte = make_nonleaf_spte(child_pt,
 					     !shadow_accessed_mask);
 
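The property the patch relies on — pages handed out by the MMU's allocation cache are already zeroed at top-up time, so the page-fault path has no reason to clear them again — can be sketched in plain userspace C. This is a hypothetical miniature cache for illustration only (the names `page_cache`, `cache_topup`, and `cache_alloc` are invented, not kernel APIs); `calloc()` stands in for a `__GFP_ZERO`-style zeroing allocation:

```c
#include <stdlib.h>

#define PAGE_SIZE 4096
#define CACHE_CAP 8

/* Hypothetical stand-in for a KVM MMU memory cache: every page is
 * zeroed when the cache is refilled, so consumers receive memory
 * that is guaranteed clean. */
struct page_cache {
	void *objs[CACHE_CAP];
	int nobjs;
};

/* Refill the cache with pre-zeroed pages; calloc() plays the role
 * of a zeroing allocation.  Returns 0 on success, -1 on OOM. */
static int cache_topup(struct page_cache *pc)
{
	while (pc->nobjs < CACHE_CAP) {
		void *page = calloc(1, PAGE_SIZE);

		if (!page)
			return -1;
		pc->objs[pc->nobjs++] = page;
	}
	return 0;
}

/* Pop a page from the cache.  Because zeroing happened at top-up,
 * the caller has no reason to clear the page again -- this is the
 * redundancy the patch removes from the TDP MMU fault handler. */
static void *cache_alloc(struct page_cache *pc)
{
	return pc->nobjs ? pc->objs[--pc->nobjs] : NULL;
}
```

A consumer that did `clear_page(child_pt)` after `cache_alloc()` would only be re-zeroing memory the cache already zeroed; deleting that call changes nothing observable, which is why the commit states "No functional change intended."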