Adds a mutex guard to the VMSA updating code. Also adds a check to skip a
vCPU if it has already been LAUNCH_UPDATE_VMSA'd, which should allow
userspace to retry this ioctl until all the vCPUs can be successfully
LAUNCH_UPDATE_VMSA'd. Because this operation cannot be undone, we cannot
unwind if one vCPU fails.
Fixes: ad73109ae7ec ("KVM: SVM: Provide support to launch and run an SEV-ES guest")
Signed-off-by: Peter Gonda <pgonda@google.com>
Cc: Marc Orr <marcorr@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: kvm@vger.kernel.org
Cc: stable@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
 arch/x86/kvm/svm/sev.c | 24 +++++++++++++++++++-----
 1 file changed, 19 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 75e0b21ad07c..9a2ebd0328ca 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -598,22 +598,29 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm)
 static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
 {
        struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
-       struct sev_data_launch_update_vmsa vmsa;
+       struct sev_data_launch_update_vmsa vmsa = {0};
        struct kvm_vcpu *vcpu;
        int i, ret;
        if (!sev_es_guest(kvm))
                return -ENOTTY;
-       vmsa.reserved = 0;
-
        kvm_for_each_vcpu(i, vcpu, kvm) {
                struct vcpu_svm *svm = to_svm(vcpu);
+               ret = mutex_lock_killable(&vcpu->mutex);
+               if (ret)
+                       goto out_unlock;
+
+               /* Skip to the next vCPU if this one has already be updated. */
+               ret = sev_es_sync_vmsa(svm);
+               if (svm->vcpu.arch.guest_state_protected)
+                       goto unlock;
+
                /* Perform some pre-encryption checks against the VMSA */
                ret = sev_es_sync_vmsa(svm);
                if (ret)
-                       return ret;
+                       goto out_unlock;
                /*
                 * The LAUNCH_UPDATE_VMSA command will perform in-place
@@ -629,12 +636,19 @@ static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
                ret = sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_VMSA, &vmsa,
                                    &argp->error);
                if (ret)
-                       return ret;
+                       goto out_unlock;
                svm->vcpu.arch.guest_state_protected = true;
+
+unlock:
+               mutex_unlock(&vcpu->mutex);
        }
        return 0;
+
+out_unlock:
+       mutex_unlock(&vcpu->mutex);
+       return ret;
 }
static int sev_launch_measure(struct kvm *kvm, struct kvm_sev_cmd *argp)
On Tue, Sep 14, 2021, Peter Gonda wrote:
> Adds a mutex guard to the VMSA updating code. Also adds a check to skip a
> vCPU if it has already been LAUNCH_UPDATE_VMSA'd, which should allow
> userspace to retry this ioctl until all the vCPUs can be successfully
> LAUNCH_UPDATE_VMSA'd. Because this operation cannot be undone, we cannot
> unwind if one vCPU fails.
>
> Fixes: ad73109ae7ec ("KVM: SVM: Provide support to launch and run an SEV-ES guest")
>
> Signed-off-by: Peter Gonda <pgonda@google.com>
> Cc: Marc Orr <marcorr@google.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Sean Christopherson <seanjc@google.com>
> Cc: Brijesh Singh <brijesh.singh@amd.com>
> Cc: kvm@vger.kernel.org
> Cc: stable@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
>
>  arch/x86/kvm/svm/sev.c | 24 +++++++++++++++++++-----
>  1 file changed, 19 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 75e0b21ad07c..9a2ebd0328ca 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -598,22 +598,29 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm)
>  static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
>  {
>         struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> -       struct sev_data_launch_update_vmsa vmsa;
> +       struct sev_data_launch_update_vmsa vmsa = {0};
>         struct kvm_vcpu *vcpu;
>         int i, ret;
>
>         if (!sev_es_guest(kvm))
>                 return -ENOTTY;
>
> -       vmsa.reserved = 0;
Zeroing all of 'vmsa' is an unrelated change and belongs in a separate patch. I would even go so far as to say it's unnecessary, as every field of the struct is explicitly written before it's consumed.
>         kvm_for_each_vcpu(i, vcpu, kvm) {
>                 struct vcpu_svm *svm = to_svm(vcpu);
>
> +               ret = mutex_lock_killable(&vcpu->mutex);
> +               if (ret)
> +                       goto out_unlock;
Rather than multiple unlock labels, move the guts of the loop to a wrapper. As discussed off list, this really should be a vCPU-scoped ioctl, but that ship has sadly sailed :-( We can at least imitate that by making the VM-scoped ioctl nothing but a wrapper.
> +               /* Skip to the next vCPU if this one has already be updated. */
s/be/been
Uber nit, there may not be a next vCPU. It'd be slightly more accurate to say something like "Do nothing if this vCPU has already been updated".
> +               ret = sev_es_sync_vmsa(svm);
> +               if (svm->vcpu.arch.guest_state_protected)
> +                       goto unlock;
This belongs in a separate patch, too. It also introduces a bug (arguably two) in that it adds a duplicate call to sev_es_sync_vmsa(). The second bug is that if sev_es_sync_vmsa() fails _and_ the vCPU is already protected, this will cause that failure to be squashed.
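Spelled out against the quoted hunk (annotated excerpt of the same lines, not a new proposal):

                ret = sev_es_sync_vmsa(svm);            /* first call, runs even for already-protected vCPUs */
                if (svm->vcpu.arch.guest_state_protected)
                        goto unlock;                    /* 'ret' is dropped here, so the ioctl still returns 0 */

                /* Perform some pre-encryption checks against the VMSA */
                ret = sev_es_sync_vmsa(svm);            /* second, duplicate call for vCPUs that still need updating */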
In the end, I think the least gross implementation will look something like this, implemented over two patches (one for the lock, one for the protected check).
static int __sev_launch_update_vmsa(struct kvm *kvm, struct kvm_vcpu *vcpu,
                                    int *error)
{
        struct sev_data_launch_update_vmsa vmsa;
        struct vcpu_svm *svm = to_svm(vcpu);
        int ret;

        /*
         * Do nothing if this vCPU has already been updated. This is allowed
         * to let userspace retry LAUNCH_UPDATE_VMSA if the command fails on a
         * later vCPU.
         */
        if (svm->vcpu.arch.guest_state_protected)
                return 0;

        /* Perform some pre-encryption checks against the VMSA */
        ret = sev_es_sync_vmsa(svm);
        if (ret)
                return ret;

        /*
         * The LAUNCH_UPDATE_VMSA command will perform in-place
         * encryption of the VMSA memory content (i.e it will write
         * the same memory region with the guest's key), so invalidate
         * it first.
         */
        clflush_cache_range(svm->vmsa, PAGE_SIZE);

        vmsa.reserved = 0;
        vmsa.handle = to_kvm_svm(kvm)->sev_info.handle;
        vmsa.address = __sme_pa(svm->vmsa);
        vmsa.len = PAGE_SIZE;

        return sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_VMSA, &vmsa, error);
}

static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
{
        struct kvm_vcpu *vcpu;
        int i, ret;

        if (!sev_es_guest(kvm))
                return -ENOTTY;

        kvm_for_each_vcpu(i, vcpu, kvm) {
                ret = mutex_lock_killable(&vcpu->mutex);
                if (ret)
                        return ret;

                ret = __sev_launch_update_vmsa(kvm, vcpu, &argp->error);

                mutex_unlock(&vcpu->mutex);
                if (ret)
                        return ret;
        }

        return 0;
}
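For completeness, a rough userspace-side sketch of the retry flow the commit message relies on (hypothetical helper, not from the patch; it assumes vm_fd and sev_fd come from earlier SEV-ES launch steps such as SEV_ES_INIT and LAUNCH_START, and that already-updated vCPUs are skipped as above):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int launch_update_vmsa_with_retry(int vm_fd, int sev_fd, int max_tries)
{
        struct kvm_sev_cmd cmd;
        int i, ret = -1;

        for (i = 0; i < max_tries; i++) {
                memset(&cmd, 0, sizeof(cmd));
                cmd.id = KVM_SEV_LAUNCH_UPDATE_VMSA;    /* this command takes no payload */
                cmd.sev_fd = sev_fd;

                ret = ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
                if (!ret)
                        return 0;                       /* every vCPU's VMSA is now encrypted */

                fprintf(stderr, "LAUNCH_UPDATE_VMSA: ret=%d errno=%d fw_error=%u, retrying\n",
                        ret, errno, cmd.error);
        }
        return ret;
}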
On Tue, Sep 14, 2021 at 3:34 PM Sean Christopherson seanjc@google.com wrote:
> On Tue, Sep 14, 2021, Peter Gonda wrote:
> > Adds a mutex guard to the VMSA updating code. Also adds a check to skip a
> > vCPU if it has already been LAUNCH_UPDATE_VMSA'd, which should allow
> > userspace to retry this ioctl until all the vCPUs can be successfully
> > LAUNCH_UPDATE_VMSA'd. Because this operation cannot be undone, we cannot
> > unwind if one vCPU fails.
> >
> > Fixes: ad73109ae7ec ("KVM: SVM: Provide support to launch and run an SEV-ES guest")
> >
> > Signed-off-by: Peter Gonda <pgonda@google.com>
> > Cc: Marc Orr <marcorr@google.com>
> > Cc: Paolo Bonzini <pbonzini@redhat.com>
> > Cc: Sean Christopherson <seanjc@google.com>
> > Cc: Brijesh Singh <brijesh.singh@amd.com>
> > Cc: kvm@vger.kernel.org
> > Cc: stable@vger.kernel.org
> > Cc: linux-kernel@vger.kernel.org
> >
> >  arch/x86/kvm/svm/sev.c | 24 +++++++++++++++++++-----
> >  1 file changed, 19 insertions(+), 5 deletions(-)
> >
> > diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> > index 75e0b21ad07c..9a2ebd0328ca 100644
> > --- a/arch/x86/kvm/svm/sev.c
> > +++ b/arch/x86/kvm/svm/sev.c
> > @@ -598,22 +598,29 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm)
> >  static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
> >  {
> >         struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> > -       struct sev_data_launch_update_vmsa vmsa;
> > +       struct sev_data_launch_update_vmsa vmsa = {0};
> >         struct kvm_vcpu *vcpu;
> >         int i, ret;
> >
> >         if (!sev_es_guest(kvm))
> >                 return -ENOTTY;
> >
> > -       vmsa.reserved = 0;
>
> Zeroing all of 'vmsa' is an unrelated change and belongs in a separate
> patch. I would even go so far as to say it's unnecessary, as every field
> of the struct is explicitly written before it's consumed.
I'll remove this.
> >         kvm_for_each_vcpu(i, vcpu, kvm) {
> >                 struct vcpu_svm *svm = to_svm(vcpu);
> >
> > +               ret = mutex_lock_killable(&vcpu->mutex);
> > +               if (ret)
> > +                       goto out_unlock;
>
> Rather than multiple unlock labels, move the guts of the loop to a wrapper.
> As discussed off list, this really should be a vCPU-scoped ioctl, but that
> ship has sadly sailed :-( We can at least imitate that by making the
> VM-scoped ioctl nothing but a wrapper.
>
> > +               /* Skip to the next vCPU if this one has already be updated. */
>
> s/be/been
>
> Uber nit, there may not be a next vCPU. It'd be slightly more accurate to
> say something like "Do nothing if this vCPU has already been updated".
>
> > +               ret = sev_es_sync_vmsa(svm);
> > +               if (svm->vcpu.arch.guest_state_protected)
> > +                       goto unlock;
>
> This belongs in a separate patch, too. It also introduces a bug (arguably
> two) in that it adds a duplicate call to sev_es_sync_vmsa(). The second
> bug is that if sev_es_sync_vmsa() fails _and_ the vCPU is already
> protected, this will cause that failure to be squashed.
I'll move the skipping logic to a separate patch.
> In the end, I think the least gross implementation will look something like
> this, implemented over two patches (one for the lock, one for the protected
> check).
>
> static int __sev_launch_update_vmsa(struct kvm *kvm, struct kvm_vcpu *vcpu,
>                                     int *error)
> {
>         struct sev_data_launch_update_vmsa vmsa;
>         struct vcpu_svm *svm = to_svm(vcpu);
>         int ret;
>
>         /*
>          * Do nothing if this vCPU has already been updated. This is allowed
>          * to let userspace retry LAUNCH_UPDATE_VMSA if the command fails on a
>          * later vCPU.
>          */
>         if (svm->vcpu.arch.guest_state_protected)
>                 return 0;
>
>         /* Perform some pre-encryption checks against the VMSA */
>         ret = sev_es_sync_vmsa(svm);
>         if (ret)
>                 return ret;
>
>         /*
>          * The LAUNCH_UPDATE_VMSA command will perform in-place
>          * encryption of the VMSA memory content (i.e it will write
>          * the same memory region with the guest's key), so invalidate
>          * it first.
>          */
>         clflush_cache_range(svm->vmsa, PAGE_SIZE);
>
>         vmsa.reserved = 0;
>         vmsa.handle = to_kvm_svm(kvm)->sev_info.handle;
>         vmsa.address = __sme_pa(svm->vmsa);
>         vmsa.len = PAGE_SIZE;
>
>         return sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_VMSA, &vmsa, error);
> }
>
> static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
> {
>         struct kvm_vcpu *vcpu;
>         int i, ret;
>
>         if (!sev_es_guest(kvm))
>                 return -ENOTTY;
>
>         kvm_for_each_vcpu(i, vcpu, kvm) {
>                 ret = mutex_lock_killable(&vcpu->mutex);
>                 if (ret)
>                         return ret;
>
>                 ret = __sev_launch_update_vmsa(kvm, vcpu, &argp->error);
>
>                 mutex_unlock(&vcpu->mutex);
>                 if (ret)
>                         return ret;
>         }
>
>         return 0;
> }
That looks reasonable to me. I didn't know if changes headed for LTS should be smaller, so I avoided doing this refactor. The stable kernel rules (https://www.kernel.org/doc/html/v4.11/process/stable-kernel-rules.html#stabl...) seem to say less than 100 lines is ideal. I guess this could also be a "theoretical race condition" anyway, so maybe it's not for LTS. Thoughts?
On Tue, Sep 14, 2021, Peter Gonda wrote:
> That looks reasonable to me. I didn't know if changes headed for LTS should
> be smaller, so I avoided doing this refactor. The stable kernel rules
> (https://www.kernel.org/doc/html/v4.11/process/stable-kernel-rules.html#stabl...)
> seem to say less than 100 lines is ideal.
Most of the rules are more like guidelines ;-) In seriousness, there's a balance to be had between minimizing the diff and keeping everything maintainable. E.g. if the fix is kept small and then the upstream code is immediately refactored, any future fixes to the refactored code will be harder to backport. And the actual fix would also be poorly tested upstream, since folks would be testing the refactored version of the code.
> I guess this could also be a "theoretical race condition" anyway, so maybe
> it's not for LTS.
If there's doubt, write a test :-) The "theoretical race condition" thing is to discourage people from backporting fixes for ridiculously tiny windows that may or may not be exploitable. This is a giant gaping chasm that userspace can drive a car through, e.g. literally "do KVM_RUN at the same time".
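A minimal sketch of such a reproducer, under the assumption that an SEV-ES guest has already been created and LAUNCH_START'd and that vm_fd, vcpu_fd and sev_fd are open (the helper and variable names here are illustrative, not an existing selftest):

#include <pthread.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static volatile int stop;

static void *kvm_run_loop(void *arg)
{
        int vcpu_fd = *(int *)arg;

        while (!stop)
                ioctl(vcpu_fd, KVM_RUN, NULL);          /* races with the VMSA update below */
        return NULL;
}

static void race_launch_update_vmsa(int vm_fd, int vcpu_fd, int sev_fd)
{
        struct kvm_sev_cmd cmd;
        pthread_t thr;

        pthread_create(&thr, NULL, kvm_run_loop, &vcpu_fd);

        memset(&cmd, 0, sizeof(cmd));
        cmd.id = KVM_SEV_LAUNCH_UPDATE_VMSA;
        cmd.sev_fd = sev_fd;
        ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);      /* in-place encrypts each vCPU's VMSA */

        stop = 1;
        pthread_join(thr, NULL);
}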