[Devel] [PATCH 07/11] KVM: x86: replace is_smm checks with kvm_x86_ops.smi_allowed

Alexander Mikhalitsyn alexander.mikhalitsyn at virtuozzo.com
Thu Jun 23 12:55:19 MSK 2022


From: Paolo Bonzini <pbonzini at redhat.com>

Do not hardcode the is_smm() check, so that all the architectural
conditions for blocking SMIs are listed in a single place.  Well, in two
places, because this introduces some code duplication between the Intel
and AMD callbacks.

This ensures that nested SVM obeys GIF in kvm_vcpu_has_events.
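For reference, the AMD side of the same change (not part of the hunks
below) folds the GIF check into the callback as well, which is what makes
nested SVM obey GIF via kvm_vcpu_has_events.  A minimal sketch of
svm_smi_allowed() after this patch, assuming the to_svm()/gif_set()/
is_guest_mode() helpers present in this tree; not a verbatim copy of the
backported code:

	static int svm_smi_allowed(struct kvm_vcpu *vcpu)
	{
		struct vcpu_svm *svm = to_svm(vcpu);

		/* Per APM Vol.2 15.22.2 "Response to SMI": SMIs are blocked while GIF=0 */
		if (!gif_set(svm))
			return 0;

		/* An L2 guest that intercepts SMI needs a nested vmexit before entering SMM */
		if (is_guest_mode(vcpu) &&
		    (svm->nested.intercept & (1ULL << INTERCEPT_SMI)))
			return 0;

		/* Finally, refuse a new SMI while the vCPU is already in SMM */
		return !is_smm(vcpu);
	}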

Signed-off-by: Paolo Bonzini <pbonzini at redhat.com>
(cherry picked from commit a9fa7cb6aa997ba58294f1a07d402ce5855bafe1)

https://jira.sw.ru/browse/PSBM-139278

Signed-off-by: Alexander Mikhalitsyn <alexander.mikhalitsyn at virtuozzo.com>
---
 arch/x86/kvm/vmx.c | 2 +-
 arch/x86/kvm/x86.c | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index a2011e3861bd..f8a4b03e9934 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -12395,7 +12395,7 @@ static int vmx_smi_allowed(struct kvm_vcpu *vcpu)
 	/* we need a nested vmexit to enter SMM, postpone if run is pending */
 	if (to_vmx(vcpu)->nested.nested_run_pending)
 		return 0;
-	return 1;
+	return !is_smm(vcpu);
 }
 
 static int vmx_pre_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 47d35eb43c86..4207dbeb3768 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6674,7 +6674,7 @@ static int inject_pending_event(struct kvm_vcpu *vcpu)
 		}
 
 		kvm_x86_ops->queue_exception(vcpu);
-	} else if (vcpu->arch.smi_pending && !is_smm(vcpu) && kvm_x86_ops->smi_allowed(vcpu)) {
+	} else if (vcpu->arch.smi_pending && kvm_x86_ops->smi_allowed(vcpu)) {
 		vcpu->arch.smi_pending = false;
 		enter_smm(vcpu);
 	} else if (vcpu->arch.nmi_pending && kvm_x86_ops->nmi_allowed(vcpu)) {
@@ -8816,7 +8816,7 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
 		return true;
 
 	if (test_bit(KVM_REQ_SMI, &vcpu->requests) ||
-	    (vcpu->arch.smi_pending && !is_smm(vcpu)))
+	    (vcpu->arch.smi_pending && kvm_x86_ops->smi_allowed(vcpu)))
 		return true;
 
 	if (kvm_arch_interrupt_allowed(vcpu) &&
-- 
2.36.1