This series aims to clarify the behavior of the KVM_GET_EMULATED_CPUID and KVM_GET_SUPPORTED_CPUID ioctls, and to fix a corner case where setting the nent field of struct kvm_cpuid2 to exactly the number of entries that kvm returns makes the ioctl fail.
Patch 1 proposes the nent field fix in cpuid.c, patch 2 updates the ioctl documentation accordingly, and patches 3 and 4 add a selftest exercising KVM_GET_EMULATED_CPUID.
Emanuele Giuseppe Esposito (4):
  kvm: cpuid: adjust the returned nent field of kvm_cpuid2 for
    KVM_GET_SUPPORTED_CPUID and KVM_GET_EMULATED_CPUID
  Documentation: kvm: update KVM_GET_EMULATED_CPUID ioctl description
  selftests: add kvm_get_emulated_cpuid
  selftests: kvm: add get_emulated_cpuid test
 Documentation/virt/kvm/api.rst                |  10 +-
 arch/x86/kvm/cpuid.c                          |   6 +
 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/include/x86_64/processor.h  |   1 +
 .../selftests/kvm/lib/x86_64/processor.c      |  33 ++++
 .../selftests/kvm/x86_64/get_emulated_cpuid.c | 183 ++++++++++++++++++
 7 files changed, 229 insertions(+), 6 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/x86_64/get_emulated_cpuid.c
Calling the kvm KVM_GET_[SUPPORTED/EMULATED]_CPUID ioctl requires the nent field inside the kvm_cpuid2 struct to be big enough to contain all the entries that kvm will set. If the nent field is too high, kvm adjusts it down to the right value; if it is too low, -E2BIG is returned.

However, when filling the entries, do_cpuid_func() requires one additional entry, so even if the right nent is known in advance, passing the exact number of entries does not work: it has to be increased by one.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
 arch/x86/kvm/cpuid.c | 6 ++++++
 1 file changed, 6 insertions(+)
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 6bd2f8b830e4..5412b48b9103 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -975,6 +975,12 @@ int kvm_dev_ioctl_get_cpuid(struct kvm_cpuid2 *cpuid,
 	if (cpuid->nent < 1)
 		return -E2BIG;
+
+	/* if there are X entries, we need to allocate at least X+1
+	 * entries but return the actual number of entries
+	 */
+	cpuid->nent++;
+
 	if (cpuid->nent > KVM_MAX_CPUID_ENTRIES)
 		cpuid->nent = KVM_MAX_CPUID_ENTRIES;
On Tue, Mar 30, 2021, Emanuele Giuseppe Esposito wrote:
Calling the kvm KVM_GET_[SUPPORTED/EMULATED]_CPUID ioctl requires a nent field inside the kvm_cpuid2 struct to be big enough to contain all entries that will be set by kvm. Therefore if the nent field is too high, kvm will adjust it to the right value. If too low, -E2BIG is returned.
However, when filling the entries do_cpuid_func() requires an additional entry, so if the right nent is known in advance, giving the exact number of entries won't work because it has to be increased by one.
Signed-off-by: Emanuele Giuseppe Esposito eesposit@redhat.com
 arch/x86/kvm/cpuid.c | 6 ++++++
 1 file changed, 6 insertions(+)
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 6bd2f8b830e4..5412b48b9103 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -975,6 +975,12 @@ int kvm_dev_ioctl_get_cpuid(struct kvm_cpuid2 *cpuid,
 	if (cpuid->nent < 1)
 		return -E2BIG;
+
+	/* if there are X entries, we need to allocate at least X+1
+	 * entries but return the actual number of entries
+	 */
+	cpuid->nent++;
I don't see how this can be correct.
If this bonus entry really is needed, then won't that be reflected in array.nent? I.e., won't KVM overrun the userspace buffer?
If it's not reflected in array.nent, that would imply there's an off-by-one check somewhere, or KVM is creating an entry that it doesn't copy to userspace. The former seems unlikely as there are literally only two checks against maxnent, and they both look correct (famous last words...).
KVM does decrement array->nent in one specific case (CPUID.0xD.2..64), i.e. a false positive is theoretically possible, but that carries a WARN and requires a kernel or CPU bug as well. And fudging nent for that case would still break normal use cases due to the overrun problem.
What am I missing?
 	if (cpuid->nent > KVM_MAX_CPUID_ENTRIES)
 		cpuid->nent = KVM_MAX_CPUID_ENTRIES;
On 31/03/2021 05:01, Sean Christopherson wrote:
On Tue, Mar 30, 2021, Emanuele Giuseppe Esposito wrote:
Calling the kvm KVM_GET_[SUPPORTED/EMULATED]_CPUID ioctl requires a nent field inside the kvm_cpuid2 struct to be big enough to contain all entries that will be set by kvm. Therefore if the nent field is too high, kvm will adjust it to the right value. If too low, -E2BIG is returned.
However, when filling the entries do_cpuid_func() requires an additional entry, so if the right nent is known in advance, giving the exact number of entries won't work because it has to be increased by one.
Signed-off-by: Emanuele Giuseppe Esposito eesposit@redhat.com
[...]
I don't see how this can be correct.
If this bonus entry really is needed, then won't that be reflected in array.nent? I.e won't KVM overrun the userspace buffer?
If it's not reflected in array.nent, that would imply there's an off-by-one check somewhere, or KVM is creating an entry that it doesn't copy to userspace. The former seems unlikely as there are literally only two checks against maxnent, and they both look correct (famous last words...).
KVM does decrement array->nent in one specific case (CPUID.0xD.2..64), i.e. a false positive is theoretically possible, but that carries a WARN and requires a kernel or CPU bug as well. And fudging nent for that case would still break normal use cases due to the overrun problem.
What am I missing?
(Maybe I should have put this series as RFC)
The problem I see and noticed while doing the KVM_GET_EMULATED_CPUID selftest is the following: assume there are 3 kvm emulated entries, and the user sets cpuid->nent = 3. This should work because kvm sets 3 array->entries[], and copies them to user space.
However, when the 3rd entry is populated inside kvm (array->entries[2]), array->nent is increased once more (do_host_cpuid and __do_cpuid_func_emulated). At that point, the loop in kvm_dev_ioctl_get_cpuid and get_cpuid_func can potentially iterate once more, going into the
	if (array->nent >= array->maxnent)
		return -E2BIG;
in __do_cpuid_func_emulated and do_host_cpuid, returning the error. I agree that we need that check there, because the following code accesses the array entry at index array->nent, but from what I understand that access can be useless: it might just hit the default case of the switch statement and not set the entry, leaving array->nent at 3. Therefore, with 3 kvm entries, the user would need to set cpuid->nent = 4 for the ioctl to work, even though only 3 entries are set.

There is no userspace overflow, because kvm uses array.nent in kvm_dev_ioctl_get_cpuid to decide how many entries to copy to the user. My fix simply pre-increments the nent field on behalf of userspace, so that one additional entry is accounted for just in case; if it is not filled, it is not copied to userspace.
Of course any better solution is very welcome :)
If you are wondering how a user can know the exact number of entries in advance: the only way is to initially invoke the ioctl with cpuid->nent = 1000 or simply KVM_MAX_CPUID_ENTRIES, and kvm will not only set the entries but also adjust the nent field. In my case it was returning 3, but without this fix a subsequent KVM_GET_EMULATED_CPUID ioctl with nent = 3 would just return -E2BIG.
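To make that calling convention concrete, here is a minimal userspace sketch of the pattern just described (illustrative only; the MAX_NENT value and the bare-bones error handling are assumptions, not code from this series):

#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define MAX_NENT 1000	/* arbitrary "large enough" first guess */

static struct kvm_cpuid2 *get_emulated_cpuid(int kvm_fd)
{
	struct kvm_cpuid2 *cpuid;

	cpuid = calloc(1, sizeof(*cpuid) +
			  MAX_NENT * sizeof(struct kvm_cpuid_entry2));
	if (!cpuid)
		return NULL;

	/* KVM adjusts nent down to the number of entries it filled */
	cpuid->nent = MAX_NENT;
	if (ioctl(kvm_fd, KVM_GET_EMULATED_CPUID, cpuid) < 0) {
		free(cpuid);
		return NULL;
	}

	/*
	 * Without the fix under discussion, re-issuing the ioctl with
	 * cpuid->nent set to exactly the value returned here fails with
	 * -E2BIG; it only succeeds with nent + 1.
	 */
	return cpuid;
}

kvm_fd here is assumed to be an already-open /dev/kvm file descriptor.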
Thank you, Emanuele
Emanuele Giuseppe Esposito eesposit@redhat.com writes:
On 31/03/2021 05:01, Sean Christopherson wrote:
On Tue, Mar 30, 2021, Emanuele Giuseppe Esposito wrote:
Calling the kvm KVM_GET_[SUPPORTED/EMULATED]_CPUID ioctl requires a nent field inside the kvm_cpuid2 struct to be big enough to contain all entries that will be set by kvm. Therefore if the nent field is too high, kvm will adjust it to the right value. If too low, -E2BIG is returned.
However, when filling the entries do_cpuid_func() requires an additional entry, so if the right nent is known in advance, giving the exact number of entries won't work because it has to be increased by one.
Signed-off-by: Emanuele Giuseppe Esposito eesposit@redhat.com
[...]
I don't see how this can be correct.
If this bonus entry really is needed, then won't that be reflected in array.nent? I.e won't KVM overrun the userspace buffer?
If it's not reflected in array.nent, that would imply there's an off-by-one check somewhere, or KVM is creating an entry that it doesn't copy to userspace. The former seems unlikely as there are literally only two checks against maxnent, and they both look correct (famous last words...).
KVM does decrement array->nent in one specific case (CPUID.0xD.2..64), i.e. a false positive is theoretically possible, but that carries a WARN and requires a kernel or CPU bug as well. And fudging nent for that case would still break normal use cases due to the overrun problem.
What am I missing?
(Maybe I should have put this series as RFC)
The problem I see and noticed while doing the KVM_GET_EMULATED_CPUID selftest is the following: assume there are 3 kvm emulated entries, and the user sets cpuid->nent = 3. This should work because kvm sets 3 array->entries[], and copies them to user space.
However, when the 3rd entry is populated inside kvm (array->entries[2]), array->nent is increased once more (do_host_cpuid and __do_cpuid_func_emulated). At that point, the loop in kvm_dev_ioctl_get_cpuid and get_cpuid_func can potentially iterate once more, going into the
	if (array->nent >= array->maxnent)
		return -E2BIG;
in __do_cpuid_func_emulated and do_host_cpuid, returning the error. I agree that we need that check there because the following code tries to access the array entry at array->nent index, but from what I understand that access can be potentially useless because it might just jump to the default entry in the switch statement and not set the entry, leaving array->nent to 3.
The problem seems to be exclusive to __do_cpuid_func_emulated(), do_host_cpuid() always does
entry = &array->entries[array->nent++];
Something like (completely untested and stupid):
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 6bd2f8b830e4..54dcabd3abec 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -565,14 +565,22 @@ static struct kvm_cpuid_entry2 *do_host_cpuid(struct kvm_cpuid_array *array,
 	return entry;
 }
 
+static bool cpuid_func_emulated(u32 func)
+{
+	return (func == 0) || (func == 1) || (func == 7);
+}
+
 static int __do_cpuid_func_emulated(struct kvm_cpuid_array *array, u32 func)
 {
 	struct kvm_cpuid_entry2 *entry;
 
+	if (!cpuid_func_emulated(func))
+		return 0;
+
 	if (array->nent >= array->maxnent)
 		return -E2BIG;
 
-	entry = &array->entries[array->nent];
+	entry = &array->entries[array->nent++];
 	entry->function = func;
 	entry->index = 0;
 	entry->flags = 0;
@@ -580,18 +588,14 @@ static int __do_cpuid_func_emulated(struct kvm_cpuid_array *array, u32 func)
 	switch (func) {
 	case 0:
 		entry->eax = 7;
-		++array->nent;
 		break;
 	case 1:
 		entry->ecx = F(MOVBE);
-		++array->nent;
 		break;
 	case 7:
 		entry->flags |= KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
 		entry->eax = 0;
 		entry->ecx = F(RDPID);
-		++array->nent;
-
 	default:
 		break;
 	}
should do the job, right?
On 31/03/2021 09:56, Vitaly Kuznetsov wrote:
Emanuele Giuseppe Esposito eesposit@redhat.com writes:
On 31/03/2021 05:01, Sean Christopherson wrote:
On Tue, Mar 30, 2021, Emanuele Giuseppe Esposito wrote:
Calling the kvm KVM_GET_[SUPPORTED/EMULATED]_CPUID ioctl requires a nent field inside the kvm_cpuid2 struct to be big enough to contain all entries that will be set by kvm. Therefore if the nent field is too high, kvm will adjust it to the right value. If too low, -E2BIG is returned.
However, when filling the entries do_cpuid_func() requires an additional entry, so if the right nent is known in advance, giving the exact number of entries won't work because it has to be increased by one.
Signed-off-by: Emanuele Giuseppe Esposito eesposit@redhat.com
[...]
I don't see how this can be correct.
If this bonus entry really is needed, then won't that be reflected in array.nent? I.e won't KVM overrun the userspace buffer?
If it's not reflected in array.nent, that would imply there's an off-by-one check somewhere, or KVM is creating an entry that it doesn't copy to userspace. The former seems unlikely as there are literally only two checks against maxnent, and they both look correct (famous last words...).
KVM does decrement array->nent in one specific case (CPUID.0xD.2..64), i.e. a false positive is theoretically possible, but that carries a WARN and requires a kernel or CPU bug as well. And fudging nent for that case would still break normal use cases due to the overrun problem.
What am I missing?
(Maybe I should have put this series as RFC)
The problem I see and noticed while doing the KVM_GET_EMULATED_CPUID selftest is the following: assume there are 3 kvm emulated entries, and the user sets cpuid->nent = 3. This should work because kvm sets 3 array->entries[], and copies them to user space.
However, when the 3rd entry is populated inside kvm (array->entries[2]), array->nent is increased once more (do_host_cpuid and __do_cpuid_func_emulated). At that point, the loop in kvm_dev_ioctl_get_cpuid and get_cpuid_func can potentially iterate once more, going into the
	if (array->nent >= array->maxnent)
		return -E2BIG;
in __do_cpuid_func_emulated and do_host_cpuid, returning the error. I agree that we need that check there because the following code tries to access the array entry at array->nent index, but from what I understand that access can be potentially useless because it might just jump to the default entry in the switch statement and not set the entry, leaving array->nent to 3.
The problem seems to be exclusive to __do_cpuid_func_emulated(), do_host_cpuid() always does
entry = &array->entries[array->nent++];
Something like (completely untested and stupid):
[...]
should do the job, right?
Yes, it would work better. Alternatively:
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index ba7437308d28..452b0acd6e9d 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -567,34 +567,37 @@ static struct kvm_cpuid_entry2 *do_host_cpuid(struct kvm_cpuid_array *array,
 
 static int __do_cpuid_func_emulated(struct kvm_cpuid_array *array, u32 func)
 {
-	struct kvm_cpuid_entry2 *entry;
-
-	if (array->nent >= array->maxnent)
-		return -E2BIG;
+	struct kvm_cpuid_entry2 entry;
+	bool changed = true;
 
-	entry = &array->entries[array->nent];
-	entry->function = func;
-	entry->index = 0;
-	entry->flags = 0;
+	entry.function = func;
+	entry.index = 0;
+	entry.flags = 0;
 
 	switch (func) {
 	case 0:
-		entry->eax = 7;
-		++array->nent;
+		entry.eax = 7;
 		break;
 	case 1:
-		entry->ecx = F(MOVBE);
-		++array->nent;
+		entry.ecx = F(MOVBE);
 		break;
 	case 7:
-		entry->flags |= KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
-		entry->eax = 0;
-		entry->ecx = F(RDPID);
-		++array->nent;
+		entry.flags |= KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
+		entry.eax = 0;
+		entry.ecx = F(RDPID);
+		break;
 	default:
+		changed = false;
 		break;
 	}
 
+	if (changed) {
+		if (array->nent >= array->maxnent)
+			return -E2BIG;
+
+		memcpy(&array->entries[array->nent++], &entry, sizeof(entry));
+	}
+
 	return 0;
 }
Pros: avoids hard-coding another function that would check what the switch already does, and it is more flexible if another func has to be added.
Cons: there is a memcpy for each entry.
What do you think?
Emanuele
Emanuele Giuseppe Esposito eesposit@redhat.com writes:
On 31/03/2021 09:56, Vitaly Kuznetsov wrote:
Emanuele Giuseppe Esposito eesposit@redhat.com writes:
On 31/03/2021 05:01, Sean Christopherson wrote:
On Tue, Mar 30, 2021, Emanuele Giuseppe Esposito wrote:
Calling the kvm KVM_GET_[SUPPORTED/EMULATED]_CPUID ioctl requires a nent field inside the kvm_cpuid2 struct to be big enough to contain all entries that will be set by kvm. Therefore if the nent field is too high, kvm will adjust it to the right value. If too low, -E2BIG is returned.
However, when filling the entries do_cpuid_func() requires an additional entry, so if the right nent is known in advance, giving the exact number of entries won't work because it has to be increased by one.
Signed-off-by: Emanuele Giuseppe Esposito eesposit@redhat.com
[...]
I don't see how this can be correct.
If this bonus entry really is needed, then won't that be reflected in array.nent? I.e won't KVM overrun the userspace buffer?
If it's not reflected in array.nent, that would imply there's an off-by-one check somewhere, or KVM is creating an entry that it doesn't copy to userspace. The former seems unlikely as there are literally only two checks against maxnent, and they both look correct (famous last words...).
KVM does decrement array->nent in one specific case (CPUID.0xD.2..64), i.e. a false positive is theoretically possible, but that carries a WARN and requires a kernel or CPU bug as well. And fudging nent for that case would still break normal use cases due to the overrun problem.
What am I missing?
(Maybe I should have put this series as RFC)
The problem I see and noticed while doing the KVM_GET_EMULATED_CPUID selftest is the following: assume there are 3 kvm emulated entries, and the user sets cpuid->nent = 3. This should work because kvm sets 3 array->entries[], and copies them to user space.
However, when the 3rd entry is populated inside kvm (array->entries[2]), array->nent is increased once more (do_host_cpuid and __do_cpuid_func_emulated). At that point, the loop in kvm_dev_ioctl_get_cpuid and get_cpuid_func can potentially iterate once more, going into the
	if (array->nent >= array->maxnent)
		return -E2BIG;
in __do_cpuid_func_emulated and do_host_cpuid, returning the error. I agree that we need that check there because the following code tries to access the array entry at array->nent index, but from what I understand that access can be potentially useless because it might just jump to the default entry in the switch statement and not set the entry, leaving array->nent to 3.
The problem seems to be exclusive to __do_cpuid_func_emulated(), do_host_cpuid() always does
entry = &array->entries[array->nent++];
Something like (completely untested and stupid):
[...]
should do the job, right?
Yes, it would work better. Alternatively:
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index ba7437308d28..452b0acd6e9d 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -567,34 +567,37 @@ static struct kvm_cpuid_entry2 *do_host_cpuid(struct kvm_cpuid_array *array,
 
 static int __do_cpuid_func_emulated(struct kvm_cpuid_array *array, u32 func)
 {
-	struct kvm_cpuid_entry2 *entry;
-
-	if (array->nent >= array->maxnent)
-		return -E2BIG;
+	struct kvm_cpuid_entry2 entry;
+	bool changed = true;
 
-	entry = &array->entries[array->nent];
-	entry->function = func;
-	entry->index = 0;
-	entry->flags = 0;
+	entry.function = func;
+	entry.index = 0;
+	entry.flags = 0;
 
 	switch (func) {
 	case 0:
-		entry->eax = 7;
-		++array->nent;
+		entry.eax = 7;
 		break;
 	case 1:
-		entry->ecx = F(MOVBE);
-		++array->nent;
+		entry.ecx = F(MOVBE);
 		break;
 	case 7:
-		entry->flags |= KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
-		entry->eax = 0;
-		entry->ecx = F(RDPID);
-		++array->nent;
+		entry.flags |= KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
+		entry.eax = 0;
+		entry.ecx = F(RDPID);
+		break;
 	default:
+		changed = false;
 		break;
 	}
 
+	if (changed) {
+		if (array->nent >= array->maxnent)
+			return -E2BIG;
+
+		memcpy(&array->entries[array->nent++], &entry, sizeof(entry));
+	}
+
 	return 0;
 }
pros: avoids hard-coding another function that would check what the switch already does. it will be more flexible if another func has to be added. cons: there is a memcpy for each entry.
Looks good to me,

I'd just drop 'bool changed' and replace it with a 'goto out' in the 'default' case.

memcpy() here is not a problem I believe, this path is not that performance critical.
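For reference, a rough sketch of __do_cpuid_func_emulated() with that change applied (untested, assembled from the diffs above; treat it as an illustration of the suggestion rather than a final patch):

static int __do_cpuid_func_emulated(struct kvm_cpuid_array *array, u32 func)
{
	struct kvm_cpuid_entry2 entry = {
		.function = func,
	};

	switch (func) {
	case 0:
		entry.eax = 7;
		break;
	case 1:
		entry.ecx = F(MOVBE);
		break;
	case 7:
		entry.flags |= KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
		entry.eax = 0;
		entry.ecx = F(RDPID);
		break;
	default:
		/* not an emulated leaf: do not consume an output slot */
		goto out;
	}

	if (array->nent >= array->maxnent)
		return -E2BIG;

	memcpy(&array->entries[array->nent++], &entry, sizeof(entry));
out:
	return 0;
}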
KVM_GET_EMULATED_CPUID returns -E2BIG if the nent field of struct kvm_cpuid2 is smaller than the number of entries to be returned, while it adjusts nent down if the provided amount is bigger than the actual amount.

Update the documentation accordingly. ENOMEM is only returned if the allocation fails, as for all other calls.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
 Documentation/virt/kvm/api.rst | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 307f2fcf1b02..8ba23bc2a625 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -3404,12 +3404,10 @@ which features are emulated by kvm instead of being present natively.
 
 Userspace invokes KVM_GET_EMULATED_CPUID by passing a kvm_cpuid2 structure
 with the 'nent' field indicating the number of entries in
-the variable-size array 'entries'. If the number of entries is too low
-to describe the cpu capabilities, an error (E2BIG) is returned. If the
-number is too high, the 'nent' field is adjusted and an error (ENOMEM)
-is returned. If the number is just right, the 'nent' field is adjusted
-to the number of valid entries in the 'entries' array, which is then
-filled.
+the variable-size array 'entries'.
+If the number of entries is too low to describe the cpu
+capabilities, an error (E2BIG) is returned. If the number is too high,
+the 'nent' field is adjusted and the entries array is filled.
 
 The entries returned are the set CPUID bits of the respective features
 which kvm emulates, as returned by the CPUID instruction, with unknown
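The documented semantics also imply a simple retry pattern for callers that do not want to guess a large nent up front; a minimal sketch, purely illustrative and not part of this patch (the starting size and the helper name are assumptions):

#include <errno.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static struct kvm_cpuid2 *get_cpuid_retry(int kvm_fd, unsigned long req)
{
	struct kvm_cpuid2 *cpuid;
	int nent = 8;

	for (;;) {
		cpuid = calloc(1, sizeof(*cpuid) +
				  nent * sizeof(struct kvm_cpuid_entry2));
		if (!cpuid)
			return NULL;
		cpuid->nent = nent;

		/* on success, nent now holds the real number of entries */
		if (ioctl(kvm_fd, req, cpuid) == 0)
			return cpuid;

		free(cpuid);
		if (errno != E2BIG)
			return NULL;
		nent *= 2;	/* too small: grow and retry */
	}
}

A caller would pass KVM_GET_EMULATED_CPUID (or KVM_GET_SUPPORTED_CPUID) as the request, with kvm_fd an open /dev/kvm descriptor.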
Like the similar kvm_get_supported_cpuid(), this allocates and returns a struct kvm_cpuid2 filled with the emulated features.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
 .../selftests/kvm/include/x86_64/processor.h |  1 +
 .../selftests/kvm/lib/x86_64/processor.c     | 33 +++++++++++++++++++
 2 files changed, 34 insertions(+)
diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 0b30b4e15c38..ae1b9530e187 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -353,6 +353,7 @@ void vcpu_load_state(struct kvm_vm *vm, uint32_t vcpuid,
 struct kvm_msr_list *kvm_get_msr_index_list(void);
 uint64_t kvm_get_feature_msr(uint64_t msr_index);
 struct kvm_cpuid2 *kvm_get_supported_cpuid(void);
+struct kvm_cpuid2 *kvm_get_emulated_cpuid(void);
 
 struct kvm_cpuid2 *vcpu_get_cpuid(struct kvm_vm *vm, uint32_t vcpuid);
 void vcpu_set_cpuid(struct kvm_vm *vm, uint32_t vcpuid,
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index e676fe40bfe6..2ea14421bdfe 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -669,6 +669,39 @@ struct kvm_cpuid2 *kvm_get_supported_cpuid(void)
 	return cpuid;
 }
 
+/*
+ * KVM Emulated CPUID Get
+ *
+ * Input Args: None
+ *
+ * Output Args:
+ *
+ * Return: The emulated KVM CPUID
+ *
+ * Get the guest CPUID emulated by KVM.
+ */
+struct kvm_cpuid2 *kvm_get_emulated_cpuid(void)
+{
+	static struct kvm_cpuid2 *cpuid;
+	int ret;
+	int kvm_fd;
+
+	if (cpuid)
+		return cpuid;
+
+	cpuid = allocate_kvm_cpuid2();
+	kvm_fd = open(KVM_DEV_PATH, O_RDONLY);
+	if (kvm_fd < 0)
+		exit(KSFT_SKIP);
+
+	ret = ioctl(kvm_fd, KVM_GET_EMULATED_CPUID, cpuid);
+	TEST_ASSERT(ret == 0, "KVM_GET_EMULATED_CPUID failed %d %d\n",
+		    ret, errno);
+
+	close(kvm_fd);
+	return cpuid;
+}
+
 /*
  * KVM Get MSR
  *
Introduce a new selftest for the KVM_GET_EMULATED_CPUID ioctl. Since its behavior and functionality are similar to get_cpuid_test, the test checks:
1) corner cases in the nent field of struct kvm_cpuid2
2) setting and getting it as the cpuid of the guest VM
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/x86_64/get_emulated_cpuid.c | 183 ++++++++++++++++++
 3 files changed, 185 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/get_emulated_cpuid.c
diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 7bd7e776c266..f1523f3bfd04 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -8,6 +8,7 @@
 /x86_64/debug_regs
 /x86_64/evmcs_test
 /x86_64/get_cpuid_test
+x86_64/get_emulated_cpuid
 /x86_64/get_msr_index_features
 /x86_64/kvm_pv_test
 /x86_64/hyperv_clock
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 67eebb53235f..0d8d3bd5a7c7 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -40,6 +40,7 @@ LIBKVM_s390x = lib/s390x/processor.c lib/s390x/ucall.c lib/s390x/diag318_test_ha
 
 TEST_GEN_PROGS_x86_64 = x86_64/cr4_cpuid_sync_test
 TEST_GEN_PROGS_x86_64 += x86_64/get_msr_index_features
+TEST_GEN_PROGS_x86_64 += x86_64/get_emulated_cpuid
 TEST_GEN_PROGS_x86_64 += x86_64/evmcs_test
 TEST_GEN_PROGS_x86_64 += x86_64/get_cpuid_test
 TEST_GEN_PROGS_x86_64 += x86_64/hyperv_clock
diff --git a/tools/testing/selftests/kvm/x86_64/get_emulated_cpuid.c b/tools/testing/selftests/kvm/x86_64/get_emulated_cpuid.c
new file mode 100644
index 000000000000..f5294dc4b8ff
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/get_emulated_cpuid.c
@@ -0,0 +1,183 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2021, Red Hat Inc.
+ *
+ * Generic tests for KVM CPUID set/get ioctls
+ */
+#include <asm/kvm_para.h>
+#include <linux/kvm_para.h>
+#include <stdint.h>
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "processor.h"
+
+#define VCPU_ID 0
+#define MAX_NENT 1000
+
+/* CPUIDs known to differ */
+struct {
+	u32 function;
+	u32 index;
+} mangled_cpuids[] = {
+	{.function = 0xd, .index = 0},
+};
+
+static void guest_main(void)
+{
+
+}
+
+static bool is_cpuid_mangled(struct kvm_cpuid_entry2 *entrie)
+{
+	int i;
+
+	for (i = 0; i < sizeof(mangled_cpuids); i++) {
+		if (mangled_cpuids[i].function == entrie->function &&
+		    mangled_cpuids[i].index == entrie->index)
+			return true;
+	}
+
+	return false;
+}
+
+static void check_cpuid(struct kvm_cpuid2 *cpuid, struct kvm_cpuid_entry2 *entrie)
+{
+	int i;
+
+	for (i = 0; i < cpuid->nent; i++) {
+		if (cpuid->entries[i].function == entrie->function &&
+		    cpuid->entries[i].index == entrie->index) {
+			if (is_cpuid_mangled(entrie))
+				return;
+
+			TEST_ASSERT(cpuid->entries[i].eax == entrie->eax &&
+				    cpuid->entries[i].ebx == entrie->ebx &&
+				    cpuid->entries[i].ecx == entrie->ecx &&
+				    cpuid->entries[i].edx == entrie->edx,
+				    "CPUID 0x%x.%x differ: 0x%x:0x%x:0x%x:0x%x vs 0x%x:0x%x:0x%x:0x%x",
+				    entrie->function, entrie->index,
+				    cpuid->entries[i].eax, cpuid->entries[i].ebx,
+				    cpuid->entries[i].ecx, cpuid->entries[i].edx,
+				    entrie->eax, entrie->ebx, entrie->ecx, entrie->edx);
+			return;
+		}
+	}
+
+	TEST_ASSERT(false, "CPUID 0x%x.%x not found", entrie->function, entrie->index);
+}
+
+static void compare_cpuids(struct kvm_cpuid2 *cpuid1,
+			   struct kvm_cpuid2 *cpuid2)
+{
+	int i;
+
+	for (i = 0; i < cpuid1->nent; i++)
+		check_cpuid(cpuid2, &cpuid1->entries[i]);
+
+	for (i = 0; i < cpuid2->nent; i++)
+		check_cpuid(cpuid1, &cpuid2->entries[i]);
+}
+
+struct kvm_cpuid2 *vcpu_alloc_cpuid(struct kvm_vm *vm, vm_vaddr_t *p_gva, struct kvm_cpuid2 *cpuid)
+{
+	int size = sizeof(*cpuid) + cpuid->nent * sizeof(cpuid->entries[0]);
+	vm_vaddr_t gva = vm_vaddr_alloc(vm, size,
+					getpagesize(), 0, 0);
+	struct kvm_cpuid2 *guest_cpuids = addr_gva2hva(vm, gva);
+
+	memcpy(guest_cpuids, cpuid, size);
+
+	*p_gva = gva;
+	return guest_cpuids;
+}
+
+static struct kvm_cpuid2 *alloc_custom_kvm_cpuid2(int nent)
+{
+	struct kvm_cpuid2 *cpuid;
+	size_t size;
+
+	size = sizeof(*cpuid);
+	size += nent * sizeof(struct kvm_cpuid_entry2);
+	cpuid = calloc(1, size);
+	if (!cpuid) {
+		perror("malloc");
+		abort();
+	}
+
+	cpuid->nent = nent;
+
+	return cpuid;
+}
+
+static void test_emulated_entries(struct kvm_vm *vm)
+{
+	int res, right_nent;
+	struct kvm_cpuid2 *cpuid;
+
+	cpuid = alloc_custom_kvm_cpuid2(MAX_NENT);
+
+	/* 0 nent, return E2BIG */
+	cpuid->nent = 0;
+	res = _kvm_ioctl(vm, KVM_GET_EMULATED_CPUID, cpuid);
+	TEST_ASSERT(res == -1 && errno == E2BIG,
+		    "KVM_GET_EMULATED_CPUID should fail E2BIG with nent=0");
+
+	/* high nent, set the entries and adjust */
+	cpuid->nent = MAX_NENT;
+	res = _kvm_ioctl(vm, KVM_GET_EMULATED_CPUID, cpuid);
+	printf("%d %d\n", res, errno);
+	TEST_ASSERT(res == 0,
+		    "KVM_GET_EMULATED_CPUID should not fail with nent > actual nent");
+	right_nent = cpuid->nent;
+
+	/* high nent, set the entries and adjust */
+	cpuid->nent++;
+	res = _kvm_ioctl(vm, KVM_GET_EMULATED_CPUID, cpuid);
+	TEST_ASSERT(res == 0,
+		    "KVM_GET_EMULATED_CPUID should not fail with nent > actual nent");
+	TEST_ASSERT(right_nent == cpuid->nent,
+		    "KVM_GET_EMULATED_CPUID nent should be always the same");
+
+	/* low nent, return E2BIG */
+	if (right_nent > 1) {
+		cpuid->nent = 1;
+		res = _kvm_ioctl(vm, KVM_GET_EMULATED_CPUID, cpuid);
+		TEST_ASSERT(res == -1 && errno == E2BIG,
+			    "KVM_GET_EMULATED_CPUID should fail with nent=1");
+	}
+
+	/* exact nent */
+	cpuid->nent = right_nent;
+	res = _kvm_ioctl(vm, KVM_GET_EMULATED_CPUID, cpuid);
+	TEST_ASSERT(res == 0,
+		    "KVM_GET_EMULATED_CPUID should not fail with nent == actual nent");
+	TEST_ASSERT(cpuid->nent == right_nent,
+		    "KVM_GET_EMULATED_CPUID should be invaried when nent is exact");
+
+	free(cpuid);
+}
+
+// emulated is all emulated
+// supported is only hw + kvm
+int main(void)
+{
+	struct kvm_cpuid2 *emul_cpuid, *cpuid2;
+	struct kvm_vm *vm;
+
+	if (!kvm_check_cap(KVM_CAP_EXT_EMUL_CPUID)) {
+		print_skip("KVM_GET_EMULATED_CPUID not available");
+		return 0;
+	}
+
+	vm = vm_create_default(VCPU_ID, 0, guest_main);
+
+	emul_cpuid = kvm_get_emulated_cpuid();
+	vcpu_set_cpuid(vm, VCPU_ID, emul_cpuid);
+	cpuid2 = vcpu_get_cpuid(vm, VCPU_ID);
+
+	test_emulated_entries(vm);
+	compare_cpuids(emul_cpuid, cpuid2);
+
+	kvm_vm_free(vm);
+}
Emanuele Giuseppe Esposito eesposit@redhat.com writes:
Introduce a new selftest for the KVM_GET_EMULATED_CPUID ioctl. Since the behavior and functionality is similar to get_cpuid_test, the test checks:
- checks for corner case in the nent field of the struct kvm_cpuid2.
- sets and gets it as cpuid from the guest VM
Signed-off-by: Emanuele Giuseppe Esposito eesposit@redhat.com
[...]
+static void check_cpuid(struct kvm_cpuid2 *cpuid, struct kvm_cpuid_entry2 *entrie)
+{
+	int i;
+
+	for (i = 0; i < cpuid->nent; i++) {
+		if (cpuid->entries[i].function == entrie->function &&
+		    cpuid->entries[i].index == entrie->index) {
+			if (is_cpuid_mangled(entrie))
+				return;
+
+			TEST_ASSERT(cpuid->entries[i].eax == entrie->eax &&
+				    cpuid->entries[i].ebx == entrie->ebx &&
+				    cpuid->entries[i].ecx == entrie->ecx &&
+				    cpuid->entries[i].edx == entrie->edx,
+				    "CPUID 0x%x.%x differ: 0x%x:0x%x:0x%x:0x%x vs 0x%x:0x%x:0x%x:0x%x",
+				    entrie->function, entrie->index,
+				    cpuid->entries[i].eax, cpuid->entries[i].ebx,
+				    cpuid->entries[i].ecx, cpuid->entries[i].edx,
+				    entrie->eax, entrie->ebx, entrie->ecx, entrie->edx);
+			return;
+		}
+	}
+
+	TEST_ASSERT(false, "CPUID 0x%x.%x not found", entrie->function, entrie->index);
+}
+
+static void compare_cpuids(struct kvm_cpuid2 *cpuid1,
+			   struct kvm_cpuid2 *cpuid2)
+{
+	int i;
+
+	for (i = 0; i < cpuid1->nent; i++)
+		check_cpuid(cpuid2, &cpuid1->entries[i]);
+
+	for (i = 0; i < cpuid2->nent; i++)
+		check_cpuid(cpuid1, &cpuid2->entries[i]);
+}
CPUID comparison here seems to be borrowed from get_cpuid_test.c, I think we can either put it to a library or (my preference) just merge these two selftests together. 'get_cpuid_test' name is generic enough to be used for KVM_GET_EMULATED_CPUID too.
[...]
+// emulated is all emulated
+// supported is only hw + kvm
/*
 * ...
 */
comments please
[...]
CPUID comparison here seems to be borrowed from get_cpuid_test.c, I think we can either put it to a library or (my preference) just merge these two selftests together. 'get_cpuid_test' name is generic enough to be used for KVM_GET_EMULATED_CPUID too.
Yes, it is identical. I agree with you, I will merge the test into get_cpuid_test.c.
Emanuele