On Tue, May 18, 2021 at 1:40 PM Krish Sadhukhan <krish.sadhukhan@oracle.com> wrote:
On 5/18/21 10:25 AM, Jing Zhang wrote:
Hi David,
On Tue, May 18, 2021 at 11:27 AM David Matlack <dmatlack@google.com> wrote:
On Mon, May 17, 2021 at 5:10 PM Jing Zhang <jingzhangos@google.com> wrote:
<snip>

> Actually the definition of kvm_{vcpu,vm}_stat are arch specific. There is
> no real structure for arch agnostic stats. Most of the stats in common
> structures are arch agnostic, but not all of them.
> There are some benefits to put all common stats in a separate structure.
> e.g. if we want to add a stat in kvm_main.c, we only need to add this stat
> in the common structure, don't have to update all kvm_{vcpu,vm}_stat
> definition for all architectures.

I meant rename the existing arch-specific struct kvm_{vcpu,vm}_stat to
kvm_{vcpu,vm}_stat_arch, and rename struct kvm_{vcpu,vm}_stat_common to
kvm_{vcpu,vm}_stat.
So in include/linux/kvm_types.h you'd have:
struct kvm_vm_stat {
	ulong remote_tlb_flush;
	struct kvm_vm_stat_arch arch;
};
struct kvm_vcpu_stat {
	u64 halt_successful_poll;
	u64 halt_attempted_poll;
	u64 halt_poll_invalid;
	u64 halt_wakeup;
	u64 halt_poll_success_ns;
	u64 halt_poll_fail_ns;
	struct kvm_vcpu_stat_arch arch;
};
And in arch/x86/include/asm/kvm_host.h you'd have:
struct kvm_vm_stat_arch {
	ulong mmu_shadow_zapped;
	...
};
struct kvm_vcpu_stat_arch {
	u64 pf_fixed;
	u64 pf_guest;
	u64 tlb_flush;
	...
};
You still get the benefit of an arch-neutral place to store stats, but the struct layout more closely resembles struct kvm_vcpu and struct kvm.
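For reference, this mirrors how struct kvm_vcpu itself is laid out in include/linux/kvm_host.h, where the common fields sit alongside an embedded arch struct (abridged sketch, most fields omitted):

struct kvm_vcpu {
	struct kvm *kvm;
	/* ... common, arch-neutral fields ... */
	struct kvm_vcpu_arch arch;	/* arch-specific state */
};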
You are right. This is a more reasonable way to lay out the structures. The only reason I didn't choose it is that it requires touching every arch-specific stat update in all architectures (stat.name -> stat.arch.name, as sketched below) instead of only the arch-neutral stats. Let's see if anyone else wants to vote on this.
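To make the churn concrete, a typical x86 update site would change roughly as follows (using pf_fixed from the structs above as the example):

	/* before: arch-specific stat lives directly in kvm_vcpu_stat */
	++vcpu->stat.pf_fixed;

	/* after: the same stat moves under the embedded arch struct */
	++vcpu->stat.arch.pf_fixed;

Every such line in every architecture would need the same mechanical rename.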
+1
Thanks, Jing
It is still not fun to change hundreds of stat-update sites in every architecture. Let's keep it as it is for now and see how it goes.
Thanks, Jing