Move the common (or at least "ignored") aspects of resetting the vPMU to
common x86 code, along with the stop/release helpers that are now used
only by the common pmu.c.

There is no need to manually handle fixed counters as all_valid_pmc_idx
tracks both fixed and general purpose counters, and resetting the vPMU is
far from a hot path, i.e. the extra bit of overhead to get the PMC from
the index is a non-issue.

Zero fixed_ctr_ctrl in common code even though it's Intel specific.
Ensuring it's zero doesn't harm AMD/SVM in any way, and stopping the fixed
counters via all_valid_pmc_idx, but not clearing the associated control
bits, would be odd/confusing.

Make the .reset() hook optional as SVM no longer needs vendor specific
handling.

Cc: stable@vger.kernel.org
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Link: https://lore.kernel.org/r/20231103230541.352265-2-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
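As a sketch of the result (not a verbatim quote of the patch; helpers such
as pmc_stop_counter() and pmc_is_gp() are existing KVM internals, and the
exact body may differ), the common reset path ends up looking roughly like:

void kvm_pmu_reset(struct kvm_vcpu *vcpu)
{
	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
	struct kvm_pmc *pmc;
	int i;

	bitmap_zero(pmu->reprogram_pmi, X86_PMC_IDX_MAX);

	/*
	 * all_valid_pmc_idx covers both fixed and general purpose
	 * counters, so no separate fixed-counter handling is needed.
	 */
	for_each_set_bit(i, pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX) {
		pmc = static_call(kvm_x86_pmu_pmc_idx_to_pmc)(pmu, i);
		if (!pmc)
			continue;

		pmc_stop_counter(pmc);
		pmc->counter = 0;

		if (pmc_is_gp(pmc))
			pmc->eventsel = 0;
	}

	/* Intel specific, but zeroing it is harmless on AMD/SVM. */
	pmu->fixed_ctr_ctrl = pmu->global_ctrl = pmu->global_status = 0;

	/* The .reset() hook is now optional; SVM no longer implements it. */
	static_call_cond(kvm_x86_pmu_reset)(vcpu);
}

The header below (arch/x86/include/asm/kvm-x86-pmu-ops.h), which drives the
per-op static call generation, accordingly lists reset under
KVM_X86_PMU_OP_OPTIONAL().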
/* SPDX-License-Identifier: GPL-2.0 */
#if !defined(KVM_X86_PMU_OP) || !defined(KVM_X86_PMU_OP_OPTIONAL)
BUILD_BUG_ON(1)
#endif

/*
 * KVM_X86_PMU_OP() and KVM_X86_PMU_OP_OPTIONAL() are used to help generate
 * both DECLARE/DEFINE_STATIC_CALL() invocations and
 * "static_call_update()" calls.
 *
 * KVM_X86_PMU_OP_OPTIONAL() can be used for those functions that can have
 * a NULL definition, for example if "static_call_cond()" will be used
 * at the call sites.
 */
KVM_X86_PMU_OP(hw_event_available)
KVM_X86_PMU_OP(pmc_idx_to_pmc)
KVM_X86_PMU_OP(rdpmc_ecx_to_pmc)
KVM_X86_PMU_OP(msr_idx_to_pmc)
KVM_X86_PMU_OP(is_valid_rdpmc_ecx)
KVM_X86_PMU_OP(is_valid_msr)
KVM_X86_PMU_OP(get_msr)
KVM_X86_PMU_OP(set_msr)
KVM_X86_PMU_OP(refresh)
KVM_X86_PMU_OP(init)
KVM_X86_PMU_OP_OPTIONAL(reset)
KVM_X86_PMU_OP_OPTIONAL(deliver_pmi)
KVM_X86_PMU_OP_OPTIONAL(cleanup)

#undef KVM_X86_PMU_OP
#undef KVM_X86_PMU_OP_OPTIONAL
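For context, here is a sketch of how this header is consumed (mirroring the
pattern KVM uses in pmu.h and pmu.c; treat the exact locations and the
kvm_pmu_ops_update() placement as illustrative): each consumer redefines the
two macros and then includes the header, stamping out one construct per op.

/* Declare one static call per op, e.g. in a header: */
#define KVM_X86_PMU_OP(func)					     \
	DECLARE_STATIC_CALL(kvm_x86_pmu_##func,			     \
			    *(((struct kvm_pmu_ops *)0)->func))
#define KVM_X86_PMU_OP_OPTIONAL KVM_X86_PMU_OP
#include <asm/kvm-x86-pmu-ops.h>

/* Point the static calls at the vendor implementation at setup time: */
void kvm_pmu_ops_update(const struct kvm_pmu_ops *pmu_ops)
{
	memcpy(&kvm_pmu_ops, pmu_ops, sizeof(kvm_pmu_ops));

#define __KVM_X86_PMU_OP(func) \
	static_call_update(kvm_x86_pmu_##func, kvm_pmu_ops.func);
#define KVM_X86_PMU_OP(func) \
	WARN_ON(!kvm_pmu_ops.func); __KVM_X86_PMU_OP(func)
#define KVM_X86_PMU_OP_OPTIONAL __KVM_X86_PMU_OP
#include <asm/kvm-x86-pmu-ops.h>
#undef __KVM_X86_PMU_OP
}

Mandatory ops get a WARN_ON(!...) to catch a vendor module that fails to
implement them, while KVM_X86_PMU_OP_OPTIONAL() skips that check, which is
what makes it safe for SVM to leave the now-optional .reset() hook NULL.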