Add a common helper for *internal* PMC lookups, and delete the ops hook
and Intel's implementation. Keep AMD's implementation, but rename it to
amd_pmu_get_pmc() to make it somewhat more obvious that it's suited for
both KVM-internal and guest-initiated lookups.

Because KVM tracks all counters in a single bitmap, getting a counter
when iterating over a bitmap, e.g. of all valid PMCs, requires a small
amount of math that, while simple, isn't super obvious and doesn't use
the same semantics as PMC lookups from RDPMC. Although AMD doesn't
support fixed counters, the common PMU code still behaves as if there
were a split, the high half of which just happens to always be empty.

Opportunistically add a comment to explain both what is going on, and
why KVM uses a single bitmap, e.g. the boilerplate for iterating over
separate bitmaps could be done via macros, so it's not (just) about
deduplicating code.

Link: https://lore.kernel.org/r/20231110022857.1273836-4-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
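For reference, below is a minimal sketch of what such a common lookup
helper could look like. The name kvm_pmc_idx_to_pmc() and the use of
INTEL_PMC_IDX_FIXED as the base of the fixed-counter half are
assumptions for illustration, not necessarily the exact code in the
patch:

	/*
	 * Sketch: KVM tracks all counters in a single bitmap, with GP
	 * counters in the low half and fixed counters in the high half
	 * (starting at INTEL_PMC_IDX_FIXED).  Translating a bitmap index
	 * into a PMC thus needs a small amount of math.  Note that this
	 * index is *not* the same as the ECX encoding used by RDPMC.
	 */
	static inline struct kvm_pmc *kvm_pmc_idx_to_pmc(struct kvm_pmu *pmu,
							 int idx)
	{
		if (idx < pmu->nr_arch_gp_counters)
			return &pmu->gp_counters[idx];

		idx -= INTEL_PMC_IDX_FIXED;
		if (idx >= 0 && idx < pmu->nr_arch_fixed_counters)
			return &pmu->fixed_counters[idx];

		return NULL;
	}

On AMD, nr_arch_fixed_counters is simply zero, so the fixed-counter
branch never matches: the "high half" of the bitmap is always empty, as
described above.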
arch/x86/include/asm/kvm-x86-pmu-ops.h (28 lines, 853 B):
/* SPDX-License-Identifier: GPL-2.0 */
#if !defined(KVM_X86_PMU_OP) || !defined(KVM_X86_PMU_OP_OPTIONAL)
BUILD_BUG_ON(1)
#endif

/*
 * KVM_X86_PMU_OP() and KVM_X86_PMU_OP_OPTIONAL() are used to help generate
 * both DECLARE/DEFINE_STATIC_CALL() invocations and
 * "static_call_update()" calls.
 *
 * KVM_X86_PMU_OP_OPTIONAL() can be used for those functions that can have
 * a NULL definition, for example if "static_call_cond()" will be used
 * at the call sites.
 */
KVM_X86_PMU_OP(rdpmc_ecx_to_pmc)
KVM_X86_PMU_OP(msr_idx_to_pmc)
KVM_X86_PMU_OP_OPTIONAL(check_rdpmc_early)
KVM_X86_PMU_OP(is_valid_msr)
KVM_X86_PMU_OP(get_msr)
KVM_X86_PMU_OP(set_msr)
KVM_X86_PMU_OP(refresh)
KVM_X86_PMU_OP(init)
KVM_X86_PMU_OP_OPTIONAL(reset)
KVM_X86_PMU_OP_OPTIONAL(deliver_pmi)
KVM_X86_PMU_OP_OPTIONAL(cleanup)

#undef KVM_X86_PMU_OP
#undef KVM_X86_PMU_OP_OPTIONAL
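For context, this header follows the kernel's usual include-with-macro
pattern: a consumer defines KVM_X86_PMU_OP()/KVM_X86_PMU_OP_OPTIONAL()
before including the file, and each macro expands once per listed op.
A hedged sketch of one such use (the exact expansion in KVM's pmu code
may differ):

	/*
	 * Sketch: stamp out one static-call declaration per PMU op
	 * listed in the header.  Optional ops reuse the same expansion.
	 */
	#define KVM_X86_PMU_OP(func)					\
		DECLARE_STATIC_CALL(kvm_x86_pmu_##func,			\
				    *(((struct kvm_pmu_ops *)0)->func))
	#define KVM_X86_PMU_OP_OPTIONAL KVM_X86_PMU_OP
	#include <asm/kvm-x86-pmu-ops.h>

Note that the header #undefs both macros at the bottom, so consumers
don't need to clean up after themselves, and the BUILD_BUG_ON(1) at the
top fires at compile time if the file is included without the macros
defined.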