Merge tag 'irq-core-2025-01-21' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull interrupt subsystem updates from Thomas Gleixner:

 - Consolidate machine_kexec_mask_interrupts() by providing a generic
   implementation and replacing the copy & pasta orgy in the relevant
   architectures.

 - Prevent unconditional operations on interrupt chips during kexec
   shutdown, which can trigger warnings when the underlying interrupt
   has already been shut down.

 - Make the enforcement of interrupt handling in interrupt context
   unconditionally available, so that it actually works for non-x86
   interrupt chips. The earlier enablement for ARM GIC chips set the
   required chip flag, but did not notice that the check was hidden
   behind a config switch which is not selected by ARM[64].

 - Decrapify the handling of deferred interrupt affinity setting.

   Some interrupt chips require that affinity changes are made from the
   context of handling an interrupt to avoid certain race conditions.
   For x86 this was the default, but with interrupt remapping this
   requirement was lifted and a flag was introduced which tells the
   core code that affinity changes can be done in any context.
   Unrestricted affinity changes are the default for the majority of
   interrupt chips.

   RISC-V needs to add the deferred mode to one of its interrupt
   controllers, but with the original implementation this would have
   required adding the any-context flag to all other RISC-V interrupt
   chips. That's backwards, so the logic is reversed: chips which need
   the deferred mode must now be marked accordingly. That avoids
   chasing the 'sane' chips and marking them.

 - Add multi-node support to the LoongArch AVEC interrupt controller
   driver.

 - The usual tiny cleanups, fixes and improvements all over the place.

* tag 'irq-core-2025-01-21' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  genirq/generic_chip: Export irq_gc_mask_disable_and_ack_set()
  genirq/timings: Add kernel-doc for a function parameter
  genirq: Remove IRQ_MOVE_PCNTXT and related code
  x86/apic: Convert to IRQCHIP_MOVE_DEFERRED
  genirq: Provide IRQCHIP_MOVE_DEFERRED
  hexagon: Remove GENERIC_PENDING_IRQ leftover
  ARC: Remove GENERIC_PENDING_IRQ
  genirq: Remove handle_enforce_irqctx() wrapper
  genirq: Make handle_enforce_irqctx() unconditionally available
  irqchip/loongarch-avec: Add multi-nodes topology support
  irqchip/ts4800: Replace seq_printf() by seq_puts()
  irqchip/ti-sci-inta: Add module build support
  irqchip/ti-sci-intr: Add module build support
  irqchip/irq-brcmstb-l2: Replace brcmstb_l2_mask_and_ack() by generic function
  irqchip: keystone: Use syscon_regmap_lookup_by_phandle_args
  genirq/kexec: Prevent redundant IRQ masking by checking state before shutdown
  kexec: Consolidate machine_kexec_mask_interrupts() implementation
  genirq: Reuse irq_thread_fn() for forced thread case
  genirq: Move irq_thread_fn() further up in the code
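To illustrate the reversed opt-in model, a minimal sketch (the chip and its callbacks below are hypothetical, not part of this series): a chip which requires affinity changes to be performed from interrupt context now sets IRQCHIP_MOVE_DEFERRED in its flags, while all other chips need no annotation at all:

        static struct irq_chip example_chip = {
                .name             = "example-chip",        /* hypothetical */
                .irq_mask         = example_mask,
                .irq_unmask       = example_unmask,
                .irq_set_affinity = example_set_affinity,
                /* Defer affinity changes to the next interrupt on this line */
                .flags            = IRQCHIP_MOVE_DEFERRED,
        };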
commit 4c551165e7
41 changed files with 108 additions and 230 deletions
@@ -25,7 +25,6 @@ config ARC
 	# for now, we don't need GENERIC_IRQ_PROBE, CONFIG_GENERIC_IRQ_CHIP
 	select GENERIC_IRQ_SHOW
 	select GENERIC_PCI_IOMAP
-	select GENERIC_PENDING_IRQ if SMP
 	select GENERIC_SCHED_CLOCK
 	select GENERIC_SMP_IDLE_THREAD
 	select GENERIC_IOREMAP
@@ -357,8 +357,6 @@ static void idu_cascade_isr(struct irq_desc *desc)
 static int idu_irq_map(struct irq_domain *d, unsigned int virq, irq_hw_number_t hwirq)
 {
 	irq_set_chip_and_handler(virq, &idu_irq_chip, handle_level_irq);
-	irq_set_status_flags(virq, IRQ_MOVE_PCNTXT);
-
 	return 0;
 }
 
@@ -127,29 +127,6 @@ void crash_smp_send_stop(void)
 	cpus_stopped = 1;
 }
 
-static void machine_kexec_mask_interrupts(void)
-{
-	unsigned int i;
-	struct irq_desc *desc;
-
-	for_each_irq_desc(i, desc) {
-		struct irq_chip *chip;
-
-		chip = irq_desc_get_chip(desc);
-		if (!chip)
-			continue;
-
-		if (chip->irq_eoi && irqd_irq_inprogress(&desc->irq_data))
-			chip->irq_eoi(&desc->irq_data);
-
-		if (chip->irq_mask)
-			chip->irq_mask(&desc->irq_data);
-
-		if (chip->irq_disable && !irqd_irq_disabled(&desc->irq_data))
-			chip->irq_disable(&desc->irq_data);
-	}
-}
-
 void machine_crash_shutdown(struct pt_regs *regs)
 {
 	local_irq_disable();
@@ -149,6 +149,7 @@ config ARM64
 	select GENERIC_IDLE_POLL_SETUP
 	select GENERIC_IOREMAP
 	select GENERIC_IRQ_IPI
+	select GENERIC_IRQ_KEXEC_CLEAR_VM_FORWARD
 	select GENERIC_IRQ_PROBE
 	select GENERIC_IRQ_SHOW
 	select GENERIC_IRQ_SHOW_LEVEL
@@ -135,8 +135,6 @@ config ARCH_K3
 	select SOC_TI
 	select TI_MESSAGE_MANAGER
 	select TI_SCI_PROTOCOL
-	select TI_SCI_INTR_IRQCHIP
-	select TI_SCI_INTA_IRQCHIP
 	select TI_K3_SOCINFO
 	help
 	  This enables support for Texas Instruments' K3 multicore SoC
@@ -207,37 +207,6 @@ void machine_kexec(struct kimage *kimage)
 	BUG(); /* Should never get here. */
 }
 
-static void machine_kexec_mask_interrupts(void)
-{
-	unsigned int i;
-	struct irq_desc *desc;
-
-	for_each_irq_desc(i, desc) {
-		struct irq_chip *chip;
-		int ret;
-
-		chip = irq_desc_get_chip(desc);
-		if (!chip)
-			continue;
-
-		/*
-		 * First try to remove the active state. If this
-		 * fails, try to EOI the interrupt.
-		 */
-		ret = irq_set_irqchip_state(i, IRQCHIP_STATE_ACTIVE, false);
-
-		if (ret && irqd_irq_inprogress(&desc->irq_data) &&
-		    chip->irq_eoi)
-			chip->irq_eoi(&desc->irq_data);
-
-		if (chip->irq_mask)
-			chip->irq_mask(&desc->irq_data);
-
-		if (chip->irq_disable && !irqd_irq_disabled(&desc->irq_data))
-			chip->irq_disable(&desc->irq_data);
-	}
-}
-
 /**
  * machine_crash_shutdown - shutdown non-crashing cpus and save registers
  */
@@ -20,7 +20,6 @@ config HEXAGON
 	# select ARCH_HAS_CPU_IDLE_WAIT
 	# select GPIOLIB
 	# select HAVE_CLK
-	# select GENERIC_PENDING_IRQ if SMP
 	select GENERIC_ATOMIC64
 	select HAVE_PERF_EVENTS
 	# GENERIC_ALLOCATOR is used by dma_alloc_coherent()
@@ -61,7 +61,6 @@ struct pt_regs;
 extern void kexec_smp_wait(void);	/* get and clear naca physid, wait for
 					   master to copy new code to 0 */
 extern void default_machine_kexec(struct kimage *image);
-extern void machine_kexec_mask_interrupts(void);
 
 void relocate_new_kernel(unsigned long indirection_page, unsigned long reboot_code_buffer,
 			 unsigned long start_address) __noreturn;
@@ -22,28 +22,6 @@
 #include <asm/setup.h>
 #include <asm/firmware.h>
 
-void machine_kexec_mask_interrupts(void) {
-	unsigned int i;
-	struct irq_desc *desc;
-
-	for_each_irq_desc(i, desc) {
-		struct irq_chip *chip;
-
-		chip = irq_desc_get_chip(desc);
-		if (!chip)
-			continue;
-
-		if (chip->irq_eoi && irqd_irq_inprogress(&desc->irq_data))
-			chip->irq_eoi(&desc->irq_data);
-
-		if (chip->irq_mask)
-			chip->irq_mask(&desc->irq_data);
-
-		if (chip->irq_disable && !irqd_irq_disabled(&desc->irq_data))
-			chip->irq_disable(&desc->irq_data);
-	}
-}
-
 #ifdef CONFIG_CRASH_DUMP
 void machine_crash_shutdown(struct pt_regs *regs)
 {
@@ -7,6 +7,7 @@
  * Copyright (C) 2005 IBM Corporation.
  */
 
+#include <linux/irq.h>
 #include <linux/kexec.h>
 #include <linux/mm.h>
 #include <linux/string.h>
@@ -114,29 +114,6 @@ void machine_shutdown(void)
 #endif
 }
 
-static void machine_kexec_mask_interrupts(void)
-{
-	unsigned int i;
-	struct irq_desc *desc;
-
-	for_each_irq_desc(i, desc) {
-		struct irq_chip *chip;
-
-		chip = irq_desc_get_chip(desc);
-		if (!chip)
-			continue;
-
-		if (chip->irq_eoi && irqd_irq_inprogress(&desc->irq_data))
-			chip->irq_eoi(&desc->irq_data);
-
-		if (chip->irq_mask)
-			chip->irq_mask(&desc->irq_data);
-
-		if (chip->irq_disable && !irqd_irq_disabled(&desc->irq_data))
-			chip->irq_disable(&desc->irq_data);
-	}
-}
-
 /*
  * machine_crash_shutdown - Prepare to kexec after a kernel crash
  *
@@ -304,7 +304,7 @@ static struct irq_chip hv_pci_msi_controller = {
 	.irq_retrigger = irq_chip_retrigger_hierarchy,
 	.irq_compose_msi_msg = hv_irq_compose_msi_msg,
 	.irq_set_affinity = msi_domain_set_affinity,
-	.flags = IRQCHIP_SKIP_SET_WAKE,
+	.flags = IRQCHIP_SKIP_SET_WAKE | IRQCHIP_MOVE_DEFERRED,
 };
 
 static struct msi_domain_ops pci_msi_domain_ops = {
@@ -1861,7 +1861,7 @@ static struct irq_chip ioapic_chip __read_mostly = {
 	.irq_set_affinity	= ioapic_set_affinity,
 	.irq_retrigger		= irq_chip_retrigger_hierarchy,
 	.irq_get_irqchip_state	= ioapic_irq_get_chip_state,
-	.flags			= IRQCHIP_SKIP_SET_WAKE |
+	.flags			= IRQCHIP_SKIP_SET_WAKE | IRQCHIP_MOVE_DEFERRED |
 				  IRQCHIP_AFFINITY_PRE_STARTUP,
 };
 
@@ -214,6 +214,7 @@ static bool x86_init_dev_msi_info(struct device *dev, struct irq_domain *domain,
 		if (WARN_ON_ONCE(domain != real_parent))
 			return false;
 		info->chip->irq_set_affinity = msi_set_affinity;
+		info->chip->flags |= IRQCHIP_MOVE_DEFERRED;
 		break;
 	case DOMAIN_BUS_DMAR:
 	case DOMAIN_BUS_AMDVI:
@@ -315,7 +316,7 @@ static struct irq_chip dmar_msi_controller = {
 	.irq_retrigger = irq_chip_retrigger_hierarchy,
 	.irq_compose_msi_msg = dmar_msi_compose_msg,
 	.irq_write_msi_msg = dmar_msi_write_msg,
-	.flags = IRQCHIP_SKIP_SET_WAKE |
+	.flags = IRQCHIP_SKIP_SET_WAKE | IRQCHIP_MOVE_DEFERRED |
 		 IRQCHIP_AFFINITY_PRE_STARTUP,
 };
 
@@ -517,22 +517,14 @@ static int hpet_msi_init(struct irq_domain *domain,
 			 struct msi_domain_info *info, unsigned int virq,
 			 irq_hw_number_t hwirq, msi_alloc_info_t *arg)
 {
-	irq_set_status_flags(virq, IRQ_MOVE_PCNTXT);
 	irq_domain_set_info(domain, virq, arg->hwirq, info->chip, NULL,
 			    handle_edge_irq, arg->data, "edge");
 
 	return 0;
 }
 
-static void hpet_msi_free(struct irq_domain *domain,
-			  struct msi_domain_info *info, unsigned int virq)
-{
-	irq_clear_status_flags(virq, IRQ_MOVE_PCNTXT);
-}
-
 static struct msi_domain_ops hpet_msi_domain_ops = {
 	.msi_init	= hpet_msi_init,
-	.msi_free	= hpet_msi_free,
 };
 
 static struct msi_domain_info hpet_msi_domain_info = {
@@ -92,8 +92,6 @@ static int uv_domain_alloc(struct irq_domain *domain, unsigned int virq,
 	if (ret >= 0) {
 		if (info->uv.limit == UV_AFFINITY_CPU)
 			irq_set_status_flags(virq, IRQ_NO_BALANCING);
-		else
-			irq_set_status_flags(virq, IRQ_MOVE_PCNTXT);
 
 		chip_data->pnode = uv_blade_to_pnode(info->uv.blade);
 		chip_data->offset = info->uv.offset;
@@ -113,7 +111,6 @@ static void uv_domain_free(struct irq_domain *domain, unsigned int virq,
 
 	BUG_ON(nr_irqs != 1);
 	kfree(irq_data->chip_data);
-	irq_clear_status_flags(virq, IRQ_MOVE_PCNTXT);
 	irq_clear_status_flags(virq, IRQ_NO_BALANCING);
 	irq_domain_free_irqs_top(domain, virq, nr_irqs);
 }
@@ -2332,7 +2332,7 @@ static struct irq_chip intcapxt_controller = {
 	.irq_retrigger = irq_chip_retrigger_hierarchy,
 	.irq_set_affinity = intcapxt_set_affinity,
 	.irq_set_wake = intcapxt_set_wake,
-	.flags = IRQCHIP_MASK_ON_SUSPEND,
+	.flags = IRQCHIP_MASK_ON_SUSPEND | IRQCHIP_MOVE_DEFERRED,
 };
 
 static const struct irq_domain_ops intcapxt_domain_ops = {
@@ -3540,7 +3540,6 @@ static int irq_remapping_alloc(struct irq_domain *domain, unsigned int virq,
 		irq_data->chip_data = data;
 		irq_data->chip = &amd_ir_chip;
 		irq_remapping_prepare_irte(data, cfg, info, devid, index, i);
-		irq_set_status_flags(virq + i, IRQ_MOVE_PCNTXT);
 	}
 
 	return 0;
@@ -1463,7 +1463,6 @@ static int intel_irq_remapping_alloc(struct irq_domain *domain,
 		else
 			irq_data->chip = &intel_ir_chip;
 		intel_irq_remapping_prepare_irte(ird, irq_cfg, info, index, i);
-		irq_set_status_flags(virq + i, IRQ_MOVE_PCNTXT);
 	}
 	return 0;
 
@@ -534,8 +534,9 @@ config LS1X_IRQ
 	  Support for the Loongson-1 platform Interrupt Controller.
 
 config TI_SCI_INTR_IRQCHIP
-	bool
+	tristate "TI SCI INTR Interrupt Controller"
 	depends on TI_SCI_PROTOCOL
+	depends on ARCH_K3 || COMPILE_TEST
 	select IRQ_DOMAIN_HIERARCHY
 	help
 	  This enables the irqchip driver support for K3 Interrupt router
@@ -544,8 +545,9 @@ config TI_SCI_INTR_IRQCHIP
 	  TI System Controller, say Y here. Otherwise, say N.
 
 config TI_SCI_INTA_IRQCHIP
-	bool
+	tristate "TI SCI INTA Interrupt Controller"
 	depends on TI_SCI_PROTOCOL
+	depends on ARCH_K3 || (COMPILE_TEST && ARM64)
 	select IRQ_DOMAIN_HIERARCHY
 	select TI_SCI_INTA_MSI_DOMAIN
 	help
@@ -61,32 +61,6 @@ struct brcmstb_l2_intc_data {
 	u32 saved_mask; /* for suspend/resume */
 };
 
-/**
- * brcmstb_l2_mask_and_ack - Mask and ack pending interrupt
- * @d: irq_data
- *
- * Chip has separate enable/disable registers instead of a single mask
- * register and pending interrupt is acknowledged by setting a bit.
- *
- * Note: This function is generic and could easily be added to the
- * generic irqchip implementation if there ever becomes a will to do so.
- * Perhaps with a name like irq_gc_mask_disable_and_ack_set().
- *
- * e.g.: https://patchwork.kernel.org/patch/9831047/
- */
-static void brcmstb_l2_mask_and_ack(struct irq_data *d)
-{
-	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
-	struct irq_chip_type *ct = irq_data_get_chip_type(d);
-	u32 mask = d->mask;
-
-	irq_gc_lock(gc);
-	irq_reg_writel(gc, mask, ct->regs.disable);
-	*ct->mask_cache &= ~mask;
-	irq_reg_writel(gc, mask, ct->regs.ack);
-	irq_gc_unlock(gc);
-}
-
 static void brcmstb_l2_intc_irq_handle(struct irq_desc *desc)
 {
 	struct brcmstb_l2_intc_data *b = irq_desc_get_handler_data(desc);
@@ -248,7 +222,7 @@ static int __init brcmstb_l2_intc_of_init(struct device_node *np,
 	if (init_params->cpu_clear >= 0) {
 		ct->regs.ack = init_params->cpu_clear;
 		ct->chip.irq_ack = irq_gc_ack_set_bit;
-		ct->chip.irq_mask_ack = brcmstb_l2_mask_and_ack;
+		ct->chip.irq_mask_ack = irq_gc_mask_disable_and_ack_set;
 	} else {
 		/* No Ack - but still slightly more efficient to define this */
 		ct->chip.irq_mask_ack = irq_gc_mask_disable_reg;
@@ -141,18 +141,11 @@ static int keystone_irq_probe(struct platform_device *pdev)
 	if (!kirq)
 		return -ENOMEM;
 
-	kirq->devctrl_regs =
-		syscon_regmap_lookup_by_phandle(np, "ti,syscon-dev");
+	kirq->devctrl_regs = syscon_regmap_lookup_by_phandle_args(np, "ti,syscon-dev",
+								  1, &kirq->devctrl_offset);
 	if (IS_ERR(kirq->devctrl_regs))
 		return PTR_ERR(kirq->devctrl_regs);
 
-	ret = of_property_read_u32_index(np, "ti,syscon-dev", 1,
-					 &kirq->devctrl_offset);
-	if (ret) {
-		dev_err(dev, "couldn't read the devctrl_offset offset!\n");
-		return ret;
-	}
-
 	kirq->irq = platform_get_irq(pdev, 0);
 	if (kirq->irq < 0)
 		return kirq->irq;
@@ -56,6 +56,15 @@ struct avecintc_data {
 	unsigned int		moving;
 };
 
+static inline void avecintc_enable(void)
+{
+	u64 value;
+
+	value = iocsr_read64(LOONGARCH_IOCSR_MISC_FUNC);
+	value |= IOCSR_MISC_FUNC_AVEC_EN;
+	iocsr_write64(value, LOONGARCH_IOCSR_MISC_FUNC);
+}
+
 static inline void avecintc_ack_irq(struct irq_data *d)
 {
 }
@@ -127,6 +136,8 @@ static int avecintc_cpu_online(unsigned int cpu)
 
 	guard(raw_spinlock)(&loongarch_avec.lock);
 
+	avecintc_enable();
+
 	irq_matrix_online(loongarch_avec.vector_matrix);
 
 	pending_list_init(cpu);
@@ -339,7 +350,6 @@ static int __init irq_matrix_init(void)
 static int __init avecintc_init(struct irq_domain *parent)
 {
 	int ret, parent_irq;
-	unsigned long value;
 
 	raw_spin_lock_init(&loongarch_avec.lock);
 
@@ -378,9 +388,7 @@ static int __init avecintc_init(struct irq_domain *parent)
 					  "irqchip/loongarch/avecintc:starting",
 					  avecintc_cpu_online, avecintc_cpu_offline);
 #endif
-	value = iocsr_read64(LOONGARCH_IOCSR_MISC_FUNC);
-	value |= IOCSR_MISC_FUNC_AVEC_EN;
-	iocsr_write64(value, LOONGARCH_IOCSR_MISC_FUNC);
+	avecintc_enable();
 
 	return ret;
@@ -743,3 +743,4 @@ module_platform_driver(ti_sci_inta_irq_domain_driver);
 
 MODULE_AUTHOR("Lokesh Vutla <lokeshvutla@ti.com>");
 MODULE_DESCRIPTION("K3 Interrupt Aggregator driver over TI SCI protocol");
+MODULE_LICENSE("GPL");
@@ -303,3 +303,4 @@ module_platform_driver(ti_sci_intr_irq_domain_driver);
 
 MODULE_AUTHOR("Lokesh Vutla <lokeshvutla@ti.com>");
 MODULE_DESCRIPTION("K3 Interrupt Router driver over TI SCI protocol");
+MODULE_LICENSE("GPL");
@@ -52,7 +52,7 @@ static void ts4800_irq_print_chip(struct irq_data *d, struct seq_file *p)
 {
 	struct ts4800_irq_data *data = irq_data_get_irq_chip_data(d);
 
-	seq_printf(p, "%s", dev_name(&data->pdev->dev));
+	seq_puts(p, dev_name(&data->pdev->dev));
 }
 
 static const struct irq_chip ts4800_chip = {
@@ -2053,6 +2053,7 @@ static struct irq_chip hv_msi_irq_chip = {
 	.irq_set_affinity = irq_chip_set_affinity_parent,
 #ifdef CONFIG_X86
 	.irq_ack = irq_chip_ack_parent,
+	.flags = IRQCHIP_MOVE_DEFERRED,
 #elif defined(CONFIG_ARM64)
 	.irq_eoi = irq_chip_eoi_parent,
 #endif
@@ -722,12 +722,6 @@ static struct irq_info *xen_irq_init(unsigned int irq)
 		INIT_RCU_WORK(&info->rwork, delayed_free_irq);
 
 		set_info_for_irq(irq, info);
-		/*
-		 * Interrupt affinity setting can be immediate. No point
-		 * in delaying it until an interrupt is handled.
-		 */
-		irq_set_status_flags(irq, IRQ_MOVE_PCNTXT);
-
 		INIT_LIST_HEAD(&info->eoi_list);
 		list_add_tail(&info->list, &xen_irq_list_head);
 	}
@@ -64,7 +64,6 @@ enum irqchip_irq_state;
  * IRQ_NOAUTOEN		- Interrupt is not automatically enabled in
  *			  request/setup_irq()
  * IRQ_NO_BALANCING	- Interrupt cannot be balanced (affinity set)
- * IRQ_MOVE_PCNTXT	- Interrupt can be migrated from process context
  * IRQ_NESTED_THREAD	- Interrupt nests into another thread
  * IRQ_PER_CPU_DEVID	- Dev_id is a per-cpu variable
  * IRQ_IS_POLLED	- Always polled by another interrupt. Exclude
@@ -93,7 +92,6 @@ enum {
 	IRQ_NOREQUEST		= (1 << 11),
 	IRQ_NOAUTOEN		= (1 << 12),
 	IRQ_NO_BALANCING	= (1 << 13),
-	IRQ_MOVE_PCNTXT		= (1 << 14),
 	IRQ_NESTED_THREAD	= (1 << 15),
 	IRQ_NOTHREAD		= (1 << 16),
 	IRQ_PER_CPU_DEVID	= (1 << 17),
@@ -105,7 +103,7 @@ enum {
 
 #define IRQF_MODIFY_MASK	\
 	(IRQ_TYPE_SENSE_MASK | IRQ_NOPROBE | IRQ_NOREQUEST | \
-	 IRQ_NOAUTOEN | IRQ_MOVE_PCNTXT | IRQ_LEVEL | IRQ_NO_BALANCING | \
+	 IRQ_NOAUTOEN | IRQ_LEVEL | IRQ_NO_BALANCING | \
 	 IRQ_PER_CPU | IRQ_NESTED_THREAD | IRQ_NOTHREAD | IRQ_PER_CPU_DEVID | \
 	 IRQ_IS_POLLED | IRQ_DISABLE_UNLAZY | IRQ_HIDDEN)
 
@@ -201,8 +199,6 @@ struct irq_data {
  * IRQD_LEVEL			- Interrupt is level triggered
  * IRQD_WAKEUP_STATE		- Interrupt is configured for wakeup
  *				  from suspend
- * IRQD_MOVE_PCNTXT		- Interrupt can be moved in process
- *				  context
  * IRQD_IRQ_DISABLED		- Disabled state of the interrupt
  * IRQD_IRQ_MASKED		- Masked state of the interrupt
  * IRQD_IRQ_INPROGRESS		- In progress state of the interrupt
@@ -233,7 +229,6 @@ enum {
 	IRQD_AFFINITY_SET		= BIT(12),
 	IRQD_LEVEL			= BIT(13),
 	IRQD_WAKEUP_STATE		= BIT(14),
-	IRQD_MOVE_PCNTXT		= BIT(15),
 	IRQD_IRQ_DISABLED		= BIT(16),
 	IRQD_IRQ_MASKED			= BIT(17),
 	IRQD_IRQ_INPROGRESS		= BIT(18),
@@ -338,11 +333,6 @@ static inline bool irqd_is_wakeup_set(struct irq_data *d)
 	return __irqd_to_state(d) & IRQD_WAKEUP_STATE;
 }
 
-static inline bool irqd_can_move_in_process_context(struct irq_data *d)
-{
-	return __irqd_to_state(d) & IRQD_MOVE_PCNTXT;
-}
-
 static inline bool irqd_irq_disabled(struct irq_data *d)
 {
 	return __irqd_to_state(d) & IRQD_IRQ_DISABLED;
@@ -567,6 +557,7 @@ struct irq_chip {
  *				    in the suspend path if they are in disabled state
  * IRQCHIP_AFFINITY_PRE_STARTUP:    Default affinity update before startup
  * IRQCHIP_IMMUTABLE:		    Don't ever change anything in this chip
+ * IRQCHIP_MOVE_DEFERRED:	    Move the interrupt in actual interrupt context
  */
 enum {
 	IRQCHIP_SET_TYPE_MASKED			= (1 << 0),
@@ -581,6 +572,7 @@ enum {
 	IRQCHIP_ENABLE_WAKEUP_ON_SUSPEND	= (1 << 9),
 	IRQCHIP_AFFINITY_PRE_STARTUP		= (1 << 10),
 	IRQCHIP_IMMUTABLE			= (1 << 11),
+	IRQCHIP_MOVE_DEFERRED			= (1 << 12),
 };
 
 #include <linux/irqdesc.h>
@@ -694,6 +686,9 @@ extern int irq_chip_request_resources_parent(struct irq_data *data);
 extern void irq_chip_release_resources_parent(struct irq_data *data);
 #endif
 
+/* Disable or mask interrupts during a kernel kexec */
+extern void machine_kexec_mask_interrupts(void);
+
 /* Handling of unhandled and spurious interrupts: */
 extern void note_interrupt(struct irq_desc *desc, irqreturn_t action_ret);
 
@@ -31,6 +31,10 @@ config GENERIC_IRQ_EFFECTIVE_AFF_MASK
 config GENERIC_PENDING_IRQ
 	bool
 
+# Deduce delayed migration from top-level interrupt chip flags
+config GENERIC_PENDING_IRQ_CHIPFLAGS
+	bool
+
 # Support for generic irq migrating off cpu before the cpu is offline.
 config GENERIC_IRQ_MIGRATION
 	bool
@@ -141,6 +145,12 @@ config GENERIC_IRQ_DEBUGFS
 
 	  If you don't know what to do here, say N.
 
+# Clear forwarded VM interrupts during kexec.
+# This option ensures the kernel clears active states for interrupts
+# forwarded to virtual machines (VMs) during a machine kexec.
+config GENERIC_IRQ_KEXEC_CLEAR_VM_FORWARD
+	bool
+
 endmenu
 
 config GENERIC_IRQ_MULTI_HANDLER
|
@ -1,6 +1,6 @@
|
|||
# SPDX-License-Identifier: GPL-2.0
|
||||
|
||||
obj-y := irqdesc.o handle.o manage.o spurious.o resend.o chip.o dummychip.o devres.o
|
||||
obj-y := irqdesc.o handle.o manage.o spurious.o resend.o chip.o dummychip.o devres.o kexec.o
|
||||
obj-$(CONFIG_IRQ_TIMINGS) += timings.o
|
||||
ifeq ($(CONFIG_TEST_IRQ_TIMINGS),y)
|
||||
CFLAGS_timings.o += -DDEBUG
|
||||
|
|
|
@@ -1114,13 +1114,11 @@ void irq_modify_status(unsigned int irq, unsigned long clr, unsigned long set)
 	trigger = irqd_get_trigger_type(&desc->irq_data);
 
 	irqd_clear(&desc->irq_data, IRQD_NO_BALANCING | IRQD_PER_CPU |
-		   IRQD_TRIGGER_MASK | IRQD_LEVEL | IRQD_MOVE_PCNTXT);
+		   IRQD_TRIGGER_MASK | IRQD_LEVEL);
 	if (irq_settings_has_no_balance_set(desc))
 		irqd_set(&desc->irq_data, IRQD_NO_BALANCING);
 	if (irq_settings_is_per_cpu(desc))
 		irqd_set(&desc->irq_data, IRQD_PER_CPU);
-	if (irq_settings_can_move_pcntxt(desc))
-		irqd_set(&desc->irq_data, IRQD_MOVE_PCNTXT);
 	if (irq_settings_is_level(desc))
 		irqd_set(&desc->irq_data, IRQD_LEVEL);
 
@@ -53,6 +53,7 @@ static const struct irq_bit_descr irqchip_flags[] = {
 	BIT_MASK_DESCR(IRQCHIP_SUPPORTS_NMI),
 	BIT_MASK_DESCR(IRQCHIP_ENABLE_WAKEUP_ON_SUSPEND),
 	BIT_MASK_DESCR(IRQCHIP_IMMUTABLE),
+	BIT_MASK_DESCR(IRQCHIP_MOVE_DEFERRED),
 };
 
 static void
@@ -108,7 +109,6 @@ static const struct irq_bit_descr irqdata_states[] = {
 	BIT_MASK_DESCR(IRQD_NO_BALANCING),
 
 	BIT_MASK_DESCR(IRQD_SINGLE_TARGET),
-	BIT_MASK_DESCR(IRQD_MOVE_PCNTXT),
 	BIT_MASK_DESCR(IRQD_AFFINITY_SET),
 	BIT_MASK_DESCR(IRQD_SETAFFINITY_PENDING),
 	BIT_MASK_DESCR(IRQD_AFFINITY_MANAGED),
@@ -162,6 +162,7 @@ void irq_gc_mask_disable_and_ack_set(struct irq_data *d)
 	irq_reg_writel(gc, mask, ct->regs.ack);
 	irq_gc_unlock(gc);
 }
+EXPORT_SYMBOL_GPL(irq_gc_mask_disable_and_ack_set);
 
 /**
  * irq_gc_eoi - EOI interrupt
@@ -421,7 +421,7 @@ irq_init_generic_chip(struct irq_chip_generic *gc, const char *name,
 #ifdef CONFIG_GENERIC_PENDING_IRQ
 static inline bool irq_can_move_pcntxt(struct irq_data *data)
 {
-	return irqd_can_move_in_process_context(data);
+	return !(data->chip->flags & IRQCHIP_MOVE_DEFERRED);
 }
 static inline bool irq_move_pending(struct irq_data *data)
 {
@@ -441,10 +441,6 @@ static inline struct cpumask *irq_desc_get_pending_mask(struct irq_desc *desc)
 {
 	return desc->pending_mask;
 }
-static inline bool handle_enforce_irqctx(struct irq_data *data)
-{
-	return irqd_is_handle_enforce_irqctx(data);
-}
 bool irq_fixup_move_pending(struct irq_desc *desc, bool force_clear);
 #else /* CONFIG_GENERIC_PENDING_IRQ */
 static inline bool irq_can_move_pcntxt(struct irq_data *data)
@@ -471,10 +467,6 @@ static inline bool irq_fixup_move_pending(struct irq_desc *desc, bool fclear)
 {
 	return false;
 }
-static inline bool handle_enforce_irqctx(struct irq_data *data)
-{
-	return false;
-}
 #endif /* !CONFIG_GENERIC_PENDING_IRQ */
 
 #if !defined(CONFIG_IRQ_DOMAIN) || !defined(CONFIG_IRQ_DOMAIN_HIERARCHY)
@@ -708,7 +708,7 @@ int handle_irq_desc(struct irq_desc *desc)
 		return -EINVAL;
 
 	data = irq_desc_get_irq_data(desc);
-	if (WARN_ON_ONCE(!in_hardirq() && handle_enforce_irqctx(data)))
+	if (WARN_ON_ONCE(!in_hardirq() && irqd_is_handle_enforce_irqctx(data)))
 		return -EPERM;
 
 	generic_handle_irq_desc(desc);
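For context, the enforcement that this check implements is requested per interrupt by the irqchip driver. A minimal sketch of a domain map callback opting in via the existing irqd_set_handle_enforce_irqctx() helper; the function and chip names are hypothetical, not taken from this diff:

        static int example_irq_domain_map(struct irq_domain *d, unsigned int virq,
                                          irq_hw_number_t hw)
        {
                struct irq_data *irqd = irq_domain_get_irq_data(d, virq);

                /* Reject generic_handle_irq() for this IRQ outside hard interrupt context */
                irqd_set_handle_enforce_irqctx(irqd);
                irq_set_chip_and_handler(virq, &example_chip, handle_fasteoi_irq);
                return 0;
        }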
kernel/irq/kexec.c (new file, 36 lines)
@@ -0,0 +1,36 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/interrupt.h>
+#include <linux/irq.h>
+#include <linux/irqdesc.h>
+#include <linux/irqnr.h>
+
+#include "internals.h"
+
+void machine_kexec_mask_interrupts(void)
+{
+	struct irq_desc *desc;
+	unsigned int i;
+
+	for_each_irq_desc(i, desc) {
+		struct irq_chip *chip;
+		int check_eoi = 1;
+
+		chip = irq_desc_get_chip(desc);
+		if (!chip || !irqd_is_started(&desc->irq_data))
+			continue;
+
+		if (IS_ENABLED(CONFIG_GENERIC_IRQ_KEXEC_CLEAR_VM_FORWARD)) {
+			/*
+			 * First try to remove the active state from an interrupt which is forwarded
+			 * to a VM. If the interrupt is not forwarded, try to EOI the interrupt.
+			 */
+			check_eoi = irq_set_irqchip_state(i, IRQCHIP_STATE_ACTIVE, false);
+		}
+
+		if (check_eoi && chip->irq_eoi && irqd_irq_inprogress(&desc->irq_data))
+			chip->irq_eoi(&desc->irq_data);
+
+		irq_shutdown(desc);
+	}
+}
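With this generic implementation in place, the per-architecture copies removed earlier in the diff reduce to a single call. A sketch of a crash shutdown path after the consolidation, modelled on the arm variant shown above rather than copied from any one architecture:

        void machine_crash_shutdown(struct pt_regs *regs)
        {
                local_irq_disable();

                crash_smp_send_stop();                  /* park the other CPUs */
                machine_kexec_mask_interrupts();        /* generic helper from kernel/irq/kexec.c */

                crash_save_cpu(regs, smp_processor_id());
        }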
@@ -1181,49 +1181,42 @@ out_unlock:
 	chip_bus_sync_unlock(desc);
 }
 
+/*
+ * Interrupts explicitly requested as threaded interrupts want to be
+ * preemptible - many of them need to sleep and wait for slow busses to
+ * complete.
+ */
+static irqreturn_t irq_thread_fn(struct irq_desc *desc, struct irqaction *action)
+{
+	irqreturn_t ret = action->thread_fn(action->irq, action->dev_id);
+
+	if (ret == IRQ_HANDLED)
+		atomic_inc(&desc->threads_handled);
+
+	irq_finalize_oneshot(desc, action);
+	return ret;
+}
+
 /*
  * Interrupts which are not explicitly requested as threaded
  * interrupts rely on the implicit bh/preempt disable of the hard irq
  * context. So we need to disable bh here to avoid deadlocks and other
  * side effects.
  */
-static irqreturn_t
-irq_forced_thread_fn(struct irq_desc *desc, struct irqaction *action)
+static irqreturn_t irq_forced_thread_fn(struct irq_desc *desc, struct irqaction *action)
 {
 	irqreturn_t ret;
 
 	local_bh_disable();
 	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
 		local_irq_disable();
-	ret = action->thread_fn(action->irq, action->dev_id);
-	if (ret == IRQ_HANDLED)
-		atomic_inc(&desc->threads_handled);
-
-	irq_finalize_oneshot(desc, action);
+	ret = irq_thread_fn(desc, action);
 	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
 		local_irq_enable();
 	local_bh_enable();
 	return ret;
 }
 
-/*
- * Interrupts explicitly requested as threaded interrupts want to be
- * preemptible - many of them need to sleep and wait for slow busses to
- * complete.
- */
-static irqreturn_t irq_thread_fn(struct irq_desc *desc,
-		struct irqaction *action)
-{
-	irqreturn_t ret;
-
-	ret = action->thread_fn(action->irq, action->dev_id);
-	if (ret == IRQ_HANDLED)
-		atomic_inc(&desc->threads_handled);
-
-	irq_finalize_oneshot(desc, action);
-	return ret;
-}
-
 void wake_threads_waitq(struct irq_desc *desc)
 {
 	if (atomic_dec_and_test(&desc->threads_active))
@@ -53,7 +53,7 @@ static int irq_sw_resend(struct irq_desc *desc)
 	 * Validate whether this interrupt can be safely injected from
 	 * non interrupt context
 	 */
-	if (handle_enforce_irqctx(&desc->irq_data))
+	if (irqd_is_handle_enforce_irqctx(&desc->irq_data))
 		return -EINVAL;
 
 	/*
@@ -11,7 +11,6 @@ enum {
 	_IRQ_NOREQUEST		= IRQ_NOREQUEST,
 	_IRQ_NOTHREAD		= IRQ_NOTHREAD,
 	_IRQ_NOAUTOEN		= IRQ_NOAUTOEN,
-	_IRQ_MOVE_PCNTXT	= IRQ_MOVE_PCNTXT,
 	_IRQ_NO_BALANCING	= IRQ_NO_BALANCING,
 	_IRQ_NESTED_THREAD	= IRQ_NESTED_THREAD,
 	_IRQ_PER_CPU_DEVID	= IRQ_PER_CPU_DEVID,
@@ -142,11 +141,6 @@ static inline void irq_settings_set_noprobe(struct irq_desc *desc)
 	desc->status_use_accessors |= _IRQ_NOPROBE;
 }
 
-static inline bool irq_settings_can_move_pcntxt(struct irq_desc *desc)
-{
-	return desc->status_use_accessors & _IRQ_MOVE_PCNTXT;
-}
-
 static inline bool irq_settings_can_autoenable(struct irq_desc *desc)
 {
 	return !(desc->status_use_accessors & _IRQ_NOAUTOEN);
@@ -509,6 +509,7 @@ static inline void irq_timings_store(int irq, struct irqt_stat *irqs, u64 ts)
 
 /**
  * irq_timings_next_event - Return when the next event is supposed to arrive
+ * @now: current time
  *
  * During the last busy cycle, the number of interrupts is incremented
  * and stored in the irq_timings structure. This information is