Enable qspinlock by satisfying the requirements mentioned in commit
a8ad07e524 ("asm-generic: qspinlock: Indicate the use of mixed-size
atomics").

C-SKY only has "ldex/stex" for all atomic operations, and it gives
"ldex/stex" a strong forward-progress guarantee: once ldex has grabbed
the cache line into the L1 cache, other cores are blocked from snooping
that address for several cycles. atomic_fetch_add and xchg16 provide
the same level of forward-progress guarantee on C-SKY.

Qspinlock has better code size and performance in the fast path.
Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
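For illustration only, a rough user-space sketch of the xchg16 idea the
message relies on: qspinlock's fast path exchanges a 16-bit field, and an
architecture with only word-sized ldex/stex emulates that by splicing the
halfword into the containing aligned 32-bit word inside a retry loop. The
GCC __atomic builtins and the name xchg16_sketch below are stand-ins for
the kernel's ldex/stex loop, not the actual C-SKY implementation.

#include <stdint.h>

/*
 * Sketch: 16-bit exchange emulated with 32-bit atomics (little-endian
 * layout assumed).  In the kernel, the retry loop would be a single
 * ldex/stex sequence, which is where the forward-progress guarantee
 * described above matters.
 */
static inline uint16_t xchg16_sketch(uint16_t *ptr, uint16_t newval)
{
	/* Aligned 32-bit word containing *ptr. */
	uint32_t *word = (uint32_t *)((uintptr_t)ptr & ~(uintptr_t)0x3);
	/* Bit offset of the halfword inside that word. */
	unsigned int shift = ((uintptr_t)ptr & 0x2) * 8;
	uint32_t mask = (uint32_t)0xffff << shift;
	uint32_t old = __atomic_load_n(word, __ATOMIC_RELAXED);
	uint32_t new;

	do {
		/* Splice the new halfword into the containing word. */
		new = (old & ~mask) | ((uint32_t)newval << shift);
	} while (!__atomic_compare_exchange_n(word, &old, new, 1,
					      __ATOMIC_ACQ_REL, __ATOMIC_RELAXED));

	/* Return the previous 16-bit value. */
	return (uint16_t)((old & mask) >> shift);
}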
Makefile · 12 lines · 304 B
# SPDX-License-Identifier: GPL-2.0
generic-y += asm-offsets.h
generic-y += extable.h
generic-y += gpio.h
generic-y += kvm_para.h
generic-y += mcs_spinlock.h
generic-y += qrwlock.h
generic-y += qrwlock_types.h
generic-y += qspinlock.h
generic-y += parport.h
generic-y += user.h
generic-y += vmlinux.lds.h
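For context, the generic-y entries above only generate thin wrapper headers
around the asm-generic implementations; the architecture still routes its
spinlock API at the queued locks through a small glue header. A minimal
sketch of what that glue typically looks like is below; the file layout and
its exact contents are assumptions here, not part of this listing.

/* SPDX-License-Identifier: GPL-2.0 */
/* Sketch: arch-level spinlock.h forwarding to the generic queued locks. */
#ifndef __ASM_CSKY_SPINLOCK_H
#define __ASM_CSKY_SPINLOCK_H

#include <asm/qspinlock.h>
#include <asm/qrwlock.h>

/* Ordering hook used by the generic spinlock code; a full barrier is a safe choice. */
#define smp_mb__after_spinlock()	smp_mb()

#endif /* __ASM_CSKY_SPINLOCK_H */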