Enable qspinlock per the requirements mentioned in commit a8ad07e524
("asm-generic: qspinlock: Indicate the use of mixed-size atomics").
C-SKY only has "ldex/stex" for all atomic operations, and it gives a
strong forward-progress guarantee for "ldex/stex": once ldex has
grabbed the cache line into the L1 cache, other cores are blocked from
snooping that address for several cycles. atomic_fetch_add and xchg16
therefore have the same forward-guarantee level on C-SKY.
Qspinlock has better code size and performance in the fast path.
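
To make the mixed-size-atomics point concrete, below is a minimal
sketch of how a 16-bit xchg can be layered on 32-bit LL/SC such as
ldex/stex. It is illustrative only, not the csky kernel code: the
helper name is made up, and GCC's __atomic compare-exchange builtin
stands in for the real ldex.w/stex.w retry loop so the sketch is
self-contained and compiles on any host.

	#include <stdint.h>

	/* Hypothetical sketch: 16-bit exchange built from a 32-bit
	 * LL/SC-style retry loop on the containing aligned word. */
	static inline uint16_t xchg16_sketch(volatile uint16_t *p,
					     uint16_t newval)
	{
		/* Aligned 32-bit word that contains *p. */
		volatile uint32_t *wp =
			(volatile uint32_t *)((uintptr_t)p & ~(uintptr_t)3);
		unsigned int shift = ((uintptr_t)p & 2) ? 16 : 0; /* LE */
		uint32_t mask = (uint32_t)0xffff << shift;
		uint32_t old, new32;

		do {
			old = *wp;	/* ldex.w on real C-SKY hardware */
			new32 = (old & ~mask) |
				((uint32_t)newval << shift);
			/* stex.w on C-SKY; retry if another core
			 * modified the word in between. */
		} while (!__atomic_compare_exchange_n(wp, &old, new32, 0,
						      __ATOMIC_RELAXED,
						      __ATOMIC_RELAXED));

		return (uint16_t)((old & mask) >> shift);
	}

On C-SKY the do/while body would be a single ldex.w/stex.w sequence,
which is exactly where the forward-progress guarantee described above
matters.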
Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
/* SPDX-License-Identifier: GPL-2.0 */

#ifndef __ASM_CSKY_SPINLOCK_H
#define __ASM_CSKY_SPINLOCK_H

#include <asm/qspinlock.h>
#include <asm/qrwlock.h>

/* See include/linux/spinlock.h */
#define smp_mb__after_spinlock()	smp_mb()

#endif /* __ASM_CSKY_SPINLOCK_H */
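
For context, a hypothetical usage sketch (not part of this patch): with
<asm/qspinlock.h> included as above, the generic queued spinlock backs
the normal spinlock API, so ordinary callers are unchanged and take the
qspinlock fast path when uncontended.

	/* Hypothetical kernel-side caller, for illustration only. */
	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(demo_lock);
	static unsigned long demo_counter;

	static void demo_increment(void)
	{
		unsigned long flags;

		/* Uncontended case is the qspinlock fast path:
		 * a single atomic operation on the lock word. */
		spin_lock_irqsave(&demo_lock, flags);
		demo_counter++;
		spin_unlock_irqrestore(&demo_lock, flags);
	}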