The TSC_ADJUST MSR shows whether the TSC has been modified. This is helpful
in two aspects:
1) It allows detecting BIOS wreckage, where SMM code tries to 'hide' the
cycles spent by storing the TSC value at SMM entry and restoring it at
SMM exit. On affected machines the TSCs slowly drift out of sync up to the
point where the clocksource watchdog (if available) detects it.
The TSC_ADJUST MSR allows the TSC modification to be detected before that
point and eventually restored. This is also important for SoCs which have
no watchdog clocksource and therefore cannot otherwise detect and act upon
TSC wreckage.
2) All threads in a package are required to have the same TSC_ADJUST
value. Broken BIOSes violate that requirement and as a result the TSC
synchronization check fails.
The TSC_ADJUST MSR allows the deviation to be detected when a CPU comes
online. If a deviation is detected, set the value to that of an already
online CPU in the same package. This also allows the number of sync tests
to be reduced, because with that in place the test is only required for
the first CPU in a package.
In principle all CPUs in a system should have the same TSC_ADJUST value
even across packages, but with physical CPU hotplug this assumption does
not hold because the TSC starts counting at power on. Physical hotplug
therefore has to do some trickery to bring the TSC into sync with the
already running packages, which requires a TSC_ADJUST value different from
that of the CPUs which were powered up earlier.
A final enhancement is the opportunity to compensate for unsynced TSCs
across nodes at boot time and make the TSC usable that way. It won't help
for TSCs which drift apart due to frequency skew between packages, but
that case is detected by the clocksource watchdog later.
The first step toward this is to store the TSC_ADJUST value of a starting
CPU and compare it with the value of an already online CPU in the same
package. If they differ, emit a warning and adjust it to the reference
value. The !SMP version just stores the boot value for later verification.
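A minimal sketch of the mechanism (the function name and the way the
reference value is obtained are illustrative; only rdmsrl()/wrmsrl() and
MSR_IA32_TSC_ADJUST are taken as given):
    /*
     * Sketch only: compare the TSC_ADJUST value of an upcoming CPU with
     * the reference value of the first online CPU in the package and
     * force it back if it deviates.
     */
    static void tsc_check_adjust(s64 ref)
    {
        s64 adj;

        rdmsrl(MSR_IA32_TSC_ADJUST, adj);
        if (adj != ref) {
            pr_warn("TSC ADJUST differs: reference %lld, CPU %lld. Fixing it up.\n",
                ref, adj);
            wrmsrl(MSR_IA32_TSC_ADJUST, ref);
        }
    }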
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Link: http://lkml.kernel.org/r/20161119134017.655323776@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
If time warps can be observed then they should only ever be observed on one
CPU. If they are observed on both CPUs then the system is completely hosed.
Add a check for this condition and notify if it happens.
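Sketched in C, the new sanity check amounts to something like this (the
counter names are illustrative, not the real symbols):
    /*
     * Sketch only: each CPU counts the warps it observed itself during
     * the sync check. Warps seen on both the source and the target CPU
     * mean the TSCs cannot be trusted at all.
     */
    static void report_random_warps(unsigned int warps_on_source,
                                    unsigned int warps_on_target)
    {
        if (warps_on_source && warps_on_target)
            pr_warn("TSC warped randomly between CPUs\n");
    }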
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Link: http://lkml.kernel.org/r/20161119134017.574838461@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
The ART detection uses rdmsrl_safe() to detect the availability of the
TSC_ADJUST MSR.
That's pointless because we have a feature bit for this. Use it.
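Roughly, the change has this shape (sketch; the surrounding detection
function and the 'adj' variable are illustrative):
    u64 adj;

    /* Before: probe the MSR and bail out if the access faults */
    if (rdmsrl_safe(MSR_IA32_TSC_ADJUST, &adj))
        return;

    /* After: simply check the CPUID feature bit */
    if (!boot_cpu_has(X86_FEATURE_TSC_ADJUST))
        return;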
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Link: http://lkml.kernel.org/r/20161119134017.483561692@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
We can allow modules to be loaded into the vmalloc region, where they
should also benefit from the same protections as those loaded into
the more efficient module region. Allow these functions to operate
there as well.
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
The set_memory_*() bounds checks are buggy on several fronts:
1. They fail to round the region size up if the passed address is not
page aligned.
2. The region check was incomplete, and didn't correspond with what
was being asked of apply_to_page_range().
So, rework change_memory_common() to fix these problems, adding an
"in_region()" helper to determine whether the start & size fit within
the provided region start and stop addresses.
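A sketch of such a helper (the region bounds passed in come from the
caller; parameter names are illustrative):
    /*
     * Sketch: check that [start, start + size) lies entirely within
     * [region_start, region_end).
     */
    static bool in_region(unsigned long start, unsigned long size,
                          unsigned long region_start,
                          unsigned long region_end)
    {
        return start >= region_start && start + size <= region_end;
    }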
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Commit 1c3c909303 broke PAE40. The macro pfn_pte(pfn, prot) creates a
paddr from a pfn, but the page shift was getting truncated to 32 bits
since we lost the proper cast to 64 bits (for PAE40).
Instead of reverting that commit, use a better helper which is 32/64-bit
safe, just like the ARM implementation.
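The failure mode boils down to the width of the shift; a minimal sketch
(pfn_pte() internals reduced to the relevant line):
    unsigned long pfn = 0x1fffff;   /* any pfn that maps above 4GB */
    phys_addr_t paddr;

    /* Broken: the shift is evaluated in 32 bits, high bits are lost */
    paddr = pfn << PAGE_SHIFT;

    /* Safe: widen first, as a 32/64-bit clean helper does */
    paddr = (phys_addr_t)pfn << PAGE_SHIFT;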
Fixes: 1c3c909303 ("ARC: mm: fix build breakage with STRICT_MM_TYPECHECKS")
Cc: <stable@vger.kernel.org> #4.4+
Signed-off-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
[vgupta: massaged changelog]
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Power management suspend/resume tracing (ab)uses the RTC to store
suspend/resume information persistently. As a consequence the RTC value is
clobbered when timekeeping is resumed and tries to inject the sleep time.
Commit a4f8f6667f ("timekeeping: Cap array access in timekeeping_debug")
plugged an out-of-bounds array access in the timekeeping debug code which
was caused by the clobbered RTC value, but we still use the clobbered RTC
value for sleep time injection into kernel timekeeping, which will result
in random adjustments depending on the stored "hash" value.
To prevent this, keep track of the RTC clobbering and ignore the invalid
RTC timestamp at resume. If the system resumed successfully, clear the
flag which marks the RTC as unusable, warn the user about the RTC clobber
and recommend adjusting the RTC with 'ntpdate' or 'rdate'.
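Conceptually the tracking looks like this (a sketch; the flag and helper
names are illustrative, not the real symbols):
    /* Set by the pm_trace code whenever it writes its hash into the RTC */
    static bool rtc_clobbered;

    /* Checked by the timekeeping resume path before injecting sleep time */
    static bool rtc_valid_for_sleeptime(void)
    {
        return !rtc_clobbered;
    }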
[jstultz: Fixed up pr_warn formatting, and implemented suggestions from Ingo]
[ tglx: Rewrote changelog ]
Originally-from: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Chen Yu <yu.c.chen@intel.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Acked-by: Pavel Machek <pavel@ucw.cz>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Xunlei Pang <xlpang@redhat.com>
Cc: Len Brown <lenb@kernel.org>
Link: http://lkml.kernel.org/r/1480372524-15181-3-git-send-email-john.stultz@linaro.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* will/for-next/perf:
selftests: arm64: add test for unaligned/inexact watchpoint handling
arm64: Allow hw watchpoint of length 3,5,6 and 7
arm64: hw_breakpoint: Handle inexact watchpoint addresses
arm64: Allow hw watchpoint at varied offset from base address
hw_breakpoint: Allow watchpoint of length 3,5,6 and 7
PPC KVM update for 4.10:
* Support for KVM guests on POWER9 using the hashed page table MMU.
* Updates and improvements to the halt-polling support on PPC, from
Suraj Jitindar Singh.
* An optimization to speed up emulated MMIO, from Yongji Xie.
* Various other minor cleanups.
Merge tag 'kvm-s390-next-4.10-1' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux
KVM: s390: Changes for 4.10 (via kvm/next)
Two small optimizations to not do register reloading in vcpu_put/get, and
instead do it in the ioctl path. This reduces the overhead for
scheduling-intensive workloads that do not exit to QEMU (e.g. a KVM guest
with eventfd/irqfd that does a lot of context switching with vhost or
iothreads).
On 64-bit CPUs with no-execute support and non-snooping icache, such as
970 or POWER4, we have a software mechanism to ensure coherency of the
cache (using exec faults when needed).
This was broken due to a logic error when the code was rewritten from
assembly to C. Previously the assembly code did:
    BEGIN_FTR_SECTION
            mr      r4,r30
            mr      r5,r7
            bl      hash_page_do_lazy_icache
    END_FTR_SECTION(CPU_FTR_NOEXECUTE|CPU_FTR_COHERENT_ICACHE, CPU_FTR_NOEXECUTE)
which tests that:
    (cpu_features & (NOEXECUTE | COHERENT_ICACHE)) == NOEXECUTE
i.e. that the current CPU does have NOEXECUTE, but does not have
COHERENT_ICACHE.
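Expressed in C, the intended condition looks roughly like this (the call
site and arguments are illustrative):
    /*
     * Sketch: lazy icache flushing is only needed when the CPU supports
     * no-execute but its icache is not coherent.
     */
    if (cpu_has_feature(CPU_FTR_NOEXECUTE) &&
        !cpu_has_feature(CPU_FTR_COHERENT_ICACHE))
        rflags = hash_page_do_lazy_icache(rflags, pte, trap);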
Fixes: 91f1da9979 ("powerpc/mm: Convert 4k hash insert to C")
Fixes: 89ff725051 ("powerpc/mm: Convert __hash_page_64K to C")
Fixes: a43c0eb836 ("powerpc/mm: Convert 4k insert from asm to C")
Cc: stable@vger.kernel.org # v4.5+
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[mpe: Change log verbosification]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
The bit positions in CNTHCTL_EL2 change depending on the HCR_EL2.E2H bit.
EL1PCEN and EL1PCTEN are bits 1 and 0 when E2H is not set, but they are
bits 11 and 10 respectively when E2H is set. The current code
unintentionally sets the wrong bits in CNTHCTL_EL2 when E2H is set.
In fact, we don't need to set those two bits, which allow EL1 and EL0 to
access the physical timer and counter respectively, if E2H and TGE are set
for the host kernel. They will be configured later as necessary. First, we
don't need to configure those bits for EL1, since the host kernel runs in
EL2. It is the hypervisor's responsibility to configure them before
entering a VM, which runs in EL0 and EL1. Second, EL0 accesses are
configured in a later stage of the boot process.
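For reference, the layout described above, sketched as definitions (the
macro names are illustrative):
    /* CNTHCTL_EL2 trap control bits for EL1/EL0 physical timer/counter */
    #define CNTHCTL_EL1PCTEN_NVHE   (1 << 0)    /* HCR_EL2.E2H == 0 */
    #define CNTHCTL_EL1PCEN_NVHE    (1 << 1)    /* HCR_EL2.E2H == 0 */
    #define CNTHCTL_EL1PCTEN_VHE    (1 << 10)   /* HCR_EL2.E2H == 1 */
    #define CNTHCTL_EL1PCEN_VHE     (1 << 11)   /* HCR_EL2.E2H == 1 */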
Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Now that we don't set ARCH incorrectly when calling the boot Makefile,
we can use the generic cpp_lds_S rule for converting our zImage.lds.S
into zImage.lds.
The main advantage of using the generic rule is that it correctly uses
if_changed, which means we correctly regenerate the linker script when
switching endian. Fixing that means we are finally able to build one
endian and then rebuild the other endian without having to clean between
builds.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
If we're using if_changed then we must depend on FORCE, so that
if_changed gets a chance to check if something changed.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Back in 2005 when the ppc/ppc64 merge started, we used to build the
kernel code in arch/powerpc but use the boot code from arch/ppc or
arch/ppc64 depending on whether we were building for 32 or 64-bit.
Originally we called the boot Makefile passing ARCH=$(OLDARCH), where
OLDARCH was ppc or ppc64.
In commit 20f629549b ("powerpc: Make building the boot image work for
both 32-bit and 64-bit") (2005-10-11) we split the call for 32/64-bit
using an ifeq check, because the two Makefiles took different targets,
and explicitly passed ARCH=ppc64 for the 64-bit case and ARCH=ppc for
the 32-bit case.
Then in commit 94b212c29f ("powerpc: Move ppc64 boot wrapper code over
to arch/powerpc") (2005-11-16) we moved the boot code into arch/powerpc
and dropped the ppc case, but kept passing ARCH=ppc64 to
arch/powerpc/boot/Makefile.
Since then there have been several more boot targets added, all of which
have copied the ARCH=ppc64 setting, such that now we have four targets
using it.
Currently it seems that nothing actually uses the ARCH value, but that's
basically just luck, and in particular it prevents us from using the
generic cpp_lds_S rule. It's also clearly wrong: ARCH=ppc64 is dead,
buried and cremated.
Fix it by dropping the setting of ARCH completely; the correct value is
exported by the top level Makefile.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
The Xilinx interrupt controller driver is now available in drivers/irqchip.
Switch to using that driver.
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Now that the driver is generic and used by multiple architectures,
get_irq is too generic a name.
Rename get_irq to xintc_get_irq to avoid any conflicts.
Acked-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
The Xilinx AXI Interrupt Controller IP block is used by the MIPS-based
xilfpga platform and a few PowerPC based platforms.
Move the interrupt controller code out of arch/microblaze so that it can
be used by everyone.
Tested-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
This patch allows ARM guests to use GICv3 ITS on an arm64 host
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Wire up flush_dcache and the readq- and writeq-like gic-v3-its accessors,
so GICv3 ITS gets all it needs to be built and run.
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
readq and writeq style accessors are not supported in AArch32, so we need
to specialise them and glue them later with a series of 32-bit accesses on
the AArch32 side.
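On AArch32 a 64-bit accessor has to be emulated with two 32-bit accesses;
a sketch of the splitting idea (the real gic-v3-its accessors are
specialised per register, this only shows the general shape):
    /* Sketch: emulate a 64-bit MMIO write with two 32-bit writes,
     * low word first, assuming the target register tolerates it. */
    static void writeq_split(u64 val, void __iomem *addr)
    {
        writel_relaxed((u32)val, addr);
        writel_relaxed((u32)(val >> 32), addr + 4);
    }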
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
It'd be better to switch to CMA... but until that is done, redirect the
flush_dcache operation so that a 32-bit implementation can be wired up
later.
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
The workaround for Cavium ThunderX erratum 23154 has a homebrew
pipeflush built out of NOP sequences around the read of the IAR.
This patch converts the code to use the new nops macro, which makes it
a little easier to read.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
The GIC system registers are accessed using open-coded wrappers around
the mrs_s/msr_s asm macros.
This patch moves the code over to the {read,write}_sysreg_s accessors
instead, reducing the amount of explicit asm blocks in the arch headers.
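For illustration, the before/after shape of such an access (using
ICC_IAR1_EL1 as the example register):
    u64 irqstat;

    /* Before: open-coded wrapper around the mrs_s asm macro */
    asm volatile("mrs_s %0, " __stringify(ICC_IAR1_EL1) : "=r" (irqstat));

    /* After: the generic accessor hides the asm block */
    irqstat = read_sysreg_s(ICC_IAR1_EL1);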
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Add the dts node for the device configuration unit that provides general
purpose configuration and status for the device.
Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com>
Acked-by: Scott Wood <oss@buserror.net>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
The skcipher conversion for ARM missed the select on CRYPTO_SIMD,
causing build failures if SIMD was not otherwise enabled.
Fixes: da40e7a4ba ("crypto: aes-ce - Convert to skcipher")
Fixes: 211f41af53 ("crypto: aesbs - Convert to skcipher")
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add the files that are generated by the recently merged OpenSSL
SHA-256/512 implementation to .gitignore so Git disregards them
when showing untracked files.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Get rid of all remaining alloc_bootmem calls and use memblock_alloc
instead everywhere. This way we get rid of the inconsistent mixture
of alloc_bootmem and memblock_alloc usages.
Two of the alloc_bootmem_low calls within arch/s390/kernel/setup.c are
replaced with memblock_alloc calls that don't enforce that the
allocated memory is below 2GB. This restriction was never necessary.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The SCPI driver has an updated compatible to indicate the pre-release
(pre v1.0) status of the driver. Since Amlogic used a pre-1.0
version, add that compatible as well.
Signed-off-by: Kevin Hilman <khilman@baylibre.com>
The Nexbox A95X exists with either a Meson GXBB (S905) SoC or a Meson GXL
(S905X) SoC.
Add the S905X variant, which uses the internal PHY instead of an external
PHY.
Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
Signed-off-by: Kevin Hilman <khilman@baylibre.com>
Add support for the Nexbox A1 board based on the Amlogic S912 SoC.
Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
[khilman: replace '_' in node-names with '-']
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Kevin Hilman <khilman@baylibre.com>
Apparently this is getting in the way of a gcc fix which inhibits the
usage of LP_COUNT as a GPR.
Cc: stable@vger.kernel.org
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
This patch converts aesni (including fpu) over to the skcipher
interface. The LRW implementation has been removed as the generic
LRW code can now be used directly on top of the accelerated ECB
implementation.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds xts helpers that use the skcipher interface rather
than blkcipher. This will be used by aesni_intel.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch makes use of the new skcipher walk interface instead of
the obsolete blkcipher walk interface.
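The resulting walk loop has roughly this shape (error handling trimmed;
crypt_chunk() is a placeholder for the algorithm-specific processing):
    static int ecb_crypt_sketch(struct skcipher_request *req)
    {
        struct skcipher_walk walk;
        int err;

        err = skcipher_walk_virt(&walk, req, true);
        while (walk.nbytes) {
            /* process whole blocks, hand the remainder back */
            unsigned int n = walk.nbytes & ~(AES_BLOCK_SIZE - 1);

            crypt_chunk(walk.dst.virt.addr, walk.src.virt.addr, n);
            err = skcipher_walk_done(&walk, walk.nbytes - n);
        }
        return err;
    }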
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
For consistency with the other 246 kernel configuration options,
rename CRYPT_CRC32C_VPMSUM to CRYPTO_CRC32C_VPMSUM.
Signed-off-by: Jean Delvare <jdelvare@suse.de>
Cc: Anton Blanchard <anton@samba.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This integrates both the accelerated scalar and the NEON implementations
of SHA-224/256 as well as SHA-384/512 from the OpenSSL project.
Relative performance compared to the respective generic C versions:
            | SHA256-scalar | SHA256-NEON* | SHA512 |
 -----------+---------------+--------------+--------+
 Cortex-A53 |     1.63x     |    1.63x     | 2.34x  |
 Cortex-A57 |     1.43x     |    1.59x     | 1.95x  |
 Cortex-A73 |     1.26x     |    1.56x     |   ?    |
The core crypto code was authored by Andy Polyakov of the OpenSSL
project, in collaboration with whom the upstream code was adapted so
that this module can be built from the same version of sha512-armv8.pl.
The version in this patch was taken from OpenSSL commit 32bbb62ea634
("sha/asm/sha512-armv8.pl: fix big-endian support in __KERNEL__ case.")
* The core SHA algorithm is fundamentally sequential, but there is a
secondary transformation involved, called the schedule update, which
can be performed independently. The NEON version of SHA-224/SHA-256
only implements this part of the algorithm using NEON instructions;
the sequential part is always done using scalar instructions.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This will improve the task exit case by batching TLB invalidates.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
When we are updating a pte, we only need to flush the TLB mapping for
that pte. Right now we do a full mm flush because we don't track the page
size. Now that we have the page size details in the pte, use them to do an
optimized flush.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
When we are updating a pte, we only need to flush the TLB mapping for
that pte. Right now we do a full mm flush because we don't track the page
size. Now that we have the page size details in the pte, use them to do an
optimized flush.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Now that we have the page size details encoded in the pte using software
pte bits, use them to find the page size needed for the TLB flush.
This function should only be used on P9 DD1, so give it a horrible name
to make that clear.
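The idea, sketched (the pte decoding helper is illustrative, and the flush
call is assumed to take (mm, addr, psize)):
    /*
     * Sketch only: decode the page size from the software pte bits and
     * flush just that mapping instead of the whole mm.
     */
    static void flush_pte_mapping(struct mm_struct *mm, unsigned long addr,
                                  pte_t pte)
    {
        /* pte_decode_psize() is a hypothetical helper returning e.g.
         * MMU_PAGE_64K or MMU_PAGE_2M based on the software bits */
        int psize = pte_decode_psize(pte);

        radix__flush_tlb_page_psize(mm, addr, psize);
    }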
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>