Commit graph

129978 commits

Author SHA1 Message Date
Thomas Gleixner
8b223bc7ab x86/tsc: Store and check TSC ADJUST MSR
The TSC_ADJUST MSR shows whether the TSC has been modified. This is helpful
in two respects:

1) It allows detecting BIOS wreckage, where SMM code tries to 'hide' the
   cycles spent by storing the TSC value at SMM entry and restoring it at
   SMM exit. On affected machines the TSCs slowly run out of sync up to the
   point where the clocksource watchdog (if available) detects it.

   The TSC_ADJUST MSR allows the TSC modification to be detected before that
   and eventually undone. This is also important for SoCs which have no
   watchdog clocksource and where TSC wreckage therefore cannot be detected
   and acted upon.

2) All threads in a package are required to have the same TSC_ADJUST
   value. Broken BIOSes break that and as a result the TSC synchronization
   check fails.

   The TSC_ADJUST MSR allows detecting the deviation when a CPU comes
   online. If a deviation is detected, set the MSR to the value of an
   already online CPU in the same package. This also allows reducing the
   number of sync tests, because with that in place the test is only
   required for the first CPU in a package.

   In principle all CPUs in a system should have the same TSC_ADJUST value
   even across packages, but with physical CPU hotplug this assumption is
   not true because the TSC starts with power on, so physical hotplug has
   to do some trickery to bring the TSC into sync with already running
   packages, which requires using a TSC_ADJUST value different from that of
   CPUs which were powered up earlier.

   A final enhancement is the opportunity to compensate for unsynced TSCs
   across nodes at boot time and make the TSC usable that way. It won't
   help for TSCs which drift apart due to frequency skew between packages,
   but that gets detected by the clocksource watchdog later.

The first step toward this is to store the TSC_ADJUST value of a starting
CPU and compare it with the value of an already online CPU in the same
package. If they differ, emit a warning and adjust it to the reference
value. The !SMP version just stores the boot value for later verification.
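
A minimal sketch of the store-and-check idea, assuming the usual x86 MSR
accessors (rdmsrl/wrmsrl) and the MSR_IA32_TSC_ADJUST definition; the
function name and the per-package reference handling here are illustrative,
not the actual patch:

  /*
   * Illustrative only: compare a starting CPU's TSC_ADJUST with the value
   * already established for its package and fix up any deviation.
   */
  static s64 tsc_adjust_ref;            /* value seen on the first CPU */
  static bool tsc_adjust_ref_valid;

  static void tsc_store_and_check_adjust(void)    /* hypothetical name */
  {
          s64 cur;

          if (!boot_cpu_has(X86_FEATURE_TSC_ADJUST))
                  return;

          rdmsrl(MSR_IA32_TSC_ADJUST, cur);

          if (!tsc_adjust_ref_valid) {
                  /* First CPU in the package: just record the boot value. */
                  tsc_adjust_ref = cur;
                  tsc_adjust_ref_valid = true;
                  return;
          }

          if (cur != tsc_adjust_ref) {
                  pr_warn("TSC ADJUST differs on CPU%d: %lld != %lld\n",
                          smp_processor_id(), cur, tsc_adjust_ref);
                  /* Bring the newcomer in line with the package reference. */
                  wrmsrl(MSR_IA32_TSC_ADJUST, tsc_adjust_ref);
          }
  }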

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Link: http://lkml.kernel.org/r/20161119134017.655323776@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-11-29 19:23:16 +01:00
Thomas Gleixner
bec8520dca x86/tsc: Detect random warps
If time warps can be observed then they should only ever be observed on one
CPU. If they are observed on both CPUs then the system is completely hosed.

Add a check for this condition and notify if it happens.
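
A hedged sketch of the shape of such a check (the names are illustrative,
not necessarily those used in the patch): each of the two CPUs taking part
in the sync test notes whether it saw the TSC go backwards, and warps seen
on both sides are reported.

  static atomic_t warps_seen_source;   /* CPU initiating the check */
  static atomic_t warps_seen_target;   /* CPU being brought up */

  static void note_warp(bool on_source_cpu)
  {
          atomic_inc(on_source_cpu ? &warps_seen_source : &warps_seen_target);
  }

  static void check_random_warps(void)
  {
          if (atomic_read(&warps_seen_source) && atomic_read(&warps_seen_target))
                  pr_warn("TSC warped on both CPUs - random warps, TSC unreliable\n");
  }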

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Link: http://lkml.kernel.org/r/20161119134017.574838461@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-11-29 19:23:15 +01:00
Thomas Gleixner
7b3d2f6e08 x86/tsc: Use X86_FEATURE_TSC_ADJUST in detect_art()
The ART detection uses rdmsrl_safe() to detect the availability of the
TSC_ADJUST MSR.

That's pointless because we have a feature bit for this. Use it.
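
Roughly the shape of the change, as a sketch rather than the actual diff:

  /* before: probe the MSR and bail out if the read faults */
  u64 tmp;
  if (rdmsrl_safe(MSR_IA32_TSC_ADJUST, &tmp))
          return;

  /* after: just test the CPUID-derived feature bit */
  if (!boot_cpu_has(X86_FEATURE_TSC_ADJUST))
          return;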

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Link: http://lkml.kernel.org/r/20161119134017.483561692@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-11-29 19:23:15 +01:00
Russell King
76fb051d42 ARM: mm: allow set_memory_*() to be used on the vmalloc region
We can allow modules to be loaded into the vmalloc region, where they
should also benefit from the same protections as those loaded into
the more efficient module region.  Allow these functions to operate
there as well.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2016-11-29 18:00:34 +00:00
Russell King
580218f967 ARM: mm: fix set_memory_*() bounds checks
The set_memory_*() bounds checks are buggy on several fronts:

1. They fail to round the region size up if the passed address is not
   page aligned.
2. The region check was incomplete and didn't correspond with what was
   being asked of apply_to_page_range().

So, rework change_memory_common() to fix these problems, adding an
"in_region()" helper to determine whether the start & size fit within
the provided region start and stop addresses.
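
A hedged sketch of the reworked check (the helper name follows the changelog;
the exact region bounds and types in the real patch may differ):

  static bool in_region(unsigned long addr, size_t size,
                        unsigned long region_start, unsigned long region_end)
  {
          return addr >= region_start && addr + size <= region_end;
  }

  /* Round the range out to page boundaries before checking it. */
  start = addr & PAGE_MASK;
  size  = PAGE_ALIGN(addr + size) - start;

  if (!in_region(start, size, MODULES_VADDR, MODULES_END))
          return -EINVAL;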

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2016-11-29 18:00:34 +00:00
Yuriy Kolerov
6a8b2ca702 ARC: mm: PAE40: Fix crash at munmap
Commit 1c3c909303 broke PAE40. The macro pfn_pte(pfn, prot) creates the
paddr from the pfn, but the page shift was getting truncated to 32 bits
since we lost the proper cast to 64 bits (for PAE40).

Instead of reverting that commit, use a better helper which is 32/64-bit
safe, just like the ARM implementation.
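
A hedged illustration of the truncation and of the 32/64-bit safe form (the
macro names here are for illustration; __pfn_to_phys() is the ARM-style
helper the changelog refers to):

  /* broken: an unsigned long pfn shifted left loses the high PAE40 bits */
  #define pfn_pte_broken(pfn, prot) \
          __pte(((pfn) << PAGE_SHIFT) | pgprot_val(prot))

  /* safe: widen to phys_addr_t (64-bit with PAE40) before shifting */
  #define __pfn_to_phys(pfn)  ((phys_addr_t)(pfn) << PAGE_SHIFT)
  #define pfn_pte_fixed(pfn, prot) \
          __pte(__pfn_to_phys(pfn) | pgprot_val(prot))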

Fixes: 1c3c909303 ("ARC: mm: fix build breakage with STRICT_MM_TYPECHECKS")
Cc: <stable@vger.kernel.org>   #4.4+
Signed-off-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
[vgupta: massaged changelog]
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2016-11-29 09:12:08 -08:00
Chen Yu
ba58d1020a timekeeping: Ignore the bogus sleep time if pm_trace is enabled
Power management suspend/resume tracing (ab)uses the RTC to store
suspend/resume information persistently. As a consequence the RTC value is
clobbered when timekeeping is resumed and tries to inject the sleep time.

Commit a4f8f6667f ("timekeeping: Cap array access in timekeeping_debug")
plugged an out-of-bounds array access in the timekeeping debug code, which
was caused by the clobbered RTC value, but we still use the clobbered RTC
value for sleep time injection into kernel timekeeping, which will result
in random adjustments depending on the stored "hash" value.

To prevent this, keep track of the RTC clobbering and ignore the invalid RTC
timestamp at resume. If the system resumed successfully, clear the flag which
marks the RTC as unusable, warn the user about the RTC clobbering, and
recommend adjusting the RTC with 'ntpdate' or 'rdate'.
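
A hedged sketch of the mechanism (the names are illustrative): a flag records
that pm_trace has written its hash into the RTC, sleep time injection is
skipped while it is set, and the user is warned.

  static bool rtc_clobbered_by_pm_trace;

  /* called when pm_trace stores its hash in the RTC */
  void pm_trace_mark_rtc_clobbered(void)
  {
          rtc_clobbered_by_pm_trace = true;
  }

  /* consulted by timekeeping before injecting sleep time at resume */
  bool rtc_resume_timestamp_valid(void)
  {
          if (rtc_clobbered_by_pm_trace) {
                  pr_warn("RTC was used for PM tracing; ignoring sleep time. Please adjust the clock with ntpdate or rdate\n");
                  return false;
          }
          return true;
  }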

[jstultz: Fixed up pr_warn formatting, and implemented suggestions from Ingo]
[ tglx: Rewrote changelog ]

Originally-from: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Chen Yu <yu.c.chen@intel.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Acked-by: Pavel Machek <pavel@ucw.cz>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Xunlei Pang <xlpang@redhat.com>
Cc: Len Brown <lenb@kernel.org>
Link: http://lkml.kernel.org/r/1480372524-15181-3-git-send-email-john.stultz@linaro.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-11-29 18:02:58 +01:00
Catalin Marinas
00cc2e0745 Merge Will Deacon's for-next/perf branch into for-next/core
* will/for-next/perf:
  selftests: arm64: add test for unaligned/inexact watchpoint handling
  arm64: Allow hw watchpoint of length 3,5,6 and 7
  arm64: hw_breakpoint: Handle inexact watchpoint addresses
  arm64: Allow hw watchpoint at varied offset from base address
  hw_breakpoint: Allow watchpoint of length 3,5,6 and 7
2016-11-29 15:38:57 +00:00
Radim Krčmář
ffcb09f27f Merge branch 'kvm-ppc-next' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc
PPC KVM update for 4.10:

 * Support for KVM guests on POWER9 using the hashed page table MMU.
 * Updates and improvements to the halt-polling support on PPC, from
   Suraj Jitindar Singh.
 * An optimization to speed up emulated MMIO, from Yongji Xie.
 * Various other minor cleanups.
2016-11-29 14:26:55 +01:00
Radim Krčmář
bf65014d0b KVM: s390: Changes for 4.10 (via kvm/next)
Two small optimizations: don't do register reloading in
 vcpu_put/get, instead do it in the ioctl path. This reduces
 the overhead for schedule-intensive workloads that do not
 exit to QEMU (e.g. a KVM guest with eventfd/irqfd that
 does a lot of context switching with vhost or iothreads).

Merge tag 'kvm-s390-next-4.10-1' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux

2016-11-29 14:25:58 +01:00
Benjamin Herrenschmidt
dd7b2f035e powerpc/mm: Fix lazy icache flush on pre-POWER5
On 64-bit CPUs with no-execute support and non-snooping icache, such as
970 or POWER4, we have a software mechanism to ensure coherency of the
cache (using exec faults when needed).

This was broken due to a logic error when the code was rewritten
from assembly to C. Previously the assembly code did:

  BEGIN_FTR_SECTION
         mr      r4,r30
         mr      r5,r7
         bl      hash_page_do_lazy_icache
  END_FTR_SECTION(CPU_FTR_NOEXECUTE|CPU_FTR_COHERENT_ICACHE, CPU_FTR_NOEXECUTE)

Which tests that:
   (cpu_features & (NOEXECUTE | COHERENT_ICACHE)) == NOEXECUTE

Which says that the current cpu does have NOEXECUTE, but does not have
COHERENT_ICACHE.
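
A small sketch of the condition the FTR section encodes, written in C (the
helper name is illustrative):

  static inline bool need_lazy_icache_flush(void)
  {
          return cpu_has_feature(CPU_FTR_NOEXECUTE) &&
                 !cpu_has_feature(CPU_FTR_COHERENT_ICACHE);
  }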

Fixes: 91f1da9979 ("powerpc/mm: Convert 4k hash insert to C")
Fixes: 89ff725051 ("powerpc/mm: Convert __hash_page_64K to C")
Fixes: a43c0eb836 ("powerpc/mm: Convert 4k insert from asm to C")
Cc: stable@vger.kernel.org # v4.5+
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[mpe: Change log verbosification]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-29 23:59:40 +11:00
Jintack
1650ac49c2 arm64: head.S: Fix CNTHCTL_EL2 access on VHE system
The bit positions of CNTHCTL_EL2 change depending on the HCR_EL2.E2H bit.
EL1PCEN and EL1PCTEN are the 1st and 0th bits when E2H is not set, but they
are the 11th and 10th bits respectively when E2H is set. The current code
unintentionally sets the wrong bits in CNTHCTL_EL2 when E2H is set.

In fact, we don't need to set those two bits, which allow EL1 and EL0 to
access the physical timer and counter respectively, if E2H and TGE are set
for the host kernel. They will be configured later as necessary. First,
we don't need to configure those bits for EL1, since the host kernel
runs in EL2.  It is the hypervisor's responsibility to configure them
before entering a VM, which runs in EL0 and EL1. Second, EL0 accesses
are configured in a later stage of the boot process.
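
A hedged illustration of the two layouts described above (the constant names
are made up for clarity; only the bit positions come from the changelog):

  /* HCR_EL2.E2H == 0 */
  #define CNTHCTL_EL1PCTEN_NVHE   (1UL << 0)
  #define CNTHCTL_EL1PCEN_NVHE    (1UL << 1)

  /* HCR_EL2.E2H == 1 */
  #define CNTHCTL_EL1PCTEN_VHE    (1UL << 10)
  #define CNTHCTL_EL1PCEN_VHE     (1UL << 11)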

Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-29 11:37:05 +00:00
Michael Ellerman
f0f7fe1ac3 powerpc/boot: Fix rebuild when changing kernel endian
Now that we don't set ARCH incorrectly when calling the boot Makefile,
we can use the generic cpp_lds_S rule for converting our zImage.lds.S
into zImage.lds.

The main advantage of using the generic rule is that it correctly uses
if_changed, which means we correctly regenerate the linker script when
switching endian. Fixing that means we are finally able to build one
endian and then rebuild the other endian without requiring to clean
between builds.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-29 21:42:34 +11:00
Michael Ellerman
42d0c932b0 powerpc/boot: All uses of if_changed should depend on FORCE
If we're using if_changed then we must depend on FORCE, so that
if_changed gets a chance to check if something changed.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-29 21:42:34 +11:00
Michael Ellerman
1196d7aaeb powerpc: Stop passing ARCH=ppc64 to boot Makefile
Back in 2005 when the ppc/ppc64 merge started, we used to build the
kernel code in arch/powerpc but use the boot code from arch/ppc or
arch/ppc64 depending on whether we were building for 32 or 64-bit.

Originally we called the boot Makefile passing ARCH=$(OLDARCH), where
OLDARCH was ppc or ppc64.

In commit 20f629549b ("powerpc: Make building the boot image work for
both 32-bit and 64-bit") (2005-10-11) we split the call for 32/64-bit
using an ifeq check, because the two Makefiles took different targets,
and explicitly passed ARCH=ppc64 for the 64-bit case and ARCH=ppc for
the 32-bit case.

Then in commit 94b212c29f ("powerpc: Move ppc64 boot wrapper code over
to arch/powerpc") (2005-11-16) we moved the boot code into arch/powerpc
and dropped the ppc case, but kept passing ARCH=ppc64 to
arch/powerpc/boot/Makefile.

Since then there have been several more boot targets added, all of which
have copied the ARCH=ppc64 setting, such that now we have four targets
using it.

Currently it seems that nothing actually uses the ARCH value, but that's
basically just luck, and in particular it prevents us from using the
generic cpp_lds_S rule. It's also clearly wrong, ARCH=ppc64 is dead,
buried and cremated.

Fix it by dropping the setting of ARCH completely, the correct value is
exported by the top level Makefile.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-29 21:42:34 +11:00
Zubair Lutfullah Kakakhel
8328255ff8 powerpc/virtex: Use generic xilinx irqchip driver
The Xilinx interrupt controller driver is now available in drivers/irqchip.
Switch to using that driver.

Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2016-11-29 09:14:50 +00:00
Zubair Lutfullah Kakakhel
2120a43527 irqchip/xilinx: Rename get_irq to xintc_get_irq
Now that the driver is generic and used by multiple architectures,
get_irq is too generic.

Rename get_irq to xintc_get_irq to avoid any conflicts.

Acked-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2016-11-29 09:14:49 +00:00
Zubair Lutfullah Kakakhel
0547dc7885 microblaze/irqchip: Move intc driver to irqchip
The Xilinx AXI Interrupt Controller IP block is used by the MIPS-based
xilfpga platform and a few PowerPC-based platforms.

Move the interrupt controller code out of arch/microblaze so that
it can be used by everyone.

Tested-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2016-11-29 09:14:49 +00:00
Vladimir Murzin
bb29cecb3b ARM: virt: Select ARM_GIC_V3_ITS
This patch allows ARM guests to use the GICv3 ITS on an arm64 host.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2016-11-29 09:14:49 +00:00
Vladimir Murzin
92116b804a ARM: gic-v3-its: Add 32bit support to GICv3 ITS
Wire up flush_dcache and the readq- and writeq-like gic-v3-its accessors,
so the GICv3 ITS gets all it needs to be built and run.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2016-11-29 09:14:48 +00:00
Vladimir Murzin
0968a61918 irqchip/gic-v3-its: Specialise readq and writeq accesses
readq and writeq style accessors are not supported on AArch32, so we
need to specialise them and glue them together later with a series of
32-bit accesses on the AArch32 side.
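
A hedged sketch of what such a specialised accessor ends up looking like
(the function name is illustrative):

  static void gits_write_u64(u64 val, void __iomem *addr)
  {
  #ifdef CONFIG_64BIT
          writeq_relaxed(val, addr);
  #else
          /* AArch32: glue the 64-bit access out of two 32-bit ones */
          writel_relaxed(lower_32_bits(val), addr);
          writel_relaxed(upper_32_bits(val), addr + 4);
  #endif
  }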

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2016-11-29 09:14:48 +00:00
Vladimir Murzin
328191c05e irqchip/gic-v3-its: Specialise flush_dcache operation
It'd be better to switch to CMA... but until that is done, redirect the
flush_dcache operation so that a 32-bit implementation can be wired up
later.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2016-11-29 09:14:48 +00:00
Will Deacon
016f98afd0 irqchip/gic-v3: Use nops macro for Cavium ThunderX erratum 23154
The workaround for Cavium ThunderX erratum 23154 has a homebrew
pipeflush built out of NOP sequences around the read of the IAR.

This patch converts the code to use the new nops macro, which makes it
a little easier to read.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2016-11-29 09:14:48 +00:00
Will Deacon
d44ffa5ae7 irqchip/gic-v3: Convert arm64 GIC accessors to {read,write}_sysreg_s
The GIC system registers are accessed using open-coded wrappers around
the mrs_s/msr_s asm macros.

This patch moves the code over to the {read,write}_sysreg_s accessors
instead, reducing the number of explicit asm blocks in the arch headers.
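
A hedged before/after illustration (the register macro and wrapper names are
taken loosely from the GIC code and may differ):

  /* before: open-coded wrapper around the mrs_s asm macro */
  static u64 gic_read_iar(void)
  {
          u64 irqstat;

          asm volatile("mrs_s %0, " __stringify(ICC_IAR1_EL1) : "=r" (irqstat));
          return irqstat;
  }

  /* after: use the generic sysreg accessor */
  static u64 gic_read_iar(void)
  {
          return read_sysreg_s(ICC_IAR1_EL1);
  }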

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2016-11-29 09:14:48 +00:00
yangbo lu
e7a802c02c ARM64: dts: ls2080a: add device configuration node
Add the dts node for the device configuration unit, which provides
general purpose configuration and status for the device.

Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com>
Acked-by: Scott Wood <oss@buserror.net>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2016-11-29 09:17:20 +01:00
Herbert Xu
585b5fa63d crypto: arm/aes - Select SIMD in Kconfig
The skcipher conversion for ARM missed the select on CRYPTO_SIMD,
causing build failures if SIMD was not otherwise enabled.

Fixes: da40e7a4ba ("crypto: aes-ce - Convert to skcipher")
Fixes: 211f41af53 ("crypto: aesbs - Convert to skcipher")
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-11-29 16:11:14 +08:00
Ard Biesheuvel
a4b15bed54 crypto: arm64/sha2 - add generated .S files to .gitignore
Add the files that are generated by the recently merged OpenSSL
SHA-256/512 implementation to .gitignore so Git disregards them
when showing untracked files.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-11-29 16:06:56 +08:00
Heiko Carstens
dd5224986e s390/uapi: sort header export list
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2016-11-29 07:52:59 +01:00
Heiko Carstens
aa9725ff9c s390/hypfs: add hypfs header file to uapi header export list
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2016-11-29 07:52:58 +01:00
Heiko Carstens
6bc32bf0a0 s390: use generic asm-offsets.h
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2016-11-29 07:52:57 +01:00
Heiko Carstens
9e427365af s390: convert remaining bootmem allocations to memblock
Get rid of all remaining alloc_bootmem calls and use memblock_alloc
instead everywhere.  This way we get rid of the inconsistent mixture
of alloc_bootmem and memblock_alloc usages.

Two of the alloc_bootmem_low calls within arch/s390/kernel/setup.c are
replaced with memblock_alloc calls that don't enforce that the
allocated memory is below 2GB. This restriction was never necessary.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2016-11-29 07:52:55 +01:00
Kevin Hilman
c681ca42bf ARM64: dts: meson-gxbb: add SCPI pre-1.0 compatible
The SCPI driver has an updated compatible to indicate the pre-release
(pre v1.0) status of the driver.  Since Amlogic used a pre-1.0
version, add that compatible as well.

Signed-off-by: Kevin Hilman <khilman@baylibre.com>
2016-11-28 12:06:31 -08:00
Neil Armstrong
8441add12b ARM64: dts: meson-gxl: Add support for Nexbox A95X
The Nexbox A95X exists with a Meson GXBB (S905) SoC or a Meson GXL SoC (S905X).
Add the S905X variant which uses the internal PHY instead of an external PHY.

Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
Signed-off-by: Kevin Hilman <khilman@baylibre.com>
2016-11-28 12:06:28 -08:00
Neil Armstrong
f51b454549 ARM64: dts: meson-gxm: Add support for the Nexbox A1
Add support for the Nexbox A1 board based on the Amlogic S912 SoC.

Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
[khilman: replace '_' in node-names with '-']
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Kevin Hilman <khilman@baylibre.com>
2016-11-28 12:05:45 -08:00
Vineet Gupta
23cb1f6440 ARC: mm: IOC: Don't enable IOC by default
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2016-11-28 09:18:36 -08:00
Vineet Gupta
3c7c7a2fc8 ARC: Don't use "+l" inline asm constraint
Apparently this is getting in the way of a gcc fix which inhibits the
use of LP_COUNT as a GPR.

Cc: stable@vger.kernel.org
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2016-11-28 09:17:32 -08:00
Herbert Xu
211f41af53 crypto: aesbs - Convert to skcipher
This patch converts aesbs over to the skcipher interface.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-11-28 21:23:21 +08:00
Herbert Xu
da40e7a4ba crypto: aes-ce - Convert to skcipher
This patch converts aes-ce over to the skcipher interface.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-11-28 21:23:21 +08:00
Herbert Xu
d0ed0db149 crypto: arm64/aes - Convert to skcipher
This patch converts arm64/aes over to the skcipher interface.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-11-28 21:23:20 +08:00
Herbert Xu
85671860ca crypto: aesni - Convert to skcipher
This patch converts aesni (including fpu) over to the skcipher
interface.  The LRW implementation has been removed as the generic
LRW code can now be used directly on top of the accelerated ECB
implementation.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-11-28 21:23:20 +08:00
Herbert Xu
065ce32737 crypto: glue_helper - Add skcipher xts helpers
This patch adds xts helpers that use the skcipher interface rather
than blkcipher.  This will be used by aesni_intel.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-11-28 21:23:20 +08:00
Herbert Xu
cf2c0fe740 crypto: aes-ce-ccm - Use skcipher walk interface
This patch makes use of the new skcipher walk interface instead of
the obsolete blkcipher walk interface.
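
The general shape of an skcipher walk, as a hedged sketch (this is the
generic pattern, not the aes-ce-ccm code itself, which walks an AEAD
request):

  static int do_encrypt(struct skcipher_request *req)
  {
          struct skcipher_walk walk;
          int err;

          err = skcipher_walk_virt(&walk, req, false);

          while (walk.nbytes) {
                  unsigned int n = walk.nbytes;

                  /* process n bytes from walk.src.virt.addr to walk.dst.virt.addr */

                  err = skcipher_walk_done(&walk, walk.nbytes - n);
          }

          return err;
  }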

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-11-28 21:23:17 +08:00
Jean Delvare
7cf31864e6 crypto: crc32c-vpmsum - Rename CRYPT_CRC32C_VPMSUM option
For consistency with the other 246 kernel configuration options,
rename CRYPT_CRC32C_VPMSUM to CRYPTO_CRC32C_VPMSUM.

Signed-off-by: Jean Delvare <jdelvare@suse.de>
Cc: Anton Blanchard <anton@samba.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-11-28 21:23:17 +08:00
Patrice Chotard
88d7658bf3 ARM: multi_v7_defconfig: enable STMicroelectronics HVA driver
Enable the HVA (Hardware Video Accelerator) video encoder
driver for STMicroelectronics SoCs.

Signed-off-by: Patrice Chotard <patrice.chotard@st.com>
2016-11-28 13:02:14 +01:00
Ben Hutchings
10c77dba40 powerpc/boot: Fix build failure in 32-bit boot wrapper
OPAL is not callable from 32-bit mode and the assembly code for it
may not even build (depending on how binutils was configured).

References: https://buildd.debian.org/status/fetch.php?pkg=linux&arch=powerpcspe&ver=4.8.7-1&stamp=1479203712
Fixes: 656ad58ef1 ("powerpc/boot: Add OPAL console to epapr wrappers")
Cc: stable@vger.kernel.org # v4.8+
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-28 22:58:13 +11:00
Ard Biesheuvel
7918ecef07 crypto: arm64/sha2 - integrate OpenSSL implementations of SHA256/SHA512
This integrates both the accelerated scalar and the NEON implementations
of SHA-224/256 as well as SHA-384/512 from the OpenSSL project.

Relative performance compared to the respective generic C versions:

                 |  SHA256-scalar  | SHA256-NEON* |  SHA512  |
     ------------+-----------------+--------------+----------+
     Cortex-A53  |      1.63x      |     1.63x    |   2.34x  |
     Cortex-A57  |      1.43x      |     1.59x    |   1.95x  |
     Cortex-A73  |      1.26x      |     1.56x    |     ?    |

The core crypto code was authored by Andy Polyakov of the OpenSSL
project, in collaboration with whom the upstream code was adapted so
that this module can be built from the same version of sha512-armv8.pl.

The version in this patch was taken from OpenSSL commit 32bbb62ea634
("sha/asm/sha512-armv8.pl: fix big-endian support in __KERNEL__ case.")

* The core SHA algorithm is fundamentally sequential, but there is a
  secondary transformation involved, called the schedule update, which
  can be performed independently. The NEON version of SHA-224/SHA-256
  only implements this part of the algorithm using NEON instructions,
  the sequential part is always done using scalar instructions.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-11-28 19:58:05 +08:00
Aneesh Kumar K.V
d522ae1e49 powerpc/mm: Batch tlb flush when invalidating pte entries
This will improve the task exit case by batching tlb invalidates.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-28 22:45:49 +11:00
Aneesh Kumar K.V
e58d1cf243 powerpc/mm: update radix__pte_update to not do full mm tlb flush
When we are updating a pte, we just need to flush the tlb mapping for
that pte. Right now we do a full mm flush because we don't track the page
size. Now that we have page size details in the pte, use that to do an
optimized flush.
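
A hedged sketch of the idea (the helper for extracting the page size is
hypothetical; radix__flush_tlb_page_psize() is the kind of targeted flush
meant here):

  static void flush_pte_mapping(struct mm_struct *mm, unsigned long addr,
                                pte_t old_pte)
  {
          int psize = pte_to_mmu_psize(old_pte);    /* hypothetical helper */

          /* flush just this mapping instead of the whole mm */
          radix__flush_tlb_page_psize(mm, addr, psize);
  }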

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-28 22:45:12 +11:00
Aneesh Kumar K.V
b3603e174f powerpc/mm: update radix__ptep_set_access_flag to not do full mm tlb flush
When we are updating a pte, we just need to flush the tlb mapping for
that pte. Right now we do a full mm flush because we don't track the page
size. Now that we have page size details in the pte, use that to do an
optimized flush.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-28 22:44:33 +11:00
Aneesh Kumar K.V
6d3a0379eb powerpc/mm: Add radix__tlb_flush_pte_p9_dd1()
Now that we have page size details encoded in the pte using software pte
bits, use that to find the page size needed for the tlb flush.

This function should only be used on P9 DD1, so give it a horrible name
to make that clear.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-28 22:43:45 +11:00