Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm
Pull ARM fixes from Russell King:
- Fix early_iounmap
- Drop cc-option fallbacks for architecture selection
* tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm:
ARM: 9156/1: drop cc-option fallbacks for architecture selection
ARM: 9155/1: fix early early_iounmap()
Currently __set_fixmap() bails out with a warning when called in early boot
from early_iounmap(). Fix it, and while at it, make the comment a bit easier
to understand.
Cc: <stable@vger.kernel.org>
Fixes: b089c31c51 ("ARM: 8667/3: Fix memory attribute inconsistencies when using fixmap")
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fixes from Will Deacon:
- Fix double-evaluation of 'pte' macro argument when using 52-bit PAs
- Fix signedness of some MTE prctl PR_* constants
- Fix kmemleak memory usage by skipping early pgtable allocations
- Fix printing of CPU feature register strings
- Remove redundant -nostdlib linker flag for vDSO binaries
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: pgtable: make __pte_to_phys/__phys_to_pte_val inline functions
arm64: Track no early_pgtable_alloc() for kmemleak
arm64: mte: change PR_MTE_TCF_NONE back into an unsigned long
arm64: vdso: remove -nostdlib compiler flag
arm64: arm64_ftr_reg->name may not be a human-readable string
After switching the page size from 64KB to 4KB on several arm64 servers here,
kmemleak starts to run out of its early memory pool due to the huge number of
early_pgtable_alloc() calls:
kmemleak_alloc_phys()
memblock_alloc_range_nid()
memblock_phys_alloc_range()
early_pgtable_alloc()
init_pmd()
alloc_init_pud()
__create_pgd_mapping()
__map_memblock()
paging_init()
setup_arch()
start_kernel()
Increasing the default value of DEBUG_KMEMLEAK_MEM_POOL_SIZE by 4 times is
still not enough for a server with 200GB+ of memory. There is not much
interest in checking memory leaks for those early page tables, and those
early memory mappings should not reference other memory. Hence, there are no
kmemleak false positives, and we can safely skip tracking those early
allocations in kmemleak, as was done in commit fed84c7852
("mm/memblock.c: skip kmemleak for kasan_init()"), without needing to
introduce complications to automatically scale the value depending on the
runtime memory size, etc. After this patch, the default value of
DEBUG_KMEMLEAK_MEM_POOL_SIZE is sufficient again.
Signed-off-by: Qian Cai <quic_qiancai@quicinc.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/20211105150509.7826-1-quic_qiancai@quicinc.com
Signed-off-by: Will Deacon <will@kernel.org>
Merge misc updates from Andrew Morton:
"257 patches.
Subsystems affected by this patch series: scripts, ocfs2, vfs, and
mm (slab-generic, slab, slub, kconfig, dax, kasan, debug, pagecache,
gup, swap, memcg, pagemap, mprotect, mremap, iomap, tracing, vmalloc,
pagealloc, memory-failure, hugetlb, userfaultfd, vmscan, tools,
memblock, oom-kill, hugetlbfs, migration, thp, readahead, nommu, ksm,
vmstat, madvise, memory-hotplug, rmap, zsmalloc, highmem, zram,
cleanups, kfence, and damon)"
* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (257 commits)
mm/damon: remove return value from before_terminate callback
mm/damon: fix a few spelling mistakes in comments and a pr_debug message
mm/damon: simplify stop mechanism
Docs/admin-guide/mm/pagemap: wordsmith page flags descriptions
Docs/admin-guide/mm/damon/start: simplify the content
Docs/admin-guide/mm/damon/start: fix a wrong link
Docs/admin-guide/mm/damon/start: fix wrong example commands
mm/damon/dbgfs: add adaptive_targets list check before enable monitor_on
mm/damon: remove unnecessary variable initialization
Documentation/admin-guide/mm/damon: add a document for DAMON_RECLAIM
mm/damon: introduce DAMON-based Reclamation (DAMON_RECLAIM)
selftests/damon: support watermarks
mm/damon/dbgfs: support watermarks
mm/damon/schemes: activate schemes based on a watermarks mechanism
tools/selftests/damon: update for regions prioritization of schemes
mm/damon/dbgfs: support prioritization weights
mm/damon/vaddr,paddr: support pageout prioritization
mm/damon/schemes: prioritize regions within the quotas
mm/damon/selftests: support schemes quotas
mm/damon/dbgfs: support quotas of schemes
...
Since memblock_free() operates on a physical range, make its name
reflect it and rename it to memblock_phys_free(), so it will be a
logical counterpart to memblock_phys_alloc().
The callers are updated with the below semantic patch:
@@
expression addr;
expression size;
@@
- memblock_free(addr, size);
+ memblock_phys_free(addr, size);
Link: https://lkml.kernel.org/r/20210930185031.18648-6-rppt@kernel.org
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Juergen Gross <jgross@suse.com>
Cc: Shahab Vahedi <Shahab.Vahedi@synopsys.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The code that implements the rarely used PID_IN_CONTEXTIDR feature
dereferences the 'task' field of struct thread_info directly, and this
is no longer possible when THREAD_INFO_IN_TASK=y, as the 'task' field is
omitted from the struct definition in that case. Instead, we should just
cast the thread_info pointer to a task_struct pointer, given that the
former is now the first member of the latter.
So use a helper that abstracts this, and provide implementations for
both cases.
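A minimal sketch of the two cases, assuming a helper named thread_task() (the name and placement are illustrative, not necessarily what the patch uses):

#ifdef CONFIG_THREAD_INFO_IN_TASK
static inline struct task_struct *thread_task(struct thread_info *ti)
{
        /* thread_info is the first member of task_struct, so a cast suffices */
        return (struct task_struct *)ti;
}
#else
static inline struct task_struct *thread_task(struct thread_info *ti)
{
        /* the classic layout still carries an explicit back-pointer */
        return ti->task;
}
#endif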
Reported-by: Arnd Bergmann <arnd@arndb.de>
Fixes: 18ed1c01a7 ("ARM: smp: Enable THREAD_INFO_IN_TASK")
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
pgd_page_vaddr() returns an 'unsigned long' address, causing a warning
with the memcpy() call in kasan_init():
arch/arm/mm/kasan_init.c: In function 'kasan_init':
include/asm-generic/pgtable-nop4d.h:44:50: error: passing argument 2 of '__memcpy' makes pointer from integer without a cast [-Werror=int-conversion]
44 | #define pgd_page_vaddr(pgd) ((unsigned long)(p4d_pgtable((p4d_t){ pgd })))
| ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| |
| long unsigned int
arch/arm/include/asm/string.h:58:45: note: in definition of macro 'memcpy'
58 | #define memcpy(dst, src, len) __memcpy(dst, src, len)
| ^~~
arch/arm/mm/kasan_init.c:229:16: note: in expansion of macro 'pgd_page_vaddr'
229 | pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_START)),
| ^~~~~~~~~~~~~~
arch/arm/include/asm/string.h:21:47: note: expected 'const void *' but argument is of type 'long unsigned int'
21 | extern void *__memcpy(void *dest, const void *src, __kernel_size_t n);
| ~~~~~~~~~~~~^~~
Avoid this by adding an explicit typecast.
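A hedged sketch of the kind of cast meant here (the destination variable is illustrative; the exact call site in kasan_init() may differ):

        /* pgd_page_vaddr() returns an unsigned long, so convert it back to
         * a pointer before handing it to memcpy() */
        memcpy(tmp_pmd_table,
               (void *)pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_START)),
               sizeof(tmp_pmd_table));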
Link: https://lore.kernel.org/all/CACRpkdb3DMvof3-xdtss0Pc6KM36pJA-iy=WhvtNVnsDpeJ24Q@mail.gmail.com/
Fixes: 5615f69bc2 ("ARM: 9016/2: Initialize the mapping of KASan shadow memory")
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
We can currently build a multi-cpu enabled kernel that allows both ARMv4
and ARMv5 CPUs, and also supports THUMB mode in user space.
However, returning to user space in this configuration with the usr_ret
macro requires the use of the 'bx' instruction, which is refused by
the assembler:
arch/arm/kernel/entry-armv.S: Assembler messages:
arch/arm/kernel/entry-armv.S:937: Error: selected processor does not support `bx lr' in ARM mode
arch/arm/kernel/entry-armv.S:960: Error: selected processor does not support `bx lr' in ARM mode
arch/arm/kernel/entry-armv.S:1003: Error: selected processor does not support `bx lr' in ARM mode
<instantiation>:2:2: note: instruction requires: armv4t
bx lr
While it would be possible to handle this correctly in principle, doing so
seems to not be worth it, if we can simply avoid the problem by enforcing
that a kernel supporting both ARMv4 and a later CPU architecture cannot
run THUMB binaries.
This turned up while build-testing with clang; for some reason,
gcc never triggered the problem.
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
When configuring the kernel for big-endian, we set either BE-8 or BE-32
based on the CPU architecture level. Until linux-4.4, we did not have
any ARMv7-M platform allowing big-endian builds, but now i.MX/Vybrid
is in that category, and we get a build error because of this:
arch/arm/kernel/module-plts.c: In function 'get_module_plt':
arch/arm/kernel/module-plts.c:60:46: error: implicit declaration of function '__opcode_to_mem_thumb32' [-Werror=implicit-function-declaration]
This comes down to picking the wrong default: ARMv7-M uses BE8
like ARMv7-A does. Changing the default gets the kernel to compile
and presumably works.
Link: https://lore.kernel.org/all/1455804123-2526139-2-git-send-email-arnd@arndb.de/
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
A kernel built with CONFIG_THUMB2_KERNEL=y and using clang as the
assembler could generate non-naturally-aligned v7wbi_tlb_fns which
results in a boot failure. The original commit adding the macro missed
the .align directive on this data.
Link: https://github.com/ClangBuiltLinux/linux/issues/1447
Link: https://lore.kernel.org/all/0699da7b-354f-aecc-a62f-e25693209af4@linaro.org/
Debugged-by: Ard Biesheuvel <ardb@kernel.org>
Debugged-by: Nathan Chancellor <nathan@kernel.org>
Debugged-by: Richard Henderson <richard.henderson@linaro.org>
Fixes: 66a625a881 ("ARM: mm: proc-macros: Add generic proc/cache/tlb struct definition macros")
Suggested-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
When user code is executed in privileged mode, it leads to an infinite loop
in the page fault handler if ARM_LPAE is enabled. The issue can be
reproduced with
"echo EXEC_USERSPACE > /sys/kernel/debug/provoke-crash/DIRECT"
As the permission fault encodings in the ARM spec show:
IFSR format when using the Short-descriptor translation table format
Permission fault: 01101 First level, 01111 Second level
IFSR format when using the Long-descriptor translation table format
Permission fault: 0011LL, where the LL bits indicate the level
Add an is_permission_fault() function to check for permission faults, and die
if a permission fault occurred on an instruction fault in do_page_fault().
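A hedged sketch of such a check, based on the fault-status encodings quoted above (the FS_* constant names are assumptions, not necessarily those used by the patch):

static inline bool is_permission_fault(unsigned int fsr)
{
        int fs = fsr_fs(fsr);           /* extract the fault status field */
#ifdef CONFIG_ARM_LPAE
        /* long-descriptor format: 0011LL is a permission fault at any level */
        if ((fs & FS_MMU_NOLL_MASK) == FS_PERM_NOLL)
                return true;
#else
        /* short-descriptor format: 01101 (first level) or 01111 (second level) */
        if (fs == FS_L1_PERM || fs == FS_L2_PERM)
                return true;
#endif
        return false;
}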
Fixes: 1d4d37159d ("ARM: 8235/1: Support for the PXN CPU feature on ARMv7")
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Provide a die_kernel_fault() helper to do the kernel fault reporting. Its
msg argument allows it to report different messages in different scenarios;
the later patch "ARM: mm: Fix PXN process with LPAE feature" will use it.
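A hedged sketch of the helper's shape (the exact report text and tail are illustrative):

static void die_kernel_fault(const char *msg, struct mm_struct *mm,
                             unsigned long addr, unsigned int fsr,
                             struct pt_regs *regs)
{
        bust_spinlocks(1);
        /* 'msg' lets each caller describe its own scene */
        pr_alert("Unable to handle kernel %s at virtual address %08lx\n",
                 msg, addr);
        show_pte(KERN_ALERT, mm, addr);
        die("Oops", regs, fsr);
        bust_spinlocks(0);
        do_exit(SIGKILL);
}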
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Currently show_pte() dumps the virtual (hashed) address of the page table
base, which is useless, so kill it.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Currently the write fault is checked twice, in both do_page_fault() and
access_error(). Clean up access_error(), do the fault check and vma flags
setup directly in do_page_fault(), and pass the vma flags down to
__do_page_fault().
No functional change.
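A hedged sketch of the idea, with the write-fault check done once in do_page_fault() and the result handed down (simplified, not the exact diff):

        unsigned long vm_flags = VM_ACCESS_FLAGS;
        unsigned int flags = FAULT_FLAG_DEFAULT;

        if (user_mode(regs))
                flags |= FAULT_FLAG_USER;

        /* single write-fault check: write abort that is not a cache
         * maintenance operation */
        if ((fsr & FSR_WRITE) && !(fsr & FSR_CM)) {
                flags |= FAULT_FLAG_WRITE;
                vm_flags = VM_WRITE;
        }

        fault = __do_page_fault(mm, addr, flags, vm_flags, regs);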
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
__do_page_fault() does not use its task_struct argument, so kill it, and
use current->mm directly in do_page_fault().
No functional change.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Clean up the multiple goto statements and drop the local variable
'vm_fault_t fault', which makes __do_page_fault() much more readable.
No functional change.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
__arm_iomem_set_ro() marks an ioremapped area read-only. This is
intended for use with __arm_ioremap_exec() to allow the kernel to
write some code into e.g. SRAM and then write-protect it so the
kernel doesn't complain about W+X mappings.
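A hedged usage sketch (the names, physical address and size are illustrative):

        /* map SRAM executable, copy code into it, then write-protect it */
        sram_base = __arm_ioremap_exec(sram_phys, sram_size, false);
        memcpy_toio(sram_base, suspend_code, suspend_code_size);
        __arm_iomem_set_ro(sram_base, sram_size);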
Tested-by: Fabio Estevam <festevam@gmail.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Now that the user space TLS register is assigned on every return to user
space, we can use it to keep the 'current' pointer while running in the
kernel. This removes the need to access it via thread_info, which is
located at the base of the stack, but will be moved out of there in a
subsequent patch.
Use the __builtin_thread_pointer() helper when available - this will
help GCC understand that reloading the value within the same function is
not necessary, even when using the per-task stack protector (which also
generates accesses via the TLS register). For example, the generated
code below loads TPIDRURO only once, and uses it to access both the
stack canary and the preempt_count fields.
<do_one_initcall>:
e92d 41f0 stmdb sp!, {r4, r5, r6, r7, r8, lr}
ee1d 4f70 mrc 15, 0, r4, cr13, cr0, {3}
4606 mov r6, r0
b094 sub sp, #80 ; 0x50
f8d4 34e8 ldr.w r3, [r4, #1256] ; 0x4e8 <- stack canary
9313 str r3, [sp, #76] ; 0x4c
f8d4 8004 ldr.w r8, [r4, #4] <- preempt count
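A hedged sketch of how 'current' can be derived from the TLS register under this scheme (simplified; the in-kernel implementation differs in detail):

static __always_inline struct task_struct *get_current(void)
{
#if __has_builtin(__builtin_thread_pointer)
        /* lets the compiler merge repeated loads of the register */
        return (struct task_struct *)__builtin_thread_pointer();
#else
        struct task_struct *cur;

        /* read the user-space TLS register TPIDRURO directly */
        asm("mrc p15, 0, %0, c13, c0, 3" : "=r" (cur));
        return cur;
#endif
}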
Co-developed-by: Keith Packard <keithpac@amazon.com>
Signed-off-by: Keith Packard <keithpac@amazon.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Merge misc updates from Andrew Morton:
"173 patches.
Subsystems affected by this series: ia64, ocfs2, block, and mm (debug,
pagecache, gup, swap, shmem, memcg, selftests, pagemap, mremap,
bootmem, sparsemem, vmalloc, kasan, pagealloc, memory-failure,
hugetlb, userfaultfd, vmscan, compaction, mempolicy, memblock,
oom-kill, migration, ksm, percpu, vmstat, and madvise)"
* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (173 commits)
mm/madvise: add MADV_WILLNEED to process_madvise()
mm/vmstat: remove unneeded return value
mm/vmstat: simplify the array size calculation
mm/vmstat: correct some wrong comments
mm/percpu,c: remove obsolete comments of pcpu_chunk_populated()
selftests: vm: add COW time test for KSM pages
selftests: vm: add KSM merging time test
mm: KSM: fix data type
selftests: vm: add KSM merging across nodes test
selftests: vm: add KSM zero page merging test
selftests: vm: add KSM unmerge test
selftests: vm: add KSM merge test
mm/migrate: correct kernel-doc notation
mm: wire up syscall process_mrelease
mm: introduce process_mrelease system call
memblock: make memblock_find_in_range method private
mm/mempolicy.c: use in_task() in mempolicy_slab_node()
mm/mempolicy: unify the create() func for bind/interleave/prefer-many policies
mm/mempolicy: advertise new MPOL_PREFERRED_MANY
mm/hugetlb: add support for mempolicy MPOL_PREFERRED_MANY
...
flush_kernel_dcache_page is a rather confusing interface that implements a
subset of flush_dcache_page by not being able to properly handle page
cache mapped pages.
The only callers left are in the exec code; all other previous callers were
incorrect, as they could have dealt with page cache pages. Replace the calls
to flush_kernel_dcache_page with calls to flush_dcache_page, which for all
architectures either does exactly the same thing or additionally contains
one or more of the following:
1) an optimization to defer the cache flush for page cache pages not
mapped into userspace
2) additional flushing for mapped page cache pages if cache aliases
are possible
Link: https://lkml.kernel.org/r/20210712060928.4161649-7-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Cc: Alex Shi <alexs@kernel.org>
Cc: Geoff Levand <geoff@infradead.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Guo Ren <guoren@kernel.org>
Cc: Helge Deller <deller@gmx.de>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: Nick Hu <nickhu@andestech.com>
Cc: Paul Cercueil <paul@crapouillou.net>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Ulf Hansson <ulf.hansson@linaro.org>
Cc: Vincent Chen <deanbo422@gmail.com>
Cc: Yoshinori Sato <ysato@users.osdn.me>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'dma-mapping-5.15' of git://git.infradead.org/users/hch/dma-mapping
Pull dma-mapping updates from Christoph Hellwig:
- fix debugfs initialization order (Anthony Iliopoulos)
- use memory_intersects() directly (Kefeng Wang)
- allow returning specific errors from ->map_sg (Logan Gunthorpe,
Martin Oliveira)
- turn the dma_map_sg return value into an unsigned int (me)
- provide a common global coherent pool implementation (me)
* tag 'dma-mapping-5.15' of git://git.infradead.org/users/hch/dma-mapping: (31 commits)
hexagon: use the generic global coherent pool
dma-mapping: make the global coherent pool conditional
dma-mapping: add a dma_init_global_coherent helper
dma-mapping: simplify dma_init_coherent_memory
dma-mapping: allow using the global coherent pool for !ARM
ARM/nommu: use the generic dma-direct code for non-coherent devices
dma-direct: add support for dma_coherent_default_memory
dma-mapping: return an unsigned int from dma_map_sg{,_attrs}
dma-mapping: disallow .map_sg operations from returning zero on error
dma-mapping: return error code from dma_dummy_map_sg()
x86/amd_gart: don't set failed sg dma_address to DMA_MAPPING_ERROR
x86/amd_gart: return error code from gart_map_sg()
xen: swiotlb: return error code from xen_swiotlb_map_sg()
parisc: return error code from .map_sg() ops
sparc/iommu: don't set failed sg dma_address to DMA_MAPPING_ERROR
sparc/iommu: return error codes from .map_sg() ops
s390/pci: don't set failed sg dma_address to DMA_MAPPING_ERROR
s390/pci: return error code from s390_dma_map_sg()
powerpc/iommu: don't set failed sg dma_address to DMA_MAPPING_ERROR
powerpc/iommu: return error code from .map_sg() ops
...
Select the right options to just use the generic dma-direct code
instead of reimplementing it.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Dillon Min <dillon.minfei@gmail.com>
This fixes a Keystone 2 regression discovered as a side effect of
defining and passing the physical start/end sections of the kernel
to the MMU remapping code.
As the Keystone applies an offset to all physical addresses,
including those identified and patched by phys2virt, we fail to
account for this offset in the kernel_sec_start and kernel_sec_end
variables.
Further, these offsets can extend into the 64bit range on LPAE
systems such as the Keystone 2.
Fix it like this:
- Extend kernel_sec_start and kernel_sec_end to be 64bit
- Add the offset also to kernel_sec_start and kernel_sec_end
As passing kernel_sec_start and kernel_sec_end as 64bit invariably
incurs BE8 endianness issues, I have attempted to dry-code around
these.
Tested on the Vexpress QEMU model both with and without LPAE
enabled.
Fixes: 6e121df14c ("ARM: 9090/1: Map the lowmem and kernel separately")
Reported-by: Nishanth Menon <nmenon@kernel.org>
Suggested-by: Russell King <rmk+kernel@armlinux.org.uk>
Tested-by: Grygorii Strashko <grygorii.strashko@ti.com>
Tested-by: Nishanth Menon <nmenon@kernel.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Setting the ->dma_address to DMA_MAPPING_ERROR is not part of the
->map_sg calling convention, so remove it.
Link: https://lore.kernel.org/linux-mips/20210716063241.GC13345@lst.de/
Suggested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
The .map_sg() op now expects an error code instead of zero on failure.
In the case of a DMA_MAPPING_ERROR, -EIO is returned. Otherwise,
-ENOMEM or -EINVAL is returned depending on the error from
__map_sg_chunk().
Signed-off-by: Martin Oliveira <martin.oliveira@eideticom.com>
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
The semantics of pfn_valid() is to check presence of the memory map for a
PFN and not whether a PFN is in RAM. The memory map may be present for a
hole in the physical memory, and if such a hole corresponds to an MMIO range,
__arm_ioremap_pfn_caller() will produce a WARN() and fail.
Use memblock_is_map_memory() instead of pfn_valid() to check if a PFN is in
RAM or not.
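A hedged sketch of the check described, simplified from __arm_ioremap_pfn_caller() (surrounding code omitted):

        /*
         * Refuse to ioremap RAM: decide "is this RAM?" with
         * memblock_is_map_memory() rather than pfn_valid(), so that MMIO
         * holes which still have memory map entries no longer trip the WARN.
         */
        if (WARN_ON(memblock_is_map_memory(PFN_PHYS(pfn))))
                return NULL;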
Merge tag 'fixes-2021-07-09' of git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock
Pull memblock fix from Mike Rapoport:
"This is a fix for the rework of ARM's pfn_valid() implementation
merged during this merge window.
Don't abuse pfn_valid() to check if pfn is in RAM
The semantics of pfn_valid() is to check presence of the memory map
for a PFN and not whether a PFN is in RAM. The memory map may be
present for a hole in the physical memory, and if such a hole corresponds
to an MMIO range, __arm_ioremap_pfn_caller() will produce a WARN() and
fail.
Use memblock_is_map_memory() instead of pfn_valid() to check if a PFN
is in RAM or not"
* tag 'fixes-2021-07-09' of git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock:
arm: ioremap: don't abuse pfn_valid() to check if pfn is in RAM
Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm
Pull ARM development updates from Russell King:
- Make it clear __swp_entry_to_pte() uses PTE_TYPE_FAULT
- Updates for setting vmalloc size via command line to resolve an issue
with the 8MiB hole not properly being accounted for, and clean up the
code.
- ftrace support for module PLTs
- Spelling fixes
- kbuild updates for removing generated files and pattern rules for
generating files
- Clang/llvm updates
- Change the way the kernel is mapped, placing it in vmalloc space
instead.
- Remove arm_pm_restart from arm and aarch64.
* tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm: (29 commits)
ARM: 9098/1: ftrace: MODULE_PLT: Fix build problem without DYNAMIC_FTRACE
ARM: 9097/1: mmu: Declare section start/end correctly
ARM: 9096/1: Remove arm_pm_restart()
ARM: 9095/1: ARM64: Remove arm_pm_restart()
ARM: 9094/1: Register with kernel restart handler
ARM: 9093/1: drivers: firmwapsci: Register with kernel restart handler
ARM: 9092/1: xen: Register with kernel restart handler
ARM: 9091/1: Revert "mm: qsd8x50: Fix incorrect permission faults"
ARM: 9090/1: Map the lowmem and kernel separately
ARM: 9089/1: Define kernel physical section start and end
ARM: 9088/1: Split KERNEL_OFFSET from PAGE_OFFSET
ARM: 9087/1: kprobes: test-thumb: fix for LLVM_IAS=1
ARM: 9086/1: syscalls: use pattern rules to generate syscall headers
ARM: 9085/1: remove unneeded abi parameter to syscallnr.sh
ARM: 9084/1: simplify the build rule of mach-types.h
ARM: 9083/1: uncompress: atags_to_fdt: Spelling s/REturn/Return/
ARM: 9082/1: [v2] mark prepare_page_table as __init
ARM: 9079/1: ftrace: Add MODULE_PLTS support
ARM: 9078/1: Add warn suppress parameter to arm_gen_branch_link()
ARM: 9077/1: PLT: Move struct plt_entries definition to header
...
Merge tag 'memblock-v5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock
Pull memblock updates from Mike Rapoport:
"Fix arm crashes caused by holes in the memory map.
The coordination between freeing of unused memory map, pfn_valid() and
core mm assumptions about validity of the memory map in various ranges
was not designed for complex layouts of the physical memory with a lot
of holes all over the place.
Kefeng Wang reported crashes in move_freepages() on a system with the
following memory layout [1]:
node 0: [mem 0x0000000080a00000-0x00000000855fffff]
node 0: [mem 0x0000000086a00000-0x0000000087dfffff]
node 0: [mem 0x000000008bd00000-0x000000008c4fffff]
node 0: [mem 0x000000008e300000-0x000000008ecfffff]
node 0: [mem 0x0000000090d00000-0x00000000bfffffff]
node 0: [mem 0x00000000cc000000-0x00000000dc9fffff]
node 0: [mem 0x00000000de700000-0x00000000de9fffff]
node 0: [mem 0x00000000e0800000-0x00000000e0bfffff]
node 0: [mem 0x00000000f4b00000-0x00000000f6ffffff]
node 0: [mem 0x00000000fda00000-0x00000000ffffefff]
These crashes can be mitigated by enabling CONFIG_HOLES_IN_ZONE on ARM
and essentially turning pfn_valid_within() into pfn_valid() instead of
having it hardwired to 1 on that architecture, but this would require
keeping CONFIG_HOLES_IN_ZONE solely for this purpose.
A cleaner approach is to update ARM's implementation of pfn_valid() to
take into account the rounding of the freed memory map to pageblock
boundaries and make sure it returns true for PFNs that have memory map
entries even if there is no physical memory backing those PFNs"
Link: https://lore.kernel.org/lkml/2a1592ad-bc9d-4664-fd19-f7448a37edc0@huawei.com [1]
* tag 'memblock-v5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock:
arm: extend pfn_valid to take into account freed memory map alignment
memblock: ensure there is no overflow in memblock_overlaps_region()
memblock: align freed memory map on pageblock boundaries with SPARSEMEM
memblock: free_unused_memmap: use pageblock units instead of MAX_ORDER
When the unused memory map is freed, the preserved part of the memory map is
extended to match pageblock boundaries, because lots of core mm
functionality relies on homogeneity of the memory map within pageblock
boundaries.
Since pfn_valid() is used to check whether there is a valid memory map
entry for a PFN, make it return true also for PFNs that have memory map
entries even if there is no actual memory populated there.
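A hedged sketch of the resulting check (simplified; the structure follows the description above):

int pfn_valid(unsigned long pfn)
{
        phys_addr_t addr = __pfn_to_phys(pfn);
        unsigned long pageblock_size = PAGE_SIZE * pageblock_nr_pages;

        if (__phys_to_pfn(addr) != pfn)
                return 0;

        /*
         * If the PFN is within pageblock_size of present memory, the freed
         * memory map was rounded up to pageblock boundaries, so a memory
         * map entry still exists for it.
         */
        return memblock_overlaps_region(&memblock.memory,
                                        ALIGN_DOWN(addr, pageblock_size),
                                        pageblock_size);
}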
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Tested-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Tested-by: Tony Lindgren <tony@atomide.com>
1. These TLB flush functions switched from taking an mm to taking a vma a
long time ago, but some comments still describe mm as the parameter.
2. The actual struct we use is vm_area_struct, not vma_struct.
3. Remove the unused flush_kern_tlb_page.
Link: https://lkml.kernel.org/r/87k0oaq311.wl-chenli@uniontech.com
Signed-off-by: Chen Li <chenli@uniontech.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Chris Zankel <chris@zankel.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This reverts commit e220ba6022.
VERIFY_PERMISSION_FAULT was introduced back in 2009 but no one uses it;
just revert it and clean up the unused comment.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Using our knowledge of where the physical kernel sections start
and end we can split mapping of lowmem and kernel apart.
This is helpful when you want to place the kernel independently
from lowmem and not be limited to putting it into lowmem only,
but also into places such as the VMALLOC area.
We extensively rewrite the lowmem mapping code to account for
all cases where the kernel image overlaps with the lowmem in
different ways. This is helpful to handle situations which
occur when the kernel is loaded in different places and makes
it possible to place the kernel in a more random manner
which is done with e.g. KASLR.
We sprinkle some comments with illustrations and pr_debug()
over it so it is also very evident to readers what is happening.
We now use the kernel_sec_start and kernel_sec_end instead
of relying on __pa() (phys_to_virt) to provide this. This
is helpful if we want to resolve physical-to-virtual and
virtual-to-physical mappings at runtime rather than
compile time, especially if we are not using patch phys to
virt.
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
In some configurations when building with gcc-11, prepare_page_table
does not get inlined, which causes a build-time warning for a section
mismatch:
WARNING: modpost: vmlinux.o(.text.unlikely+0xce8): Section mismatch in reference from the function prepare_page_table() to the (unknown reference) .init.data:(unknown)
The function prepare_page_table() references
the (unknown reference) __initdata (unknown).
This is often because prepare_page_table lacks a __initdata
annotation or the annotation of (unknown) is wrong.
Mark the function as __init to avoid the warning regardless of the
inlining, and remove the 'inline' keyword. The compiler is
free to ignore the 'inline' here and it doesn't result in better
object code or more readable source.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Rather than using "m" (which is the unit of metres, or milli), and
"MB" in the printk statements, use MiB to make it clear that we are
talking about the power-of-2 megabytes, aka mebibytes.
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Yanfei Xu <yanfei.xu@windriver.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Make the default vmalloc size clearer by using a more natural
multiplication by SZ_1M rather than a shift left by 20 bits.
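For example, an expression like (240 << 20) becomes the more obviously sized (240 * SZ_1M) (the value and context here are illustrative):

        /* before: VMALLOC_END - (240 << 20) - VMALLOC_OFFSET */
        vmalloc_min = (void *)(VMALLOC_END - (240 * SZ_1M) - VMALLOC_OFFSET);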
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Yanfei Xu <yanfei.xu@windriver.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Rather than storing the start of vmalloc space, store the size, and
move the calculation into adjust_lowmem_limit(). We now have one single
place where this calculation takes place.
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Yanfei Xu <yanfei.xu@windriver.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Change the current vmalloc_min, which is supposed to be the lowest
address of vmalloc space including the VMALLOC_OFFSET, to vmalloc_start
which does not include VMALLOC_OFFSET.
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Yanfei Xu <yanfei.xu@windriver.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
We calculate the maximum size of the vmalloc space twice in
early_vmalloc(). Use a temporary variable to hold this value.
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Yanfei Xu <yanfei.xu@windriver.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
vmalloc_min is currently a void pointer, but every place it is used
contains a cast - either to a void pointer when setting it, or back to
an integer type when using it. Eliminate these casts by changing its
type to unsigned long.
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Yanfei Xu <yanfei.xu@windriver.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Pull swiotlb updates from Konrad Rzeszutek Wilk:
"Christoph Hellwig has taken a cleaver and trimmed off the not-needed
code and nicely folded duplicate code in the generic framework.
This lays the groundwork for more work to add extra DMA-backend-ish in
the future. Along with that some bug-fixes to make this a nice working
package"
* 'stable/for-linus-5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb:
swiotlb: don't override user specified size in swiotlb_adjust_size
swiotlb: Fix the type of index
swiotlb: Make SWIOTLB_NO_FORCE perform no allocation
ARM: Qualify enabling of swiotlb_init()
swiotlb: remove swiotlb_nr_tbl
swiotlb: dynamically allocate io_tlb_default_mem
swiotlb: move global variables into a new io_tlb_mem structure
xen-swiotlb: remove the unused size argument from xen_swiotlb_fixup
xen-swiotlb: split xen_swiotlb_init
swiotlb: lift the double initialization protection from xen-swiotlb
xen-swiotlb: remove xen_io_tlb_start and xen_io_tlb_nslabs
xen-swiotlb: remove xen_set_nslabs
xen-swiotlb: use io_tlb_end in xen_swiotlb_dma_supported
xen-swiotlb: use is_swiotlb_buffer in is_xen_swiotlb_buffer
swiotlb: split swiotlb_tbl_sync_single
swiotlb: move orig addr and size validation into swiotlb_bounce
swiotlb: remove the alloc_size parameter to swiotlb_tbl_unmap_single
powerpc/svm: stop using io_tlb_start
mem_init_print_info() is called in mem_init() on each architecture, always
with a NULL argument, so make it take no argument and move the call into
mm_init().
Link: https://lkml.kernel.org/r/20210317015210.33641-1-wangkefeng.wang@huawei.com
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Acked-by: Dave Hansen <dave.hansen@linux.intel.com> [x86]
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr> [powerpc]
Acked-by: David Hildenbrand <david@redhat.com>
Tested-by: Anatoly Pugachev <matorola@gmail.com> [sparc64]
Acked-by: Russell King <rmk+kernel@armlinux.org.uk> [arm]
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Guo Ren <guoren@kernel.org>
Cc: Yoshinori Sato <ysato@users.osdn.me>
Cc: Huacai Chen <chenhuacai@kernel.org>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: "Peter Zijlstra" <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
page_mapping_file() is only used by some architectures, and then it
is usually only used in one place. Make it a static inline function
so other architectures don't have to carry this dead code.
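A hedged sketch of the static inline described (close to, though not necessarily identical to, the final version):

static inline struct address_space *page_mapping_file(struct page *page)
{
        /* file-backed pages only: swap-cache pages report no file mapping */
        if (unlikely(PageSwapCache(page)))
                return NULL;
        return page_mapping(page);
}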
Link: https://lkml.kernel.org/r/20210317123011.350118-1-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Huang Ying <ying.huang@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Use DEFINE_SHOW_ATTRIBUTE macro to simplify the code.
Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
They are not needed after booting, so mark them as __init to move them
to the .init section.
Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
After commit 5a735583b7 ("arm/ftrace: Use __patch_text()"), the last
and only user of these functions is gone, so remove them.
Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
for_each_mem_range() uses a loop variable, yet looking into the code it is
not just an iteration counter but a more complex entity which encodes
information about the memblock. Thus the condition i == 0 looks fragile.
Indeed, it broke boot of R-class platforms, since the i == 0 path was never
taken (because i was set to 1). Fix that by restoring the original flag
check.
Fixes: b10d6bca87 ("arch, drivers: replace for_each_membock() with for_each_mem_range()")
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>