Merge tag 'mm-stable-2023-08-28-18-26' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Pull MM updates from Andrew Morton:

- Some swap cleanups from Ma Wupeng ("fix WARN_ON in add_to_avail_list")

- Peter Xu has a series ("mm/gup: Unify hugetlb, speed up thp") which reduces the special-case code for handling hugetlb pages in GUP. It also speeds up GUP handling of transparent hugepages.

- Peng Zhang provides some maple tree speedups ("Optimize the fast path of mas_store()").

- Sergey Senozhatsky has improved the performance of zsmalloc during compaction ("zsmalloc: small compaction improvements").

- Domenico Cerasuolo has developed additional selftest code for zswap ("selftests: cgroup: add zswap test program").

- xu xin has done some work on KSM's handling of zero pages. These changes are mainly to enable the user to better understand the effectiveness of KSM's treatment of zero pages ("ksm: support tracking KSM-placed zero-pages").

- Jeff Xu has fixed the behaviour of memfd's MEMFD_NOEXEC_SCOPE_NOEXEC_ENFORCED sysctl ("mm/memfd: fix sysctl MEMFD_NOEXEC_SCOPE_NOEXEC_ENFORCED").

- David Howells has fixed an fscache optimization ("mm, netfs, fscache: Stop read optimisation when folio removed from pagecache").
- Axel Rasmussen has given userfaultfd the ability to simulate memory poisoning ("add UFFDIO_POISON to simulate memory poisoning with UFFD").

- Miaohe Lin has contributed some routine maintenance work on the memory-failure code ("mm: memory-failure: remove unneeded PageHuge() check").

- Peng Zhang has contributed some maintenance work on the maple tree code ("Improve the validation for maple tree and some cleanup").

- Hugh Dickins has optimized the collapsing of shmem or file pages into THPs ("mm: free retracted page table by RCU").

- Jiaqi Yan has a patch series which permits us to use the healthy subpages within a hardware-poisoned huge page for general purposes ("Improve hugetlbfs read on HWPOISON hugepages").

- Kemeng Shi has done some maintenance work on the pagetable-check code ("Remove unused parameters in page_table_check").

- More folioification work from Matthew Wilcox ("More filesystem folio conversions for 6.6"), ("Followup folio conversions for zswap"). And from ZhangPeng ("Convert several functions in page_io.c to use a folio").

- page_ext cleanups from Kemeng Shi ("minor cleanups for page_ext").

- Baoquan He has converted some architectures to use the GENERIC_IOREMAP ioremap()/iounmap() code ("mm: ioremap: Convert architectures to take GENERIC_IOREMAP way").

- Anshuman Khandual has optimized arm64 tlb shootdown ("arm64: support batched/deferred tlb shootdown during page reclamation/migration").

- Better maple tree lockdep checking from Liam Howlett ("More strict maple tree lockdep"). Liam also developed some efficiency improvements ("Reduce preallocations for maple tree").

- Cleanup and optimization of the secondary IOMMU TLB invalidation, from Alistair Popple ("Invalidate secondary IOMMU TLB on permission upgrade").

- Ryan Roberts fixes some arm64 MM selftest issues ("selftests/mm fixes for arm64").

- Kemeng Shi provides some maintenance work on the compaction code ("Two minor cleanups for compaction").

- Some reduction in mmap_lock pressure from Matthew Wilcox ("Handle most file-backed faults under the VMA lock").

- Aneesh Kumar contributes code to use the vmemmap optimization for DAX on ppc64, under some circumstances ("Add support for DAX vmemmap optimization for ppc64").

- page-ext cleanups from Kemeng Shi ("add page_ext_data to get client data in page_ext"), ("minor cleanups to page_ext header").

- Some zswap cleanups from Johannes Weiner ("mm: zswap: three cleanups").

- kmsan cleanups from ZhangPeng ("minor cleanups for kmsan").

- VMA handling cleanups from Kefeng Wang ("mm: convert to vma_is_initial_heap/stack()").

- DAMON feature work from SeongJae Park ("mm/damon/sysfs-schemes: implement DAMOS tried total bytes file"), ("Extend DAMOS filters for address ranges and DAMON monitoring targets").

- Compaction work from Kemeng Shi ("Fixes and cleanups to compaction").

- Liam Howlett has improved the maple tree node replacement code ("maple_tree: Change replacement strategy").

- ZhangPeng has a general code cleanup - use the K() macro more widely ("cleanup with helper macro K()").

- Aneesh Kumar brings memmap-on-memory to ppc64 ("Add support for memmap on memory feature on ppc64").

- pagealloc cleanups from Kemeng Shi ("Two minor cleanups for pcp list in page_alloc"), ("Two minor cleanups for get pageblock migratetype").

- Vishal Moola introduces a memory descriptor for page table tracking, "struct ptdesc" ("Split ptdesc from struct page").

- memfd selftest maintenance work from Aleksa Sarai ("memfd: cleanups for vm.memfd_noexec").
- MM include file rationalization from Hugh Dickins ("arch: include asm/cacheflush.h in asm/hugetlb.h").

- THP debug output fixes from Hugh Dickins ("mm,thp: fix sloppy text output").

- kmemleak improvements from Xiaolei Wang ("mm/kmemleak: use object_cache instead of kmemleak_initialized").

- More folio-related cleanups from Matthew Wilcox ("Remove _folio_dtor and _folio_order").

- A VMA locking scalability improvement from Suren Baghdasaryan ("Per-VMA lock support for swap and userfaults").

- pagetable handling cleanups from Matthew Wilcox ("New page table range API").

- A batch of swap/thp cleanups from David Hildenbrand ("mm/swap: stop using page->private on tail pages for THP_SWAP + cleanups").

- Cleanups and speedups to the hugetlb fault handling from Matthew Wilcox ("Change calling convention for ->huge_fault").

- Matthew Wilcox has also done some maintenance work on the MM subsystem documentation ("Improve mm documentation").

* tag 'mm-stable-2023-08-28-18-26' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (489 commits)
  maple_tree: shrink struct maple_tree
  maple_tree: clean up mas_wr_append()
  secretmem: convert page_is_secretmem() to folio_is_secretmem()
  nios2: fix flush_dcache_page() for usage from irq context
  hugetlb: add documentation for vma_kernel_pagesize()
  mm: add orphaned kernel-doc to the rst files.
  mm: fix clean_record_shared_mapping_range kernel-doc
  mm: fix get_mctgt_type() kernel-doc
  mm: fix kernel-doc warning from tlb_flush_rmaps()
  mm: remove enum page_entry_size
  mm: allow ->huge_fault() to be called without the mmap_lock held
  mm: move PMD_ORDER to pgtable.h
  mm: remove checks for pte_index
  memcg: remove duplication detection for mem_cgroup_uncharge_swap
  mm/huge_memory: work on folio->swap instead of page->private when splitting folio
  mm/swap: inline folio_set_swap_entry() and folio_swap_entry()
  mm/swap: use dedicated entry for swap in folio
  mm/swap: stop using page->private on tail pages for THP_SWAP
  selftests/mm: fix WARNING comparing pointer to 0
  selftests: cgroup: fix test_kmem_memcg_deletion kernel mem check
  ...
/* SPDX-License-Identifier: GPL-2.0-only */
/****************************************************************************
 * Driver for Solarflare network controllers and boards
 * Copyright 2005-2006 Fen Systems Ltd.
 * Copyright 2006-2013 Solarflare Communications Inc.
 */

#ifndef EFX_IO_H
#define EFX_IO_H

#include <linux/io.h>
#include <linux/spinlock.h>

/**************************************************************************
 *
 * NIC register I/O
 *
 **************************************************************************
 *
 * The EF10 architecture exposes very few registers to the host and
 * most of them are only 32 bits wide. The only exceptions are the MC
 * doorbell register pair, which has its own latching, and
 * TX_DESC_UPD.
 *
 * The TX_DESC_UPD DMA descriptor pointer is 128-bits but is a special
 * case in the BIU to avoid the need for locking in the host:
 *
 * - It is write-only.
 * - The semantics of writing to this register is such that
 *   replacing the low 96 bits with zero does not affect functionality.
 * - If the host writes to the last dword address of the register
 *   (i.e. the high 32 bits) the underlying register will always be
 *   written. If the collector and the current write together do not
 *   provide values for all 128 bits of the register, the low 96 bits
 *   will be written as zero.
 */

#if BITS_PER_LONG == 64
#define EFX_USE_QWORD_IO 1
#endif

/* Hardware issue requires that only 64-bit naturally aligned writes
 * are seen by hardware. Its not strictly necessary to restrict to
 * x86_64 arch, but done for safety since unusual write combining behaviour
 * can break PIO.
 */
#ifdef CONFIG_X86_64
/* PIO is a win only if write-combining is possible */
#ifdef ioremap_wc
#define EFX_USE_PIO 1
#endif
#endif

static inline u32 efx_reg(struct efx_nic *efx, unsigned int reg)
{
	return efx->reg_base + reg;
}

#ifdef EFX_USE_QWORD_IO
static inline void _efx_writeq(struct efx_nic *efx, __le64 value,
			       unsigned int reg)
{
	__raw_writeq((__force u64)value, efx->membase + reg);
}
static inline __le64 _efx_readq(struct efx_nic *efx, unsigned int reg)
{
	return (__force __le64)__raw_readq(efx->membase + reg);
}
#endif

static inline void _efx_writed(struct efx_nic *efx, __le32 value,
			       unsigned int reg)
{
	__raw_writel((__force u32)value, efx->membase + reg);
}
static inline __le32 _efx_readd(struct efx_nic *efx, unsigned int reg)
{
	return (__force __le32)__raw_readl(efx->membase + reg);
}

/* Write a normal 128-bit CSR, locking as appropriate. */
static inline void efx_writeo(struct efx_nic *efx, const efx_oword_t *value,
			      unsigned int reg)
{
	unsigned long flags __attribute__ ((unused));

	netif_vdbg(efx, hw, efx->net_dev,
		   "writing register %x with " EFX_OWORD_FMT "\n", reg,
		   EFX_OWORD_VAL(*value));

	spin_lock_irqsave(&efx->biu_lock, flags);
#ifdef EFX_USE_QWORD_IO
	_efx_writeq(efx, value->u64[0], reg + 0);
	_efx_writeq(efx, value->u64[1], reg + 8);
#else
	_efx_writed(efx, value->u32[0], reg + 0);
	_efx_writed(efx, value->u32[1], reg + 4);
	_efx_writed(efx, value->u32[2], reg + 8);
	_efx_writed(efx, value->u32[3], reg + 12);
#endif
	spin_unlock_irqrestore(&efx->biu_lock, flags);
}

/* Write a 32-bit CSR or the last dword of a special 128-bit CSR */
static inline void efx_writed(struct efx_nic *efx, const efx_dword_t *value,
			      unsigned int reg)
{
	netif_vdbg(efx, hw, efx->net_dev,
		   "writing register %x with "EFX_DWORD_FMT"\n",
		   reg, EFX_DWORD_VAL(*value));

	/* No lock required */
	_efx_writed(efx, value->u32[0], reg);
}

/* Read a 128-bit CSR, locking as appropriate. */
static inline void efx_reado(struct efx_nic *efx, efx_oword_t *value,
			     unsigned int reg)
{
	unsigned long flags __attribute__ ((unused));

	spin_lock_irqsave(&efx->biu_lock, flags);
	value->u32[0] = _efx_readd(efx, reg + 0);
	value->u32[1] = _efx_readd(efx, reg + 4);
	value->u32[2] = _efx_readd(efx, reg + 8);
	value->u32[3] = _efx_readd(efx, reg + 12);
	spin_unlock_irqrestore(&efx->biu_lock, flags);

	netif_vdbg(efx, hw, efx->net_dev,
		   "read from register %x, got " EFX_OWORD_FMT "\n", reg,
		   EFX_OWORD_VAL(*value));
}

/* Read a 32-bit CSR or SRAM */
static inline void efx_readd(struct efx_nic *efx, efx_dword_t *value,
			     unsigned int reg)
{
	value->u32[0] = _efx_readd(efx, reg);
	netif_vdbg(efx, hw, efx->net_dev,
		   "read from register %x, got "EFX_DWORD_FMT"\n",
		   reg, EFX_DWORD_VAL(*value));
}

/* Write a 128-bit CSR forming part of a table */
static inline void
efx_writeo_table(struct efx_nic *efx, const efx_oword_t *value,
		 unsigned int reg, unsigned int index)
{
	efx_writeo(efx, value, reg + index * sizeof(efx_oword_t));
}

/* Read a 128-bit CSR forming part of a table */
static inline void efx_reado_table(struct efx_nic *efx, efx_oword_t *value,
				   unsigned int reg, unsigned int index)
{
	efx_reado(efx, value, reg + index * sizeof(efx_oword_t));
}

/* default VI stride (step between per-VI registers) is 8K on EF10 and
 * 64K on EF100
 */
#define EFX_DEFAULT_VI_STRIDE 0x2000
#define EF100_DEFAULT_VI_STRIDE 0x10000

/* Calculate offset to page-mapped register */
static inline unsigned int efx_paged_reg(struct efx_nic *efx, unsigned int page,
					 unsigned int reg)
{
	return page * efx->vi_stride + reg;
}

/* Write the whole of RX_DESC_UPD or TX_DESC_UPD */
static inline void _efx_writeo_page(struct efx_nic *efx, efx_oword_t *value,
				    unsigned int reg, unsigned int page)
{
	reg = efx_paged_reg(efx, page, reg);

	netif_vdbg(efx, hw, efx->net_dev,
		   "writing register %x with " EFX_OWORD_FMT "\n", reg,
		   EFX_OWORD_VAL(*value));

#ifdef EFX_USE_QWORD_IO
	_efx_writeq(efx, value->u64[0], reg + 0);
	_efx_writeq(efx, value->u64[1], reg + 8);
#else
	_efx_writed(efx, value->u32[0], reg + 0);
	_efx_writed(efx, value->u32[1], reg + 4);
	_efx_writed(efx, value->u32[2], reg + 8);
	_efx_writed(efx, value->u32[3], reg + 12);
#endif
}
#define efx_writeo_page(efx, value, reg, page)				\
	_efx_writeo_page(efx, value,					\
			 reg +						\
			 BUILD_BUG_ON_ZERO((reg) != 0x830 && (reg) != 0xa10), \
			 page)

/* Write a page-mapped 32-bit CSR (EVQ_RPTR, EVQ_TMR (EF10), or the
 * high bits of RX_DESC_UPD or TX_DESC_UPD)
 */
static inline void
_efx_writed_page(struct efx_nic *efx, const efx_dword_t *value,
		 unsigned int reg, unsigned int page)
{
	efx_writed(efx, value, efx_paged_reg(efx, page, reg));
}
#define efx_writed_page(efx, value, reg, page)				\
	_efx_writed_page(efx, value,					\
			 reg +						\
			 BUILD_BUG_ON_ZERO((reg) != 0x180 &&		\
					   (reg) != 0x200 &&		\
					   (reg) != 0x400 &&		\
					   (reg) != 0x420 &&		\
					   (reg) != 0x830 &&		\
					   (reg) != 0x83c &&		\
					   (reg) != 0xa18 &&		\
					   (reg) != 0xa1c),		\
			 page)

/* Write TIMER_COMMAND. This is a page-mapped 32-bit CSR, but a bug
 * in the BIU means that writes to TIMER_COMMAND[0] invalidate the
 * collector register.
 */
static inline void _efx_writed_page_locked(struct efx_nic *efx,
					   const efx_dword_t *value,
					   unsigned int reg,
					   unsigned int page)
{
	unsigned long flags __attribute__ ((unused));

	if (page == 0) {
		spin_lock_irqsave(&efx->biu_lock, flags);
		efx_writed(efx, value, efx_paged_reg(efx, page, reg));
		spin_unlock_irqrestore(&efx->biu_lock, flags);
	} else {
		efx_writed(efx, value, efx_paged_reg(efx, page, reg));
	}
}
#define efx_writed_page_locked(efx, value, reg, page)			\
	_efx_writed_page_locked(efx, value,				\
				reg + BUILD_BUG_ON_ZERO((reg) != 0x420), \
				page)

#endif /* EFX_IO_H */
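Taken together, these helpers form a small layered MMIO API: raw 32/64-bit accessors at the bottom, spinlock-protected 128-bit CSR accessors above them, and page-mapped (per-VI) variants on top whose macros reject unexpected register offsets at compile time. The sketch below shows how a caller might acknowledge events on an event queue through this API. It assumes the EFX_POPULATE_DWORD_1() bitfield macro and the ERF_DZ_EVQ_RPTR field definition from the driver's other headers; treat the names and the 0x180 offset as illustrative, not authoritative.

/* Hypothetical usage sketch, not part of io.h.  The helper name is
 * invented; EFX_POPULATE_DWORD_1() and ERF_DZ_EVQ_RPTR are assumed to
 * come from the sfc driver's bitfield/register headers.
 */
static void example_evq_ack(struct efx_nic *efx, unsigned int evq_index,
			    unsigned int read_ptr)
{
	efx_dword_t reg;

	/* Build the 32-bit value field-by-field, then write it to the
	 * per-VI page for this event queue.  Because the register
	 * offset is checked by BUILD_BUG_ON_ZERO() inside
	 * efx_writed_page(), a wrong offset fails the build rather
	 * than silently writing to the wrong location.
	 */
	EFX_POPULATE_DWORD_1(reg, ERF_DZ_EVQ_RPTR, read_ptr);
	efx_writed_page(efx, &reg, 0x180, evq_index);
}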