The nvif_object_ioctl() method NVIF_VMM_V0_PFNMAP wasn't correctly
setting the hardware-specific GPU page table entries for 2MB
pages. Fix this by adding functions to set and clear PD0 GPU page
table entries.
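For illustration, the shape of such helpers is roughly as below. The
structure, bit layout and names here are hypothetical stand-ins for the
real hardware-specific definitions, not the actual GP100 PTE format:

  #include <linux/types.h>

  /* Hypothetical sketch: write/clear 2MB entries at the PD0 level.     */
  #define SKETCH_PFN_VALID      (1ULL << 0)   /* invented flag bit      */
  #define SKETCH_PFN_ADDR_SHIFT 12            /* invented address shift */

  struct sketch_pt { u64 *mem; };

  static void sketch_pd0_pfn_map(struct sketch_pt *pt, u32 ptei,
                                 u32 ptes, const u64 *pfn)
  {
          while (ptes--) {
                  u64 data = 0;
                  if (*pfn & SKETCH_PFN_VALID) {
                          data  = (*pfn >> SKETCH_PFN_ADDR_SHIFT) << 8;
                          data |= 1;          /* hypothetical VALID bit */
                  }
                  pt->mem[ptei++] = data;
                  pfn++;
          }
  }

  static void sketch_pd0_pfn_clear(struct sketch_pt *pt, u32 ptei, u32 ptes)
  {
          while (ptes--)
                  pt->mem[ptei++] = 0;
  }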
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The function nvkm_vmm_ctor() is not called outside of the file defining
it, so make it static.
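In other words, only the linkage changes, e.g. (parameter list elided):

  /* Before: visible outside the file, though never used there. */
  int nvkm_vmm_ctor(/* ... */);

  /* After: file-local. */
  static int nvkm_vmm_ctor(/* ... */);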
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
If the BAR initialization fails, it may leave the vmm structure in an
uninitialized state, leading to a NULL-pointer dereference when the vmm
is dereferenced during teardown.
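A minimal sketch of the kind of guard this implies (names are
illustrative, not the actual nouveau structures):

  struct sketch_vmm;
  void sketch_vmm_unref(struct sketch_vmm **pvmm);

  struct sketch_bar {
          struct sketch_vmm *vmm;           /* NULL if init failed early */
  };

  static void sketch_bar_dtor(struct sketch_bar *bar)
  {
          if (bar->vmm)                     /* tolerate a failed init */
                  sketch_vmm_unref(&bar->vmm);
  }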
Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
This provides a somewhat more direct method of manipulating the GPU page
tables, which will be required to support SVM.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
This will be used to support a privileged client providing PTEs directly,
without a memory object to use as a reference.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
NVKM is currently responsible for managing the allocation of a client's
GPU address-space, but there are various use-cases (e.g. HMM address-space
mirroring) where giving a client more direct control is desirable.
This commit allows a VMM to be created where the area allocated for
NVKM is limited to a client-specified window; the remainder of the
address-space is controlled directly by the client.
Leaving a window is necessary to support various internal requirements,
but also to support existing allocation interfaces, as not all of the HW
is capable of working with an HMM allocation.
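Conceptually, the split looks something like the sketch below (field and
function names invented for illustration): the client nominates a window
that NVKM may allocate from, and everything outside it is left to the
client:

  #include <linux/types.h>

  struct sketch_vmm_layout {
          u64 full_start, full_size;   /* whole GPU address-space      */
          u64 nvkm_start, nvkm_size;   /* window reserved for NVKM's   */
                                       /* own allocations              */
  };

  static bool sketch_client_managed(const struct sketch_vmm_layout *l, u64 addr)
  {
          /* Anything outside the NVKM window is client-managed. */
          return addr < l->nvkm_start ||
                 addr >= l->nvkm_start + l->nvkm_size;
  }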
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Aside from being a nice cleanup, these will allow the upcoming direct
page mapping interfaces to play nicely with normal mappings.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Commit 7110c89bb8852ff8b0f88ce05b332b3fe22bd11e ("mmu: swap out round
for ALIGN") replaced two calls to round/rounddown with ALIGN/ALIGN_DOWN,
but erroneously applied ALIGN_DOWN to a different variable (addr),
leaving the intended variable (tail) unaligned.
As a result, screen corruption and X lockups are observable. An example
kernel log from an affected system with an NV98 card, where the issue
was bisected:
nouveau 0000:01:00.0: gr: TRAP_M2MF 00000002 [IN]
nouveau 0000:01:00.0: gr: TRAP_M2MF 00320951 400007c0 00000000 04000000
nouveau 0000:01:00.0: gr: 00200000 [] ch 1 [000fbbe000 DRM] subc 4 class 5039 mthd 0100 data 00000000
nouveau 0000:01:00.0: fb: trapped read at 0040000000 on channel 1 [0fbbe000 DRM] engine 00 [PGRAPH] client 03 [DISPATCH] subclient 04 [M2M_IN] reason 00000006 [NULL_DMAOBJ]
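The nature of the mix-up can be shown in isolation (simplified macro
definition; this is not the nouveau code, just the pattern described
above, where the lower bound addr was aligned instead of the upper
bound tail):

  #include <stdio.h>

  /* Simplified stand-in for the kernel's ALIGN_DOWN() (power-of-two a). */
  #define ALIGN_DOWN(x, a) ((x) & ~((unsigned long long)(a) - 1))

  int main(void)
  {
          unsigned long long addr = 0x1000, tail = 0x3f80, block = 0x1000;

          /* Buggy: aligns addr (already aligned here); tail stays ragged. */
          unsigned long long bad_addr  = ALIGN_DOWN(addr, block);

          /* Intended: clamp tail down to the page-block boundary. */
          unsigned long long good_tail = ALIGN_DOWN(tail, block);

          printf("buggy: addr=%#llx tail=%#llx\n", bad_addr, tail);
          printf("fixed: addr=%#llx tail=%#llx\n", addr, good_tail);
          return 0;
  }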
Fixes bug 105173 ("[MCP79][Regression] Unhandled NULL pointer dereference in
nvkm_object_unmap since kernel 4.15")
https://bugs.freedesktop.org/show_bug.cgi?id=105173
Fixes: 7110c89bb885 ("mmu: swap out round for ALIGN")
Tested-by: Pierre Moreau <pierre.morrow@free.fr>
Reviewed-by: Pierre Moreau <pierre.morrow@free.fr>
Signed-off-by: Maris Nartiss <maris.nartiss@gmail.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Cc: stable@vger.kernel.org # v4.15+
The trailing semicolon is an empty statement that does nothing, so
remove it.
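For illustration, the pattern in question looks like this (not the
actual hunk):

  static inline int sketch_noop(void)
  {
          return 0;
  };      /* <-- stray ';' after the closing brace: an empty statement */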
Signed-off-by: Luis de Bethencourt <luisbg@kernel.org>
Reviewed-by: Karol Herbst <kherbst@redhat.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
These are the new privileged interfaces to the VMM backends, exposing
some functionality that wasn't previously available.
It's now possible to allocate a chunk of address-space (even all of it),
without causing page tables to be allocated up-front, and then map into
it at arbitrary locations. This is the basic primitive used to support
features such as sparse mapping, userspace control over its own
address-space, or HMM (where the GPU driver isn't in control of the
address-space layout).
Rather than being tied to a subtle combination of memory object and VMA
properties, arguments that control map flags (ro, kind, etc) are passed
explicitly at map time.
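As a conceptual usage sketch (names invented for illustration, not the
actual NVIF/NVKM calls): address-space reservation and mapping become
separate steps, with properties such as read-only or kind supplied at
map time:

  #include <linux/types.h>

  struct sketch_vma  { u64 addr, size; };
  struct sketch_args { bool ro; u8 kind; };

  int sketch_vmm_reserve(u64 size, bool sparse, struct sketch_vma *vma);
  int sketch_vmm_map(struct sketch_vma *vma, u64 offset, u64 size,
                     const struct sketch_args *args);

  static int sketch_example(void)
  {
          struct sketch_vma vma;
          struct sketch_args args = { .ro = true, .kind = 0 };
          int ret;

          /* Reserve 1GiB of address-space; no page tables allocated yet. */
          ret = sketch_vmm_reserve(1ULL << 30, false, &vma);
          if (ret)
                  return ret;

          /* Later: map 2MiB at an arbitrary offset, flags given here. */
          return sketch_vmm_map(&vma, 0x200000, 0x200000, &args);
  }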
The compatibility hacks to implement the old frontend on top of the new
driver backends have been replaced with something similar to implement
the old frontend's interfaces on top of the new frontend.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
This is the common code to support a rework of the VMM backends.
It adds support for more than 2 levels of page table nesting, which
is required to be able to support GP100's MMU layout.
Sparse mappings (that don't cause MMU faults when accessed) are now
supported, where the backend provides them.
Dual-PT handling had to become more sophisticated to support sparse,
but this also allows us to support an optimisation the MMU provides
on GK104 and newer.
Certain operations can now be combined into a single page tree walk
to avoid some overhead; this also enables optimisations like skipping
PTE unmap writes when the PT will be destroyed anyway.
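As a conceptual illustration of the last point (invented names, not the
real data structures): when an unmap leaves a page table with no live
entries, the individual PTE writes can be skipped and the PT freed
directly:

  #include <linux/types.h>

  struct sketch_pt {
          u64 *pte;       /* backing PTE array                 */
          u32  refs;      /* live mappings referencing this PT */
  };

  static void sketch_pt_free(struct sketch_pt *pt)
  {
          (void)pt;       /* real code would release the PT memory here */
  }

  static void sketch_unmap(struct sketch_pt *pt, u32 ptei, u32 ptes)
  {
          pt->refs -= ptes;
          if (!pt->refs) {
                  /* Whole PT is going away: no need to zero PTEs first. */
                  sketch_pt_free(pt);
                  return;
          }
          while (ptes--)                  /* otherwise clear each PTE */
                  pt->pte[ptei++] = 0;
  }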
The old backend has been hacked up to forward requests onto the new
backend, if present, so that it's possible to bisect between issues
in the backend changes vs the upcoming frontend changes.
Until the new frontend has been merged, new backends will leak BAR2
page tables on module unload. This is expected, and it's not worth
the effort of hacking around this, as it doesn't affect runtime.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
We previously required each VMM user to allocate their own page directory
and fill in the instance block themselves.
It makes more sense to handle this in a common location.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
This is the first chunk of the new VMM code that provides the structures
needed to describe a GPU virtual address-space layout, as well as common
interfaces to handle VMM creation and to connect instances to a VMM.
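Roughly speaking (illustrative field names only, not the actual
structures), describing the layout boils down to something like:

  #include <linux/types.h>

  struct sketch_vmm_page {
          u8 shift;                            /* page size as 1 << shift */
  };

  struct sketch_vmm {
          const struct sketch_vmm_page *page;  /* supported page sizes    */
          u64 start, limit;                    /* managed VA range        */
          void *pd;                            /* root page directory     */
  };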
The constructor now allocates the PD itself, rather than having the user
handle that manually. This won't/can't be used until after all backends
have been ported to these interfaces, so a little bit of memory will be
wasted on Fermi and newer for a couple of commits in the series.
Compatibility has been hacked into the old code to allow each GPU backend
to be ported individually.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>