The GPU saves off some state to the address specified in this part of RAMFC
when the channel faults, so we should point it at a valid address.
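A minimal sketch of the idea, using illustrative RAMFC word offsets (the
real layout is chip-specific):

  /* Sketch only: buffer size and RAMFC offsets are illustrative. */
  struct nvkm_memory *fault;
  u64 addr;
  int ret;

  /* Allocate a buffer for the HW to save fault state into. */
  ret = nvkm_memory_new(device, NVKM_MEM_TARGET_INST,
                        0x1000, 0x1000, true, &fault);
  if (ret)
          return ret;

  addr = nvkm_memory_addr(fault);
  nvkm_kmap(inst);
  nvkm_wo32(inst, 0xe8, lower_32_bits(addr)); /* fault-save lo */
  nvkm_wo32(inst, 0xec, upper_32_bits(addr)); /* fault-save hi */
  nvkm_done(inst);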
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
We previously weren't aware that runlist and engine IDs aren't the same
thing, or that configuration varies so much between GPUs.
By exposing this information to a client, and giving it explicit control
of which runlist it's allocating a channel on, we're able to make better
choices.
The immediate effect of this is that on GPUs where CE0 is the "GRCE", we
will now allocate a copy engine that runs asynchronously to GR for BO
migrations, as intended.
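Sketched from the client side, with hypothetical struct and field names
(the real uAPI structure may differ):

  /* Sketch: channel-allocation args gain an explicit runlist
   * selector; struct and field names here are illustrative. */
  struct hypothetical_gpfifo_args {
          __u8 version;
          __u8 runlist;   /* runlist to allocate the channel on */
  };

  /* For BO migrations, pick a CE runlist distinct from GR's so
   * copies run asynchronously to graphics. */
  struct hypothetical_gpfifo_args args = {
          .runlist = 1,   /* e.g. an async CE runlist, queried first */
  };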
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
We previously required each VMM user to allocate their own page directory
and fill in the instance block themselves.
It makes more sense to handle this in a common location.
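A sketch of the common path, assuming illustrative instance-block offsets:

  /* Sketch: the core VMM code writes the page-directory address
   * into the instance block on join; offsets are illustrative. */
  static int
  hypothetical_vmm_join(struct nvkm_vmm *vmm, struct nvkm_memory *inst)
  {
          u64 pd = nvkm_memory_addr(vmm->pd->pt[0]->memory);

          nvkm_kmap(inst);
          nvkm_wo32(inst, 0x0200, lower_32_bits(pd));
          nvkm_wo32(inst, 0x0204, upper_32_bits(pd));
          nvkm_done(inst);
          return 0;
  }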
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Map flags (access, kind, etc.) are currently defined in either the VMA
or the memory object, which turns out not to be ideal for things like
suballocated buffers.
These will become per-map flags instead, so we need to support passing
these arguments in nvkm_memory_map().
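The adjusted interface then looks roughly like this, forwarding the map
arguments as an opaque blob for the backend to interpret:

  /* Forward per-map arguments (access, kind, ...) through to the
   * memory backend untouched. */
  static inline int
  nvkm_memory_map(struct nvkm_memory *memory, u64 offset,
                  struct nvkm_vmm *vmm, struct nvkm_vma *vma,
                  void *argv, u32 argc)
  {
          return memory->func->map(memory, offset, vmm, vma, argv, argc);
  }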
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
There are cases (such as non-recoverable GPU page faults) where NVKM
decides that a channel's context is no longer viable and removes it
from the runlist.
This commit notifies the owner of the channel when this happens, so
it has the opportunity to take some kind of recovery action instead
of hanging.
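Sketched with illustrative names: recovery flags the channel as dead and
signals an event its owner can listen for:

  /* Sketch: when recovery removes a channel from the runlist, mark
   * it killed and notify its owner; names are illustrative. */
  static void
  hypothetical_fifo_kill_chan(struct nvkm_fifo *fifo,
                              struct nvkm_fifo_chan *chan)
  {
          chan->killed = true;    /* context no longer viable */
          nvkm_event_send(&fifo->kevent, 1, chan->chid, NULL, 0);
  }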
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
This avoids an issue that occurs when we attempt to preempt multiple
channels simultaneously: HW appears to ignore preempt requests while it
is still processing a previous one, which makes sense.
Fixes random "fifo: SCHED_ERROR 0d []" + GPCCS page faults during parallel
piglit runs on (at least) GM107.
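The serialization looks roughly like this; the register offset and poll
condition are Kepler-era examples, not a definitive implementation:

  /* Sketch: issue preempts one at a time, since HW appears to
   * ignore a new request while a previous one is pending. */
  mutex_lock(&fifo->mutex);
  nvkm_wr32(device, 0x002634, chan->chid);  /* request preempt */
  if (nvkm_msec(device, 2000,
          if (!(nvkm_rd32(device, 0x002634) & 0x00100000))
                  break;
      ) < 0)
          nvkm_error(subdev, "channel %d kick timeout\n", chan->chid);
  mutex_unlock(&fifo->mutex);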
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Cc: stable@vger.kernel.org
A channel may still be processed by the PBDMA even after removal, unless
it is properly kicked. Some chips are more sensitive to this than others,
with GM20B triggering the issue very easily (the PBDMA will try to fetch
methods from the previously-removed channel after a new one is added).
Make sure this cannot happen by kicking the channel right after it is
disabled, and before the new runlist is submitted.
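The resulting ordering, with illustrative helper names:

  /* Sketch: disable, then kick (preempt) so the PBDMA drops any
   * cached state for the channel, and only then submit the new
   * runlist. Helper names are illustrative. */
  hypothetical_chan_disable(chan);          /* remove from runlist */
  hypothetical_chan_kick(chan);             /* preempt off the PBDMA */
  hypothetical_runlist_commit(fifo, runl);  /* now safe to submit */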
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The CPU-side tracking of engine runlists was not protected by a lock,
leading to list corruption, which eventually caused runlist_update() to
overrun the GPU-side runlist and trigger an OOPS.
Fixes some of the issues noticed during parallel piglit runs.
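A sketch of the fix, holding a lock across both the list manipulation and
the GPU-side update (the lock choice and structure layout are
illustrative):

  /* Serialize CPU-side runlist changes with the copy to the
   * GPU-side runlist. */
  mutex_lock(&fifo->mutex);
  list_add_tail(&chan->head, &fifo->runlist[runl].chan);
  gk104_fifo_runlist_update(fifo, runl);    /* copy list to GPU */
  mutex_unlock(&fifo->mutex);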
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>