Commit graph

105 commits

Author SHA1 Message Date
Thomas Zimmermann
45a10ad4e6 drm/msm: Acquire reservation lock in GEM pin/unpin callback
Export msm_gem_pin_pages_locked() and acquire the reservation lock
directly in GEM pin callback. Same for unpin. Prepares for further
changes.

Dma-buf locking semantics require callers to hold the buffer's
reservation lock when invoking the pin and unpin callbacks. Prepare
msm accordingly by pushing locking out of the implementation. A
follow-up patch will fix locking for all GEM code at once.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Tested-by: Dmitry Osipenko <dmitry.osipenko@collabora.com> # virtio-gpu
Acked-by: Christian König <christian.koenig@amd.com>
Acked-by: Zack Rusin <zack.rusin@broadcom.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240227113853.8464-5-tzimmermann@suse.de
2024-03-11 13:33:50 +01:00
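
A minimal sketch of the pin-callback pattern described above; the names
my_gem_pin() and my_gem_pin_pages_locked() are hypothetical stand-ins,
not the actual msm code:

    static int my_gem_pin(struct drm_gem_object *obj)
    {
            int ret;

            /*
             * Dma-buf locking semantics expect the caller of the pin
             * callback to hold the reservation lock; until all callers
             * do, take it here around the _locked implementation.
             */
            dma_resv_lock(obj->resv, NULL);
            ret = my_gem_pin_pages_locked(obj);
            dma_resv_unlock(obj->resv);

            return ret;
    }
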
Rob Clark
a6397e6387 drm/msm/gem: Convert to drm_exec
Replace the ww_mutex locking dance with the drm_exec helper.

v2: Error path fixes; move drm_exec_fini so we only call it once (and
    only if we have called drm_exec_init())

Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/568342/
2023-12-10 11:19:44 -08:00
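
A hedged sketch of the drm_exec locking pattern this conversion adopts
(not the msm code itself); note that drm_exec_init() gained a third
object-count hint argument in newer kernels, and that form is assumed
here:

    static int lock_bos(struct drm_exec *exec, struct drm_gem_object **bos,
                        unsigned int nr_bos)
    {
            unsigned int i;
            int ret;

            drm_exec_init(exec, DRM_EXEC_INTERRUPTIBLE_WAIT, nr_bos);

            drm_exec_until_all_locked(exec) {
                    for (i = 0; i < nr_bos; i++) {
                            ret = drm_exec_lock_obj(exec, bos[i]);
                            drm_exec_retry_on_contention(exec);
                            if (ret) {
                                    /* fini exactly once, only after init */
                                    drm_exec_fini(exec);
                                    return ret;
                            }
                    }
            }

            return 0;       /* caller calls drm_exec_fini() when done */
    }
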
Rob Clark
2d7d2c4e84 drm/msm/gem: Split out submit_unpin_objects() helper
Untangle unpinning from unlock/unref loop.  The unpin only happens in
error paths so it is easier to decouple from the normal unlock path.

Since we never have an intermediate state where a subset of buffers
are pinned (ie. we never bail out of the pin or unpin loops) we can
replace the bo state flag bit with a global flag in the submit.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Patchwork: https://patchwork.freedesktop.org/patch/568335/
2023-12-10 10:23:13 -08:00
Rob Clark
a3dec9cdf4 drm/msm/gem: Remove "valid" tracking
This was a small optimization for pre-soft-pin userspace.  But mesa
switched to soft-pin nearly five years ago, so let's drop the
optimization and simplify the code.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Patchwork: https://patchwork.freedesktop.org/patch/568328/
2023-12-10 10:23:13 -08:00
Rob Clark
9902cb999e drm/msm/gem: Add metadata
The EXT_external_objects extension is a bit awkward as it doesn't pass
explicit modifiers, leaving the importer to guess with incomplete
information.  In the case of vk (turnip) exporting and gl (freedreno)
importing, the "OPTIMAL_TILING_EXT" layout depends on VkImageCreateInfo
flags (among other things), which the importer does not know.  That
unfortunately leaves us needing a metadata back-channel.

The contents of the metadata are defined by userspace.  The
EXT_external_objects extension is only required to work between
compatible versions of gl and vk drivers, as defined by device and
driver UUIDs.

v2: add missing metadata kfree
v3: Rework to move copy_from/to_user out from under gem obj lock
    to avoid angering lockdep about deadlocks against fs-reclaim

Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/566157/
2023-11-20 18:33:17 -08:00
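
A sketch of the v3 rework's shape, with hypothetical names (set_metadata,
slot): copy from userspace into a temporary buffer first, and only then
take the GEM object lock, so fs-reclaim never nests inside it:

    static int set_metadata(struct drm_gem_object *obj, void **slot,
                            const void __user *umeta, u32 size)
    {
            void *buf, *old;

            buf = memdup_user(umeta, size); /* may sleep / enter reclaim */
            if (IS_ERR(buf))
                    return PTR_ERR(buf);

            dma_resv_lock(obj->resv, NULL); /* obj lock held only for the swap */
            old = *slot;
            *slot = buf;
            dma_resv_unlock(obj->resv);

            kfree(old);
            return 0;
    }
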
Rob Clark
7391c282ba drm/msm: Remove vma use tracking
This was not strictly necessary, as page unpinning (ie. shrinker) only
cares about the resv.  It did give us some extra sanity checking for
userspace controlled iova, and it was useful to catch issues on the
kernel and userspace side when enabling userspace iova.  But if userspace
screws this up, it just corrupts its own GPU buffers and/or gets iova
faults.  So we can just let userspace shoot itself in the foot and drop
the extra per-buffer SUBMIT overhead.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Patchwork: https://patchwork.freedesktop.org/patch/551023/
2023-08-10 13:08:03 -07:00
Rob Clark
fc896cf3d6 drm/msm: Take lru lock once per submit_pin_objects()
Split out pin_count incrementing and lru updating into a separate loop
so we can take the lru lock only once for all objs.  Since we are still
holding the obj lock, it is safe to split this up.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/551025/
2023-08-10 13:08:02 -07:00
Rob Clark
6ba5daa5d5 drm/msm: Use drm_gem_object in submit bos table
Basically everywhere wants the base ptr type.  So store that instead of
msm_gem_object.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/551021/
2023-08-10 10:44:02 -07:00
Rob Clark
17b704f1c0 drm/msm/gem: Avoid obj lock in job_run()
Now that everything that controls which LRU an obj lives in *except* the
backing pages is protected by the LRU lock, add a special path to unpin
in the job_run() path, where we are assured that we already have backing
pages and will not be racing against eviction (because the GEM object's
dma_resv contains the fence that will be signaled when the submit/job
completes).

Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/527845/
Link: https://lore.kernel.org/r/20230320144356.803762-10-robdclark@gmail.com
2023-03-25 16:31:44 -07:00
Rob Clark
6c7c8fb863 drm/msm/gem: Protect pin_count/madv by LRU lock
Since the LRU lock is already acquired when moving an obj between LRUs,
we can use it to protect pin_count and madv, without any significant
change in locking (ie. it just expands the scope of the lock by a
handful of instructions).  This prepares the way to decrement the pin_count
in the job_run() path without needing to hold the obj lock, to avoid a
potential deadlock (or rather stall) caused by the fence-signaling path
(job_run()) blocking on shrinker/reclaim.  (Only a stall because the
fence-signaling wait in wait_for_idle() is not infinite.)

Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/527843/
Link: https://lore.kernel.org/r/20230320144356.803762-9-robdclark@gmail.com
2023-03-25 16:31:44 -07:00
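
In outline, and with illustrative names (lru_lock, pin_count) rather
than the actual msm structures, the locking change amounts to something
like:

    struct my_bo {
            struct mutex *lru_lock;         /* shared LRU lock */
            int pin_count;                  /* now protected by lru_lock */
            int madv;                       /* now protected by lru_lock */
    };

    /* Called from job_run(): no obj (resv) lock taken, so the
     * fence-signaling path cannot stall behind shrinker/reclaim. */
    static void my_bo_unpin_fenced(struct my_bo *bo)
    {
            mutex_lock(bo->lru_lock);
            bo->pin_count--;
            mutex_unlock(bo->lru_lock);
    }
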
Rob Clark
b14b8c5f0e drm/msm: Decouple vma tracking from obj lock
We need to use the inuse count to track that a BO is pinned until
we have the hw_fence.  But we want to remove the obj lock from the
job_run() path as this could deadlock against reclaim/shrinker
(because it is blocking the hw_fence from eventually being signaled).
So split that tracking out into a per-vma lock with narrower scope.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/527839/
Link: https://lore.kernel.org/r/20230320144356.803762-5-robdclark@gmail.com
2023-03-25 16:31:44 -07:00
Rob Clark
fc2f07566a drm/msm/gem: Tidy up VMA API
Stop open coding VMA construction, which will be needed in the next
commit.  And since the VMA already has a ptr to the address space, stop
passing that around everywhere.  (Also, an aspace always has an mmu, so
we can drop a couple of pointless NULL checks.)

Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/527833/
Link: https://lore.kernel.org/r/20230320144356.803762-4-robdclark@gmail.com
2023-03-25 16:31:44 -07:00
Rob Clark
d95c196ddb drm/msm/gem: Convert to lockdep assert
Utilize the power of lockdep for our GEM locking related sanity
checking.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/496139/
Link: https://lore.kernel.org/r/20220802155152.1727594-16-robdclark@gmail.com
2022-08-28 08:31:49 -07:00
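
One way such an assert can be expressed with lockdep, assuming the GEM
lock is the object's dma_resv (a sketch, not necessarily the exact msm
macro):

    /* compiles away to nothing when lockdep is disabled */
    #define my_gem_assert_locked(obj) \
            dma_resv_assert_held((obj)->resv)
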
Rob Clark
d4d7d3630d drm/msm/gem: Add msm_gem_assert_locked()
All use of msm_gem_is_locked() is just for WARN_ON()s, so extract that
out into an msm_gem_assert_locked() helper.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/496136/
Link: https://lore.kernel.org/r/20220802155152.1727594-15-robdclark@gmail.com
2022-08-27 09:32:45 -07:00
Rob Clark
b352ba54a8 drm/msm/gem: Convert to using drm_gem_lru
This converts over to use the shared GEM LRU/shrinker helpers.  Note
that it means we are no longer tracking purgeable or willneed buffers
that are active separately.  But the most recently pinned buffers should
be at the tail of the various LRUs, and the shrinker is already prepared
to encounter objects which are still active.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/496131/
Link: https://lore.kernel.org/r/20220802155152.1727594-11-robdclark@gmail.com
2022-08-27 09:32:45 -07:00
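
A sketch of the shared helpers being adopted; "gpu_lru" and "lru_lock"
are illustrative names rather than the driver's actual LRUs:

    static struct mutex lru_lock;
    static struct drm_gem_lru gpu_lru;

    static void init_lru(void)
    {
            mutex_init(&lru_lock);
            drm_gem_lru_init(&gpu_lru, &lru_lock);
    }

    static void mark_recently_pinned(struct drm_gem_object *obj)
    {
            /* recently pinned objects go to the tail, so the shrinker
             * encounters the least recently used ones first */
            drm_gem_lru_move_tail(&gpu_lru, obj);
    }
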
Rob Clark
da53d8b546 drm/msm/gem: Remove active refcnt
At this point the pinned refcnt is sufficient, and the shrinker is
already prepared to encounter objects which are still active according
to fences attached to the resv.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/496122/
Link: https://lore.kernel.org/r/20220802155152.1727594-9-robdclark@gmail.com
2022-08-27 09:32:44 -07:00
Rob Clark
e7cd5ee9aa drm/msm/gem: Rename to pin/unpin_pages
Since that is what these functions actually do: they get *pinned* pages
(as opposed to cases where we need pages, but don't need them pinned,
like CPU mappings).

Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/496121/
Link: https://lore.kernel.org/r/20220802155152.1727594-7-robdclark@gmail.com
2022-08-27 09:32:44 -07:00
Rob Clark
01780d0263 drm/msm/gem: Check for active in shrinker path
Currently in our shrinker path we shouldn't be encountering anything
that is active, but this will change in subsequent patches.  So check
if there are unsignaled fences.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/496117/
Link: https://lore.kernel.org/r/20220802155152.1727594-5-robdclark@gmail.com
2022-08-27 09:32:44 -07:00
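
A hedged sketch of such a check against the resv fences; the choice of
DMA_RESV_USAGE_BOOKKEEP (i.e. consider every attached fence) is an
assumption, not necessarily what msm uses:

    static bool my_gem_active(struct drm_gem_object *obj)
    {
            /* true if any fence attached to the resv is still unsignaled */
            return !dma_resv_test_signaled(obj->resv, DMA_RESV_USAGE_BOOKKEEP);
    }
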
Rob Clark
a414fe3a21 drm/msm/gem: Drop obj lock in msm_gem_free_object()
The only reason we grabbed the lock was to satisfy a bunch of places
that WARN_ON() if called without the lock held.  But this angers lockdep,
which doesn't realize no one else can be holding the lock by the time we
end up destroying the object (and sees what would otherwise be a locking
inversion between reservation_ww_class_mutex and fs_reclaim).

Closes: https://gitlab.freedesktop.org/drm/msm/-/issues/14
Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/489364/
Link: https://lore.kernel.org/r/20220613205032.2652374-1-robdclark@gmail.com
2022-07-06 18:54:41 -07:00
Rob Clark
6867c9aff8 drm/msm: Make msm_gem_free_object() static
Misc small cleanup I noticed.  Not called from another object file since
commit 3c9edd9c85 ("drm/msm: Introduce GEM object funcs")

Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Abhinav Kumar <quic_abhinavk@quicinc.com>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Patchwork: https://patchwork.freedesktop.org/patch/489362/
Link: https://lore.kernel.org/r/20220613204910.2651747-1-robdclark@gmail.com
Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
2022-07-05 06:01:07 +03:00
Rob Clark
311e03c29c drm/msm/gem: Separate object and vma unpin
Previously the BO_PINNED state in the submit was tracking two related
but different things: (1) that the buffer object was pinned, and (2)
that the vma (mapping within a set of pagetables) was pinned.  But with
fenced vma unpin (needed so that userspace couldn't race with retire
path for releasing a vma) these two were decoupled.  The fact that the
BO_PINNED flag was already cleared meant that we leaked the bo pin count
which should have been dropped when the submit was retired.

So split this state into BO_OBJ_PINNED and BO_VMA_PINNED, so they can be
dropped independently.

Fixes: 95d1deb02a ("drm/msm/gem: Add fenced vma unpin")
Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/487559/
Link: https://lore.kernel.org/r/20220527172341.2151005-1-robdclark@gmail.com
2022-06-15 13:06:54 -07:00
Rob Clark
a636a0ff11 drm/msm: Add a way for userspace to allocate GPU iova
The motivation at this point is mainly a native userspace mesa driver in
a VM guest.  The one remaining synchronous "hotpath" is buffer
allocation, because the guest needs to wait to learn the bo's iova before
it can start emitting cmdstream/state that references the new bo.  By
allocating the iova in guest userspace, we no longer need to wait for a
response from the host, but can just rely on the allocation request being
processed before the cmdstream submission.  Allocation failures (OoM,
etc.) would just be treated as context-lost (ie. GL_GUILTY_CONTEXT_RESET),
or subsequent allocations (or readpix, etc.) can raise GL_OUT_OF_MEMORY.

v2: Fix inuse check
v3: Change mismatched iova case to -EBUSY

Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Reviewed-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Link: https://lore.kernel.org/r/20220411215849.297838-11-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2022-04-21 15:03:12 -07:00
Rob Clark
95d1deb02a drm/msm/gem: Add fenced vma unpin
With userspace allocated iova (next patch), we can have a race condition
where userspace observes the fence completion and deletes the vma before
retire_submit() gets around to unpinning the vma.  To handle this, add a
fenced unpin which drops the refcount but tracks the fence, and update
msm_gem_vma_inuse() to check any previously unsignaled fences.

v2: Fix inuse underflow (duplicate unpin)
v3: Fix msm_job_run() vs submit_cleanup() race condition

Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20220411215849.297838-10-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2022-04-21 15:03:12 -07:00
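
The rough shape of a fenced unpin, with hypothetical field and function
names: drop the refcount immediately, remember the fence, and keep
reporting the vma as in use until that fence signals:

    struct my_vma {
            unsigned int inuse;
            struct dma_fence *last_fence;
    };

    static void my_vma_unpin_fenced(struct my_vma *vma, struct dma_fence *f)
    {
            vma->inuse--;
            dma_fence_put(vma->last_fence);         /* NULL-safe */
            vma->last_fence = dma_fence_get(f);
    }

    static bool my_vma_inuse(struct my_vma *vma)
    {
            if (vma->inuse)
                    return true;
            return vma->last_fence && !dma_fence_is_signaled(vma->last_fence);
    }
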
Rob Clark
27674c6668 drm/msm/gem: Split vma lookup and pin
This way we only look up the vma once per object per submit, for both
the submit and retire paths.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20220411215849.297838-9-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2022-04-21 15:03:12 -07:00
Rob Clark
d413e6f971 drm/msm: Drop msm_gem_iova()
There was only a single user, which could just as easily stash the iova
when pinning.

v2: fix prepare->prepare->cleanup->cleanup sequences

Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Link: https://lore.kernel.org/r/20220411215849.297838-7-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2022-04-21 15:03:11 -07:00
Rob Clark
2ee4b5d265 drm/msm/gem: Drop PAGE_SHIFT for address space mm
Get rid of all the unnecessary conversion between address/size and page
offsets.  It just confuses things.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Link: https://lore.kernel.org/r/20220411215849.297838-6-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2022-04-21 15:03:11 -07:00
Rob Clark
ca35ab2a20 drm/msm/gem: Split out inuse helper
Prep for a following patch, where it gets a bit more complicated.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Link: https://lore.kernel.org/r/20220411215849.297838-5-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2022-04-21 15:03:11 -07:00
Rob Clark
695383a138 drm/msm/gem: Move prototypes
These belong more cleanly in the gem header.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Link: https://lore.kernel.org/r/20220411215849.297838-2-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2022-04-21 15:03:11 -07:00
Rob Clark
1019933385 drm/msm: Remove unused field in submit
Noticed this was unused and never set.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20220225202614.225197-2-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2022-04-21 15:00:28 -07:00
Rob Clark
bc2112583a drm/msm/gpu: Track global faults per address-space
Other processes don't need to know about faults that they are isolated
from by virtue of address space isolation.  They are only interested in
whether some of their state might have been corrupted.

But to be safe, also track unattributed faults.  This case should really
never happen unless there is a kernel bug (and that would never happen,
right?)

v2: Instead of adding a new param, just change the behavior of the
    existing param to match what userspace actually wants [anholt]

Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/5934
Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20220201161618.778455-3-robdclark@gmail.com
Reviewed-by: Emma Anholt <emma@anholt.net>
Signed-off-by: Rob Clark <robdclark@chromium.org>
2022-02-20 09:44:52 -08:00
Maxime Ripard
2f76520561
Merge drm/drm-next into drm-misc-next
Kickstart new drm-misc-next cycle.

Signed-off-by: Maxime Ripard <maxime@cerno.tech>
2021-09-14 09:25:30 +02:00
Daniel Vetter
80bcfbd376 drm/msm: Use scheduler dependency handling
drm_sched_job_init is already at the right place, so this boils down
to deleting code.

Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Rob Clark <robdclark@gmail.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Cc: Sean Paul <sean@poorly.run>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: "Christian König" <christian.koenig@amd.com>
Cc: linux-arm-msm@vger.kernel.org
Cc: freedreno@lists.freedesktop.org
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
Link: https://patchwork.freedesktop.org/patch/msgid/20210805104705.862416-13-daniel.vetter@ffwll.ch
2021-08-30 10:59:31 +02:00
Thomas Zimmermann
510410bfc0 drm/msm: Implement mmap as GEM object function
Moving the driver-specific mmap code into a GEM object function allows
for using DRM helpers for various mmap callbacks.

The respective msm functions are being removed. The file_operations
structure fops is now being created by the helper macro
DEFINE_DRM_GEM_FOPS().

v2:
	* rebase onto latest upstream
	* remove declaration of msm_gem_mmap_obj() from msm_fbdev.c

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Link: https://lore.kernel.org/r/20210706084753.8194-1-tzimmermann@suse.de
[squash in missing VM_DONTEXPAND flag]
Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-08-07 11:48:37 -07:00
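
In outline, the conversion looks roughly like the following;
my_gem_object_mmap stands in for the driver-specific mmap body:

    DEFINE_DRM_GEM_FOPS(fops);

    static const struct drm_gem_object_funcs my_gem_object_funcs = {
            /* ... */
            .mmap = my_gem_object_mmap, /* (struct drm_gem_object *, struct vm_area_struct *) */
    };

    static const struct drm_driver my_driver = {
            /* ... */
            .fops = &fops,  /* drm_gem_mmap() in fops routes to obj->funcs->mmap */
    };
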
Rob Clark
bd0b8e9f9c drm/msm: Drop submit bo_list
This was only used to detect userspace including the same bo multiple
times in a submit.  But ww_mutex can already tell us this.

When we drop struct_mutex around the submit ioctl, we'd otherwise need
to lock the bo before adding it to the bo_list.  But since ww_mutex can
already tell us this, it is simpler just to remove the bo_list.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20210728010632.2633470-11-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-07-28 09:19:00 -07:00
Rob Clark
1d8a5ca436 drm/msm: Conversion to drm scheduler
For existing adrenos, there are one or more ringbuffers, depending on
whether preemption is supported.  When preemption is supported, each
ringbuffer has its own priority.  A submitqueue (which maps to a gl
context or vk queue in userspace) is mapped to a specific ringbuffer at
creation time, based on the submitqueue's priority.

Each ringbuffer has its own drm_gpu_scheduler.  Each submitqueue
maps to a drm_sched_entity.  And each submit maps to a drm_sched_job.

Closes: https://gitlab.freedesktop.org/drm/msm/-/issues/4
Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Christian König <christian.koenig@amd.com>
Link: https://lore.kernel.org/r/20210728010632.2633470-10-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-07-28 09:19:00 -07:00
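
A sketch of that mapping (one scheduler per ring, one entity per
submitqueue), using hypothetical container types; each submit is then a
drm_sched_job pushed to the queue's entity:

    struct my_ring  { struct drm_gpu_scheduler sched; };
    struct my_queue { struct drm_sched_entity entity; };

    static int bind_queue_to_ring(struct my_queue *queue, struct my_ring *ring,
                                  enum drm_sched_priority prio)
    {
            struct drm_gpu_scheduler *sched = &ring->sched;

            /* the submitqueue becomes an entity feeding its ring's scheduler */
            return drm_sched_entity_init(&queue->entity, prio, &sched, 1, NULL);
    }
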
Rob Clark
a61acbbe9c drm/msm: Track "seqno" fences by idr
Previously the (non-fd) fence returned from submit ioctl was a raw
seqno, which is scoped to the ring.  But from UABI standpoint, the
ioctls related to seqno fences all specify a submitqueue.  We can
take advantage of that to replace the seqno fences with a cyclic idr
handle.

This is in preparation for moving to the drm scheduler, at which point
the submit ioctl will return after queuing the submit job to the
scheduler, but before the submit is written into the ring (and
therefore before a ring seqno has been assigned).  Which means we
need to replace the dma_fence that userspace may need to wait on
with a scheduler fence.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Christian König <christian.koenig@amd.com>
Link: https://lore.kernel.org/r/20210728010632.2633470-8-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-07-27 18:09:18 -07:00
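
A sketch of the per-submitqueue handle allocation, with hypothetical
lock/field names and assuming the idr was idr_init()'d elsewhere:

    struct my_submitqueue {
            struct mutex idr_lock;
            struct idr fence_idr;
    };

    static int alloc_fence_id(struct my_submitqueue *queue,
                              struct dma_fence *fence)
    {
            int id;

            mutex_lock(&queue->idr_lock);
            /* cyclic allocation so handles wrap instead of growing forever */
            id = idr_alloc_cyclic(&queue->fence_idr, fence, 1, INT_MAX, GFP_KERNEL);
            mutex_unlock(&queue->idr_lock);

            return id;      /* negative errno on failure */
    }
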
Rob Clark
be40596bb5 drm/msm: Consolidate submit bo state
Move all the locked/active/pinned state handling to msm_gem_submit.c.
In particular, for drm/scheduler, we'll need to do all this before
pushing the submit job to the scheduler.  But while we're at it we can
get rid of the duplicate pin and refcnt.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Christian König <christian.koenig@amd.com>
Link: https://lore.kernel.org/r/20210728010632.2633470-7-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-07-27 18:09:18 -07:00
Rob Clark
030af2b05a drm/msm: drop drm_gem_object_put_locked()
No idea why we were still using this.  It certainly hasn't been needed
for some time.  So drop the pointless twin codepaths.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Christian König <christian.koenig@amd.com>
Link: https://lore.kernel.org/r/20210728010632.2633470-4-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-07-27 18:09:18 -07:00
Rob Clark
375f9a63a6 drm/msm: Docs and misc cleanup
Fix a couple of incorrect or misspelt comments, and add a submitqueue
doc comment.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Christian König <christian.koenig@amd.com>
Link: https://lore.kernel.org/r/20210728010632.2633470-2-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-07-27 18:09:17 -07:00
Rob Clark
e25e92e08e drm/msm: devcoredump iommu fault support
Wire up support to stall the SMMU on iova fault, and collect a
devcoredump snapshot for easier debugging of faults.

Currently this is a6xx-only, but mostly only because so far it is the
only one using adreno-smmu-priv.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Jordan Crouse <jordan@cosmicpenguin.net>
Link: https://lore.kernel.org/r/20210610214431.539029-6-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-06-23 07:33:55 -07:00
Rob Clark
10f76165d3 drm/msm: Do not unpin/evict exported dma-buf's
Our initial logic for excluding dma-bufs was not quite right.  In
particular, we want the msm_gem_get/put_pages() path used for exported
dma-bufs to increment/decrement the pin-count.

Also, in case the importer is vmap'ing the dma-buf, we need to be
sure to update the object's status, because it is now no longer
potentially evictable.

Fixes: 63f17ef834 ("drm/msm: Support evicting GEM objects to swap")
Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20210426235326.1230125-1-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-04-27 10:10:12 -07:00
Rob Clark
64fcbde772 drm/msm: Track potentially evictable objects
Objects that are potential for swapping out are (1) willneed (ie. if
they are purgeable/MADV_WONTNEED we can just free the pages without them
having to land in swap), (2) not on an active list, (3) not dma-buf
imported or exported, and (4) not vmap'd.  This repurposes the purged
list for objects that do not have backing pages (either because they
have not been pinned for the first time yet, or in a later patch because
they have been unpinned/evicted).

Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20210405174532.1441497-7-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-04-07 11:05:47 -07:00
Rob Clark
f48f356330 drm/msm: Add $debugfs/gem stats on resident objects
Currently nearly everything, other than newly allocated objects which
are not yet backed by pages, is pinned and resident in RAM.  But it will
be nice to have some stats on what is unpinned once that is supported.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20210405174532.1441497-6-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-04-07 11:05:47 -07:00
Rob Clark
90643a24a7 drm/msm: ratelimit GEM related WARN_ON()s
If you mess something up, you don't really need to see the same WARN_ON()
splat 4000 times pumped out over a slow debug UART port.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20210405174532.1441497-2-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-04-07 11:05:47 -07:00
Rob Clark
0054eeb72a drm/msm: Fix spelling "purgable" -> "purgeable"
The previous patch fixes the user visible spelling.  This one fixes the
code.  Oops.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20210406151816.1515329-1-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-04-07 11:05:43 -07:00
Rob Clark
528107c8e6 drm/msm: Improved debugfs gem stats
The last patch lost the breakdown of active vs inactive GEM objects in
$debugfs/gem.  But we can add some better stats to summarize not just
active vs inactive, but also purgeable/purged, to make up for that.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Tested-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Link: https://lore.kernel.org/r/20210401012722.527712-5-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-04-07 11:05:43 -07:00
Rob Clark
6ed0897cd8 drm/msm: Fix debugfs deadlock
In normal cases the gem obj lock is acquired first before mm_lock.  The
exception is iterating the various object lists.  In the shrinker path,
deadlock is avoided by using msm_gem_trylock() and skipping over objects
that cannot be locked.  But for debugfs the straightforward thing is to
split things out into a separate list of all objects, protected by its
own lock.

Fixes: d984457b31 ("drm/msm: Add priv->mm_lock to protect active/inactive lists")
Signed-off-by: Rob Clark <robdclark@chromium.org>
Tested-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Link: https://lore.kernel.org/r/20210401012722.527712-4-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-04-07 11:05:43 -07:00
Rob Clark
cc8a4d5a1b drm/msm: Avoid mutex in shrinker_count()
When the system is under heavy memory pressure, we can end up with lots
of concurrent calls into the shrinker.  Keeping a running tab on what we
can shrink avoids grabbing a lock in shrinker->count(), and avoids
shrinker->scan() getting called when not profitable.

Also, we can keep purged objects in their own list to avoid re-traversing
them, which helps further cut down time in the critical section.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Tested-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Link: https://lore.kernel.org/r/20210401012722.527712-3-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-04-07 11:05:42 -07:00
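
The idea in miniature, with illustrative names: keep a running count
updated wherever object state changes (under the locks already held
there), and let the shrinker's count() just read it, so the common
"nothing to shrink" case takes no lock and scan() is skipped:

    static atomic_long_t shrinkable_count = ATOMIC_LONG_INIT(0);

    static void mark_purgeable(void)   { atomic_long_inc(&shrinkable_count); }
    static void mark_unpurgeable(void) { atomic_long_dec(&shrinkable_count); }

    static unsigned long my_shrinker_count(struct shrinker *shrinker,
                                           struct shrink_control *sc)
    {
            /* returning 0 tells the core not to bother calling scan() */
            return atomic_long_read(&shrinkable_count);
    }
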
Rob Clark
bc90dc33c4 drm/msm: Remove unused freed llist node
Unused since commit c951a9b284 ("drm/msm: Remove msm_gem_free_work")

Signed-off-by: Rob Clark <robdclark@chromium.org>
Tested-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Link: https://lore.kernel.org/r/20210401012722.527712-2-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-04-07 11:05:42 -07:00
Dave Airlie
60f2f74978 Merge tag 'drm-msm-next-2020-12-07' of https://gitlab.freedesktop.org/drm/msm into drm-next
* Shutdown hook for GPU (to ensure GPU is idle before iommu goes away)
* GPU cooling device support
* DSI 7nm and 10nm phy/pll updates
* Additional sm8150/sm8250 DPU support (merge_3d and DSPP color
  processing)
* Various DP fixes
* A whole bunch of W=1 fixes from Lee Jones
* GEM locking re-work (no more trylock_recursive in shrinker!)
* LLCC (system cache) support
* Various other fixes/cleanups

Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Rob Clark <robdclark@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/CAF6AEGt0G=H3_RbF_GAQv838z5uujSmFd+7fYhL6Yg=23LwZ=g@mail.gmail.com
2020-12-10 09:42:47 +10:00