History log of /linux-master/include/drm/drm_exec.h
Revision Date Author Comments
# cf41cebf 19-Jan-2024 Thomas Hellström <thomas.hellstrom@linux.intel.com>

drm/exec, drm/gpuvm: Prefer u32 over uint32_t

The relatively recently introduced drm/exec utility was using uint32_t
in its interface, which was then also carried over to drm/gpuvm.

Prefer u32 in new code and update drm/exec and drm/gpuvm accordingly.
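
As an illustration (hypothetical declarations, not a hunk from the patch),
the change amounts to swapping the userspace-style fixed-width type for the
kernel-internal one in the interface:

    /* before */
    void drm_exec_init(struct drm_exec *exec, uint32_t flags, unsigned nr);

    /* after */
    void drm_exec_init(struct drm_exec *exec, u32 flags, unsigned nr);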

Cc: Christian König <christian.koenig@amd.com>
Cc: Danilo Krummrich <dakr@redhat.com>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Danilo Krummrich <dakr@redhat.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240119090557.6360-1-thomas.hellstrom@linux.intel.com


# 05d24935 20-Nov-2023 Rob Clark <robdclark@chromium.org>

drm/exec: Pass in initial # of objects

In cases where the number of objects is known ahead of time, it is silly
to do the table resize dance.
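
A minimal usage sketch, assuming the extended interface described here in
which drm_exec_init() takes the expected object count as an extra argument
(example_init and num_bos are hypothetical names):

    #include <drm/drm_exec.h>

    /* num_bos is a hypothetical, caller-known object count. */
    static void example_init(struct drm_exec *exec, unsigned int num_bos)
    {
            /*
             * Sizing the internal objects table up front avoids the
             * repeated reallocation while locking many objects.
             */
            drm_exec_init(exec, DRM_EXEC_INTERRUPTIBLE_WAIT, num_bos);
    }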

Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Christian König <christian.koenig@amd.com>
Patchwork: https://patchwork.freedesktop.org/patch/568338/


# d20b484c 06-Sep-2023 Thomas Hellström <thomas.hellstrom@linux.intel.com>

drm/drm_exec: Work around a WW mutex lockdep oddity

If *any* object of a certain WW mutex class is locked, lockdep will
consider *all* mutexes of that class as locked. Also, the lock allocation
tracking code will apparently register only the address of the first
mutex of a given class locked in a sequence.
This has the odd consequence that if that first mutex is unlocked while
other mutexes of the same class remain locked, and its memory is then
freed, the lock allocation tracking code will incorrectly assume that the
memory is freed with a lock still held in it.

For now, work around that for drm_exec by releasing the first grabbed
object lock last.
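
A minimal sketch of the idea, not the actual drm_exec code: walk the locked
objects in reverse acquisition order so that the object locked first (the
one lockdep registered for the whole class) has its lock released last. The
function name is illustrative only:

    #include <drm/drm_gem.h>
    #include <linux/dma-resv.h>

    static void example_unlock_all_reverse(struct drm_gem_object **objects,
                                           unsigned long num_objects)
    {
            unsigned long i;

            /* objects[0] was locked first, so its lock is dropped last. */
            for (i = num_objects; i-- > 0;)
                    dma_resv_unlock(objects[i]->resv);
    }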

v2:
- Fix a typo (Danilo Krummrich)
- Reword the commit message a bit.
- Add a Fixes: tag

Related lock alloc tracking warning:
[ 322.660067] =========================
[ 322.660070] WARNING: held lock freed!
[ 322.660074] 6.5.0-rc7+ #155 Tainted: G U N
[ 322.660078] -------------------------
[ 322.660081] kunit_try_catch/4981 is freeing memory ffff888112adc000-ffff888112adc3ff, with a lock still held there!
[ 322.660089] ffff888112adc1a0 (reservation_ww_class_mutex){+.+.}-{3:3}, at: drm_exec_lock_obj+0x11a/0x600 [drm_exec]
[ 322.660104] 2 locks held by kunit_try_catch/4981:
[ 322.660108] #0: ffffc9000343fe18 (reservation_ww_class_acquire){+.+.}-{0:0}, at: test_early_put+0x22f/0x490 [drm_exec_test]
[ 322.660123] #1: ffff888112adc1a0 (reservation_ww_class_mutex){+.+.}-{3:3}, at: drm_exec_lock_obj+0x11a/0x600 [drm_exec]
[ 322.660135]
stack backtrace:
[ 322.660139] CPU: 7 PID: 4981 Comm: kunit_try_catch Tainted: G U N 6.5.0-rc7+ #155
[ 322.660146] Hardware name: ASUS System Product Name/PRIME B560M-A AC, BIOS 0403 01/26/2021
[ 322.660152] Call Trace:
[ 322.660155] <TASK>
[ 322.660158] dump_stack_lvl+0x57/0x90
[ 322.660164] debug_check_no_locks_freed+0x20b/0x2b0
[ 322.660172] slab_free_freelist_hook+0xa1/0x160
[ 322.660179] ? drm_exec_unlock_all+0x168/0x2a0 [drm_exec]
[ 322.660186] __kmem_cache_free+0xb2/0x290
[ 322.660192] drm_exec_unlock_all+0x168/0x2a0 [drm_exec]
[ 322.660200] drm_exec_fini+0xf/0x1c0 [drm_exec]
[ 322.660206] test_early_put+0x289/0x490 [drm_exec_test]
[ 322.660215] ? __pfx_test_early_put+0x10/0x10 [drm_exec_test]
[ 322.660222] ? __kasan_check_byte+0xf/0x40
[ 322.660227] ? __ksize+0x63/0x140
[ 322.660233] ? drmm_add_final_kfree+0x3e/0xa0 [drm]
[ 322.660289] ? _raw_spin_unlock_irqrestore+0x30/0x60
[ 322.660294] ? lockdep_hardirqs_on+0x7d/0x100
[ 322.660301] ? __pfx_kunit_try_run_case+0x10/0x10 [kunit]
[ 322.660310] ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10 [kunit]
[ 322.660319] kunit_generic_run_threadfn_adapter+0x4a/0x90 [kunit]
[ 322.660328] kthread+0x2e7/0x3c0
[ 322.660334] ? __pfx_kthread+0x10/0x10
[ 322.660339] ret_from_fork+0x2d/0x70
[ 322.660345] ? __pfx_kthread+0x10/0x10
[ 322.660349] ret_from_fork_asm+0x1b/0x30
[ 322.660358] </TASK>
[ 322.660818] ok 8 test_early_put

Cc: Christian König <christian.koenig@amd.com>
Cc: Boris Brezillon <boris.brezillon@collabora.com>
Cc: Danilo Krummrich <dakr@redhat.com>
Cc: dri-devel@lists.freedesktop.org
Fixes: 09593216bff1 ("drm: execution context for GEM buffers v7")
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Danilo Krummrich <dakr@redhat.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230906095039.3320-4-thomas.hellstrom@linux.intel.com


# 616bceae 31-Jul-2023 Christian König <christian.koenig@amd.com>

drm/exec: use unique instead of local label

GCC forbids jumping to labels placed in loop conditions, and a new clang
check stumbled over this.

So instead of using a local label inside the loop condition, use a
unique label outside of it.
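
A standalone sketch of the pattern (not the actual drm_exec macros), using
GNU C statement expressions and computed gotos. The rejected form declared
a local __label__ inside the statement expression forming the loop
condition and jumped back to it from the body; the fix keeps a uniquely
named label outside the loop and only takes its address from within the
condition:

    #include <stdio.h>

    int main(void)
    {
            int attempts = 0;
            void *retry_ptr;

    unique_retry_label:                /* label outside the loop condition */
            for (; ({ retry_ptr = &&unique_retry_label; attempts < 3; });) {
                    attempts++;
                    printf("attempt %d\n", attempts);
                    if (attempts == 1)
                            goto *retry_ptr;   /* restart from the label */
            }

            return 0;
    }

In drm_exec itself the jump back to that label is hidden behind
drm_exec_retry_on_contention().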

Fixes: 09593216bff1 ("drm: execution context for GEM buffers v7")
Link: https://gcc.gnu.org/onlinedocs/gcc/Statement-Exprs.html
Link: https://github.com/ClangBuiltLinux/linux/issues/1890
Link: https://github.com/llvm/llvm-project/commit/20219106060208f0c2f5d096eb3aed7b712f5067
Reported-by: Nathan Chancellor <nathan@kernel.org>
Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
CC: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230731123625.3766-1-christian.koenig@amd.com


# 09593216 07-Apr-2022 Christian König <christian.koenig@amd.com>

drm: execution context for GEM buffers v7

This adds the infrastructure for an execution context for GEM buffers,
which is similar to the existing TTM's execbuf util and intended to replace
it in the long term.

The basic functionality is that it abstracts the necessary loop to lock
many different GEM buffers, with automated deadlock and duplicate handling.
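
A minimal usage sketch, assuming the interface as merged in this version
(drm_exec_init() later also gained an initial object count argument, see
the commits above); lock_two_bos is a hypothetical caller and error
handling is abbreviated:

    #include <drm/drm_exec.h>

    static int lock_two_bos(struct drm_gem_object *a, struct drm_gem_object *b)
    {
            struct drm_exec exec;
            int ret;

            drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT);
            drm_exec_until_all_locked(&exec) {
                    /* Lock each object and reserve one fence slot on it. */
                    ret = drm_exec_prepare_obj(&exec, a, 1);
                    drm_exec_retry_on_contention(&exec);
                    if (ret)
                            goto out;

                    ret = drm_exec_prepare_obj(&exec, b, 1);
                    drm_exec_retry_on_contention(&exec);
                    if (ret)
                            goto out;
            }

            /* All objects stay locked until drm_exec_fini(). */
    out:
            drm_exec_fini(&exec);
            return ret;
    }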

v2: drop the xarray and use a dynamically resized array instead; the
locking overhead is unnecessary and measurable.
v3: drop duplicate tracking, radeon is really the only one needing that.
v4: fix issues pointed out by Danilo: some typos in comments, and add a
helper for locking arrays of GEM objects.
v5: apply some suggestions by Boris Brezillon, especially just use one
retry macro, drop the loop in prepare_array, use flags instead of a bool.
v6: minor changes suggested by Thomas, Boris and Danilo.
v7: fix minor typos pointed out by checkpatch.pl.

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Danilo Krummrich <dakr@redhat.com>
Tested-by: Danilo Krummrich <dakr@redhat.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230711133122.3710-2-christian.koenig@amd.com