History log of /linux-master/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmtu102.c
Revision Date Author Comments
# cb9c9193 29-Nov-2023 Dave Airlie <airlied@redhat.com>

nouveau/tu102: flush all pdbs on vmm flush

This is a hack around a bug exposed by the GSP code. I'm not sure
exactly what is happening, but it appears some of our flushes don't
result in proper TLB invalidation for our BAR2, and we then get a BAR2
fault from GSP and it all dies.

Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Danilo Krummrich <dakr@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231130010852.4034774-1-airlied@gmail.com


# 5bf02571 18-Sep-2023 Ben Skeggs <bskeggs@redhat.com>

drm/nouveau/mmu/r535: initial support

- Valid VRAM regions are read from GSP-RM, and used to construct our MM
- BAR1/BAR2 VMMs modified to be shared with RM
- Client VMMs have RM VASPACE objects created for them
- Adds FBSR to back up system objects in VRAM across suspend

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230918202149.4343-37-skeggsb@gmail.com


# 743b7fc4 18-Sep-2023 Ben Skeggs <bskeggs@redhat.com>

drm/nouveau/mmu/tu102-: remove write to 0x100e68 during tlb invalidate

This was cargo-culted from traces of RM when the code was written, but
we probably shouldn't be touching NV_PFB regs while GSP-RM is running.

From traces, it looks like NVIDIA dropped this sometime between 510.54
and 515.48.07, so I guess we can too.

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230918202149.4343-2-skeggsb@gmail.com


# 17008293 19-Sep-2023 Ben Skeggs <bskeggs@redhat.com>

drm/nouveau/mmu/gp100-: always invalidate TLBs at CACHE_LEVEL_ALL

Fixes some issues when running on top of RM.

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Reviewed-by: Lyude Paul <lyude@redhat.com>
Acked-by: Danilo Krummrich <me@dakr.org>
Signed-off-by: Lyude Paul <lyude@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230919220442.202488-5-lyude@redhat.com


# 5ec69c91 02-Dec-2020 Ben Skeggs <bskeggs@redhat.com>

drm/nouveau/mmu: serialise mmu invalidations with private mutex

nvkm_subdev.mutex is going away.

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Reviewed-by: Lyude Paul <lyude@redhat.com>


# b9f327f1 09-Jun-2020 Ben Skeggs <bskeggs@redhat.com>

drm/nouveau/mmu/gp100-: enable mmu invalidate depth optimisation

This causes us to invalidate the MMU only at the level at which we made
modifications - i.e. if we've only modified PTEs, there's no need to have
the MMU dump the PDs it has fetched into L2.

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>


# ab2ee9ff 08-May-2018 Ben Skeggs <bskeggs@redhat.com>

drm/nouveau/mmu/gp100-: support vmms with gcc/tex replayable faults enabled

Some GPU units are capable of supporting "replayable" page faults, where
the execution unit will wait for SW to fix up the GPU page tables rather
than triggering a channel-fatal fault.

This feature isn't useful (it's harmful, even) unless something like HMM
is being used to manage events appearing in the replayable fault buffer,
so it's disabled by default.

This commit allows a client to request it be enabled.

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>


# 71871aa6 09-Jul-2018 Ben Skeggs <bskeggs@redhat.com>

drm/nouveau/mmu/gp100-: add privileged methods for fault replay/cancel

Host methods exist to do at least some of what we need, but we are not
currently pushing replay/cancels through a channel like UVM does, as it's
not clear whether that's necessary in our case (UVM also updates PTEs with
the GPU).

UVM also pushes a software method for fault cancels on Pascal, seemingly
because the host methods don't appear to be sufficient. If/when we want
to push the replay/cancel on the GPU, we can re-purpose the cancellation
code here to implement that swmthd.

Keep it simple for now, until we figure out exactly what we need here.

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>


# 2606f291 13-Jun-2018 Ben Skeggs <bskeggs@redhat.com>

drm/nouveau/mmu: support initialisation of client-managed address-spaces

NVKM is currently responsible for managing the allocation of a client's
GPU address-space, but there are various use-cases (e.g. HMM address-space
mirroring) where giving a client more direct control is desirable.

This commit allows a VMM to be created where the area allocated for
NVKM is limited to a client-specified window; the remainder of the
address-space is controlled directly by the client.

Leaving a window is necessary to support various internal requirements,
but also to support existing allocation interfaces, as not all of the HW
is capable of working with an HMM allocation.

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>


# c011b254 16-Jan-2019 Ben Skeggs <bskeggs@redhat.com>

drm/nouveau/mmu/tu102: rename implementation from tu104

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>