History log of /linux-master/drivers/gpu/drm/xen/xen_drm_front_gem.h
Revision Date Author Comments
# 7938f421 04-Feb-2022 Lucas De Marchi <lucas.demarchi@intel.com>

dma-buf-map: Rename to iosys-map

Rename struct dma_buf_map to struct iosys_map and corresponding APIs.
Over time dma-buf-map grew to provide more functionality than what
dma-buf uses: it is in fact a shim layer that abstracts system memory,
which can be accessed via regular load and store instructions, from IO
memory, which needs to be accessed via arch helpers.

The idea is to extend this API so it can fulfill other needs, internal
to a single driver. Example: in the i915 driver it's desired to share
the implementation for integrated graphics, which uses mostly system
memory, with discrete graphics, which may need to access IO memory.
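
To illustrate the abstraction (a minimal sketch, not part of this patch;
the helper name is hypothetical), the same map type describes either
kind of memory after the rename:

#include <linux/iosys-map.h>

/* Hypothetical helper, for illustration only. */
static void example_describe_memory(void *vaddr, void __iomem *vaddr_iomem,
                                    bool is_iomem)
{
        struct iosys_map map;

        if (is_iomem)
                /* IO memory: accesses must go through arch helpers. */
                iosys_map_set_vaddr_iomem(&map, vaddr_iomem);
        else
                /* System memory: plain loads and stores are fine. */
                iosys_map_set_vaddr(&map, vaddr);

        /* Callers can test and reset the map without knowing its kind. */
        if (!iosys_map_is_null(&map))
                iosys_map_clear(&map);
}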

The conversion was mostly done with the following semantic patch:

@r1@
@@
- struct dma_buf_map
+ struct iosys_map

@r2@
@@
(
- DMA_BUF_MAP_INIT_VADDR
+ IOSYS_MAP_INIT_VADDR
|
- dma_buf_map_set_vaddr
+ iosys_map_set_vaddr
|
- dma_buf_map_set_vaddr_iomem
+ iosys_map_set_vaddr_iomem
|
- dma_buf_map_is_equal
+ iosys_map_is_equal
|
- dma_buf_map_is_null
+ iosys_map_is_null
|
- dma_buf_map_is_set
+ iosys_map_is_set
|
- dma_buf_map_clear
+ iosys_map_clear
|
- dma_buf_map_memcpy_to
+ iosys_map_memcpy_to
|
- dma_buf_map_incr
+ iosys_map_incr
)

@@
@@
- #include <linux/dma-buf-map.h>
+ #include <linux/iosys-map.h>

Then some files had their includes adjusted and some comments were
updated to remove mentions of dma-buf-map.

Since this is not specific to dma-buf anymore, move the documentation to
the "Bus-Independent Device Accesses" section.

v2:
- Squash patches

v3:
- Fix wrong removal of dma-buf.h from MAINTAINERS
- Move documentation from dma-buf.rst to device-io.rst

v4:
- Change documentation title and level

Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Acked-by: Christian König <christian.koenig@amd.com>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Thomas Zimmermann <tzimmermann@suse.de>
Link: https://patchwork.freedesktop.org/patch/msgid/20220204170541.829227-1-lucas.demarchi@intel.com


# 3153c648 08-Nov-2021 Thomas Zimmermann <tzimmermann@suse.de>

drm/xen: Implement mmap as GEM object function

Moving the driver-specific mmap code into a GEM object function allows
for using DRM helpers for various mmap callbacks.

The respective xen functions are being removed. The file_operations
structure fops is now being created by the helper macro
DEFINE_DRM_GEM_FOPS().
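
For reference, the resulting pattern looks roughly like this (a sketch
with hypothetical names, not the actual driver code):

#include <linux/mm.h>
#include <drm/drm_gem.h>

/* Hypothetical per-object mmap implementation. */
static int example_gem_mmap(struct drm_gem_object *obj,
                            struct vm_area_struct *vma)
{
        /* Driver-specific mapping of the object's pages goes here. */
        return 0;
}

static const struct drm_gem_object_funcs example_gem_funcs = {
        .mmap = example_gem_mmap,       /* invoked via drm_gem_mmap() */
        /* ... free, vmap, vunmap, ... */
};

/* Replaces the hand-rolled file_operations; its .mmap is drm_gem_mmap(),
 * which dispatches to the GEM object function above. */
DEFINE_DRM_GEM_FOPS(example_fops);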

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211108102846.309-3-tzimmermann@suse.de


# 49a3f51d 03-Nov-2020 Thomas Zimmermann <tzimmermann@suse.de>

drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends

This patch replaces the use of raw pointers in the GEM objects'
vmap/vunmap functions with instances of struct dma_buf_map. GEM
backends are converted as well. For most of them, this simply changes
the returned type.

TTM-based drivers now return information about the location of the memory,
either system or I/O memory. GEM VRAM helpers and qxl now use ttm_bo_vmap()
et al. Amdgpu, nouveau and radeon use drm_gem_ttm_vmap() et al instead of
implementing their own vmap callbacks.
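
A minimal sketch of the new callback shape (hypothetical names, assuming
a system-memory backend):

#include <linux/dma-buf-map.h>
#include <linux/errno.h>
#include <drm/drm_gem.h>

/* Hypothetical helper that returns the object's kernel mapping. */
static void *example_get_vaddr(struct drm_gem_object *obj);

static int example_gem_vmap(struct drm_gem_object *obj,
                            struct dma_buf_map *map)
{
        void *vaddr = example_get_vaddr(obj);

        if (!vaddr)
                return -ENOMEM;

        /* The map now records where the memory lives, not just its
         * address; I/O memory would use dma_buf_map_set_vaddr_iomem(). */
        dma_buf_map_set_vaddr(map, vaddr);
        return 0;
}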

v7:
* init QXL cursor to mapped BO buffer (kernel test robot)
v5:
* update vkms after switch to shmem
v4:
* use ttm_bo_vmap(), drm_gem_ttm_vmap(), et al. (Daniel, Christian)
* fix a trailing { in drm_gem_vmap()
* remove several empty functions instead of converting them (Daniel)
* comment uses of raw pointers with a TODO (Daniel)
* TODO list: convert more helpers to use struct dma_buf_map

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Christian König <christian.koenig@amd.com>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20201103093015.1063-7-tzimmermann@suse.de


# 2ea2269e 30-Jun-2019 Sam Ravnborg <sam@ravnborg.org>

drm/xen: drop use of drmP.h

The drmP.h header is deprecated. Drop all uses.
Add the necessary includes/forward declarations to the header files
and fix the resulting fallout in the .c files.
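
The pattern, illustrated (the specific headers are examples only):

/* Before: one catch-all include pulled in nearly everything. */
#include <drm/drmP.h>

/* After: headers forward-declare what they only point to ... */
struct drm_device;
struct drm_gem_object;

/* ... and the .c files include exactly what they use. */
#include <drm/drm_drv.h>
#include <drm/drm_gem.h>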

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Acked-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Acked-by: Emil Velikov <emil.velikov@collabora.com>
Cc: xen-devel@lists.xenproject.org
Link: https://patchwork.freedesktop.org/patch/msgid/20190630061922.7254-3-sam@ravnborg.org


# 4394e964 17-Apr-2018 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

drm/xen-front: Remove CMA support

It turns out this was only needed to paper over a bug in the CMA
helpers, which was addressed in

commit 998fb1a0f478b83492220ff79583bf9ad538bdd8
Author: Liviu Dudau <Liviu.Dudau@arm.com>
Date: Fri Nov 10 13:33:10 2017 +0000

drm: gem_cma_helper.c: Allow importing of contiguous scatterlists with nents > 1

Without this the following pipeline didn't work:

domU:
1. xen-front allocates a non-contig buffer
2. creates grants out of it

dom0:
3. converts the grants into a dma-buf. Since they're non-contig, the
scatter-list is huge.
4. imports it into rcar-du, which requires dma-contig memory for
scanout.

-> On this platform there is an IOMMU, so in theory this should
work. But in practice it failed because of the huge number of sg
entries, even though the IOMMU driver mapped them all into a dma-contig
range.

With a guest-contig buffer allocated in step 1, this problem doesn't
exist. But there's technically no reason to require guest-contig
memory for xen buffer sharing using grants.

Given all that, the xen-front cma support is not needed and should be
removed.

Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20180417074012.21311-1-andr2000@gmail.com


# c575b7ee 03-Apr-2018 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

drm/xen-front: Add support for Xen PV display frontend

Add support for a Xen para-virtualized frontend display driver.
The accompanying backend [1] is implemented as a user-space application
with a helper library [2], capable of running as a Weston client
or DRM master.
Configuration of both the backend and the frontend is done via
Xen guest domain configuration options [3].

Driver limitations:
1. Only the primary plane, without additional properties, is supported.
2. Only one video mode is supported; its resolution is configured
via XenStore.
3. All CRTCs operate at a fixed frequency of 60 Hz.

1. Implement Xen bus state machine for the frontend driver according to
the state diagram and recovery flow from display para-virtualized
protocol: xen/interface/io/displif.h.

2. Read configuration values from XenStore according
to the xen/interface/io/displif.h protocol (a sketch follows this list):
- read connector(s) configuration
- read buffer allocation mode (backend/frontend)
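
A minimal sketch of such a read (the "be-alloc" key follows the displif
protocol; the function name is hypothetical):

#include <xen/xenbus.h>

/* Hypothetical: probe whether the backend allocates the buffers. */
static bool example_read_be_alloc(struct xenbus_device *xb_dev)
{
        return xenbus_read_unsigned(xb_dev->nodename, "be-alloc", 0);
}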

3. Handle Xen event channels (see the sketch after this list):
- create for all configured connectors and publish
corresponding ring references and event channels in Xen store,
so backend can connect
- implement event channels interrupt handlers
- create and destroy event channels with respect to Xen bus state
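
A sketch of the per-connector setup (hypothetical function, error
unwinding trimmed; the xenbus/event-channel calls are the real API):

#include <linux/interrupt.h>
#include <xen/events.h>
#include <xen/xenbus.h>

static int example_create_evtchn(struct xenbus_device *xb_dev,
                                 irq_handler_t handler, void *priv,
                                 evtchn_port_t *port, int *irq)
{
        int ret;

        /* Allocate an event channel for this frontend/backend pair. */
        ret = xenbus_alloc_evtchn(xb_dev, port);
        if (ret < 0)
                return ret;

        /* Bind it to an interrupt handler on the frontend side. */
        ret = bind_evtchn_to_irqhandler(*port, handler, 0,
                                        xb_dev->devicetype, priv);
        if (ret < 0) {
                xenbus_free_evtchn(xb_dev, *port);
                return ret;
        }
        *irq = ret;

        /* The port and ring references are then published in XenStore
         * (e.g. via xenbus_printf()) so the backend can connect. */
        return 0;
}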

4. Implement shared buffer handling according to the
para-virtualized display device protocol at xen/interface/io/displif.h
(see the sketch after this list):
- handle page directories according to displif protocol:
- allocate and share page directories
- grant references to the required set of pages for the
page directory
- allocate Xen ballooned pages via the Xen balloon driver
with alloc_xenballooned_pages/free_xenballooned_pages
- grant references to the required set of pages for the
shared buffer itself
- implement pages map/unmap for the buffers allocated by the
backend (gnttab_map_refs/gnttab_unmap_refs)
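
A sketch of the sharing path for frontend-owned pages (hypothetical
function; error unwinding trimmed, real code must also end the grants
issued so far):

#include <linux/mm.h>
#include <xen/balloon.h>
#include <xen/grant_table.h>
#include <xen/page.h>

static int example_share_buffer(domid_t backend_domid, int nr_pages,
                                struct page **pages, grant_ref_t *grefs)
{
        int i, ret;

        /* Ballooned pages are guest frames the backend can map into. */
        ret = alloc_xenballooned_pages(nr_pages, pages);
        if (ret < 0)
                return ret;

        for (i = 0; i < nr_pages; i++) {
                /* Last argument 0: grant the backend write access too. */
                ret = gnttab_grant_foreign_access(backend_domid,
                                                  xen_page_to_gfn(pages[i]),
                                                  0);
                if (ret < 0)
                        return ret;
                grefs[i] = ret;
        }

        return 0;
}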

5. Implement kernel modesetting/connector handling using the
DRM simple KMS helper pipeline (see the sketch after this list):

- implement the KMS part of the driver with the help of the DRM
simple pipeline helper, which is possible because
the para-virtualized driver only supports a single
(primary) plane:
- initialize connectors according to XenStore configuration
- handle frame done events from the backend
- create and destroy frame buffers and propagate those
to the backend
- propagate set/reset mode configuration to the backend on display
enable/disable callbacks
- send page flip request to the backend and implement logic for
reporting backend IO errors on prepare fb callback

- implement virtual connector handling:
- support only pixel formats suitable for single plane modes
- make sure the connector is always connected
- support a single video mode as per para-virtualized driver
configuration
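
A sketch of the single-plane pipeline setup (all example_* names are
hypothetical; the helper call itself is the real DRM API):

#include <drm/drm_fourcc.h>
#include <drm/drm_simple_kms_helper.h>

/* Hypothetical callbacks, defined elsewhere in this sketch. */
static void example_display_enable(struct drm_simple_display_pipe *pipe,
                                   struct drm_crtc_state *crtc_state,
                                   struct drm_plane_state *plane_state);
static void example_display_disable(struct drm_simple_display_pipe *pipe);
static int example_prepare_fb(struct drm_simple_display_pipe *pipe,
                              struct drm_plane_state *plane_state);
static void example_plane_update(struct drm_simple_display_pipe *pipe,
                                 struct drm_plane_state *old_state);

static const struct drm_simple_display_pipe_funcs example_pipe_funcs = {
        .enable = example_display_enable,       /* propagate set-mode */
        .disable = example_display_disable,     /* propagate reset-mode */
        .prepare_fb = example_prepare_fb,       /* report backend IO errors */
        .update = example_plane_update,         /* send page-flip requests */
};

static const u32 example_formats[] = { DRM_FORMAT_XRGB8888 };

static int example_pipe_init(struct drm_device *dev,
                             struct drm_simple_display_pipe *pipe,
                             struct drm_connector *connector)
{
        /* One call creates the CRTC, the single primary plane and the
         * encoder around the connector. */
        return drm_simple_display_pipe_init(dev, pipe, &example_pipe_funcs,
                                            example_formats,
                                            ARRAY_SIZE(example_formats),
                                            NULL, connector);
}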

6. Implement GEM handling depending on the driver's mode of operation:
depending on the requirements of the para-virtualized environment,
namely the requirements dictated by the accompanying DRM/(v)GPU drivers
running in both the host and guest environments, a number of operating
modes of the para-virtualized display driver are supported:
- display buffers can be allocated by either
frontend driver or backend
- display buffers can be allocated to be contiguous
in memory or not

Note! The frontend driver itself has no dependency on contiguous memory
for its operation.

6.1. Buffers allocated by the frontend driver.

The modes of operation below are configured at compile time via the
frontend driver's kernel configuration.

6.1.1. Front driver configured to use GEM CMA helpers
This use-case is useful with an accompanying DRM/vGPU driver in the
guest domain which was designed to only work with contiguous buffers,
e.g. a DRM driver based on the GEM CMA helpers: such drivers can only
import contiguous PRIME buffers, thus requiring the frontend driver to
provide such buffers. To implement this mode of operation, the
para-virtualized frontend driver can be configured to use the
GEM CMA helpers.

6.1.2. Front driver doesn't use GEM CMA
If the accompanying drivers can cope with non-contiguous memory then, to
lower the pressure on the kernel's CMA subsystem, the driver can
allocate buffers from system memory.

Note! If used with accompanying DRM/(v)GPU drivers, this mode of
operation may require IOMMU support on the platform, so that the
accompanying DRM/vGPU hardware can still reach the display buffer memory
while importing PRIME buffers from the frontend driver.

6.2. Buffers allocated by the backend

This mode of operation is run-time configured via guest domain
configuration through XenStore entries.

For systems which do not provide IOMMU support but have specific
requirements for display buffers, it is possible to allocate such
buffers at the backend side and share those with the frontend.
For example, if the host domain is 1:1 mapped and has DRM/GPU hardware
expecting physically contiguous memory, this allows implementing
zero-copy use-cases.

Note, while using this scenario the following should be considered:
a) If the guest domain dies, the pages/grants received from the
backend cannot be claimed back.
b) A misbehaving guest may send too many requests to the backend,
exhausting its grant references and memory
(consider this from a security POV).

Note! Configuration options 6.1.1 (contiguous display buffers) and 6.2
(backend-allocated buffers) are not supported at the same time.

7. Handle communication with the backend (see the sketch after this list):
- send requests and wait for the responses according
to the displif protocol
- serialize access to the communication channel
- time-out used for backend communication is set to 3000 ms
- manage display buffers shared with the backend
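
A sketch of the request/response path (all names except the kernel
primitives are hypothetical):

#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/jiffies.h>
#include <linux/mutex.h>

#define EXAMPLE_BE_TIMEOUT_MS 3000      /* the 3000 ms mentioned above */

/* Hypothetical frontend state, for illustration only. */
struct example_front {
        struct mutex req_io_lock;       /* serializes channel access */
        struct completion io_completion;
        int resp_status;                /* set by the evtchn IRQ handler */
};

static int example_send_and_wait(struct example_front *front)
{
        int ret;

        mutex_lock(&front->req_io_lock);

        reinit_completion(&front->io_completion);
        /* ...place the request on the shared ring, notify the backend... */

        if (!wait_for_completion_timeout(&front->io_completion,
                        msecs_to_jiffies(EXAMPLE_BE_TIMEOUT_MS)))
                ret = -ETIMEDOUT;
        else
                ret = front->resp_status;

        mutex_unlock(&front->req_io_lock);
        return ret;
}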

[1] https://github.com/xen-troops/displ_be
[2] https://github.com/xen-troops/libxenbe
[3] https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=docs/man/xl.cfg.pod.5.in;h=a699367779e2ae1212ff8f638eff0206ec1a1cc9;hb=refs/heads/master#l1257

Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20180403112317.28751-2-andr2000@gmail.com