History log of /linux-master/drivers/iommu/iommufd/pages.c
# 361d744d 30-Oct-2023 Jason Gunthorpe <jgg@ziepe.ca>

iommufd: Add iopt_area_alloc()

We never initialize the two interval tree nodes, and zero fill is not the
same as RB_CLEAR_NODE. This can hide issues where we missed adding the
area to the trees. Factor out the allocation and clear the two nodes.
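
A hedged sketch of the resulting helper (the field names node and
pages_node are assumptions taken from the commit text, not checked against
the tree). Zero fill leaves the rb parent pointer NULL, while
RB_CLEAR_NODE() marks the node as genuinely empty so later RB_EMPTY_NODE()
checks behave:

static struct iopt_area *iopt_area_alloc(void)
{
	struct iopt_area *area;

	area = kzalloc(sizeof(*area), GFP_KERNEL_ACCOUNT);
	if (!area)
		return NULL;
	/* Zero fill is not the same as RB_CLEAR_NODE */
	RB_CLEAR_NODE(&area->node.rb);
	RB_CLEAR_NODE(&area->pages_node.rb);
	return area;
}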

Fixes: 51fe6141f0f6 ("iommufd: Data structure to provide IOVA to PFN mapping")
Link: https://lore.kernel.org/r/20231030145035.GG691768@ziepe.ca
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>


# b7c822fa 25-Jul-2023 Jason Gunthorpe <jgg@ziepe.ca>

iommufd: Set end correctly when doing batch carry

Even though the test suite covers this, it somehow went unnoticed that
this wasn't working.

The test iommufd_ioas.mock_domain.access_domain_destory would occasionally
blow up.

end should be set to 1 because this just pushed an item, the carry, to the
pfns list.
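
A hedged sketch of the tail of batch_clear_carry() after this fix
(surrounding code assumed from the commit text, not the literal diff):

	/* The carried entry now occupies slot 0 of the batch */
	batch->total_pfns = keep_pfns;
	batch->pfns[0] = batch->pfns[batch->end - 1] +
			 (batch->npfns[batch->end - 1] - keep_pfns);
	batch->npfns[0] = keep_pfns;
	batch->end = 1;	/* was 0, leaving the carry invisible */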

Sometimes the test would blow up with:

BUG: kernel NULL pointer dereference, address: 0000000000000000
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 0 P4D 0
Oops: 0000 [#1] SMP
CPU: 5 PID: 584 Comm: iommufd Not tainted 6.5.0-rc1-dirty #1236
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014
RIP: 0010:batch_unpin+0xa2/0x100 [iommufd]
Code: 17 48 81 fe ff ff 07 00 77 70 48 8b 15 b7 be 97 e2 48 85 d2 74 14 48 8b 14 fa 48 85 d2 74 0b 40 0f b6 f6 48 c1 e6 04 48 01 f2 <48> 8b 3a 48 c1 e0 06 89 ca 48 89 de 48 83 e7 f0 48 01 c7 e8 96 dc
RSP: 0018:ffffc90001677a58 EFLAGS: 00010246
RAX: 00007f7e2646f000 RBX: 0000000000000000 RCX: 0000000000000001
RDX: 0000000000000000 RSI: 00000000fefc4c8d RDI: 0000000000fefc4c
RBP: ffffc90001677a80 R08: 0000000000000048 R09: 0000000000000200
R10: 0000000000030b98 R11: ffffffff81f3bb40 R12: 0000000000000001
R13: ffff888101f75800 R14: ffffc90001677ad0 R15: 00000000000001fe
FS: 00007f9323679740(0000) GS:ffff8881ba540000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000000 CR3: 0000000105ede003 CR4: 00000000003706a0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
? show_regs+0x5c/0x70
? __die+0x1f/0x60
? page_fault_oops+0x15d/0x440
? lock_release+0xbc/0x240
? exc_page_fault+0x4a4/0x970
? asm_exc_page_fault+0x27/0x30
? batch_unpin+0xa2/0x100 [iommufd]
? batch_unpin+0xba/0x100 [iommufd]
__iopt_area_unfill_domain+0x198/0x430 [iommufd]
? __mutex_lock+0x8c/0xb80
? __mutex_lock+0x6aa/0xb80
? xa_erase+0x28/0x30
? iopt_table_remove_domain+0x162/0x320 [iommufd]
? lock_release+0xbc/0x240
iopt_area_unfill_domain+0xd/0x10 [iommufd]
iopt_table_remove_domain+0x195/0x320 [iommufd]
iommufd_hw_pagetable_destroy+0xb3/0x110 [iommufd]
iommufd_object_destroy_user+0x8e/0xf0 [iommufd]
iommufd_device_detach+0xc5/0x140 [iommufd]
iommufd_selftest_destroy+0x1f/0x70 [iommufd]
iommufd_object_destroy_user+0x8e/0xf0 [iommufd]
iommufd_destroy+0x3a/0x50 [iommufd]
iommufd_fops_ioctl+0xfb/0x170 [iommufd]
__x64_sys_ioctl+0x40d/0x9a0
do_syscall_64+0x3c/0x80
entry_SYSCALL_64_after_hwframe+0x46/0xb0

Link: https://lore.kernel.org/r/3-v1-85aacb2af554+bc-iommufd_syz3_jgg@nvidia.com
Cc: <stable@vger.kernel.org>
Fixes: f394576eb11d ("iommufd: PFN handling for iopt_pages")
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Reported-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>


# 0b295316 17-May-2023 Lorenzo Stoakes <lstoakes@gmail.com>

mm/gup: remove unused vmas parameter from pin_user_pages_remote()

No invocation of pin_user_pages_remote() uses the vmas parameter, so
remove it. This forms part of a larger patch set eliminating the use of
the vmas parameters altogether.
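
The prototype after the removal, as the commit describes it (verify
against include/linux/mm.h for the exact kernel version):

long pin_user_pages_remote(struct mm_struct *mm, unsigned long start,
			   unsigned long nr_pages, unsigned int gup_flags,
			   struct page **pages, int *locked);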

Link: https://lkml.kernel.org/r/28f000beb81e45bf538a2aaa77c90f5482b67a32.1684350871.git.lstoakes@gmail.com
Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Janosch Frank <frankja@linux.ibm.com>
Cc: Jarkko Sakkinen <jarkko@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Sakari Ailus <sakari.ailus@linux.intel.com>
Cc: Sean Christopherson <seanjc@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>


# 13a0d1ae 30-Mar-2023 Jason Gunthorpe <jgg@ziepe.ca>

iommufd: Do not corrupt the pfn list when doing batch carry

If batch->end is 0 then setting npfns[0] before computing the new value of
pfns will fail to adjust the pfn and result in various page accounting
corruptions. It should be ordered after.
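
A hedged sketch of the ordering problem (names follow the commit text, not
the literal diff): when the carried entry is slot 0 itself, writing
npfns[0] first clobbers the value the pfns[0] computation still needs to
read:

	/* Broken: npfns[0] is overwritten before it is read back via
	 * npfns[batch->end - 1] when the carry source is slot 0. */
	batch->npfns[0] = keep_pfns;
	batch->pfns[0] = batch->pfns[batch->end - 1] +
			 (batch->npfns[batch->end - 1] - keep_pfns);

	/* Fixed: compute pfns[0] from the old count first */
	batch->pfns[0] = batch->pfns[batch->end - 1] +
			 (batch->npfns[batch->end - 1] - keep_pfns);
	batch->npfns[0] = keep_pfns;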

This seems to result in various kinds of page meta-data corruption related
failures:

WARNING: CPU: 1 PID: 527 at mm/gup.c:75 try_grab_folio+0x503/0x740
Modules linked in:
CPU: 1 PID: 527 Comm: repro Not tainted 6.3.0-rc2-eeac8ede1755+ #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
RIP: 0010:try_grab_folio+0x503/0x740
Code: e3 01 48 89 de e8 6d c1 dd ff 48 85 db 0f 84 7c fe ff ff e8 4f bf dd ff 49 8d 47 ff 48 89 45 d0 e9 73 fe ff ff e8 3d bf dd ff <0f> 0b 31 db e9 d0 fc ff ff e8 2f bf dd ff 48 8b 5d c8 31 ff 48 89
RSP: 0018:ffffc90000f37908 EFLAGS: 00010046
RAX: 0000000000000000 RBX: 00000000fffffc02 RCX: ffffffff81504c26
RDX: 0000000000000000 RSI: ffff88800d030000 RDI: 0000000000000002
RBP: ffffc90000f37948 R08: 000000000003ca24 R09: 0000000000000008
R10: 000000000003ca00 R11: 0000000000000023 R12: ffffea000035d540
R13: 0000000000000001 R14: 0000000000000000 R15: ffffea000035d540
FS: 00007fecbf659740(0000) GS:ffff88807dd00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00000000200011c3 CR3: 000000000ef66006 CR4: 0000000000770ee0
PKRU: 55555554
Call Trace:
<TASK>
internal_get_user_pages_fast+0xd32/0x2200
pin_user_pages_fast+0x65/0x90
pfn_reader_user_pin+0x376/0x390
pfn_reader_next+0x14a/0x7b0
pfn_reader_first+0x140/0x1b0
iopt_area_fill_domain+0x74/0x210
iopt_table_add_domain+0x30e/0x6e0
iommufd_device_selftest_attach+0x7f/0x140
iommufd_test+0x10ff/0x16f0
iommufd_fops_ioctl+0x206/0x330
__x64_sys_ioctl+0x10e/0x160
do_syscall_64+0x3b/0x90
entry_SYSCALL_64_after_hwframe+0x72/0xdc

Cc: <stable@vger.kernel.org>
Fixes: f394576eb11d ("iommufd: PFN handling for iopt_pages")
Link: https://lore.kernel.org/r/3-v1-ceab6a4d7d7a+94-iommufd_syz_jgg@nvidia.com
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reported-by: Pengfei Xu <pengfei.xu@intel.com>
Tested-by: Pengfei Xu <pengfei.xu@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>


# 727c28c1 30-Mar-2023 Jason Gunthorpe <jgg@ziepe.ca>

iommufd: Fix unpinning of pages when an access is present

syzkaller found that the calculation of batch_last_index should use
'start_index': at input to this function the batch is either empty or has
already been adjusted to cross any accesses, so it will start at the point
we are unmapping from.
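
The shape of the fix, as a hedged sketch (illustrative, not the literal
diff): the last index a partially filled batch covers must be derived from
where the batch starts, not anchored to the end of the whole unmap range:

	/* The batch begins at start_index (it was already clipped at any
	 * access boundary), so its last index is offset from the start. */
	batch_last_index = start_index + batch->total_pfns - 1;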

Getting this wrong causes the unmap to run past the end of the pages,
which corrupts pages that were never mapped. In most cases this trips the
pinned page accounting debug WARN_ON:

WARNING: CPU: 0 PID: 557 at drivers/iommu/iommufd/pages.c:294 __iopt_area_unfill_domain+0x152/0x560
Modules linked in:
CPU: 0 PID: 557 Comm: repro Not tainted 6.3.0-rc2-eeac8ede1755 #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
RIP: 0010:__iopt_area_unfill_domain+0x152/0x560
Code: d2 0f ff 44 8b 64 24 54 48 8b 44 24 48 31 ff 44 89 e6 48 89 44 24 38 e8 fc d3 0f ff 45 85 e4 0f 85 eb 01 00 00 e8 0e d2 0f ff <0f> 0b e8 07 d2 0f ff 48 8b 44 24 38 89 5c 24 58 89 18 8b 44 24 54
RSP: 0018:ffffc9000108baf0 EFLAGS: 00010246
RAX: 0000000000000000 RBX: 00000000ffffffff RCX: ffffffff821e3f85
RDX: 0000000000000000 RSI: ffff88800faf0000 RDI: 0000000000000002
RBP: ffffc9000108bd18 R08: 000000000003ca25 R09: 0000000000000014
R10: 000000000003ca00 R11: 0000000000000024 R12: 0000000000000004
R13: 0000000000000801 R14: 00000000000007ff R15: 0000000000000800
FS: 00007f3499ce1740(0000) GS:ffff88807dc00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000020000243 CR3: 00000000179c2001 CR4: 0000000000770ef0
PKRU: 55555554
Call Trace:
<TASK>
iopt_area_unfill_domain+0x32/0x40
iopt_table_remove_domain+0x23f/0x4c0
iommufd_device_selftest_detach+0x3a/0x90
iommufd_selftest_destroy+0x55/0x70
iommufd_object_destroy_user+0xce/0x130
iommufd_destroy+0xa2/0xc0
iommufd_fops_ioctl+0x206/0x330
__x64_sys_ioctl+0x10e/0x160
do_syscall_64+0x3b/0x90
entry_SYSCALL_64_after_hwframe+0x72/0xdc

Also add some useful WARN_ON sanity checks.

Cc: <stable@vger.kernel.org>
Fixes: 8d160cd4d506 ("iommufd: Algorithms for PFN storage")
Link: https://lore.kernel.org/r/2-v1-ceab6a4d7d7a+94-iommufd_syz_jgg@nvidia.com
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reported-by: Pengfei Xu <pengfei.xu@intel.com>
Tested-by: Pengfei Xu <pengfei.xu@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>


# e4395701 30-Mar-2023 Jason Gunthorpe <jgg@ziepe.ca>

iommufd: Check for uptr overflow

syzkaller found that setting up a map with a user VA that wraps past zero
can trigger WARN_ONs, particularly from pin_user_pages weirdly returning 0
due to invalid arguments.

Prevent creating an iopt_pages with a uptr and size whose sum would
overflow.
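
A hedged sketch of the check, assuming the pages-allocation path receives
the user pointer and byte length (surrounding function assumed, not the
literal diff):

	unsigned long end;

	/* Reject any (uptr, length) whose end would wrap past zero */
	if (check_add_overflow((unsigned long)uptr, length, &end))
		return ERR_PTR(-EOVERFLOW);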

WARNING: CPU: 0 PID: 518 at drivers/iommu/iommufd/pages.c:793 pfn_reader_user_pin+0x2e6/0x390
Modules linked in:
CPU: 0 PID: 518 Comm: repro Not tainted 6.3.0-rc2-eeac8ede1755+ #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
RIP: 0010:pfn_reader_user_pin+0x2e6/0x390
Code: b1 11 e9 25 fe ff ff e8 28 e4 0f ff 31 ff 48 89 de e8 2e e6 0f ff 48 85 db 74 0a e8 14 e4 0f ff e9 4d ff ff ff e8 0a e4 0f ff <0f> 0b bb f2 ff ff ff e9 3c ff ff ff e8 f9 e3 0f ff ba 01 00 00 00
RSP: 0018:ffffc90000f9fa30 EFLAGS: 00010246
RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffffffff821e2b72
RDX: 0000000000000000 RSI: ffff888014184680 RDI: 0000000000000002
RBP: ffffc90000f9fa78 R08: 00000000000000ff R09: 0000000079de6f4e
R10: ffffc90000f9f790 R11: ffff888014185418 R12: ffffc90000f9fc60
R13: 0000000000000002 R14: ffff888007879800 R15: 0000000000000000
FS: 00007f4227555740(0000) GS:ffff88807dc00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000020000043 CR3: 000000000e748005 CR4: 0000000000770ef0
PKRU: 55555554
Call Trace:
<TASK>
pfn_reader_next+0x14a/0x7b0
? interval_tree_double_span_iter_update+0x11a/0x140
pfn_reader_first+0x140/0x1b0
iopt_pages_rw_slow+0x71/0x280
? __this_cpu_preempt_check+0x20/0x30
iopt_pages_rw_access+0x2b2/0x5b0
iommufd_access_rw+0x19f/0x2f0
iommufd_test+0xd11/0x16f0
? write_comp_data+0x2f/0x90
iommufd_fops_ioctl+0x206/0x330
__x64_sys_ioctl+0x10e/0x160
? __pfx_iommufd_fops_ioctl+0x10/0x10
do_syscall_64+0x3b/0x90
entry_SYSCALL_64_after_hwframe+0x72/0xdc

Cc: <stable@vger.kernel.org>
Fixes: 8d160cd4d506 ("iommufd: Algorithms for PFN storage")
Link: https://lore.kernel.org/r/1-v1-ceab6a4d7d7a+94-iommufd_syz_jgg@nvidia.com
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reported-by: Pengfei Xu <pengfei.xu@intel.com>
Tested-by: Pengfei Xu <pengfei.xu@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>


# e787a38e 23-Jan-2023 Jason Gunthorpe <jgg@ziepe.ca>

iommufd: Use GFP_KERNEL_ACCOUNT for iommu_map()

iommufd follows the same design as KVM and uses memory cgroups to limit
the amount of kernel memory an iommufd file descriptor can pin down. The
various internal data structures already use GFP_KERNEL_ACCOUNT.

However, one of the biggest consumers of kernel memory is the IOPTEs
stored under the iommu_domain. Many drivers will allocate these at
iommu_map() time and will trivially do the right thing if we pass in
GFP_KERNEL_ACCOUNT.
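
Given the gfp parameter added to iommu_map() by the companion patch below
in this log, a hedged sketch of what an iommufd call site becomes
(variable names illustrative):

	/* IOPTE allocations inside the driver are now charged to the
	 * caller's memory cgroup, like the other iommufd allocations */
	rc = iommu_map(domain, iova, paddr, size, prot, GFP_KERNEL_ACCOUNT);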

Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/5-v3-76b587fe28df+6e3-iommu_map_gfp_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>


# 1369459b 23-Jan-2023 Jason Gunthorpe <jgg@ziepe.ca>

iommu: Add a gfp parameter to iommu_map()

The internal mechanisms support this, but instead of exposing the gfp to
the caller it is wrapped inside iommu_map() and iommu_map_atomic().

Fix this instead of adding more variants for GFP_KERNEL_ACCOUNT.
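
The resulting prototype, as this series leaves it (verify against
include/linux/iommu.h for the exact kernel version):

int iommu_map(struct iommu_domain *domain, unsigned long iova,
	      phys_addr_t paddr, size_t size, int prot, gfp_t gfp);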

Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Link: https://lore.kernel.org/r/1-v3-76b587fe28df+6e3-iommu_map_gfp_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>


# a26fa392 07-Dec-2022 Jason Gunthorpe <jgg@ziepe.ca>

iommufd: Improve a few unclear bits of code

Correct a few items noticed late in review:

- We should assert that the math in batch_clear_carry() doesn't underflow
(see the sketch below)

- user->locked should be -1, not 0, since we just did mmput

- npages should not have been recalculated; it already has that value

No functional change.
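
A hedged sketch of the underflow assertion from the first item (placement
inside batch_clear_carry() is assumed from the commit text):

	/* The carry math underflows if the batch is empty or its last
	 * entry holds fewer than keep_pfns pages; only checked when the
	 * test config is enabled. */
	if (IS_ENABLED(CONFIG_IOMMUFD_TEST))
		WARN_ON(!batch->end ||
			batch->npfns[batch->end - 1] < keep_pfns);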

Fixes: 8d160cd4d506 ("iommufd: Algorithms for PFN storage")
Link: https://lore.kernel.org/r/2-v1-0362a1a1c034+98-iommufd_fixes1_jgg@nvidia.com
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reported-by: Binbin Wu <binbin.wu@linux.intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>


# c9b8a83a 07-Dec-2022 Jason Gunthorpe <jgg@ziepe.ca>

iommufd: Fix comment typos

Repair some typos in comments that were noticed late in the review
cycle.

Fixes: f394576eb11d ("iommufd: PFN handling for iopt_pages")
Link: https://lore.kernel.org/r/1-v1-0362a1a1c034+98-iommufd_fixes1_jgg@nvidia.com
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reported-by: Binbin Wu <binbin.wu@linux.intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>


# 52f52858 29-Nov-2022 Jason Gunthorpe <jgg@ziepe.ca>

iommufd: Add additional invariant assertions

These are on performance paths, so we protect them with CONFIG_IOMMUFD_TEST
to avoid taking a hit during normal operation.

These are useful when running the test suite and syzkaller to find data
structure inconsistencies early.
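
The pattern, as a hedged sketch (the invariant shown is hypothetical;
IS_ENABLED() folds to a compile-time constant, so the whole branch
vanishes from production builds):

	if (IS_ENABLED(CONFIG_IOMMUFD_TEST))
		WARN_ON(start_index > last_index);	/* hypothetical invariant */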

Link: https://lore.kernel.org/r/18-v6-a196d26f289e+11787-iommufd_jgg@nvidia.com
Tested-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Matthew Rosato <mjrosato@linux.ibm.com> # s390
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>


# e26eed4f 29-Nov-2022 Jason Gunthorpe <jgg@ziepe.ca>

iommufd: Add some fault injection points

This increases the coverage the fail_nth test gets, as well as the
coverage obtained via syzkaller.
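
The points presumably follow the kernel's standard fault-attr pattern; a
hedged sketch (the iommufd_should_fail() helper name is taken from the
iommufd tree, its exact behavior assumed):

	/* A fault injection point: fail_nth-style control can force this
	 * allocation-like path to fail on demand during testing */
	if (iommufd_should_fail())
		return -ENOMEM;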

Link: https://lore.kernel.org/r/17-v6-a196d26f289e+11787-iommufd_jgg@nvidia.com
Tested-by: Matthew Rosato <mjrosato@linux.ibm.com> # s390
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>


# f4b20bb3 29-Nov-2022 Jason Gunthorpe <jgg@ziepe.ca>

iommufd: Add kernel support for testing iommufd

Provide a mock kernel module for the iommu_domain that allows it to run
without any HW; the mocking provides a way to directly validate that the
PFNs loaded into the iommu_domain are correct. This exposes the access kAPI
toward userspace to allow userspace to explore the functionality of
pages.c and io_pagetable.c.

The mock also simulates the rare case of PAGE_SIZE > iommu page size as
the mock will operate at a 2K iommu page size. This allows exercising all
of the calculations to support this mismatch.

This is also intended to support syzkaller exploring the same space.

However, it is an unusually invasive config option to enable all of
this. The config option should not be enabled in a production kernel.

Link: https://lore.kernel.org/r/16-v6-a196d26f289e+11787-iommufd_jgg@nvidia.com
Tested-by: Matthew Rosato <mjrosato@linux.ibm.com> # s390
Tested-by: Eric Auger <eric.auger@redhat.com> # aarch64
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>


# 8d160cd4 29-Nov-2022 Jason Gunthorpe <jgg@ziepe.ca>

iommufd: Algorithms for PFN storage

The iopt_pages represents a logical linear list of full PFNs held in
different storage tiers. Each area points to a slice of exactly one
iopt_pages, and each iopt_pages can have multiple areas and accesses.

The three storage tiers are managed to meet these objectives:

- If no iommu_domain or in-kernel access exists then minimal memory
should be consumed by iommufd
- If a page has been pinned then an iopt_pages will not pin it again
- If an in-kernel access exists then the xarray must provide the backing
storage to avoid allocations on domain removals
- Otherwise any iommu_domain will be used for storage

In a common configuration with only an iommu_domain, the iopt_pages does
not allocate significant memory itself.

The external interface for pages has several logical operations:

iopt_area_fill_domain() will load the PFNs from storage into a single
domain. This is used when attaching a new domain to an existing IOAS.

iopt_area_fill_domains() will load the PFNs from storage into multiple
domains. This is used when creating a new IOVA map in an existing IOAS.

iopt_pages_add_access() creates an iopt_pages_access that tracks an
in-kernel access of PFNs. This is some external driver that might be
accessing the IOVA using the CPU, or programming PFNs with the DMA
API, i.e. a VFIO mdev.

iopt_pages_rw_access() directly performs a memcpy on the PFNs, without
the overhead of iopt_pages_add_access().

iopt_pages_fill_xarray() will load PFNs into the xarray and return a
'struct page *' array. It is used by iopt_pages_access's to extract PFNs
for in-kernel use. iopt_pages_fill_from_xarray() is a fast path when it
is known the xarray is already filled.
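
A hedged sketch of how these operations compose (signatures approximated
from the descriptions above, not verified against the tree):

	/* Attaching a new domain to an existing IOAS: each area backed by
	 * this iopt_pages is loaded into the new domain */
	rc = iopt_area_fill_domain(area, domain);
	if (rc)
		return rc;

	/* An in-kernel access instead pins through the xarray and gets
	 * struct page pointers back */
	rc = iopt_pages_fill_xarray(pages, start_index, last_index,
				    out_pages);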

As an iopt_pages can be referred to in slices by many areas and accesses,
it uses interval trees to keep track of which storage tiers currently hold
the PFNs. On a page-by-page basis, any request for a PFN will be satisfied
from one of the storage tiers and the PFN copied to the target
domain/array.

Unfill actions are similar: on a page-by-page basis, domains are unmapped,
xarray entries freed, or struct pages fully put back.

Significant complexity is required to fully optimize all of these data
motions. The implementation calculates the largest consecutive range of
same-storage indexes and operates in blocks. The accumulation of PFNs
always generates the largest contiguous PFN range possible, and this
gathering can cross storage tier boundaries. For cases like 'fill
domains' care is taken to avoid duplicated work and PFNs are read once and
pushed into all domains.

The map/unmap interaction with the iommu_domain always works in contiguous
PFN blocks. The implementation does not require or benefit from any
split/merge optimization in the iommu_domain driver.

This design suggests several possible improvements in the IOMMU API that
would greatly help performance, particularly a way for the driver to map
and read the pfns lists instead of working with one driver call per page
to read, and one driver call per contiguous range to store.

Link: https://lore.kernel.org/r/9-v6-a196d26f289e+11787-iommufd_jgg@nvidia.com
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Tested-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Lixiao Yang <lixiao.yang@intel.com>
Tested-by: Matthew Rosato <mjrosato@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>


# f394576e 29-Nov-2022 Jason Gunthorpe <jgg@ziepe.ca>

iommufd: PFN handling for iopt_pages

The top of the data structure provides an IO Address Space (IOAS) that is
similar to a VFIO container. The IOAS allows map/unmap of memory into
ranges of IOVA called iopt_areas. Multiple IOMMU domains (IO page tables)
and in-kernel accesses (like VFIO mdevs) can be attached to the IOAS to
access the PFNs that those IOVA areas cover.

The IO Address Space (IOAS) data structure is composed of:
- struct io_pagetable holding the IOVA map
- struct iopt_areas representing populated portions of IOVA
- struct iopt_pages representing the storage of PFNs
- struct iommu_domain representing each IO page table in the system IOMMU
- struct iopt_pages_access representing in-kernel accesses of PFNs (i.e.
VFIO mdevs)
- struct xarray pinned_pfns holding a list of pages pinned by in-kernel
accesses

This patch introduces the lowest part of the data structure - the movement
of PFNs in a tiered storage scheme:
1) iopt_pages::pinned_pfns xarray
2) Multiple iommu_domains
3) The origin of the PFNs, i.e. the userspace pointer

PFNs have to be copied between all combinations of tiers, depending on the
configuration.

The interface is an iterator called a 'pfn_reader' which determines the
tier in which each PFN is stored and loads it into a list of PFNs held in
a struct pfn_batch.

Each step of the iterator will fill up the pfn_batch, then the caller can
use the pfn_batch to send the PFNs to the required destination. Repeating
this loop will read all the PFNs in an IOVA range.
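
A hedged sketch of that iteration pattern (the function names appear in
the call traces elsewhere in this log; argument details are assumed):

	struct pfn_reader pfns;
	int rc;

	rc = pfn_reader_first(&pfns, pages, start_index, last_index);
	if (rc)
		return rc;
	while (!pfn_reader_done(&pfns)) {
		/* consume pfns.batch here, e.g. push it into a domain */
		rc = pfn_reader_next(&pfns);
		if (rc)
			break;
	}
	pfn_reader_destroy(&pfns);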

The pfn_reader and pfn_batch also keep track of the pinned page accounting.

While PFNs are always stored and accessed as full PAGE_SIZE units, the
iommu_domain tier can store with a sub-page offset/length to support
IOMMUs with a smaller IOPTE size than PAGE_SIZE.

Link: https://lore.kernel.org/r/8-v6-a196d26f289e+11787-iommufd_jgg@nvidia.com
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Tested-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Lixiao Yang <lixiao.yang@intel.com>
Tested-by: Matthew Rosato <mjrosato@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>