.. SPDX-License-Identifier: GPL-2.0

======================
The x86 kvm shadow mmu
======================

The mmu (in arch/x86/kvm, files mmu.[ch] and paging_tmpl.h) is responsible
for presenting a standard x86 mmu to the guest, while translating guest
physical addresses to host physical addresses.

The mmu code attempts to satisfy the following requirements:

- correctness:
               the guest should not be able to determine that it is running
               on an emulated mmu except for timing (we attempt to comply
               with the specification, not emulate the characteristics of
               a particular implementation such as tlb size)
- security:
               the guest must not be able to touch host memory not assigned
               to it
- performance:
               minimize the performance penalty imposed by the mmu
- scaling:
               need to scale to large memory and large vcpu guests
- hardware:
               support the full range of x86 virtualization hardware
- integration:
               Linux memory management code must be in control of guest memory
               so that swapping, page migration, page merging, transparent
               hugepages, and similar features work without change
- dirty tracking:
               report writes to guest memory to enable live migration
               and framebuffer-based displays
- footprint:
               keep the amount of pinned kernel memory low (most memory
               should be shrinkable)
- reliability:
               avoid multipage or GFP_ATOMIC allocations

Acronyms
========

====  ====================================================================
pfn   host page frame number
hpa   host physical address
hva   host virtual address
gfn   guest frame number
gpa   guest physical address
gva   guest virtual address
ngpa  nested guest physical address
ngva  nested guest virtual address
pte   page table entry (used also to refer generically to paging structure
      entries)
gpte  guest pte (referring to gfns)
spte  shadow pte (referring to pfns)
tdp   two dimensional paging (vendor neutral term for NPT and EPT)
====  ====================================================================

Virtual and real hardware supported
===================================

The mmu supports first-generation mmu hardware, which allows an atomic switch
of the current paging mode and cr3 during guest entry, as well as
two-dimensional paging (AMD's NPT and Intel's EPT).  The emulated hardware
it exposes is the traditional 2/3/4 level x86 mmu, with support for global
pages, pae, pse, pse36, cr0.wp, and 1GB pages.  The emulated hardware is also
able to expose NPT capable hardware on NPT capable hosts.

Translation
===========

The primary job of the mmu is to program the processor's mmu to translate
addresses for the guest.  Different translations are required at different
times:

- when guest paging is disabled, we translate guest physical addresses to
  host physical addresses (gpa->hpa)
- when guest paging is enabled, we translate guest virtual addresses, to
  guest physical addresses, to host physical addresses (gva->gpa->hpa)
- when the guest launches a guest of its own, we translate nested guest
  virtual addresses, to nested guest physical addresses, to guest physical
  addresses, to host physical addresses (ngva->ngpa->gpa->hpa)

The primary challenge is to encode between 1 and 3 translations into hardware
that supports only 1 (traditional) and 2 (tdp) translations.  When the
number of required translations matches the hardware, the mmu operates in
direct mode; otherwise it operates in shadow mode (see below).

Memory
======

Guest memory (gpa) is part of the user address space of the process that is
using kvm.  Userspace defines the translation between guest addresses and user
addresses (gpa->hva); note that two gpas may alias to the same hva, but not
vice versa.
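
KVM performs this lookup through its memslots (the real code lives in
__gfn_to_hva_memslot()); the stand-alone sketch below only illustrates the
arithmetic, with simplified types and invented names::

  #include <stdint.h>

  typedef uint64_t u64;
  typedef u64 gfn_t;

  #define PAGE_SHIFT 12
  #define PAGE_MASK  ((1ULL << PAGE_SHIFT) - 1)

  /* Simplified stand-in for struct kvm_memory_slot. */
  struct memslot {
          gfn_t base_gfn;        /* first guest frame covered by the slot */
          u64   npages;          /* slot size in pages */
          u64   userspace_addr;  /* hva of the first page, set by userspace */
  };

  /* gpa->hva: find the covering slot, then offset into it. */
  static u64 gpa_to_hva(const struct memslot *slots, int nslots, u64 gpa)
  {
          gfn_t gfn = gpa >> PAGE_SHIFT;
          int i;

          for (i = 0; i < nslots; i++) {
                  const struct memslot *s = &slots[i];

                  if (gfn >= s->base_gfn && gfn < s->base_gfn + s->npages)
                          return s->userspace_addr +
                                 ((gfn - s->base_gfn) << PAGE_SHIFT) +
                                 (gpa & PAGE_MASK);
          }
          return 0;  /* no slot backs this gpa (e.g. mmio) */
  }

Two slots may hand different gpas the same userspace_addr, which is how two
gpas can alias to one hva; a single gpa, however, falls into at most one slot.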

These hvas may be backed using any method available to the host: anonymous
memory, file backed memory, and device memory.  Memory might be paged by the
host at any time.

Events
======

The mmu is driven by events, some from the guest, some from the host.

Guest generated events:

- writes to control registers (especially cr3)
- invlpg/invlpga instruction execution
- access to missing or protected translations

Host generated events:

- changes in the gpa->hpa translation (either through gpa->hva changes or
  through hva->hpa changes)
- memory pressure (the shrinker)

Shadow pages
============

The principal data structure is the shadow page, 'struct kvm_mmu_page'.  A
shadow page contains 512 sptes, which can be either leaf or nonleaf sptes.  A
shadow page may contain a mix of leaf and nonleaf sptes.

A nonleaf spte allows the hardware mmu to reach the leaf pages and
is not related to a translation directly.  It points to other shadow pages.

A leaf spte corresponds to either one or two translations encoded into
one paging structure entry.  These are always the lowest level of the
translation stack, with optional higher level translations left to NPT/EPT.
Leaf ptes point at guest pages.

The following table shows translations encoded by leaf ptes, with higher-level
translations in parentheses:

 Non-nested guests::

  nonpaging:     gpa->hpa
  paging:        gva->gpa->hpa
  paging, tdp:   (gva->)gpa->hpa

 Nested guests::

  non-tdp:       ngva->gpa->hpa  (*)
  tdp:           (ngva->)ngpa->gpa->hpa

  (*) the guest hypervisor will encode the ngva->gpa translation into its page
      tables if npt is not present

Shadow pages contain the following information:
  role.level:
    The level in the shadow paging hierarchy that this shadow page belongs to.
    1=4k sptes, 2=2M sptes, 3=1G sptes, etc.
  role.direct:
    If set, leaf sptes reachable from this page are for a linear range.
    Examples include real mode translation, large guest pages backed by small
    host pages, and gpa->hpa translations when NPT or EPT is active.
    The linear range starts at (gfn << PAGE_SHIFT) and its size is determined
    by role.level (2MB for first level, 1GB for second level, 0.5TB for third
    level, 256TB for fourth level).
    If clear, this page corresponds to a guest page table denoted by the gfn
    field.
  role.quadrant:
    When role.has_4_byte_gpte=1, the guest uses 32-bit gptes while the host
    uses 64-bit sptes.  That means a guest page table contains more ptes than
    the host, so multiple shadow pages are needed to shadow one guest page.
    For first-level shadow pages, role.quadrant can be 0 or 1 and denotes the
    first or second 512-gpte block in the guest page table.  For second-level
    page tables, each 32-bit gpte is converted to two 64-bit sptes
    (since each first-level guest page is shadowed by two first-level
    shadow pages) so role.quadrant takes values in the range 0..3.  Each
    quadrant maps 1GB of virtual address space.
  role.access:
    Inherited guest access permissions from the parent ptes in the form uwx.
    Note execute permission is positive, not negative.
  role.invalid:
    The page is invalid and should not be used.  It is a root page that is
    currently pinned (by a cpu hardware register pointing to it); once it is
    unpinned it will be destroyed.
  role.has_4_byte_gpte:
    Reflects the size of the guest PTE for which the page is valid, i.e. '0'
    if direct map or 64-bit gptes are in use, '1' if 32-bit gptes are in use.
  role.efer_nx:
    Contains the value of efer.nx for which the page is valid.
  role.cr0_wp:
    Contains the value of cr0.wp for which the page is valid.
  role.smep_andnot_wp:
    Contains the value of cr4.smep && !cr0.wp for which the page is valid
    (pages for which this is true are different from other pages; see the
    treatment of cr0.wp=0 below).
  role.smap_andnot_wp:
    Contains the value of cr4.smap && !cr0.wp for which the page is valid
    (pages for which this is true are different from other pages; see the
    treatment of cr0.wp=0 below).
  role.smm:
    Is 1 if the page is valid in system management mode.  This field
    determines which of the kvm_memslots array was used to build this
    shadow page; it is also used to go back from a struct kvm_mmu_page
    to a memslot, through the kvm_memslots_for_spte_role macro and
    __gfn_to_memslot.
  role.ad_disabled:
    Is 1 if the MMU instance cannot use A/D bits.  EPT did not have A/D
    bits before Haswell; shadow EPT page tables also cannot use A/D bits
    if the L1 hypervisor does not enable them.
  role.guest_mode:
    Indicates the shadow page is created for a nested guest.
  role.passthrough:
    The page is not backed by a guest page table, but its first entry
    points to one.  This is set if NPT uses 5-level page tables (host
    CR4.LA57=1) and is shadowing L1's 4-level NPT (L1 CR4.LA57=0).
  mmu_valid_gen:
    The MMU generation of this page, used to quickly zap all MMU pages within
    a VM without blocking vCPUs for too long.  Specifically, KVM updates the
    per-VM valid MMU generation, which causes the mmu_valid_gen of every
    existing MMU page to mismatch.  This makes all existing MMU pages
    obsolete.  Obsolete pages can't be used.  Therefore, vCPUs must load a
    new, valid root before re-entering the guest.  The MMU generation is only
    ever '0' or '1'.  Note, the TDP MMU doesn't use this field, as non-root
    TDP MMU pages are reachable only from their owning root.  Thus it
    suffices for the TDP MMU to use role.invalid in root pages to invalidate
    all MMU pages.
  gfn:
    Either the guest page table containing the translations shadowed by this
    page, or the base page frame for linear translations.  See role.direct.
  spt:
    A pageful of 64-bit sptes containing the translations for this page.
    Accessed by both kvm and hardware.
    The page pointed to by spt will have its page->private pointing back
    at the shadow page structure.
    sptes in spt point either at guest pages, or at lower-level shadow pages.
    Specifically, if sp1 and sp2 are shadow pages, then sp1->spt[n] may point
    at __pa(sp2->spt).  sp2 will point back at sp1 through parent_pte.
    The spt array forms a DAG structure with the shadow page as a node, and
    guest pages as leaves.
  shadowed_translation:
    An array of 512 shadow translation entries, one for each present pte.  Used
    to map an spte back to a gfn as well as to its access permission.  When
    role.direct is set, the shadowed_translation array is not allocated.  This
    is because the gfn contained in any element of this array can be calculated
    from the gfn field when used.  In addition, when role.direct is set, KVM
    does not track access permission for each gfn.  See role.direct and gfn.
  root_count / tdp_mmu_root_count:
    root_count is a reference counter for root shadow pages in the Shadow MMU.
    vCPUs elevate the refcount when getting a shadow page that will be used as
    a root page, i.e. a page that will be loaded into hardware directly (CR3,
    PDPTRs, nCR3, EPTP).  Root pages cannot be destroyed while their refcount
    is non-zero.  See role.invalid.  tdp_mmu_root_count is similar but is used
    exclusively by the TDP MMU as an atomic refcount.
  parent_ptes:
    The reverse mapping for the pte/ptes pointing at this page's spt.  If
    parent_ptes bit 0 is zero, only one spte points at this page and
    parent_ptes points at this single spte; otherwise, multiple sptes point at
    this page and (parent_ptes & ~0x1) points at a data structure with a list
    of parent sptes (see the tagged-pointer sketch after this list).
  ptep:
    The kernel virtual address of the SPTE that points at this shadow page.
    Used exclusively by the TDP MMU; this field is a union with parent_ptes.
  unsync:
    If true, then the translations in this page may not match the guest's
    translation.  This is equivalent to the state of the tlb when a pte is
    changed but before the tlb entry is flushed.  Accordingly, unsync ptes
    are synchronized when the guest executes invlpg or flushes its tlb by
    other means.  Valid for leaf pages.
  unsync_children:
    How many sptes in the page point at pages that are unsync (or have
    unsynchronized children).
  unsync_child_bitmap:
    A bitmap indicating which sptes in spt point (directly or indirectly) at
    pages that may be unsynchronized.  Used to quickly locate all unsynchronized
    pages reachable from a given page.
  clear_spte_count:
    Only present on 32-bit hosts, where a 64-bit spte cannot be written
    atomically.  The reader uses this while running outside of the MMU lock
    to detect in-progress updates and retry them until the writer has
    finished the write.
  write_flooding_count:
    A guest may write to a page table many times, causing a lot of
    emulations if the page needs to be write-protected (see "Synchronized
    and unsynchronized pages" below).  Leaf pages can be unsynchronized
    so that they do not trigger frequent emulation, but this is not
    possible for non-leafs.  This field counts the number of emulations
    since the last time the page table was actually used; if emulation
    is triggered too frequently on this page, KVM will unmap the page
    to avoid emulation in the future.
  tdp_mmu_page:
    Is 1 if the shadow page is a TDP MMU page.  This variable is used to
    bifurcate the control flows for KVM when walking any data structure that
    may contain pages from both the TDP MMU and the shadow MMU.
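
Here is a stand-alone sketch of the parent_ptes bit-0 encoding described in
the list above.  KVM's real implementation lives in mmu.c (pte_list_add() and
friends); the names below are invented and the descriptor is shortened::

  #include <stdint.h>

  typedef uint64_t u64;

  /* A chunk of parent-spte pointers, analogous to KVM's pte_list_desc. */
  struct pte_list_desc {
          u64 *sptes[3];               /* a few parent sptes ... */
          struct pte_list_desc *more;  /* ... plus a link to the next chunk */
  };

  /* Bit 0 of parent_ptes distinguishes the two representations. */
  static int parent_is_list(unsigned long parent_ptes)
  {
          return parent_ptes & 1;
  }

  /* Bit 0 clear: parent_ptes is the lone parent spte itself. */
  static u64 *single_parent(unsigned long parent_ptes)
  {
          return (u64 *)parent_ptes;
  }

  /* Bit 0 set: mask it off to recover the descriptor pointer. */
  static struct pte_list_desc *parent_list(unsigned long parent_ptes)
  {
          return (struct pte_list_desc *)(parent_ptes & ~1UL);
  }

The tag works because sptes and descriptors are at least 8-byte aligned, so
bit 0 of a genuine pointer is always zero.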

Reverse map
===========

The mmu maintains a reverse mapping whereby all ptes mapping a page can be
reached given its gfn.  This is used, for example, when swapping out a page.
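
A minimal user-space model of the idea follows.  KVM's actual rmap packs its
entries much like the parent_ptes encoding shown above; this sketch
deliberately simplifies that to a plain linked list with invented names::

  #include <stdint.h>
  #include <stddef.h>

  typedef uint64_t u64;
  typedef u64 gfn_t;

  /* One node per spte currently mapping the gfn. */
  struct rmap_node {
          u64 *sptep;              /* address of the mapping spte */
          struct rmap_node *next;
  };

  /*
   * Drop every translation to a gfn, e.g. before the host swaps the
   * backing page out.  rmap is a per-memslot array indexed by the
   * gfn's offset into the slot.
   */
  static void rmap_zap_gfn(struct rmap_node **rmap, gfn_t base_gfn, gfn_t gfn)
  {
          struct rmap_node *n;

          for (n = rmap[gfn - base_gfn]; n != NULL; n = n->next)
                  *n->sptep = 0;  /* the real code also flushes TLBs */
  }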

Synchronized and unsynchronized pages
=====================================

The guest uses two events to synchronize its tlb and page tables: tlb flushes
and page invalidations (invlpg).

A tlb flush means that we need to synchronize all sptes reachable from the
guest's cr3.  This is expensive, so we keep all guest page tables write
protected, and synchronize sptes to gptes when a gpte is written.

A special case is when a guest page table is reachable from the current
guest cr3.  In this case, the guest is obliged to issue an invlpg instruction
before using the translation.  We take advantage of that by removing write
protection from the guest page, and allowing the guest to modify it freely.
We synchronize modified gptes when the guest invokes invlpg.  This reduces
the amount of emulation we have to do when the guest modifies multiple gptes,
or when a guest page is no longer used as a page table and is used for
random guest data.

As a side effect we have to resynchronize all reachable unsynchronized shadow
pages on a tlb flush.
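
A condensed, stand-alone sketch of that resynchronization walk is below.
KVM's real version is mmu_sync_children() in mmu.c; here the names are
simplified and locking, tlb flushing and gpte comparison are omitted::

  #include <stdbool.h>

  #define SPTES_PER_PAGE 512

  struct sp {
          bool unsync;             /* leaf page whose sptes may be stale */
          unsigned char unsync_child_bitmap[SPTES_PER_PAGE / 8];
          struct sp *children[SPTES_PER_PAGE];  /* NULL for leaf sptes */
  };

  static bool sp_test_bit(const unsigned char *bm, unsigned int i)
  {
          return bm[i / 8] & (1u << (i % 8));
  }

  static void sync_page(struct sp *sp)
  {
          /* compare each spte against the current gpte, fix mismatches */
          sp->unsync = false;
  }

  /* Resynchronize everything reachable from a root, as on a tlb flush. */
  static void mmu_sync_children(struct sp *parent)
  {
          unsigned int i;

          for (i = 0; i < SPTES_PER_PAGE; i++) {
                  struct sp *child = parent->children[i];

                  if (!sp_test_bit(parent->unsync_child_bitmap, i))
                          continue;
                  if (child->unsync)
                          sync_page(child);
                  else
                          mmu_sync_children(child);
          }
  }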


Reaction to events
==================

- guest page fault (or npt page fault, or ept violation)

This is the most complicated event.  The cause of a page fault can be:

  - a true guest fault (the guest translation won't allow the access) (*)
  - access to a missing translation
  - access to a protected translation
    - when logging dirty pages, memory is write protected
    - synchronized shadow pages are write protected (*)
  - access to untranslatable memory (mmio)

  (*) not applicable in direct mode

Handling a page fault is performed as follows:

 - if the RSV bit of the error code is set, the page fault is caused by the
   guest accessing MMIO and cached MMIO information is available.

   - walk shadow page table
   - check for valid generation number in the spte (see "Fast invalidation of
     MMIO sptes" below)
   - cache the information to vcpu->arch.mmio_gva, vcpu->arch.mmio_access and
     vcpu->arch.mmio_gfn, and call the emulator

 - If both P bit and R/W bit of error code are set, this could possibly
   be handled as a "fast page fault" (fixed without taking the MMU lock).  See
   the description in Documentation/virt/kvm/locking.rst.  (A sketch of these
   error-code checks follows this list.)

 - if needed, walk the guest page tables to determine the guest translation
   (gva->gpa or ngpa->gpa)

   - if permissions are insufficient, reflect the fault back to the guest

 - determine the host page

   - if this is an mmio request, there is no host page; cache the info to
     vcpu->arch.mmio_gva, vcpu->arch.mmio_access and vcpu->arch.mmio_gfn

 - walk the shadow page table to find the spte for the translation,
   instantiating missing intermediate page tables as necessary

   - If this is an mmio request, cache the mmio info to the spte and set some
     reserved bit on the spte (see callers of kvm_mmu_set_mmio_spte_mask)

 - try to unsynchronize the page

   - if successful, we can let the guest continue and modify the gpte

 - emulate the instruction

   - if failed, unshadow the page and let the guest continue

 - update any translations that were modified by the instruction
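
The dispatch at the top of this list keys off architectural #PF error-code
bits.  Below is a stand-alone sketch of the first two checks; the bit
positions are the x86-defined ones, while the function names are invented::

  #include <stdbool.h>
  #include <stdint.h>

  /* x86 #PF error code bits (architectural). */
  #define PFERR_PRESENT  (1u << 0)  /* fault on a present translation */
  #define PFERR_WRITE    (1u << 1)  /* the access was a write */
  #define PFERR_USER     (1u << 2)  /* the access came from user mode */
  #define PFERR_RSVD     (1u << 3)  /* reserved bit set in a pte */
  #define PFERR_FETCH    (1u << 4)  /* the access was an instruction fetch */

  /* RSV set: KVM set reserved bits in this spte on purpose to mark mmio. */
  static bool is_cached_mmio_fault(uint32_t error_code)
  {
          return error_code & PFERR_RSVD;
  }

  /* P and W/R both set: maybe fixable without taking the MMU lock. */
  static bool may_try_fast_page_fault(uint32_t error_code)
  {
          return (error_code & (PFERR_PRESENT | PFERR_WRITE)) ==
                 (PFERR_PRESENT | PFERR_WRITE);
  }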

invlpg handling:

  - walk the shadow page hierarchy and drop affected translations
  - try to reinstantiate the indicated translation in the hope that the
    guest will use it in the near future

Guest control register updates:

- mov to cr3

  - look up new shadow roots
  - synchronize newly reachable shadow pages

- mov to cr0/cr4/efer

  - set up mmu context for new paging mode
  - look up new shadow roots
  - synchronize newly reachable shadow pages

Host translation updates:

  - mmu notifier called with updated hva
  - look up affected sptes through reverse map
  - drop (or update) translations

Emulating cr0.wp
================

If tdp is not enabled, the host must keep cr0.wp=1 so page write protection
works for the guest kernel, not guest userspace.  When the guest
cr0.wp=1, this does not present a problem.  However when the guest cr0.wp=0,
we cannot map the permissions for gpte.u=1, gpte.w=0 to any spte (the
semantics require allowing any guest kernel access plus user read access).

We handle this by mapping the permissions to two possible sptes, depending
on fault type:

- kernel write fault: spte.u=0, spte.w=1 (allows full kernel access,
  disallows user access)
- read fault: spte.u=1, spte.w=0 (allows full read access, disallows kernel
  write access)

(user write faults generate a #PF)
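
A minimal sketch of that two-way mapping, with invented names (KVM folds this
into its permission-fault handling rather than a helper like this)::

  #include <stdbool.h>

  struct spte_perms {
          bool user;   /* spte.u */
          bool write;  /* spte.w */
  };

  /*
   * Pick spte permissions for a gpte.u=1, gpte.w=0 page while the
   * guest runs with cr0.wp=0.
   */
  static struct spte_perms wp0_spte_perms(bool kernel_write_fault)
  {
          if (kernel_write_fault)
                  /* let the kernel write; hide the page from user mode */
                  return (struct spte_perms){ .user = false, .write = true };

          /* read fault: let everyone read; keep trapping kernel writes */
          return (struct spte_perms){ .user = true, .write = false };
  }

Each fault type flips the spte into the shape that satisfies the current
access, at the cost of re-faulting when the other kind of access comes along.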

In the first case there are two additional complications:

- if CR4.SMEP is enabled: since we've turned the page into a kernel page,
  the kernel may now execute it.  We handle this by also setting spte.nx.
  If we get a user fetch or read fault, we'll change spte.u=1 and
  spte.nx=gpte.nx back.  For this to work, KVM forces EFER.NX to 1 when
  shadow paging is in use.
- if CR4.SMAP is disabled: since the page has been changed to a kernel
  page, it cannot be reused when CR4.SMAP is enabled.  We set
  CR4.SMAP && !CR0.WP into the shadow page's role to avoid this case.  Note
  that we do not care about the case where CR4.SMAP is enabled, since KVM
  will directly inject a #PF into the guest due to the failed permission
  check.

To prevent an spte that was converted into a kernel page with cr0.wp=0
from being written by the kernel after cr0.wp has changed to 1, we make
the value of cr0.wp part of the page role.  This means that an spte created
with one value of cr0.wp cannot be used when cr0.wp has a different value -
it will simply be missed by the shadow page lookup code.  A similar issue
exists when an spte created with cr0.wp=0 and cr4.smep=0 is used after
changing cr4.smep to 1.  To avoid this, the value of !cr0.wp && cr4.smep
is also made a part of the page role.
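
This is why the role bits live in one packed word.  A condensed model of how
whole-word comparison makes stale pages invisible (the real thing is union
kvm_mmu_page_role; the layout below is abbreviated and the helper invented)::

  #include <stdint.h>

  /* Condensed model of union kvm_mmu_page_role. */
  union page_role {
          uint32_t word;
          struct {
                  uint32_t level:4;
                  uint32_t direct:1;
                  uint32_t quadrant:2;
                  uint32_t cr0_wp:1;
                  uint32_t smep_andnot_wp:1;
                  /* ... the remaining role bits ... */
          };
  };

  /*
   * Shadow pages are looked up by (gfn, role).  Comparing the whole
   * word at once is what makes a page created under cr0.wp=0
   * unreachable once the guest sets cr0.wp=1: its role no longer
   * matches the one the lookup asks for.
   */
  static int role_matches(union page_role a, union page_role b)
  {
          return a.word == b.word;
  }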

Large pages
===========

The mmu supports all combinations of large and small guest and host pages.
Supported page sizes include 4k, 2M, 4M, and 1G.  4M pages are treated as
two separate 2M pages, on both guest and host, since the mmu always uses PAE
paging.

To instantiate a large spte, four constraints must be satisfied:

- the spte must point to a large host page
- the guest pte must be a large pte of at least equivalent size (if tdp is
  enabled, there is no guest pte and this condition is satisfied)
- if the spte will be writeable, the large page frame may not overlap any
  write-protected pages
- the guest page must be wholly contained by a single memory slot

To check the last two conditions, the mmu maintains a ->disallow_lpage set of
arrays for each memory slot and large page size.  Every write protected page
causes its disallow_lpage to be incremented, thus preventing instantiation of
a large spte.  The frames at either end of an unaligned memory slot have
artificially inflated ->disallow_lpages so they can never be instantiated.
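
A stand-alone sketch of that accounting, simplified to a single (2M) large
page size and with invented names; KVM's real bookkeeping is the lpage_info
arrays attached to each memslot::

  #include <stdint.h>

  typedef uint64_t u64;
  typedef u64 gfn_t;

  #define PAGES_PER_2M (1u << 9)  /* 4k pages per 2M frame */

  struct slot_lpage_info {
          gfn_t base_gfn;          /* first gfn of the memory slot */
          int *disallow_lpage_2m;  /* one counter per 2M frame in the slot */
  };

  /*
   * Write protecting a 4k gfn adds one reason why the 2M frame that
   * contains it must not be mapped by a writable large spte.
   */
  static void account_write_protect(struct slot_lpage_info *s, gfn_t gfn)
  {
          s->disallow_lpage_2m[(gfn - s->base_gfn) / PAGES_PER_2M]++;
  }

  static void unaccount_write_protect(struct slot_lpage_info *s, gfn_t gfn)
  {
          s->disallow_lpage_2m[(gfn - s->base_gfn) / PAGES_PER_2M]--;
  }

A counter, rather than a flag, is used because several independent reasons
(multiple write-protected pages, slot misalignment) can forbid the same large
frame at once; the frame becomes eligible again only when all of them go away.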

Fast invalidation of MMIO sptes
===============================

As mentioned in "Reaction to events" above, kvm will cache MMIO
information in leaf sptes.  When a new memslot is added or an existing
memslot is changed, this information may become stale and needs to be
invalidated.  Walking all shadow pages under the MMU lock to do so would
be too slow, so a generation-number scheme is used to make the invalidation
scalable.

MMIO sptes have a few spare bits, which are used to store a
generation number.  The global generation number is stored in
kvm_memslots(kvm)->generation, and increased whenever guest memory info
changes.

When KVM finds an MMIO spte, it checks the generation number of the spte.
If the generation number of the spte does not equal the global generation
number, it will ignore the cached MMIO information and handle the page
fault through the slow path.

Since only 18 bits are used to store the generation number in an MMIO spte,
all pages are zapped when the generation number overflows.

Unfortunately, a single memory access might access kvm_memslots(kvm) multiple
times, the last one happening when the generation number is retrieved and
stored into the MMIO spte.  Thus, the MMIO spte might be created based on
out-of-date information, but with an up-to-date generation number.

To avoid this, the generation number is incremented again after synchronize_srcu
returns; thus, bit 63 of kvm_memslots(kvm)->generation is set to 1 only during a
memslot update, while some SRCU readers might be using the old copy.  We do not
want to use an MMIO spte created with an odd generation number, and we can do
this without losing a bit in the MMIO spte.  The "update in-progress" bit of the
generation is not stored in MMIO sptes, and so is implicitly zero when the
generation is extracted out of the spte.  If KVM is unlucky and creates an MMIO
spte while an update is in-progress, the next access to the spte will always be
a cache miss.  For example, a subsequent access during the update window will
miss due to the in-progress flag diverging, while an access after the update
window closes will have a higher generation number (as compared to the spte).
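
A stand-alone sketch of that check; the 18-bit width comes from the text
above, while the macro and function names are invented::

  #include <stdbool.h>
  #include <stdint.h>

  typedef uint64_t u64;

  #define MMIO_GEN_BITS   18            /* spare spte bits holding the gen */
  #define MMIO_GEN_MASK   ((1ull << MMIO_GEN_BITS) - 1)
  #define GEN_IN_PROGRESS (1ull << 63)  /* memslot update under way */

  /*
   * The in-progress bit is never stored in the spte, so a generation
   * captured mid-update can never compare equal once the update is
   * done, and any access made while the bit is set misses immediately.
   */
  static bool mmio_spte_gen_is_current(u64 spte_gen, u64 memslots_generation)
  {
          if (memslots_generation & GEN_IN_PROGRESS)
                  return false;  /* update running: take the slow path */

          return spte_gen == (memslots_generation & MMIO_GEN_MASK);
  }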


Further reading
===============

- NPT presentation from KVM Forum 2008
  https://www.linux-kvm.org/images/c/c8/KvmForum2008%24kdf2008_21.pdf