#
edf95564 |
|
13-Dec-2023 |
Alexandre Ghiti <alexghiti@rivosinc.com> |
riscv: Use accessors to page table entries instead of direct dereference As very well explained in commit 20a004e7b017 ("arm64: mm: Use READ_ONCE/WRITE_ONCE when accessing page tables"), an architecture whose page table walker can modify the PTE in parallel must use the READ_ONCE()/WRITE_ONCE() macros to avoid any compiler transformation. So apply that to riscv, which is such an architecture. Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com> Acked-by: Anup Patel <anup@brainfault.org> Link: https://lore.kernel.org/r/20231213203001.179237-5-alexghiti@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
|
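The commit above applies a well-known concurrency rule: when a hardware walker or another CPU may update a page table entry in parallel, every software access should go through READ_ONCE()/WRITE_ONCE() so the compiler cannot tear, merge, or re-issue the access. Below is a minimal user-space sketch of that accessor pattern; pte_t, ptep_get() and set_pte() loosely mirror the kernel names, and the macro definitions are simplified stand-ins, not the kernel's own.

```c
/* Simplified stand-ins for the kernel's READ_ONCE()/WRITE_ONCE(): the
 * volatile cast stops the compiler from splitting, caching or
 * re-issuing the access. */
#include <inttypes.h>
#include <stdio.h>

#define READ_ONCE(x)     (*(const volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

typedef struct { uint64_t pte; } pte_t;

/* Accessor style the commit switches to, instead of a plain "*ptep". */
static inline pte_t ptep_get(pte_t *ptep)
{
	return (pte_t){ READ_ONCE(ptep->pte) };
}

static inline void set_pte(pte_t *ptep, pte_t pteval)
{
	WRITE_ONCE(ptep->pte, pteval.pte);
}

int main(void)
{
	pte_t slot = { 0 };

	set_pte(&slot, (pte_t){ 0x200000e7 });
	printf("pte = 0x%" PRIx64 "\n", ptep_get(&slot).pte);
	return 0;
}
```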
#
3e1efe2b |
|
28-Jul-2023 |
Sean Christopherson <seanjc@google.com> |
KVM: Wrap kvm_{gfn,hva}_range.pte in a per-action union Wrap kvm_{gfn,hva}_range.pte in a union so that future notifier events can pass event specific information up and down the stack without needing to constantly expand and churn the APIs. Lockless aging of SPTEs will pass around a bitmap, and support for memory attributes will pass around the new attributes for the range. Add a "KVM_NO_ARG" placeholder to simplify handling events without an argument (creating a dummy union variable is mildly annoying). Opportunistically drop explicit zero-initialization of the "pte" field, as omitting the field (now a union) has the same effect. Cc: Yu Zhao <yuzhao@google.com> Link: https://lore.kernel.org/all/CAOUHufagkd2Jk3_HrVoFFptRXM=hX2CV8f+M-dka-hJU4bP8kw@mail.gmail.com Reviewed-by: Oliver Upton <oliver.upton@linux.dev> Acked-by: Yu Zhao <yuzhao@google.com> Link: https://lore.kernel.org/r/20230729004144.1054885-1-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
|
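A hedged sketch of the "per-action union" shape described above: one argument slot in the range struct that each notifier event interprets for itself, so new events can carry their own payload without widening the API for everyone. Field and type names here are illustrative approximations, not the exact kernel definitions.

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t gfn_t;

/* Illustrative approximation of a range handed through the MMU
 * notifier path. */
struct gfn_range_sketch {
	gfn_t start;
	gfn_t end;
	union {
		uint64_t pte;            /* change-pte style events        */
		unsigned long *bitmap;   /* e.g. lockless aging of SPTEs   */
		uint64_t attributes;     /* e.g. memory-attribute updates  */
		uint64_t raw;            /* "no argument" placeholder      */
	} arg;
	bool may_block;
};

int main(void)
{
	/* Omitting .arg zero-initializes the union, which is why the
	 * explicit pte = 0 initializers could be dropped. */
	struct gfn_range_sketch r = { .start = 0x1000, .end = 0x2000 };
	return (int)r.arg.raw;
}
```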
#
619b5072 |
|
10-Aug-2023 |
David Matlack <dmatlack@google.com> |
KVM: Move kvm_arch_flush_remote_tlbs_memslot() to common code Move kvm_arch_flush_remote_tlbs_memslot() to common code and drop "arch_" from the name. kvm_arch_flush_remote_tlbs_memslot() is just a range-based TLB invalidation where the range is defined by the memslot. Now that kvm_flush_remote_tlbs_range() can be called from common code we can just use that and drop a bunch of duplicate code from the arch directories. Note this adds a lockdep assertion for slots_lock being held when calling kvm_flush_remote_tlbs_memslot(), which was previously only asserted on x86. MIPS has calls to kvm_flush_remote_tlbs_memslot(), but they all hold the slots_lock, so the lockdep assertion continues to hold true. Also drop the CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT ifdef gating kvm_flush_remote_tlbs_memslot(), since it is no longer necessary. Signed-off-by: David Matlack <dmatlack@google.com> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com> Reviewed-by: Gavin Shan <gshan@redhat.com> Reviewed-by: Shaoqin Huang <shahuang@redhat.com> Acked-by: Anup Patel <anup@brainfault.org> Acked-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20230811045127.3308641-7-rananta@google.com
|
#
e290dbb7 |
|
08-Jan-2023 |
Anup Patel <apatel@ventanamicro.com> |
RISC-V: KVM: Drop the _MASK suffix from hgatp.VMID mask defines The hgatp.VMID mask defines are used before shifting when extracting the VMID value from the hgatp CSR value, so based on the convention followed in the other parts of asm/csr.h, the hgatp.VMID mask defines should not have a _MASK suffix. While we are here, let's use GENMASK() for hgatp.VMID and hgatp.PPN. Signed-off-by: Anup Patel <apatel@ventanamicro.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Reviewed-by: Atish Patra <atishp@rivosinc.com> Signed-off-by: Anup Patel <anup@brainfault.org>
|
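For reference, a small sketch of the GENMASK() convention the commit adopts: the mask is defined over the field's bit positions and the value is extracted by masking and then shifting. The bit positions below follow the RV64 hgatp layout (MODE 63:60, VMID 57:44, PPN 43:0) as I understand it, and the GENMASK_ULL() definition is a simplified stand-in; treat both as illustrative rather than authoritative.

```c
#include <stdint.h>
#include <stdio.h>

/* Simplified version of the kernel's GENMASK_ULL(h, l). */
#define GENMASK_ULL(h, l) \
	((~0ULL << (l)) & (~0ULL >> (63 - (h))))

#define HGATP_VMID_SHIFT	44
#define HGATP_VMID		GENMASK_ULL(57, 44)	/* no _MASK suffix */
#define HGATP_PPN		GENMASK_ULL(43, 0)

int main(void)
{
	uint64_t hgatp = (0x8ULL << HGATP_VMID_SHIFT) | 0x12345ULL;

	printf("vmid = %llu, ppn = 0x%llx\n",
	       (unsigned long long)((hgatp & HGATP_VMID) >> HGATP_VMID_SHIFT),
	       (unsigned long long)(hgatp & HGATP_PPN));
	return 0;
}
```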
#
2ed90cb0 |
|
17-Mar-2023 |
David Matlack <dmatlack@google.com> |
KVM: RISC-V: Retry fault if vma_lookup() results become invalid Read mmu_invalidate_seq before dropping the mmap_lock so that KVM can detect if the results of vma_lookup() (e.g. vma_shift) become stale before it acquires kvm->mmu_lock. This fixes a theoretical bug where a VMA could be changed by userspace after vma_lookup() and before KVM reads the mmu_invalidate_seq, causing KVM to install page table entries based on a (possibly) no-longer-valid vma_shift. Re-order the MMU cache top-up to earlier in user_mem_abort() so that it is not done after KVM has read mmu_invalidate_seq (i.e. so as to avoid inducing spurious fault retries). It's unlikely that any sane userspace currently modifies VMAs in such a way as to trigger this race. And even with directed testing I was unable to reproduce it. But a sufficiently motivated host userspace might be able to exploit this race. Note KVM/ARM had the same bug and was fixed in a separate, near identical patch (see Link). Link: https://lore.kernel.org/kvm/20230313235454.2964067-1-dmatlack@google.com/ Fixes: 9955371cc014 ("RISC-V: KVM: Implement MMU notifiers") Cc: stable@vger.kernel.org Signed-off-by: David Matlack <dmatlack@google.com> Tested-by: Anup Patel <anup@brainfault.org> Signed-off-by: Anup Patel <anup@brainfault.org>
|
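The ordering the fix enforces can be sketched as follows: sample mmu_invalidate_seq while the vma_lookup() result is still protected by the mmap lock, then retry the fault under the MMU lock if the count has moved. This is a loose, self-contained rendering of the pattern with made-up helper names, not the kernel code.

```c
#include <stdbool.h>
#include <stdio.h>

struct mmu_state {
	unsigned long invalidate_seq;	/* bumped on every invalidation */
};

struct fault_ctx {
	unsigned long seq_snapshot;
	unsigned long vma_shift;	/* derived from the looked-up VMA */
};

/* Conceptually runs with the mmap lock held: the snapshot must be taken
 * BEFORE the lock protecting the lookup result is dropped. */
static void lookup_phase(struct mmu_state *mmu, struct fault_ctx *fc)
{
	fc->vma_shift = 21;			/* e.g. a PMD-sized mapping */
	fc->seq_snapshot = mmu->invalidate_seq;
}

/* Conceptually runs with the MMU lock held, just before installing
 * page table entries based on vma_shift. */
static bool need_retry(struct mmu_state *mmu, struct fault_ctx *fc)
{
	return mmu->invalidate_seq != fc->seq_snapshot;
}

int main(void)
{
	struct mmu_state mmu = { .invalidate_seq = 7 };
	struct fault_ctx fc;

	lookup_phase(&mmu, &fc);
	mmu.invalidate_seq++;		/* a racing invalidation */
	printf("retry fault: %s\n", need_retry(&mmu, &fc) ? "yes" : "no");
	return 0;
}
```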
#
c25a6ddd |
|
23-Jan-2023 |
Alexandre Ghiti <alexghiti@rivosinc.com> |
KVM: RISC-V: Fix wrong usage of PGDIR_SIZE to check page sizes At the moment, riscv only supports PMD and PUD hugepages. For sv39, PGDIR_SIZE == PUD_SIZE but not for sv48 and sv57. So fix this by changing PGDIR_SIZE into PUD_SIZE. Fixes: 9d05c1fee837 ("RISC-V: KVM: Implement stage2 page table programming") Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com> Signed-off-by: Anup Patel <anup@brainfault.org>
|
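The arithmetic behind that fix, as a quick sketch: with 4 KiB pages and 9 bits per level, PUD_SIZE is always 1 GiB, while PGDIR_SIZE depends on how many levels the MMU mode uses, so the two only coincide for 3-level Sv39.

```c
#include <stdio.h>

int main(void)
{
	unsigned long long page = 4096ULL;	/* 4 KiB pages              */
	unsigned long long pmd  = page << 9;	/* 2 MiB                    */
	unsigned long long pud  = pmd  << 9;	/* 1 GiB                    */
	unsigned long long sv39 = pud;		/* 3 levels: PGDIR == PUD   */
	unsigned long long sv48 = pud  << 9;	/* 4 levels: 512 GiB        */
	unsigned long long sv57 = sv48 << 9;	/* 5 levels: 256 TiB        */

	printf("PUD %llu, Sv39 PGDIR %llu, Sv48 PGDIR %llu, Sv57 PGDIR %llu\n",
	       pud, sv39, sv48, sv57);
	return 0;
}
```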
#
45b66dc1 |
|
30-Nov-2022 |
Sean Christopherson <seanjc@google.com> |
KVM: RISC-V: Tag init functions and data with __init, __ro_after_init Now that KVM setup is handled directly in riscv_kvm_init(), tag functions and data that are used/set only during init with __init/__ro_after_init. Signed-off-by: Sean Christopherson <seanjc@google.com> Acked-by: Anup Patel <anup@brainfault.org> Message-Id: <20221130230934.1014142-26-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
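A short sketch of what such tagging looks like. The attribute macros are stubbed out so the snippet stands alone, and the symbol names are made up for illustration; in the kernel, __init places code in a section discarded after boot and __ro_after_init data becomes read-only once init finishes.

```c
/* Stand-in attribute macros so the sketch compiles outside the kernel. */
#define __init
#define __ro_after_init

/* Hypothetical example symbols, not the actual KVM RISC-V ones. */
static unsigned long example_gstage_mode __ro_after_init;

static int __init example_setup(void)
{
	example_gstage_mode = 8;	/* e.g. a detected HGATP mode */
	return 0;
}

int main(void)
{
	return example_setup() + (int)example_gstage_mode - 8;
}
```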
#
3e2d4756 |
|
06-Dec-2022 |
Christophe JAILLET <christophe.jaillet@wanadoo.fr> |
RISC-V: KVM: Simplify kvm_arch_prepare_memory_region() In kvm_arch_prepare_memory_region(), if no error occurs, a spin_lock()/spin_unlock() call can be avoided. Switch to kvm_riscv_gstage_iounmap(), which does the same as the current code but with better semantics; it also embeds the locking logic, so the lock is avoided when ret == 0. Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Reviewed-by: Anup Patel <anup@brainfault.org> Signed-off-by: Anup Patel <anup@brainfault.org>
|
#
b3f2575a |
|
06-Dec-2022 |
Bo Liu <liubo03@inspur.com> |
RISC-V: KVM: use vma_lookup() instead of find_vma_intersection() vma_lookup() finds the vma of a specific address with a cleaner interface and is more readable. Signed-off-by: Bo Liu <liubo03@inspur.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Signed-off-by: Anup Patel <anup@brainfault.org>
|
#
20ec3ebd |
|
16-Aug-2022 |
Chao Peng <chao.p.peng@linux.intel.com> |
KVM: Rename mmu_notifier_* to mmu_invalidate_* The motivation of this renaming is to make these variables and related helper functions less mmu_notifier-bound so that they can also be used for non-mmu_notifier-based page invalidation. mmu_invalidate_* was chosen to better describe the purpose of 'invalidating' a page that those variables are used for. - mmu_notifier_seq/range_start/range_end are renamed to mmu_invalidate_seq/range_start/range_end. - mmu_notifier_retry{_hva} helper functions are renamed to mmu_invalidate_retry{_hva}. - mmu_notifier_count is renamed to mmu_invalidate_in_progress to avoid confusion with mn_active_invalidate_count. - While here, also update kvm_inc/dec_notifier_count() to kvm_mmu_invalidate_begin/end() to match the change for mmu_notifier_count. No functional change intended. Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com> Message-Id: <20220816125322.1110439-3-chao.p.peng@linux.intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
#
659ad6d8 |
|
29-Jul-2022 |
Anup Patel <apatel@ventanamicro.com> |
RISC-V: KVM: Use PAGE_KERNEL_IO in kvm_riscv_gstage_ioremap() When the host has Svpbmt extension, we should use page based memory type 2 (i.e. IO) for IO mappings in the G-stage page table. To achieve this, we replace use of PAGE_KERNEL with PAGE_KERNEL_IO in the kvm_riscv_gstage_ioremap(). Signed-off-by: Anup Patel <apatel@ventanamicro.com> Reviewed-by: Atish Patra <atishp@rivosinc.com> Signed-off-by: Anup Patel <anup@brainfault.org>
|
#
c9d57373 |
|
29-Jul-2022 |
Anup Patel <apatel@ventanamicro.com> |
RISC-V: KVM: Add G-stage ioremap() and iounmap() functions The in-kernel AIA IMSIC support requires on-demand mapping / unmapping of Guest IMSIC address to Host IMSIC guest files. To help achieve this, we add kvm_riscv_gstage_ioremap() and kvm_riscv_gstage_iounmap() functions. These new functions for updating G-stage page table mappings will be called in atomic context so we have a special "in_atomic" parameter for this purpose. Signed-off-by: Anup Patel <apatel@ventanamicro.com> Reviewed-by: Atish Patra <atishp@rivosinc.com> Signed-off-by: Anup Patel <anup@brainfault.org>
|
#
6259d2f8 |
|
29-Jul-2022 |
Zhang Jiaming <jiaming@nfschina.com> |
RISC-V: KVM: Fix variable spelling mistake There is a spelling mistake in mmu.c and vcpu_exit.c. Fix it. Signed-off-by: Zhang Jiaming <jiaming@nfschina.com> Signed-off-by: Anup Patel <anup@brainfault.org>
|
#
837f66c7 |
|
22-Jun-2022 |
David Matlack <dmatlack@google.com> |
KVM: Allow for different capacities in kvm_mmu_memory_cache structs Allow the capacity of the kvm_mmu_memory_cache struct to be chosen at declaration time rather than being fixed for all declarations. This will be used in a follow-up commit to declare a cache in x86 with a capacity of 512+ objects without having to increase the capacity of all caches in KVM. This change requires that each cache now specify its capacity at runtime, since the cache struct itself no longer has a fixed capacity known at compile time. To protect against someone accidentally defining a kvm_mmu_memory_cache struct directly (without the extra storage), this commit includes a WARN_ON() in kvm_mmu_topup_memory_cache(). In order to support different capacities, this commit changes the objects pointer array to be dynamically allocated the first time the cache is topped-up. While here, opportunistically clean up the stack-allocated kvm_mmu_memory_cache structs in riscv and arm64 to use designated initializers. No functional change intended. Reviewed-by: Marc Zyngier <maz@kernel.org> Signed-off-by: David Matlack <dmatlack@google.com> Message-Id: <20220516232138.1783324-22-dmatlack@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
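A hedged sketch of the shape this moves toward: a cache struct whose capacity is chosen per declaration, an object array allocated on first top-up, and callers using designated initializers. Field names, the default capacity, and the helper approximate the kernel's kvm_mmu_memory_cache machinery but are not the real definitions.

```c
#include <assert.h>
#include <stdlib.h>

/* Approximation of a memory-cache struct with per-declaration capacity. */
struct mem_cache_sketch {
	int capacity;		/* 0 means "use the default" */
	int nobjs;
	void **objects;		/* allocated on first top-up  */
};

#define DEFAULT_CACHE_CAPACITY 40

static int cache_topup(struct mem_cache_sketch *mc, int min)
{
	if (!mc->capacity)
		mc->capacity = DEFAULT_CACHE_CAPACITY;

	/* Guard against a cache declared without enough storage, in the
	 * spirit of the WARN_ON() mentioned above. */
	assert(mc->capacity >= min);

	if (!mc->objects)
		mc->objects = calloc(mc->capacity, sizeof(*mc->objects));
	if (!mc->objects)
		return -1;

	while (mc->nobjs < min)
		mc->objects[mc->nobjs++] = calloc(1, 4096);
	return 0;
}

int main(void)
{
	/* Designated-initializer style used by the stack-allocated
	 * caches in riscv and arm64 after this change. */
	struct mem_cache_sketch pcache = { .capacity = 32 };

	return cache_topup(&pcache, 16);
}
```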
#
88573389 |
|
10-Jul-2022 |
Alexandre Ghiti <alexandre.ghiti@canonical.com> |
riscv: Fix missing PAGE_PFN_MASK There are a bunch of functions that use the PFN from a page table entry and end up with the svpbmt upper bits because they are missing the newly introduced PAGE_PFN_MASK, which leads to wrong address conversions and then crashes: fix this by adding the mask. Fixes: 100631b48ded ("riscv: Fix accessing pfn bits in PTEs for non-32bit variants") Signed-off-by: Alexandre Ghiti <alexandre.ghiti@canonical.com> Signed-off-by: Anup Patel <anup@brainfault.org>
|
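The bug pattern, sketched: on RV64 with Svpbmt, the top bits of a PTE carry the memory-type (and NAPOT) fields, so shifting alone no longer yields a clean PFN; the PFN field has to be masked first. The bit positions and macro names below follow my reading of the privileged spec and the kernel convention, and are illustrative.

```c
#include <stdint.h>
#include <stdio.h>

#define GENMASK_ULL(h, l) \
	((~0ULL << (l)) & (~0ULL >> (63 - (h))))

#define PAGE_PFN_SHIFT	10
#define PAGE_PFN_MASK	GENMASK_ULL(53, 10)	/* PFN field of an RV64 PTE */
#define PTE_PBMT_IO	(2ULL << 61)		/* Svpbmt memory-type bits  */

int main(void)
{
	uint64_t pte = PTE_PBMT_IO | (0x80123ULL << PAGE_PFN_SHIFT) | 0xe7;

	/* Shifting alone drags the Svpbmt bits into the PFN... */
	printf("broken pfn: 0x%llx\n",
	       (unsigned long long)(pte >> PAGE_PFN_SHIFT));
	/* ...masking first recovers the real PFN. */
	printf("fixed  pfn: 0x%llx\n",
	       (unsigned long long)((pte & PAGE_PFN_MASK) >> PAGE_PFN_SHIFT));
	return 0;
}
```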
#
13acfec2 |
|
08-May-2022 |
Anup Patel <apatel@ventanamicro.com> |
RISC-V: KVM: Add remote HFENCE functions based on VCPU requests The generic KVM has support for VCPU requests which can be used to do arch-specific work in the run-loop. We introduce remote HFENCE functions which will internally use VCPU requests instead of host SBI calls. Advantages of doing remote HFENCEs as VCPU requests are: 1) Multiple VCPUs of a Guest may be running on different Host CPUs so it is not always possible to determine the Host CPU mask for doing a Host SBI call. For example, when VCPU X wants to do HFENCE on VCPU Y, it is possible that VCPU Y is blocked or in user-space (i.e. vcpu->cpu < 0). 2) To support nested virtualization, we will be having a separate shadow G-stage for each VCPU and a common host G-stage for the entire Guest/VM. The VCPU request based remote HFENCEs help us easily synchronize the common host G-stage and shadow G-stage of each VCPU without any additional IPI calls. This is also a preparatory patch for upcoming nested virtualization support where we will be having a shadow G-stage page table for each Guest VCPU. Signed-off-by: Anup Patel <apatel@ventanamicro.com> Reviewed-by: Atish Patra <atishp@rivosinc.com> Signed-off-by: Anup Patel <anup@brainfault.org>
|
#
2415e46e |
|
08-May-2022 |
Anup Patel <apatel@ventanamicro.com> |
RISC-V: KVM: Introduce range based local HFENCE functions Various __kvm_riscv_hfence_xyz() functions implemented in kvm/tlb.S are equivalent to the corresponding HFENCE.GVMA instructions, and we don't have range based local HFENCE functions. This patch provides a complete set of local HFENCE functions which support range based TLB invalidation and HFENCE.VVMA based operations. This is also a preparatory patch for upcoming Svinval support in KVM RISC-V. Signed-off-by: Anup Patel <apatel@ventanamicro.com> Reviewed-by: Atish Patra <atishp@rivosinc.com> Signed-off-by: Anup Patel <anup@brainfault.org>
|
#
b4bbb95e |
|
08-May-2022 |
Anup Patel <apatel@ventanamicro.com> |
RISC-V: KVM: Add Sv57x4 mode support for G-stage Latest QEMU supports G-stage Sv57x4 mode so this patch extends KVM RISC-V G-stage handling to detect and use Sv57x4 mode when available. Signed-off-by: Anup Patel <apatel@ventanamicro.com> Reviewed-by: Atish Patra <atishp@rivosinc.com> Signed-off-by: Anup Patel <anup@brainfault.org>
|
#
26708234 |
|
08-May-2022 |
Anup Patel <apatel@ventanamicro.com> |
RISC-V: KVM: Use G-stage name for hypervisor page table The two-stage address translation defined by the RISC-V privileged specification consists of: VS-stage (guest virtual address to guest physical address) programmed by the Guest OS and G-stage (guest physical address to host physical address) programmed by the hypervisor. To align with the above terminology, we replace "stage2" with "gstage" and "Stage2" with "G-stage" everywhere in KVM RISC-V sources. Signed-off-by: Anup Patel <apatel@ventanamicro.com> Reviewed-by: Atish Patra <atishp@rivosinc.com> Signed-off-by: Anup Patel <anup@brainfault.org>
|
#
26fb751c |
|
20-Jan-2022 |
Atish Patra <atishp@rivosinc.com> |
RISC-V: Do not use cpumask data structure for hartid bitmap Currently, SBI APIs accept a hartmask that is generated from struct cpumask. The cpumask data structure can hold up to NR_CPUS values. Thus, it is not the correct data structure for hartids, which can be higher than NR_CPUS on platforms with sparse or discontiguous hartids. Remove all association between hartid mask and struct cpumask. Reviewed-by: Anup Patel <anup@brainfault.org> (For Linux RISC-V changes) Acked-by: Anup Patel <anup@brainfault.org> (For KVM RISC-V changes) Signed-off-by: Atish Patra <atishp@rivosinc.com> Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
|
#
a457fd56 |
|
26-Nov-2021 |
Anup Patel <anup@brainfault.org> |
RISC-V: KVM: Add VM capability to allow userspace get GPA bits The number of GPA bits supported for a RISC-V Guest/VM is based on the MMU mode used by the G-stage translation. KVM RISC-V will detect and use the best possible MMU mode for the G-stage in kvm_arch_init(). We add a generic VM capability KVM_CAP_VM_GPA_BITS which can be used by the KVM userspace to get the number of GPA (guest physical address) bits supported for a Guest/VM. Signed-off-by: Anup Patel <anup.patel@wdc.com> Reviewed-and-tested-by: Atish Patra <atishp@rivosinc.com>
|
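From the userspace side, the capability is queried with KVM_CHECK_EXTENSION, whose return value here is the number of GPA bits rather than a plain boolean. A minimal sketch follows; error handling is trimmed, and it assumes a Linux host with /dev/kvm and a linux/kvm.h recent enough to define KVM_CAP_VM_GPA_BITS.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR);
	int vm  = ioctl(kvm, KVM_CREATE_VM, 0);

	/* On KVM RISC-V the return value is the supported GPA width. */
	int gpa_bits = ioctl(vm, KVM_CHECK_EXTENSION, KVM_CAP_VM_GPA_BITS);

	printf("guest physical address bits: %d\n", gpa_bits);
	return 0;
}
```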
#
cc4f602b |
|
04-Nov-2021 |
Sean Christopherson <seanjc@google.com> |
KVM: RISC-V: Use common KVM implementation of MMU memory caches Use common KVM's implementation of the MMU memory caches, which for all intents and purposes is semantically identical to RISC-V's version, the only difference being that the common implementation will fall back to an atomic allocation if there's a KVM bug that triggers a cache underflow. RISC-V appears to have based its MMU code on arm64 before the conversion to the common caches in commit c1a33aebe91d ("KVM: arm64: Use common KVM implementation of MMU memory caches"), despite having also copy-pasted the definition of KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE in kvm_types.h. Opportunistically drop the superfluous wrapper kvm_riscv_stage2_flush_cache(), whose name is very, very confusing as "cache flush" in the context of MMU code almost always refers to flushing hardware caches, not freeing unused software objects. No functional change intended. Signed-off-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Anup Patel <anup.patel@wdc.com>
|
#
6a99c6e3 |
|
06-Dec-2021 |
Sean Christopherson <seanjc@google.com> |
KVM: Stop passing kvm_userspace_memory_region to arch memslot hooks Drop the @mem param from kvm_arch_{prepare,commit}_memory_region() now that its use has been removed in all architectures. No functional change intended. Signed-off-by: Sean Christopherson <seanjc@google.com> Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com> Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com> Message-Id: <aa5ed3e62c27e881d0d8bc0acbc1572bc336dc19.1638817640.git.maciej.szmigiero@oracle.com>
|
#
d01495d4 |
|
06-Dec-2021 |
Sean Christopherson <seanjc@google.com> |
KVM: RISC-V: Use "new" memslot instead of userspace memory region Get the slot ID, hva, etc... from the "new" memslot instead of the userspace memory region when preparing/committing a memory region. This will allow a future commit to drop @mem from the prepare/commit hooks once all architectures convert to using "new". Opportunistically wait to get the various "new" values until after filtering out the DELETE case in anticipation of a future commit passing NULL for @new when deleting a memslot. Signed-off-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com> Message-Id: <543608ab88a1190e73a958efffafc98d2652c067.1638817640.git.maciej.szmigiero@oracle.com>
|
#
537a17b3 |
|
06-Dec-2021 |
Sean Christopherson <seanjc@google.com> |
KVM: Let/force architectures to deal with arch specific memslot data Pass the "old" slot to kvm_arch_prepare_memory_region() and force arch code to handle propagating arch specific data from "new" to "old" when necessary. This is a baby step towards dynamically allocating "new" from the get go, and is a (very) minor performance boost on x86 due to not unnecessarily copying arch data. For PPC HV, copy the rmap in the !CREATE and !DELETE paths, i.e. for MOVE and FLAGS_ONLY. This is functionally a nop as the previous behavior would overwrite the pointer for CREATE, and eventually discard/ignore it for DELETE. For x86, copy the arch data only for FLAGS_ONLY changes. Unlike PPC HV, x86 needs to reallocate arch data in the MOVE case as the size of x86's allocations depend on the alignment of the memslot's gfn. Opportunistically tweak kvm_arch_prepare_memory_region()'s param order to match the "commit" prototype. Signed-off-by: Sean Christopherson <seanjc@google.com> Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com> [mss: add missing RISCV kvm_arch_prepare_memory_region() change] Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com> Message-Id: <67dea5f11bbcfd71e3da5986f11e87f5dd4013f9.1638817639.git.maciej.szmigiero@oracle.com>
|
#
756e1fc1 |
|
04-Nov-2021 |
Sean Christopherson <seanjc@google.com> |
KVM: RISC-V: Unmap stage2 mapping when deleting/moving a memslot Unmap stage2 page tables when a memslot is being deleted or moved. It's the architectures' responsibility to ensure existing mappings are removed when kvm_arch_flush_shadow_memslot() returns. Fixes: 9d05c1fee837 ("RISC-V: KVM: Implement stage2 page table programming") Signed-off-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Anup Patel <anup.patel@wdc.com>
|
#
bbd5ba8d |
|
27-Oct-2021 |
Bixuan Cui <cuibixuan@linux.alibaba.com> |
RISC-V: KVM: fix boolreturn.cocci warnings Fix boolreturn.cocci warnings: ./arch/riscv/kvm/mmu.c:603:9-10: WARNING: return of 0/1 in function 'kvm_age_gfn' with return type bool ./arch/riscv/kvm/mmu.c:582:9-10: WARNING: return of 0/1 in function 'kvm_set_spte_gfn' with return type bool ./arch/riscv/kvm/mmu.c:621:9-10: WARNING: return of 0/1 in function 'kvm_test_age_gfn' with return type bool ./arch/riscv/kvm/mmu.c:568:9-10: WARNING: return of 0/1 in function 'kvm_unmap_gfn_range' with return type bool Signed-off-by: Bixuan Cui <cuibixuan@linux.alibaba.com> Signed-off-by: Anup Patel <anup.patel@wdc.com>
|
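The class of warning being fixed, in miniature: a function declared to return bool should return true/false rather than 1/0. A trivial illustration with a made-up function, not the actual kvm_age_gfn() code:

```c
#include <stdbool.h>

/* Before the fix, functions like this returned 0/1 despite the bool
 * return type, which is what boolreturn.cocci flags. */
static bool example_age_gfn(unsigned long *ctr)
{
	if (!ctr)
		return false;	/* was: return 0; */
	(*ctr)++;
	return true;		/* was: return 1; */
}

int main(void)
{
	unsigned long ctr = 0;

	return example_age_gfn(&ctr) ? 0 : 1;
}
```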
#
9955371c |
|
27-Sep-2021 |
Anup Patel <anup.patel@wdc.com> |
RISC-V: KVM: Implement MMU notifiers This patch implements MMU notifiers for KVM RISC-V so that Guest physical address space is in-sync with Host physical address space. This will allow swapping, page migration, etc to work transparently with KVM RISC-V. Signed-off-by: Anup Patel <anup.patel@wdc.com> Acked-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Alexander Graf <graf@amazon.com> Acked-by: Palmer Dabbelt <palmerdabbelt@google.com>
|
#
9d05c1fe |
|
27-Sep-2021 |
Anup Patel <anup.patel@wdc.com> |
RISC-V: KVM: Implement stage2 page table programming This patch implements all required functions for programming the stage2 page table for each Guest/VM. At a high level, the flow of the stage2 related functions is similar to the KVM ARM/ARM64 implementation but the stage2 page table format is quite different for KVM RISC-V. [jiangyifei: stage2 dirty log support] Signed-off-by: Yifei Jiang <jiangyifei@huawei.com> Signed-off-by: Anup Patel <anup.patel@wdc.com> Acked-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Acked-by: Palmer Dabbelt <palmerdabbelt@google.com>
|
#
9f701326 |
|
27-Sep-2021 |
Anup Patel <anup.patel@wdc.com> |
RISC-V: KVM: Handle MMIO exits for VCPU We will get stage2 page faults whenever the Guest/VM accesses a SW emulated MMIO device or unmapped Guest RAM. This patch implements MMIO read/write emulation by extracting MMIO details from the trapped load/store instruction and forwarding the MMIO read/write to user-space. The actual MMIO emulation will happen in user-space and the KVM kernel module will only take care of register updates before resuming the trapped VCPU. The handling of stage2 page faults for unmapped Guest RAM will be implemented by a separate patch later. [jiangyifei: ioeventfd and in-kernel mmio device support] Signed-off-by: Yifei Jiang <jiangyifei@huawei.com> Signed-off-by: Anup Patel <anup.patel@wdc.com> Acked-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Alexander Graf <graf@amazon.com> Acked-by: Palmer Dabbelt <palmerdabbelt@google.com>
|
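The userspace half of that flow, sketched: when the trapped access is forwarded out of the kernel, KVM_RUN returns with KVM_EXIT_MMIO and the run structure carries the physical address, width, and direction; userspace emulates the device and re-enters the VCPU. This is a schematic fragment under stated assumptions: the VCPU setup is omitted, the device model is a single made-up register, and the fabricated exit in main() only exists so the handler runs standalone.

```c
#include <stdio.h>
#include <string.h>
#include <linux/kvm.h>

/* Stand-in device model: one 32-bit register. */
static unsigned int example_reg;

/* Called after KVM_RUN returns; handles one MMIO exit, after which the
 * caller would loop back into KVM_RUN. */
static void handle_exit(struct kvm_run *run)
{
	if (run->exit_reason != KVM_EXIT_MMIO ||
	    run->mmio.len > sizeof(example_reg))
		return;

	if (run->mmio.is_write)
		memcpy(&example_reg, run->mmio.data, run->mmio.len);
	else
		memcpy(run->mmio.data, &example_reg, run->mmio.len);

	printf("MMIO %s at 0x%llx, %u bytes\n",
	       run->mmio.is_write ? "write" : "read",
	       (unsigned long long)run->mmio.phys_addr, run->mmio.len);
}

int main(void)
{
	/* Fabricated exit, just to exercise the handler standalone. */
	struct kvm_run run = { .exit_reason = KVM_EXIT_MMIO };

	run.mmio.len = 4;
	run.mmio.is_write = 1;
	memcpy(run.mmio.data, "\x2a\x00\x00\x00", 4);
	handle_exit(&run);
	return 0;
}
```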
#
99cdc6c1 |
|
27-Sep-2021 |
Anup Patel <anup.patel@wdc.com> |
RISC-V: Add initial skeletal KVM support This patch adds initial skeletal KVM RISC-V support which has: 1. A simple implementation of arch specific VM functions except kvm_vm_ioctl_get_dirty_log() which will be implemented in the future as part of stage2 page logging. 2. Stubs of required arch specific VCPU functions except kvm_arch_vcpu_ioctl_run() which is semi-complete and extended by subsequent patches. 3. Stubs for required arch specific stage2 MMU functions. Signed-off-by: Anup Patel <anup.patel@wdc.com> Acked-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Alexander Graf <graf@amazon.com> Acked-by: Palmer Dabbelt <palmerdabbelt@google.com>
|