Lines Matching refs:guest

44  * Number of guest VTLB entries to use, so we can catch inconsistency between
75 * These Config bits may be writable by the guest:
119 * Permit guest FPU mode changes if FPU is enabled and the relevant
199 /* VZ guest has already converted gva to gpa */
218 * timer expiry is asynchronous to vcpu execution, therefore defer guest
227 * timer expiry is asynchronous to vcpu execution, therefore defer guest
239 * interrupts are asynchronous to vcpu execution, therefore defer guest
251 * interrupts are asynchronous to vcpu execution, therefore defer guest
329 * VZ guest timer handling.
333 * kvm_vz_should_use_htimer() - Find whether to use the VZ hard guest timer.
336 * Returns: true if the VZ GTOffset & real guest CP0_Count should be used
337 * instead of software emulation of guest timer.
394 * Freeze the soft-timer and sync the guest CP0_Count with it. We do
402 /* restore guest CP0_Cause, as TI may already be set */
450 /* enable guest access to hard timer */
459 * _kvm_vz_save_htimer() - Switch to software emulation of guest timer.
464 * Save VZ guest timer state and switch to software emulation of guest CP0
512 * kvm_vz_save_timer() - Save guest timer state.
515 * Save VZ guest timer state and switch to soft guest timer if hard timer was in
525 /* disable guest use of hard timer */
541 * kvm_vz_lose_htimer() - Ensure hard guest timer is not in use.
544 * Transfers the state of the hard guest timer to the soft guest timer, leaving
545 * guest state intact so it can continue to be used with the soft timer.
554 /* disable guest use of timer */
624 * have been caught by the guest, leaving us with:
681 * @gpa: Output guest physical address.
683 * Convert a guest virtual address (GVA) which is valid according to the guest
684 * context, to a guest physical address (GPA).
739 /* Unmapped, find guest physical address */
785 * @gpa: Output guest physical address.
787 * VZ implementations are permitted to report guest virtual addresses (GVA) in
788 * BadVAddr on a root exception during guest execution, instead of the more
789 * convenient guest physical addresses (GPA). When we get a GVA, this function
790 * converts it to a GPA, taking into account guest segmentation and guest TLB
996 * P5600 generates GPSI on guest MTC0 LLAddr.
997 * Only allow the guest to clear LLB.
1127 /* So far, other platforms support guest hit cache ops */
1187 /* Don't export any other advanced features to guest */
1311 /* complete MTC0 on behalf of guest and advance EPC */
1374 /* Only certain bits are RW to the guest */
1424 * Presumably this is due to MC (guest mode change), so let's trace some
1547 * Handle when the guest attempts to use a coprocessor which hasn't been allowed
1550 * Return: value indicating whether to resume the host or the guest
1561 * If guest FPU not present, the FPU operation should have been
1596 * Handle when the guest attempts to use MSA when it is disabled in the root
1599 * Return: value indicating whether to resume the host or the guest
1605 * If MSA not present or not exposed to guest or FR=0, the MSA operation
1801 ret += __arch_hweight8(cpu_data[0].guest.kscratch_mask);
2031 /* Octeon III has a read-only guest.PRid */
2274 /* Octeon III has a guest.PRid, but it's read-only */
2433 /* Returns 1 if the guest TLB may be clobbered */
2447 /* This will clobber guest TLB contents too */
2482 /* Save wired entries from the guest TLB */
2496 /* Load wired entries into the guest TLB */
2509 * Are we entering guest context on a different CPU to last time?
2510 * If so, the VCPU's guest TLB state on this CPU may be stale.
2519 * manipulating guest tlb entries.
2526 * another CPU, as the guest mappings may have changed without
2542 * The Guest TLB only stores a single guest's TLB state, so
2545 * We also flush if we've executed on another CPU, as the guest
2553 * Root ASID dealiases guest GPA mappings in the root TLB.
2570 * If so, any old guest TLB state may be stale.
2576 * If not, any old guest state from this VCPU will have been clobbered.
2583 * restore wired guest TLB entries (while in guest context).
2598 /* Set MC bit if we want to trace guest mode changes */
2654 /* restore KScratch registers if enabled in guest */
2759 /* save KScratch registers if enabled in guest */
2786 /* save HTW registers if enabled in guest */
2806 * kvm_vz_resize_guest_vtlb() - Attempt to resize guest VTLB.
2807 * @size: Number of guest VTLB entries (0 < @size <= root VTLB entries).
2809 * Attempt to resize the guest VTLB by writing guest Config registers. This is
2810 * necessary for cores with a shared root/guest TLB to avoid overlap with wired
2813 * Returns: The resulting guest VTLB size.
2819 /* Write MMUSize - 1 into guest Config registers */
2879 /* Set up guest timer/perfcount IRQ lines */
2902 current_cpu_data.guest.tlbsize = guest_mmu_size;
2904 /* Flush moved entries in new (guest) context */
2909 * ImgTec cores tend to use a shared root/guest TLB. To avoid
2910 * overlap of root wired and guest entries, the guest TLB may
2916 /* Try switching to maximum guest VTLB size for flush */
2918 current_cpu_data.guest.tlbsize = guest_mmu_size + ftlb_size;
2928 current_cpu_data.guest.tlbsize = guest_mmu_size + ftlb_size;
guest. If this ever happens, it suggests an asymmetric number
2938 "Available guest VTLB size mismatch"))
2944 * Enable virtualization features granting guest direct control of
2973 /* clear any pending injected virtual guest interrupts */
2978 /* Control guest CCA attribute */
2991 /* Flush any remaining guest TLB entries */
2997 * Allocate whole TLB for root. Existing guest TLB entries will
2999 * they've already been flushed above while in guest TLB.
3011 current_cpu_data.guest.tlbsize = 0;
3091 * Initialize guest register state to valid architectural reset state.
3114 /* architecturally writable (e.g. from guest) */
3140 /* architecturally writable (e.g. from guest) */
3167 /* architecturally writable (e.g. from guest) */
3208 /* start with no pending virtual guest interrupts */