Lines Matching refs:dirty

385 	 * are related to dirty logging, and many do the TLB flush out of
1447 * Allocation size is twice as large as the actual dirty bitmap size.
1696 * If dirty logging is disabled, nullify the bitmap; the old bitmap
1761 * Free the dirty bitmap as needed; the below check encompasses
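
The matches above come from the memslot update path, where the per-slot dirty bitmap is allocated when dirty logging is turned on and nullified or freed when it is turned off. For context, userspace reaches that path by setting KVM_MEM_LOG_DIRTY_PAGES on a memory region; the sketch below is a minimal, hypothetical example of doing so (vm_fd, slot and the address/size parameters are placeholders, not values from these matches).

    #include <stdint.h>
    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    /* Hypothetical helper: turn on dirty logging for an existing memslot.
     * vm_fd, slot, guest_phys_addr, size and host_addr are caller-supplied
     * assumptions, not values taken from the matches above. */
    static int enable_dirty_logging(int vm_fd, uint32_t slot,
                                    uint64_t guest_phys_addr, uint64_t size,
                                    void *host_addr)
    {
            struct kvm_userspace_memory_region region = {
                    .slot            = slot,
                    .flags           = KVM_MEM_LOG_DIRTY_PAGES, /* ask KVM to track writes */
                    .guest_phys_addr = guest_phys_addr,
                    .memory_size     = size,
                    .userspace_addr  = (uint64_t)(uintptr_t)host_addr,
            };

            /* Re-registering the slot with the flag set makes KVM allocate the
             * dirty bitmap; re-registering without it frees the bitmap again. */
            return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
    }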
2164 * kvm_get_dirty_log - get a snapshot of dirty pages
2167 * @is_dirty: set to '1' if any dirty pages were found
2178 /* Dirty ring tracking may be exclusive to dirty log tracking */
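
kvm_get_dirty_log() is the helper behind the KVM_GET_DIRTY_LOG ioctl. A minimal userspace sketch of fetching that snapshot follows; the buffer sizing and names are illustrative assumptions, not taken from the matched lines.

    #include <stdint.h>
    #include <stdlib.h>
    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    /* Fetch the dirty bitmap snapshot for one memslot.  npages is the number
     * of guest pages the slot covers (an assumption supplied by the caller). */
    static void *get_dirty_bitmap(int vm_fd, uint32_t slot, uint64_t npages)
    {
            /* One bit per page; KVM rounds the bitmap up to whole words, so
             * allocating in 64-bit units on the user side is safe. */
            size_t bytes = ((npages + 63) / 64) * 8;
            void *bitmap = calloc(1, bytes);
            struct kvm_dirty_log log = {
                    .slot = slot,
                    .dirty_bitmap = bitmap,
            };

            if (!bitmap || ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log) < 0) {
                    free(bitmap);
                    return NULL;
            }
            return bitmap;
    }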
2213 * kvm_get_dirty_log_protect - get a snapshot of dirty pages
2214 * and reenable dirty page tracking for the corresponding pages.
2219 * concurrently. So, to avoid losing track of dirty pages we keep the
2228 * entry. This is not a problem because the page is reported dirty using
2243 /* Dirty ring tracking may be exclusive to dirty log tracking */
2306 * kvm_vm_ioctl_get_dirty_log - get and clear the log of dirty pages in a slot
2310 * Steps 1-4 below provide a general overview of dirty page logging. See
2314 * always flush the TLB (step 4) even if the previous step failed and the dirty
2316 * does not preclude user space subsequent dirty log read. Flushing TLB ensures
2317 * writes will be marked dirty for next log read.
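
Since the ioctl returns a point-in-time snapshot and the TLB flush guarantees that later writes show up in the next read, a typical consumer simply walks the returned bitmap between iterations. A hedged sketch of that walk, assuming a 64-bit little-endian host; handle_dirty_gfn() is a hypothetical callback, not a KVM interface.

    #include <stdint.h>

    /* Walk a dirty bitmap returned by KVM_GET_DIRTY_LOG.  base_gfn is the
     * first guest frame of the slot and npages its length in pages. */
    static void walk_dirty_bitmap(const uint64_t *bitmap, uint64_t base_gfn,
                                  uint64_t npages,
                                  void (*handle_dirty_gfn)(uint64_t gfn))
    {
            for (uint64_t i = 0; i < npages; i++) {
                    if (bitmap[i / 64] & (1ULL << (i % 64)))
                            handle_dirty_gfn(base_gfn + i);
            }
    }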
2338 * kvm_clear_dirty_log_protect - clear dirty bits in the bitmap
2339 * and reenable dirty page tracking for the corresponding pages.
2341 * @log: slot id and address from which to fetch the bitmap of dirty pages
2355 /* Dirty ring tracking may be exclusive to dirty log tracking */
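
kvm_clear_dirty_log_protect() backs the KVM_CLEAR_DIRTY_LOG ioctl, which re-protects only the pages userspace has finished processing. A minimal sketch, assuming KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 has already been enabled on the VM and that the caller supplies the range and bitmap to clear:

    #include <stdint.h>
    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    /* Re-protect ("clear") a range of pages previously reported dirty.  The
     * bitmap has one bit per page in [first_page, first_page + num_pages). */
    static int clear_dirty_range(int vm_fd, uint32_t slot, uint64_t first_page,
                                 uint32_t num_pages, void *bitmap)
    {
            struct kvm_clear_dirty_log clear = {
                    .slot         = slot,
                    .num_pages    = num_pages,
                    .first_page   = first_page,
                    .dirty_bitmap = bitmap,
            };

            return ioctl(vm_fd, KVM_CLEAR_DIRTY_LOG, &clear);
    }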
3141 void kvm_release_pfn(kvm_pfn_t pfn, bool dirty)
3143 if (dirty)
3183 void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty)
3198 if (dirty)
3201 kvm_release_pfn(map->pfn, dirty);
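
The dirty flag taken by kvm_vcpu_unmap() (and kvm_release_pfn()) is how in-kernel users report that they wrote through the mapping. Below is a simplified sketch in the style of in-kernel callers; touch_guest_page() and the particular write are hypothetical, not code from the matches above.

    #include <linux/kvm_host.h>

    /* In-kernel style sketch: map a guest page, write one byte, and unmap
     * with dirty = true so the page is marked dirty on release. */
    static int touch_guest_page(struct kvm_vcpu *vcpu, gpa_t gpa)
    {
            struct kvm_host_map map;

            if (kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), &map))
                    return -EFAULT;

            /* Write through the host virtual address of the mapping. */
            *((u8 *)map.hva + offset_in_page(gpa)) = 0;

            /* dirty = true: the page is marked dirty/accessed when released. */
            kvm_vcpu_unmap(vcpu, &map, true);
            return 0;
    }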
3212 * touched (e.g. set dirty) except by its owner".
3279 * directly marking a page dirty/accessed. Unlike the "release" helpers, the