Lines Matching defs:page

23 // Return the page size for this level
39 // Whether an address is aligned to the page size of this level
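The two helpers matched at lines 23 and 39 suggest per-level size and alignment utilities. A minimal sketch of what they might look like, assuming the standard x86-64 layout of 9 address bits per level on top of a 4 KiB base page (the enum names mirror the PT_L/PD_L/PDP_L levels that appear later in this listing; the exact signatures are assumptions, not taken from the file):

```cpp
#include <cstdint>

// Hypothetical level numbering; matches the x86-64 4-level scheme.
enum PageTableLevel { PT_L = 0, PD_L = 1, PDP_L = 2, PML4_L = 3 };

// Return the page size for this level: 4 KiB at PT_L, 2 MiB at PD_L,
// 1 GiB at PDP_L -- each level up covers 9 more address bits.
constexpr uint64_t page_size(PageTableLevel level) {
  return uint64_t{1} << (12 + 9 * static_cast<int>(level));
}

// Whether an address is aligned to the page size of this level.
constexpr bool page_aligned(PageTableLevel level, uint64_t vaddr) {
  return (vaddr & (page_size(level) - 1)) == 0;
}
```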
118 // Utility for coalescing cache line flushes when modifying page tables. This
119 // allows us to mutate adjacent page table entries without having to flush for
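The comment at lines 118-119 describes coalescing cache line flushes across adjacent page table entries. One way that utility could work, sketched under the assumptions of 64-byte cache lines and a pluggable flush callback (the real class presumably issues clflush/clwb directly; the class shape here is illustrative only):

```cpp
#include <cstdint>
#include <functional>

// Hypothetical sketch: remember the cache line holding the last dirtied
// entry and only flush when a mutation lands on a different line, so a
// run of adjacent page table entry writes costs one flush, not one each.
class CacheLineFlusher {
 public:
  explicit CacheLineFlusher(std::function<void(uint64_t)> flush_line)
      : flush_line_(std::move(flush_line)) {}
  ~CacheLineFlusher() { ForceFlush(); }

  // Record that the entry at |addr| was mutated; defer the actual flush.
  void FlushPtEntry(uint64_t addr) {
    uint64_t line = addr & ~kCacheLineMask;
    if (line != dirty_line_) {
      ForceFlush();
      dirty_line_ = line;
    }
  }

  // Flush any pending line (also run on destruction).
  void ForceFlush() {
    if (dirty_line_ != kEmpty) {
      flush_line_(dirty_line_);
      dirty_line_ = kEmpty;
    }
  }

 private:
  static constexpr uint64_t kCacheLineMask = 63;  // assume 64-byte lines
  static constexpr uint64_t kEmpty = UINT64_MAX;
  uint64_t dirty_line_ = kEmpty;
  std::function<void(uint64_t)> flush_line_;
};
```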
166 // Utility for managing consistency of the page tables from a cache and TLB
168 // refer to it, and that changes to the page tables have appropriate visibility
170 // class, even if the page table change failed.
178 void queue_free(vm_page_t* page) TA_NO_THREAD_SAFETY_ANALYSIS {
181 list_add_tail(&to_free_, &page->queue_node);
213 // support deferring invoking pmm_free() until after we've left the page
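Lines 166-213 describe a consistency manager that queues pages for freeing so pmm_free() runs only after the TLB invalidation. A simplified sketch of that deferral pattern, with `vm_page_t` reduced to a stand-in type and a std::vector in place of the kernel's intrusive list (all names besides `queue_free` are assumptions):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

struct vm_page_t { int id; };  // stand-in for the kernel page struct

// Hypothetical sketch: pages that backed page tables stay queued until
// Finish(), because another CPU may still hold a translation that walks
// through them until the shootdown completes.
class ConsistencyManager {
 public:
  // Defer freeing |page| until after the TLB invalidation.
  void queue_free(vm_page_t* page) { to_free_.push_back(page); }

  size_t pending() const { return to_free_.size(); }

  // Perform the TLB invalidation (elided here), then hand back the queued
  // pages so the caller can release them to the PMM.
  std::vector<vm_page_t*> Finish() {
    // ... TLB shootdown would happen here, before any page is reused ...
    return std::exchange(to_free_, {});
  }

 private:
  std::vector<vm_page_t*> to_free_;
};
```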
237 * @brief Update the cursor to skip over a not-present page table entry.
242 // this page table level.
269 /* attempt to invalidate the page */
286 /* attempt to invalidate the page */
295 * @brief Allocate a new page table
315 * @brief Split the given large page into smaller pages
337 // If this is a PDP_L (i.e. huge page), flags will include the
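Line 315 names a routine that splits a large page into smaller pages, and line 337 notes the flags carried over from the huge-page entry need care. A sketch of the 2 MiB-to-4 KiB case under x86-64 bit assumptions (PS at bit 7, 52-bit frame mask); the constant names are illustrative:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

constexpr uint64_t X86_MMU_PG_PS = 1u << 7;             // large-page bit
constexpr uint64_t kAddrMask = 0x000ffffffffff000ull;   // frame bits

// Hypothetical sketch: split a 2 MiB PD entry into 512 4 KiB PT entries.
// Each new entry keeps the original flags but drops PS (at PT level bit 7
// means PAT, not "large") and maps a consecutive 4 KiB slice of the range.
std::array<uint64_t, 512> split_large_page(uint64_t pd_entry) {
  uint64_t base = pd_entry & kAddrMask;
  uint64_t flags = pd_entry & ~kAddrMask & ~X86_MMU_PG_PS;
  std::array<uint64_t, 512> pt{};
  for (size_t i = 0; i < pt.size(); i++) {
    pt[i] = (base + i * 4096) | flags;
  }
  return pt;
}
```

Splitting a 1 GiB PDP entry would follow the same shape, except the new PD entries must *keep* the PS bit, since they still map large (2 MiB) pages.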
353 * @brief Given a page table entry, return a pointer to the next page table one level down
363 * @brief Walk the page table structures returning the entry and level that maps the address.
393 /* if this is a large page, stop here */
407 /* do the final page table lookup */
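Lines 363-407 outline the lookup: walk down the levels, stop early at a large page, and do the final PT lookup otherwise. A sketch of that walk with a toy in-memory table layout (the `Table` struct, bit positions, and signature are stand-ins; on x86-64 bit 0 is present and bit 7 marks a large page at the PD/PDP levels):

```cpp
#include <cstdint>

constexpr uint64_t PG_P = 1u << 0;   // present
constexpr uint64_t PG_PS = 1u << 7;  // large page

// Toy stand-in for real page table memory: parallel entry/child arrays.
struct Table { uint64_t entry[512]; Table* next[512]; };

// Hypothetical sketch: return the entry mapping |vaddr| and write the
// level it was found at, or 0 if a not-present entry terminates the walk.
uint64_t get_mapping(Table* top, int top_level, uint64_t vaddr, int* out_level) {
  Table* table = top;
  for (int level = top_level; level >= 0; level--) {
    int index = (vaddr >> (12 + 9 * level)) & 0x1ff;
    uint64_t e = table->entry[index];
    if (!(e & PG_P)) return 0;         // hole: nothing maps this address
    if (level == 0 || (e & PG_PS)) {   // large page or final PT lookup
      *out_level = level;
      return e;
    }
    table = table->next[index];        // descend one level
  }
  return 0;  // unreachable for top_level >= 0
}
```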
430 * @return true if at least one page was unmapped at this level
452 // If the page isn't even mapped, just skip it
461 // If the request covers the entire large page, just unmap it
476 // subsequent page fault clean it up.
491 // If we were requesting to unmap everything in the lower page table,
492 // we know we can unmap the lower level page table. Otherwise, if
514 vm_page_t* page = paddr_to_vm_page(ptable_phys);
516 DEBUG_ASSERT(page);
517 DEBUG_ASSERT_MSG(page->state == VM_PAGE_STATE_MMU,
518 "page %p state %u, paddr %#" PRIxPTR "\n", page, page->state,
520 DEBUG_ASSERT(!list_in_list(&page->queue_node));
522 cm->queue_free(page);
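Lines 491-522 cover the cleanup after unmapping through a lower-level table: if the lower table ends up empty, the parent entry is cleared and the table's backing page goes to the consistency manager instead of being freed in place. A simplified sketch of that decision (types and the helper name are stand-ins; the cache flush and TLB invalidation on the parent entry are elided):

```cpp
#include <cstddef>
#include <cstdint>

struct vm_page_t { bool queued = false; };  // stand-in page struct

struct FakeCm {  // stand-in for the ConsistencyManager
  size_t queued = 0;
  void queue_free(vm_page_t* page) { page->queued = true; queued++; }
};

// Hypothetical sketch: tear down |child| only when all 512 entries are
// clear. Unlink it from the parent first, then queue (not free) the
// backing page, since walkers may still reference it until the shootdown.
bool maybe_free_child_table(uint64_t* parent_entry, const uint64_t* child,
                            vm_page_t* child_page, FakeCm* cm) {
  for (size_t i = 0; i < 512; i++) {
    if (child[i] != 0) return false;  // still maps something; keep it
  }
  *parent_entry = 0;  // flush/invalidate of this entry elided
  cm->queue_free(child_page);
  return true;
}
```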
534 // Base case of RemoveMapping for smallest page size.
572 * @return ZX_ERR_NO_MEMORY if intermediate page tables could not be allocated
613 // See if there's a large page in our way
618 // Check if this is a candidate for a new large page
660 // Base case of AddMapping for smallest page size.
731 // If the request covers the entire large page, just change the
747 // page faults will bring it back in.
775 // Base case of UpdateMapping for smallest page size.
862 // the algorithm (e.g. make the cursors aware of the page array).
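Lines 534, 660, and 775 each mark a "base case for smallest page size": at PT level there is nothing left to recurse into, so the loop just visits each 4 KiB entry in the cursor's range. A sketch of what the UpdateMapping base case might reduce to (signature, flag values, and return convention are assumptions):

```cpp
#include <cstddef>
#include <cstdint>

constexpr uint64_t kAddrMask = 0x000ffffffffff000ull;  // frame bits

// Hypothetical sketch: rewrite the flag bits of every present PT entry
// covering [vaddr, vaddr + size), keeping the mapped physical address.
// Returns how many entries were changed; TLB/cache maintenance elided.
size_t update_mapping_l0(uint64_t* pt, uint64_t vaddr, uint64_t size,
                         uint64_t new_flags) {
  size_t changed = 0;
  for (uint64_t va = vaddr; va < vaddr + size; va += 4096) {
    size_t index = (va >> 12) & 0x1ff;
    uint64_t e = pt[index];
    if (!(e & 1)) continue;  // not present: nothing to update
    pt[index] = (e & kAddrMask) | new_flags;
    changed++;
  }
  return changed;
}
```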
994 /* based on the return level, parse the page table entry */
997 case PDP_L: /* 1GB page */
1001 case PD_L: /* 2MB page */
1005 case PT_L: /* 4K page */
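The switch at lines 994-1005 parses the returned entry by the level the walk stopped at. A sketch of the typical consumer of that switch, computing the physical address as the entry's frame plus the virtual offset within the page size for that level (x86-64 sizes assumed; the function name is illustrative):

```cpp
#include <cstdint>

enum PageTableLevel { PT_L = 0, PD_L = 1, PDP_L = 2 };
constexpr uint64_t kAddrMask = 0x000ffffffffff000ull;  // frame bits

// Hypothetical sketch: combine the entry's frame with the in-page offset
// of |vaddr|, using the page size implied by the level the walk returned.
uint64_t paddr_from_entry(uint64_t entry, PageTableLevel level, uint64_t vaddr) {
  uint64_t size = 0;
  switch (level) {
    case PDP_L: size = 1ull << 30; break;  // 1GB page
    case PD_L:  size = 1ull << 21; break;  // 2MB page
    case PT_L:  size = 1ull << 12; break;  // 4K page
  }
  return (entry & kAddrMask & ~(size - 1)) | (vaddr & (size - 1));
}
```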