Lines Matching refs:page

215  *	Array of physical page attributes for managed pages.
216 * One byte per physical page.
289 * for 64-bit, causes the pdpt page containing the pde entry to be mapped,
290 * then returns the mapped address of the pde entry in that page.
307 * the single pml4 page per pmap is allocated at pmap create time and exists
308 * for the duration of the pmap. we allocate this page in kernel vm (to save us one
309 * level of page table dynamic mapping).
310 * this returns the address of the requested pml4 entry in the top level page.
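The comment above (lines 307-310) describes how the per-pmap pml4 page, kept resident in kernel VM, is indexed to find a top-level entry. A minimal sketch of that indexing under standard x86-64 4-level paging, where bits 47:39 of the virtual address select one of 512 pml4 entries; pt_entry_t, PML4_SHIFT, and pml4_entry() are illustrative names, not the source's:

#include <stdint.h>

typedef uint64_t pt_entry_t;

#define PML4_SHIFT	39
#define PML4_MASK	0x1ffULL		/* 512 entries per table */

/* Return the address of the pml4 entry covering vaddr, given the
 * kernel-virtual address of the pmap's pml4 page. */
static inline pt_entry_t *
pml4_entry(pt_entry_t *pml4_base, uint64_t vaddr)
{
	return &pml4_base[(vaddr >> PML4_SHIFT) & PML4_MASK];
}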
320 * maps in the pml4 page, if any, containing the pdpt entry requested
321 * and returns the address of the pdpt entry in that mapped page
365 * maps in the pdpt page, if any, containing the pde entry requested
366 * and returns the address of the pde entry in that mapped page
410 * Because the page tables (top 3 levels) are mapped into per cpu windows,
429 * maps the pde page, if any, containing the pte in and returns
430 * the address of the pte in that mapped page
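Lines 320-321, 365-366, and 429-430 describe the same pattern one level at a time: map in the table page referenced by the entry above, then return the address of the next-level entry within it. A simplified sketch of the full walk follows; the real code maps each intermediate page into a per-cpu window (see line 410), whereas this illustration assumes a direct map and uses made-up names (P_VALID, PG_FRAME, IDX, PHYSMAP_PTOV):

#include <stdint.h>
#include <stddef.h>

typedef uint64_t pt_entry_t;

#define P_VALID		0x1ULL			/* present bit */
#define PG_FRAME	0x000ffffffffff000ULL	/* physical frame bits */
#define IDX(va, lvl)	(((va) >> (12 + 9 * (lvl))) & 0x1ff)

/* Stand-in for the per-cpu mapping window: assume a 1:1 direct map
 * so a frame address can be used as a pointer in this sketch. */
#define PHYSMAP_PTOV(pa)	((pt_entry_t *)(uintptr_t)(pa))

static pt_entry_t *
walk_to_pte(pt_entry_t *pml4, uint64_t va)
{
	pt_entry_t *table = pml4;
	int level;

	/* Levels 3..1: pml4 entry -> pdpt page, pdpt entry -> pde page,
	 * pde entry -> pte page; stop if a level is not present. */
	for (level = 3; level >= 1; level--) {
		pt_entry_t e = table[IDX(va, level)];
		if (!(e & P_VALID))
			return NULL;
		table = PHYSMAP_PTOV(e & PG_FRAME);
	}
	return &table[IDX(va, 0)];		/* the pte slot itself */
}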
538 assert(0 == (va & PAGE_MASK)); /* expecting page aligned */
658 * Map the kernel's code and data, and allocate the system page table.
730 * Reserve some special page table entries/VA space for temporary
779 * Clone a new 64-bit 3rd-level page table directory, IdlePML4,
780 * with page bits set for the correct IA-32e operation and so that
782 * This is necessary due to the incompatible use of page bits between
1005 * 3) read- and write-protect page zero (for K32)
1006 * 4) map the global page at the appropriate virtual address.
1066 * There's also a size miscalculation here: pend is one page less
1131 * Release zero-filled page padding used for 2M-alignment.
1156 pde = *pdep & PTMASK; /* page attributes from pde */
1158 pde |= pte_phys; /* take page frame from pte */
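Lines 1156-1158 build a new directory entry by keeping the attribute bits of the existing pde and taking the physical frame from the pte. A generic sketch of that combination, using illustrative masks (ATTR_MASK/FRAME_MASK; the source's PTMASK may differ):

#include <stdint.h>

typedef uint64_t pt_entry_t;

#define ATTR_MASK	0x0000000000000fffULL	/* low attribute/flag bits */
#define FRAME_MASK	0x000ffffffffff000ULL	/* physical frame bits */

static inline pt_entry_t
compose_pde(pt_entry_t old_pde, pt_entry_t pte)
{
	pt_entry_t new_pde;

	new_pde = old_pde & ATTR_MASK;		/* page attributes from pde */
	new_pde |= pte & FRAME_MASK;		/* page frame from pte */
	return new_pde;
}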
1170 * the page. Instead we need to compute its address
1184 /* no matter what, kernel page zero is not accessible */
1187 /* map lowmem global page into fixed addr */
1192 /* make sure it is defined on a page boundary */
1247 * Check the resident page count
1252 * .. the debug kernel ought to be checking perhaps by page table walk.
1277 "page %d at 0x%llx\n",
1383 /* alloc the pml4 page in kernel vm */
1430 * users with a 4GB page zero.
1444 * loads cr3 with the kernel's page table. In addition to being called
1758 uint32_t page;
1760 for (page = 0; page < size; page++) {
1770 * Extract the physical page address associated
1815 * Allocate a VM page for the pml4 page
1821 * put the page into the pmap's obj list so it
1829 * Zero the page.
1870 * Set the page directory entry for this page table.
1905 * Allocate a VM page for the pdpt page
1911 * put the page into the pmap's obj list so it
1919 * Zero the page.
1960 * Set the page directory entry for this page table.
1989 * (We won't loop forever, since page tables aren't shrunk.)
2020 * Allocate a VM page for the pde entries.
2026 * put the page into the pmap's obj list so it
2034 * Zero the page.
2082 * Set the page directory entry for this page table.
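The matches from lines 1815 through 2082 all follow one expansion pattern, repeated per table level: allocate a VM page, put it on the pmap's object list, zero it, and point the directory entry one level up at its physical address. A schematic sketch of that pattern; alloc_table_page(), page_to_phys(), zero_phys_page(), and pmap_obj_insert() are hypothetical stand-ins for the VM and pmap primitives, not the kernel's names:

#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

typedef uint64_t pt_entry_t;
struct vm_page;					/* opaque in this sketch */

#define P_VALID		0x1ULL
#define P_WRITE		0x2ULL

/* hypothetical helpers standing in for the real VM/pmap primitives */
extern struct vm_page	*alloc_table_page(void);
extern uint64_t		page_to_phys(struct vm_page *m);
extern void		zero_phys_page(uint64_t pa);
extern void		pmap_obj_insert(void *pmap_obj, struct vm_page *m);

static bool
expand_one_level(void *pmap_obj, pt_entry_t *dir_entry)
{
	struct vm_page	*m;
	uint64_t	pa;

	m = alloc_table_page();			/* allocate a VM page */
	if (m == NULL)
		return false;

	pmap_obj_insert(pmap_obj, m);		/* keep it on the pmap's obj
						 * list so it can be found
						 * and freed later */
	pa = page_to_phys(m);
	zero_phys_page(pa);			/* zero the new table page */

	*dir_entry = pa | P_VALID | P_WRITE;	/* set the directory entry */
	return true;
}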
2098 * Invalidates all of the instruction cache on a physical page and
2099 * pushes any dirty data from the data cache for the same physical page
2111 * Write back and invalidate all cachelines on a physical page.
2170 * If the pte page has any wired mappings, we cannot
2185 * Remove the virtual addresses mapped by this pte page.
2193 * Invalidate the page directory pointer.
2200 * And free the pte page itself.
2209 panic("pmap_collect: pte page not in object");
2248 * A page which is not pageable may not take
2249 * a fault; therefore, its page table entry
2481 /* fold in cache attributes for this physical page */
2575 /* TLB flush for this page for this cpu */