
Lines Matching refs:page in /freebsd-13-stable/sys/powerpc/booke/

47   *   0xc100_0000 - 0xc100_3fff : reserved for page zero/copy
49 * 0xc200_4000 - 0xc200_8fff : guard page + kstack0
63 * kernel_pdir - kernel_pp2d-1 : kernel page directory
64 * kernel_pp2d - . : kernel pointers to page directory
65 * pmap_zero_copy_min - crashdumpmap-1 : reserved for page zero/copy
67 * ptbl_buf_pool_vabase - virtual_avail-1 : user page directories and page tables
451 * Assume the page is cache inhibited and access is guarded unless
688 * Get a rough idea (upper bound) on the size of the page array. The
729 /* Allocate KVA space for page zero/copy operations. */
740 /* Initialize page zero/copy mutexes. */
760 * align all regions. Non-page aligned memory isn't very interesting
789 /* Now page align the start and size of the region. */
920 /* Enter kstack0 into kernel map, provide guard page */
1002 * Get the physical page address for the given pmap/virtual address.
1017 * Extract the physical page address associated with the given
1078 /* Create a UMA zone for page table roots. */
1088 * intended for temporary mappings which do not need page modification or
1105 * Remove page mappings from kernel virtual address space. Intended for
1121 * Map a wired page into kernel virtual address space.
1171 * Remove a page from kernel page table.
1242 * Insert the given physical page at the specified virtual address in the
1243 * target physical map with the protection requested. If specified the page
1433 * The sequence begins with the given page m_start. This page is
1434 * mapped at the given virtual address start. Each subsequent page is
1436 * amount as the page is offset from m_start within the object. The
1437 * last page in the sequence is the page with the largest offset from
1439 * virtual address end. Not every virtual page between start and end
1440 * is mapped; only those for which a resident page exists with the
1483 * It is assumed that the start and end are properly rounded to the page size.
1530 * Remove physical page from all pmaps in which it resides.
1690 * Clear the write and modified bits in each of the given page's mappings.
1699 ("mmu_booke_remove_write: page %p is not managed", m));
1732 * Atomically extract and hold the physical page with the given
1775 * Return whether or not the specified physical page was modified
1786 ("mmu_booke_is_modified: page %p is not managed", m));
1790 * If the page is not busied then this check is racy.
1823 * Return whether or not the specified physical page was referenced
1834 ("mmu_booke_is_referenced: page %p is not managed", m));
1853 * Clear the modify bits on the specified physical page.
1862 ("mmu_booke_clear_modify: page %p is not managed", m));
1891 * Return a count of reference bits for a page, clearing those bits.
1896 * As an optimization, update the page's dirty field if a modified bit is
1912 ("mmu_booke_ts_referenced: page %p is not managed", m));
1949 * The wired attribute of the page table entry is not a hardware feature, so
1975 * page. This count may be changed upwards or downwards in the future; it is
1977 * page aging.
1987 ("mmu_booke_page_exists_quick: page %p is not managed", m));
2004 * Return the number of managed mappings to the given physical page that are
2060 /* We always map a 256MB page at 256M. */
2161 /* Find last page in chunk. */
2917 vm_page_t page;
2922 page = PHYS_TO_VM_PAGE(pa);
2927 TAILQ_FOREACH(pve, &page->md.pv_list, pv_link) {
2932 page->md.pv_tracked = true;
2933 pv_insert(pmap, va, page);
2981 * so it can function as an i/o page
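
The code fragments at source lines 2917-2933 come from the helper that records a kernel mapping on the page's PV list (pmap_track_page() in recent FreeBSD trees). Below is a minimal sketch of how the quoted pieces fit together; the pmap_kextract() lookup, the pvh_global_lock/PMAP_LOCK locking, and the duplicate check are assumptions not shown in the matches above.

    /*
     * Sketch only -- quoted identifiers (PHYS_TO_VM_PAGE, pv_list, pv_link,
     * pv_tracked, pv_insert) appear in the matches above; everything else
     * is assumed from the usual FreeBSD pmap conventions.
     */
    void
    pmap_track_page(pmap_t pmap, vm_offset_t va)
    {
            vm_paddr_t pa;
            vm_page_t page;
            pv_entry_t pve;

            va = trunc_page(va);
            pa = pmap_kextract(va);         /* assumed: VA -> PA lookup */
            page = PHYS_TO_VM_PAGE(pa);     /* line 2922 */

            rw_wlock(&pvh_global_lock);     /* assumed locking discipline */
            PMAP_LOCK(pmap);

            /* Skip if this (pmap, va) pair is already on the page's PV list. */
            TAILQ_FOREACH(pve, &page->md.pv_list, pv_link) {    /* line 2927 */
                    if (pve->pv_pmap == pmap && pve->pv_va == va)
                            goto out;
            }
            page->md.pv_tracked = true;     /* line 2932 */
            pv_insert(pmap, va, page);      /* line 2933 */
    out:
            PMAP_UNLOCK(pmap);
            rw_wunlock(&pvh_global_lock);
    }

The duplicate check inside the PV-list walk makes re-tracking the same (pmap, va) pair a no-op, so callers need not know whether the mapping was already recorded.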