Lines Matching refs:page

169  *  Level 2 page tables map definition ('max' is excluded).
180 * Promotion to a 1MB (PTE1) page mapping requires that the corresponding
181 * 4KB (PTE2) page mappings have identical settings for the following fields:
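A minimal sketch of the attribute requirement above, assuming a simplified PTE2 layout in which everything outside the 4KB frame address counts as an attribute field (the kernel's real PTE2_* masks are more fine-grained):

#include <stdbool.h>
#include <stdint.h>

#define PTE2_FRAME  0xfffff000u     /* 4KB frame address bits (assumed) */
#define PTE2_ATTRS  (~PTE2_FRAME)   /* permissions, memory type, SW bits */

/* Two PTE2s are promotion-compatible only if every field outside the
 * frame address is identical. */
static bool
pte2_attrs_match(uint32_t a, uint32_t b)
{
    return ((a & PTE2_ATTRS) == (b & PTE2_ATTRS));
}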
235 * The boot_pt1 is used temporarily in the very early boot stage as the L1 page table.
265 vm_offset_t virtual_avail; /* VA of first avail page (after kernel bss) */
266 vm_offset_t virtual_end; /* VA of last avail page (end of kernel AS) */
553 * KERNBASE is mapped by first L2 page table in L2 page table page. It
571 * Check L1 and L2 page table entries definitions consistency.
576 * Check L2 page tables page consistency.
590 * All level 2 page tables (PT2s) are mapped contiguously and accordingly
592 * be done only if PAGE_SIZE is a multiple of PT2 size. All PT2s in one page
593 * must be used together, but not necessarily at once. The first PT2 in a page
617 * Get offset of PT2 in a page
629 * associated with given PT2s page and PT1 index.
640 * associated with given PT2s page and PT1 index.
650 * Get virtual address of PT2s page (mapped in PT2MAP)
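The arithmetic behind these helpers can be sketched as follows, assuming the usual ARMv6/v7 short-descriptor geometry (256 four-byte entries per L2 table, so four 1KB PT2s share one 4KB page); the names are illustrative, not the kernel's:

#include <stdint.h>

#define PAGE_SIZE     4096u
#define NPTE2_IN_PT2  256u
#define PT2_SIZE      (NPTE2_IN_PT2 * sizeof(uint32_t))   /* 1KB */
#define NPT2_IN_PG    (PAGE_SIZE / PT2_SIZE)              /* 4 */

/* Byte offset, within its page, of the PT2 selected by L1 index pte1_idx. */
static inline uint32_t
pt2_page_offset(uint32_t pte1_idx)
{
    return ((pte1_idx % NPT2_IN_PG) * (uint32_t)PT2_SIZE);
}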
705 * (1) functions, used strictly only in this stage, for physical page allocations,
731 * Pre-bootstrap epoch page allocator.
751 * as L1 page table.
758 * 1. The 'boot_pt1' is replaced by real kernel L1 page table 'kern_pt1'.
760 * 3. Basic preboot functions for page allocations and mappings can be used.
764 * 1. To use the second TTB register, so kernel and user page tables will be
795 * Allocate and zero page(s) for kernel L1 page table.
805 /* Allocate and zero page(s) for kernel PT2TAB. */
811 /* Allocate and zero page(s) for kernel L2 page tables. */
820 * preallocated pages for kernel L2 page tables so that vm_page
828 * Insert allocated L2 page table pages to PT2TAB and make
829 * link to all PT2s in L1 page table. See how kernel_vm_end
833 * L2 page table, even kernel image mapped by sections.
850 * Get free and aligned space for PT2MAP and make L1 page table links
851 * to L2 page tables held in PT2TAB.
854 * descriptors and the PT2TAB page(s) themselves are used as PT2s. Thus
855 * each entry in PT2TAB maps all PT2s in a page. This implies that
874 * Choose correct L2 page table and make mappings for allocations
888 /* Make mapping for kernel L1 page table. */
907 * Setup L2 page table page for given KVA.
910 * Note that we have allocated NKPT2PG pages for L2 page tables in advance
912 * enough. Vectors and devices need L2 page tables too. Note that they are
924 /* Just return if the PT2s page already exists. */
933 * Allocate page for PT2s and insert it to PT2TAB.
939 /* Zero all PT2s in allocated page. */
947 * Setup L2 page table for given KVA.
956 /* Setup PT2's page. */
966 * Get L2 page entry associated with given KVA.
983 * Pre-bootstrap epoch page(s) mapping(s).
1025 * Pre-bootstrap epoch page(s) allocation and mapping(s).
1033 /* Allocate physical page(s). */
1048 * Pre-bootstrap epoch page mapping(s) with attributes.
1085 * Extract from the kernel page table the physical address
1102 * page is preserved by promotion in PT2TAB. So even if
1119 * Extract from the kernel page table the physical address
1121 * return the L2 page table entry which maps the address.
1161 * NOTE: This is not an SMP coherent stage, and physical page allocation is not
1203 * Reserve some special page table entries/VA space for temporary
1244 * initialize phys_avail[] array and no further page allocation
1286 * Add a wired page to the kva.
1322 * Remove a page from the kernel pagetables.
1564 &sp_enabled, 0, "Are large page mappings enabled?");
1574 "1MB page mapping counters");
1578 &pmap_pte1_demotions, 0, "1MB page demotions");
1582 &pmap_pte1_mappings, 0, "1MB page mappings");
1586 &pmap_pte1_p_failures, 0, "1MB page promotion failures");
1590 &pmap_pte1_promotions, 0, "1MB page promotions");
1594 &pmap_pte1_kern_demotions, 0, "1MB page kernel demotions");
1598 &pmap_pte1_kern_promotions, 0, "1MB page kernel promotions");
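For reference, a userland program could read one of these counters with sysctlbyname(3); the OID name below is guessed from the descriptions above and should be verified against the running kernel:

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

int
main(void)
{
    u_long promotions;
    size_t len = sizeof(promotions);

    /* "vm.pmap.pte1.promotions" is an assumed OID name. */
    if (sysctlbyname("vm.pmap.pte1.promotions", &promotions, &len,
        NULL, 0) == 0)
        printf("1MB page promotions: %lu\n", promotions);
    return (0);
}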
1611 * 1. Pages for L2 page tables are never managed. So, pv_list and
1613 * initialization on a page alloc for page tables and reinitialization
1614 * on the page free must be ensured.
1626 * Virtualization for a faster way to zero a whole page.
1629 pagezero(void *page)
1632 bzero(page, PAGE_SIZE);
1636 * Zero L2 page table page.
1649 * XXX: For now, we map whole page even if it's already zero,
1679 * Init a just-allocated page as an L2 page table(s) holder
1688 /* Check page attributes. */
1692 /* Zero page and init wire counts. */
1697 * Map page to PT2MAP address space for given pmap.
1725 * Initialize the vm page array entries for kernel pmap's
1726 * L2 page table pages allocated in advance.
1741 ("%s: L2 page table page is out of range", __func__));
1760 * Are large page mappings enabled?
1798 * page modification or references recorded.
1800 * over. The page *must* be wired.
1831 * This routine tears out page mappings from the
1876 /* Note that L2 page table size is not equal to PAGE_SIZE. */
1927 /* Note that L2 page table size is not equal to PAGE_SIZE. */
1953 * Extract the physical page address associated
1980 * Atomically extract and hold the physical page
2019 * Grow the number of kernel L2 page table entries, if needed.
2032 * L2 page table is either not allocated or linked from L1 page table
2070 * Install new PT2s page into kernel PT2TAB.
2079 * QQQ: To link all new L2 page tables from L1 page
2141 * Kernel page table directory and pmap stuff around is already
2145 * Since the L1 page table and PT2TAB are shared with the kernel pmap,
2196 * No need to allocate L2 page table space yet but we do need
2197 * a valid L1 page table and PT2TAB table.
2223 * QQQ: (1) PT2TAB must be contiguous. If PT2TAB is one page
2226 * the stuff needed as other L2 page table pages.
2227 * (2) Note that a process PT2TAB is special L2 page table
2228 * page. Its mapping in kernel_arena is permanent and can
2245 * QQQ: Each L2 page table page vm_page_t has pindex set to
2246 * pte1 index of virtual address mapped by this page.
2358 * Virtual interface for L2 page table wire counting.
2360 * Each L2 page table in a page has its own counter, which counts the number of
2361 * valid mappings in the table. The global page counter counts mappings in all
2362 * tables in the page plus the page's own single mapping in PT2TAB.
2364 * During a promotion we leave the associated L2 page table counter
2365 * untouched, so the table (strictly speaking, the page which holds it)
2368 * If a page m->ref_count == 1 then no valid mappings exist in any L2 page
2369 * table in the page and the page itself is only mapped in PT2TAB.
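The counting rule can be restated as a small invariant; this sketch uses made-up structure and field names rather than the pmap's actual data layout:

#include <stdbool.h>
#include <stdint.h>

#define NPT2_IN_PG  4u  /* assumed: four 1KB L2 tables per 4KB page */

struct pt2_page {
    uint32_t ref_count;                  /* global page counter */
    uint16_t pt2_wirecount[NPT2_IN_PG];  /* valid mappings per PT2 */
};

/* ref_count == 1 means no PT2 in the page holds a valid mapping and the
 * page's only remaining reference is its own mapping in PT2TAB. */
static bool
pt2_page_is_unused(const struct pt2_page *p)
{
    return (p->ref_count == 1);
}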
2378 * Note: A page m is allocated with VM_ALLOC_WIRED flag and
2394 * All L2 page tables in a page always belong to the same
2395 * pmap, so we allow only one extra reference for the page.
2467 * This routine is called if the L2 page table
2489 * Install new PT2s page into pmap PT2TAB.
2504 * the L2 page table page may have been allocated.
2535 * This supports switching from a 1MB page to a
2536 * normal 4K page.
2550 * If the L2 page table page is mapped, we just increment the
2570 * Schedule the specified unused L2 page table page to be freed. Specifically,
2571 * add the page to the specified list of pages that will be released to the
2579 * Put page on a list so that it is released after
2590 * Unwire L2 page tables page.
2603 * Unmap all L2 page tables in the page from L1 page table.
2605 * QQQ: Individual L2 page tables (except the last one) can be unmapped
2631 * Unmap the page from PT2TAB.
2642 * the L2 page table page is globally performed before TLB shoot-
2650 * Decrements an L2 page table page's wire count, which is used to record the
2651 * number of valid page table entries within the page. If the wire count
2652 * drops to zero, then the page table page is unmapped. Returns TRUE if the
2653 * page table page was unmapped and FALSE otherwise.
2661 * QQQ: Wire count is zero, so whole page should be zero and
2674 * Drop an L2 page table page's wire count at once, which is used to record
2675 * the number of valid L2 page table entries within the page. If the wire
2676 * count drops to zero, then the L2 page table page is unmapped.
2685 ("%s: PT2 page's pindex is wrong", __func__));
2691 * It's possible that the L2 page table was never used.
2698 * QQQ: We clear L2 page table now, so when L2 page table page
2720 * After removing an L2 page table entry, this routine is used to
2721 * conditionally free the page, and manage the hold/wire counts.
2778 0, "Number of times tried to get a chunk page but failed.");
2792 * Is the given page managed?
2865 * Destroy every non-wired, 4 KB page mapping in the chunk.
2911 /* Every freed mapping is for a 4 KB page. */
2951 /* Recycle a freed page table page. */
3092 * Create a pv entry for page at pa for
3160 * page's pv list.
3173 ("pmap_pv_demote_pte1: page %p is not managed", m));
3193 * Transfer the first page's pv entry for this mapping to the
3424 * Tries to promote the NPTE2_IN_PT2, contiguous 4KB page mappings that are
3425 * within a single page table page (PT2) to a single 1MB page mapping.
3426 * For promotion to occur, two conditions must be met: (1) the 4KB page
3427 * mappings must map aligned, contiguous physical memory and (2) the 4KB page
3451 * either invalid, unused, or does not map the first 4KB physical page
3452 * within a 1MB page.
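Condition (1) amounts to checking that the 256 frames are contiguous and that the first one sits on a 1MB boundary; a rough sketch under the same simplified PTE2 layout assumed earlier:

#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE     4096u
#define NPTE2_IN_PT2  256u
#define PTE2_FRAME    0xfffff000u   /* assumed frame mask, as above */
#define PTE1_OFFSET   0x000fffffu   /* low 20 bits: offset within 1MB */

static bool
pte2_frames_promotable(const uint32_t pte2[NPTE2_IN_PT2])
{
    uint32_t base = pte2[0] & PTE2_FRAME;

    if ((base & PTE1_OFFSET) != 0)  /* first frame not 1MB aligned */
        return (false);
    for (uint32_t i = 1; i < NPTE2_IN_PT2; i++)
        if ((pte2[i] & PTE2_FRAME) != base + i * PAGE_SIZE)
            return (false);
    return (true);
}

Condition (2), identical attribute fields, corresponds to the attribute check sketched earlier.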
3471 * When page is not modified, PTE2_RO can be set without
3480 * PTE2 maps an unexpected 4KB physical page or does not have identical
3495 * When page is not modified, PTE2_RO can be set
3515 * The page table page in its current state will stay in PT2TAB
3519 * Note that L2 page table size is not equal to PAGE_SIZE.
3523 ("%s: PT2 page is out of range", __func__));
3525 ("%s: PT2 page's pindex is wrong", __func__));
3553 * Zero L2 page table page.
3566 * Removes a 1MB page mapping from the kernel pmap.
3584 panic("%s: missing pt2 page", __func__);
3589 * Initialize the L2 page table.
3602 * as we did not change it. I.e. the L2 page table page
3654 * L2 page table(s) can't be removed from kernel map as
3660 * Get associated L2 page table page.
3661 * It's possible that the page was never allocated.
3670 * Fills L2 page table page with mappings to consecutive physical pages.
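A sketch of that fill operation under the simplified layout used above; permission and memory-type bits are lumped into 'attrs' here:

#include <stdint.h>

#define PAGE_SIZE     4096u
#define NPTE2_IN_PT2  256u

/* Fill one L2 table so that, entry by entry, it maps the same 1MB of
 * consecutive physical memory that a single PTE1 mapped. */
static void
fill_pt2(uint32_t fpte2[NPTE2_IN_PT2], uint32_t base_pa, uint32_t attrs)
{
    for (uint32_t i = 0; i < NPTE2_IN_PT2; i++)
        fpte2[i] = (base_pa + i * PAGE_SIZE) | attrs;
}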
3684 * Tries to demote a 1MB page mapping. If demotion fails, the
3685 * 1MB page mapping is invalidated.
3707 ("%s: PT2 page for a wired mapping is missing", __func__));
3710 * Invalidate the 1MB page mapping and return
3712 * allocation of the new page table page fails.
3730 * We init all L2 page tables in the page even if
3731 * we are going to change everything for one L2 page
3752 * the page table page (promoted L2 page tables are not unmapped).
3753 * Otherwise, temporarily map the L2 page table page (m) into
3756 * Note that L2 page table size is not equal to PAGE_SIZE.
3800 * If the L2 page table page is new, initialize it. If the mapping
3801 * has changed attributes, update the page table entries.
3829 * page pv entry might trigger the execution of pmap_pv_reclaim(),
3830 * which might reclaim a newly (re)created per-page pv entry
3833 * the 1mpage to referencing the page table page.
3848 * Insert the given physical page (p) at
3852 * If specified, the page will be wired down, meaning
3857 * insert this page into the given map NOW.
3874 ("%s: invalid to pmap_enter page table pages (va: 0x%x)", __func__,
3914 * In the case that a page table page is not
3929 panic("%s: attempted on 1MB page", __func__);
3932 panic("%s: invalid L1 page table entry va=%#x", __func__, va);
3944 * are valid mappings in them. Hence, if a user page is wired,
3945 * the PT2 page will be also.
4044 * (2) Now, we do it on a page basis.
4086 * If both the L2 page table page and the reservation are fully
4104 * Do the things to unmap a page in a process.
4138 * Remove a single page from a process address space.
4159 * rounded to the page size.
4182 * Special handling of removing one page. A very common
4195 * Calculate address for next L2 page table.
4207 * Weed out invalid mappings. Note: we assume that the L1 page
4215 * Are we removing the entire large page? If not,
4222 /* The large page mapping was destroyed. */
4238 * by the current L2 page table page, or to the end of the
4263 * Removes this physical page from
4285 ("%s: page %p is not managed", __func__, m));
4307 "a 1mpage in page %p's pv list", __func__, m));
4526 * 4. No L2 page table pages.
4545 * In the case that an L2 page table page is not
4554 * Get L1 page table things.
4562 * Each of NPT2_IN_PG L2 page tables on the page can
4563 * come here. Make sure that associated L1 page table
4567 * L2 page tables for newly allocated L2 page
4568 * tables page.
4580 * If the L2 page table page is mapped, we just
4601 * entering the page into the current pmap. In order to support
4673 * Tries to create a read- and/or execute-only 1 MB page mapping. Returns
4698 * Tries to create the specified 1 MB page mapping. Returns KERN_SUCCESS if
4733 * reserved PT page could be freed.
4795 * The sequence begins with the given page m_start. This page is
4796 * mapped at the given virtual address start. Each subsequent page is
4798 * amount as the page is offset from m_start within the object. The
4799 * last page in the sequence is the page with the largest offset from
4801 * virtual address end. Not every virtual page between start and end
4802 * is mapped; only those for which a resident page exists with the
4860 ("%s: invalid page %p", __func__, p));
4864 * Abort the mapping if the first page is not physically
4865 * aligned to a 1MB page boundary.
4872 * Skip the first page. Abort the mapping if the rest of
4880 ("%s: invalid page %p", __func__, p));
4985 * Calculate address for next L2 page table.
4995 * Weed out invalid mappings. Note: we assume that the L1 page
4996 * table is always allocated, and in kernel virtual.
5003 * Are we protecting the entire large page? If not,
5020 * The large page mapping
5039 * by the current L2 page table page, or to the end of the
5122 * Return the number of managed mappings to the given physical page
5145 * physical memory. Otherwise, returns FALSE. Both page and 1mpage
5183 * Return whether or not the specified physical page was modified
5192 ("%s: page %p is not managed", __func__, m));
5195 * If the page is not busied then this check is racy.
5233 * otherwise. Both page and 1mpage mappings are supported.
5269 * Return whether or not the specified physical page was referenced
5278 ("%s: page %p is not managed", __func__, m));
5290 * Return a count of reference bits for a page, clearing those bits.
5295 * As an optimization, update the page's dirty field if a modified bit is
5315 ("%s: page %p is not managed", __func__, m));
5331 * Although "opte1" is mapping a 1MB page, because
5332 * this function is called at a 4KB page granularity,
5333 * we only update the 4KB page under test.
5341 * Apply a simple "hash" function on the physical page
5343 * address to select one 4KB page out of the 256
5346 * to avoid the selection of the same 4KB page
5347 * for every 1MB page mapping.
5351 * subsequent page fault on a demoted wired mapping,
5354 * its reference bit won't affect page replacement.
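One shape such a hash could take, picking a deterministic 4KB sub-page of the 1MB mapping from the physical address (illustrative only; the exact inputs the kernel hashes may differ):

#include <stdint.h>

#define PAGE_SHIFT    12
#define PTE1_SHIFT    20      /* 1MB */
#define NPTE2_IN_PT2  256u

/* Map (va of the 1MB mapping, pa under test) to the va of one 4KB page
 * inside that 1MB mapping. */
static inline uint32_t
pte1_ref_test_va(uint32_t pte1_va, uint32_t pa)
{
    uint32_t idx = (pa >> PAGE_SHIFT) & (NPTE2_IN_PT2 - 1);

    return ((pte1_va & ~((1u << PTE1_SHIFT) - 1)) | (idx << PAGE_SHIFT));
}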
5382 ("%s: not found a link in page %p's pv list", __func__, m));
5413 * The wired attribute of the page table entry is not a hardware feature,
5442 * Weed out invalid mappings. Note: we assume that the L1 page
5443 * table is always allocated, and in kernel virtual.
5453 * Are we unwiring the entire large page? If not,
5487 * by the current L2 page table page, or to the end of the
5519 * Clear the write and modified bits in each of the given page's mappings.
5532 ("%s: page %p is not managed", __func__, m));
5557 " a section in page %p's pv list", __func__, m));
5576 * modified flags in each mapping and set the mapped page's dirty field.
5619 * The large page mapping was destroyed.
5625 * Unless the page mappings are wired, remove the
5626 * mapping to a single page so that a subsequent
5627 * access may repromote. Since the underlying L2 page
5629 * frees an L2 page table page.
5649 * can be avoided by making the page
5672 * Clear the modify bits on the specified physical page.
5685 ("%s: page %p is not managed", __func__, m));
5706 * single page so that a subsequent
5727 " a section in page %p's pv list", __func__, m));
5740 * Sets the memory attribute for the specified page.
5753 CTR5(KTR_PMAP, "%s: page %p - 0x%08X oma: %d, ma: %d", __func__, m,
5759 * If "m" is a normal page, flush it from the cache.
5761 * First, try to find an existing mapping of the page by sf
5769 * If page is not mapped by sf buffer, map the page
5795 * Returns TRUE if the given page is mapped individually or as part of
5815 * 16 pvs linked to from this page. This count may
5818 * subset of pmaps for proper page aging.
5829 ("%s: page %p is not managed", __func__, m));
5858 * pmap_zero_page zeros the specified hardware page by mapping
5859 * the page into KVM and using bzero to clear its contents.
5883 * pmap_zero_page_area zeros the specified hardware page by mapping
5884 * the page into KVM and using bzero to clear its contents.
5886 * off and size may not cover an area beyond a single hardware page.
5914 * page by mapping the page into virtual memory and using
5915 * bcopy to copy the page, one machine dependent page at a
6068 ("%s: invalid to pmap_copy page tables", __func__));
6098 * referenced until all PT2s in a page are without reference.
6104 ("%s: source page table page is unused", __func__));
6219 * Perform the pmap work for mincore(2). If the page is not both referenced and
6273 ("%s: device mapping not page-sized", __func__));
6292 ("%s: device mapping not page-sized", __func__));
6312 * The range must be within a single page.
6321 ("%s: not on single page", __func__));
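The single-page restriction can be restated as a standalone check mirroring the KASSERT quoted above:

#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE  4096u
#define PAGE_MASK  (PAGE_SIZE - 1u)

/* [va, va + size) must not cross a page boundary. */
static void
assert_single_page(uint32_t va, uint32_t size)
{
    assert(size > 0 &&
        (va & ~PAGE_MASK) == ((va + size - 1) & ~PAGE_MASK));
}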
6476 * Check in advance that associated L2 page table is mapped into
6477 * PT2MAP space. Note that faulty access to not mapped L2 page
6480 * L1 page table and PT2TAB always exist and are mapped.
6484 panic("%s: missing L2 page table (%p, %#x)",
6501 * Access bits for page and section. Note that the entry
6510 /* L2 page table should exist and be mapped. */
6549 * Handle modify bits for page and section. Note that the modify
6559 /* L2 page table should exist and be mapped. */
6633 panic("%s: page %p not zero, va: %p", __func__, m,
6723 /* Note that L2 page table size is not equal to PAGE_SIZE. */