Lines Matching refs:entry

321  * Data for the pv entry allocation mechanism.
529 #define pmap_load_store(table, entry) atomic_swap_64(table, entry)
531 #define pmap_store(table, entry) atomic_store_64(table, entry)
583 * modifying the entry, so for KVA only the entry type may be checked.
586 ("%s: L1 entry %#lx for %#lx is invalid", __func__, l1, va));
588 ("%s: L1 entry %#lx for %#lx is a leaf", __func__, l1, va));
617 * modifying the entry, so for KVA only the entry type may be checked.
620 ("%s: L2 entry %#lx for %#lx is invalid", __func__, l2, va));
622 ("%s: L2 entry %#lx for %#lx is a leaf", __func__, l2, va));
662 * Returns the lowest valid pte block or table entry for a given virtual
710 * If the given pmap has an L{1,2}_BLOCK or L3_PAGE entry at the specified
711 * level that maps the specified virtual address, then a pointer to that entry
772 * where a CPU may call into the KMSAN runtime while the entry is
773 * invalid. If the entry is used to map the current thread structure,
991 /* Check the existing L0 entry */
1003 /* Create a new L0 table entry */
1039 /* Check the existing L1 entry */
1051 /* Create a new L1 table entry */
1083 /* Check the existing L2 entry */
1095 /* Create a new L2 table entry */
1137 * the chunk can be cached using only one TLB entry.
1187 * the chunk can be cached using only one TLB entry.
1617 * pv_table entry for that next segment down by one so
1741 * any cached final-level entry, i.e., either an L{1,2}_BLOCK or L3_PAGE entry.
1742 * Otherwise, just the cached final-level entry is invalidated.
1918 * will return either a valid block/page entry, or NULL.
1976 * entry. We can assume L1_BLOCK == L2_BLOCK.
2056 * A concurrent pmap_update_entry() will clear the entry's valid bit
2057 * but leave the rest of the entry unchanged. Therefore, we treat a
2058 * non-zero entry as being valid, and we ignore the valid bit when
2059 * determining whether the entry maps a block, page, or table.
2133 ("pmap_kenter: Invalid page entry, va: 0x%lx", va));
2173 * the chunk can be cached using only one TLB entry.
2336 ("pmap_qenter: Invalid page entry, va: 0x%lx", va));
2479 * After removing a page table entry, this routine is used to
2575 * Allocate the level 1 entry to use as the root. This will increase
2655 ("%s: L0 entry %#lx is valid", __func__, pmap_load(l0p)));
2695 ("%s: L1 entry %#lx is valid", __func__, pmap_load(l1)));
2739 ("%s: L2 entry %#lx is valid", __func__, pmap_load(l2)));
2805 * Get the page directory entry
2877 ("pmap_release: Invalid l0 entry: %lx", pmap->pm_l0[0]));
2968 ("pmap_growkernel: No level 0 kernel entry"));
2972 /* We need a new PDP entry */
3023 "Current number of pv entry chunks");
3025 "Current number of pv entry chunks allocated");
3027 "Current number of pv entry chunks frees");
3035 "Current number of pv entry frees");
3037 "Current number of pv entry allocs");
3047 * another pv entry chunk.
3199 /* One freed pv entry in locked_pmap is sufficient. */
3337 * Returns a new PV entry, allocating a new PV chunk from the system when
3482 * First find and then remove the pv entry for the specified pmap and virtual
3483 * address from the specified pv list. Returns the pv entry if found and NULL
3504 * destroy the pv entry for the 2MB page mapping and reinstantiate the pv
3526 * Transfer the 2mpage's pv entry for this mapping to the first
3528 * must not be released until the last pv entry is reinstantiated.
3571 * First find and then destroy the pv entry for the specified pmap and virtual
3586 * Conditionally create the PV entry for a 4KB page mapping if the required
3608 * Create the PV entry for a 2MB page mapping. Always returns true unless the
3610 * false if the PV entry cannot be allocated without resorting to reclamation.
3827 * single L3 entry, so we must combine the accessed and dirty bits
3853 * could return while a stale TLB entry
3893 * identified by the given L2 entry.
3963 * could return while a stale TLB entry
4194 ("pmap_remove_all: no page directory entry found"));
4246 * Return if the L2 entry already has the desired access restrictions
4382 ("pmap_protect: Invalid L2 entry after demotion"));
4393 * Go to the next L3 entry if the current one is
4433 * The L3 entry's accessed bit may have changed.
4557 * Performs a break-before-make update of a pmap entry. This is needed when
4579 * Clear the old mapping's valid bit, but leave the rest of the entry
4586 * When promoting, the L{1,2}_TABLE entry that is being replaced might
4622 * entry unchanged, so that a lockless, concurrent pmap_kextract() can
4644 * replace the many pv entries for the 4KB page mappings by a single pv entry
4661 * Transfer the first page's pv entry for this mapping to the 2mpage's
4685 * single level 2 table entry to a single 2MB page mapping. For promotion
4855 * Compute the address of the first L3 entry in the superpage
4864 * Examine the first L3 entry. Abort if this L3E is ineligible for
4879 * If the first L3 entry is a clean read-write mapping, convert it
4991 KASSERT(l1p != NULL, ("va %#lx lost l1 entry", va));
4995 KASSERT(l1p != NULL, ("va %#lx lost l1 entry", va));
5327 * Update the L3 entry
5426 * Returns true if every page table entry in the specified page table is
5453 * and a PV entry allocation failed.
5520 ("pmap_enter_l2: non-zero L2 entry %p", l2));
5529 * entry for the kernel page table page, so request
5531 * the L2_TABLE entry.
5561 * Abort this mapping if its PV entry could not be created.
5683 * Get the L2 entry.
5688 * If the L2 entry is a superpage, we either abort or
5738 * If the L2 entry is a superpage, we either abort or demote
5997 ("pmap_enter_quick_locked: Invalid page entry, va: 0x%lx",
6110 * The wired attribute of the page table entry is not a hardware feature,
6178 ("pmap_unwire: Invalid l2 entry after demotion"));
6209 * the System MMU may write to the entry concurrently.
6387 ("pmap_copy: invalid L2 entry"));
6732 * particular, a page table entry's dirty bit won't change state once
7044 * Return true if and only if the L3 entry for the specified virtual
7140 * The L3 entry's accessed bit may have
7268 /* Rotate the PV list if it has more than one entry. */
7303 * Clear the accessed bit in this L3 entry
7321 /* Rotate the PV list if it has more than one entry. */
7422 ("pmap_advise: invalid L2 entry after demotion"));
7472 * The L3 entry's accessed bit may have
7479 * Check that we did not just destroy this entry so
7490 * Clear the accessed bit in this L3 entry
7672 ("pmap_mapbios: Invalid page entry, va: 0x%lx",
7753 ("pmap_unmapbios: Invalid page entry, va: 0x%lx",
7930 /* We can't demote/promote this entry */
7934 * Find the entry and demote it if the requested change
7936 * the entry.
7991 /* Update the entry */
8007 * We are updating a single block or page entry,
8030 * If moving to a non-cacheable entry flush
8058 ("pmap_demote_l1: Demoting a non-block entry"));
8064 ("pmap_demote_l1: Demoting entry with no-demote flag set"));
8191 ("pmap_demote_l2: Demoting a non-block entry"));
8193 ("pmap_demote_l2: Demoting entry with no-demote flag set"));
8267 ("pmap_demote_l2: L2 entry is writeable but not dirty"));
8303 * in reclaim_pv_chunk() attempting to remove a PV entry from the
8305 * PV entry for the 2MB page mapping that is being demoted.
8317 * Demote the PV entry.
8388 * the rest of the entry unchanged, so that a lockless,
8404 * update a single L2 entry, so we must combine the accessed
8477 * the rest of the entry unchanged, so that a lockless,
8493 * update a single L3 entry, so we must combine the accessed
9057 * XXXMJ as an optimization we could mark the entry
9597 * Determine whether the attributes specified by a page table entry match those
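Several of the matched comments above (for example those at 2056-2059, 4557, and 4579) describe the break-before-make discipline used when a live block or page entry is changed: the old entry's valid bit is cleared while the rest of the entry is left intact, the TLB is invalidated, and only then is the new entry stored. The sketch below is a minimal userspace model of that ordering, not the pmap.c implementation; the helper and macro names (PTE_VALID, tlb_invalidate_range, update_entry) are hypothetical stand-ins for the kernel primitives.

#include <inttypes.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified model only: the low bit stands in for the hardware valid bit. */
#define	PTE_VALID	((uint64_t)1 << 0)

typedef _Atomic uint64_t pt_entry_t;

/* Stand-in for invalidating cached translations for the affected VA. */
static void
tlb_invalidate_range(uint64_t va)
{
	(void)va;	/* a real kernel would issue the TLB invalidation here */
}

/*
 * Break-before-make update of a single entry.  A lockless concurrent
 * reader that treats any non-zero entry as valid (as the comments at
 * 2056-2059 describe) sees a consistent value at every step, because
 * only the valid bit changes before the final store of the new entry.
 */
static void
update_entry(pt_entry_t *ptep, uint64_t newpte, uint64_t va)
{
	uint64_t old;

	/* 1. Clear the old mapping's valid bit, leaving the rest unchanged. */
	old = atomic_load(ptep);
	atomic_store(ptep, old & ~PTE_VALID);

	/* 2. Invalidate any stale TLB entry for the old mapping. */
	tlb_invalidate_range(va);

	/* 3. Store the new entry; it becomes visible in a single store. */
	atomic_store(ptep, newpte);
}

int
main(void)
{
	pt_entry_t pte = 0x40000000 | PTE_VALID;

	update_entry(&pte, 0x80000000 | PTE_VALID, 0x100000);
	printf("entry is now %#" PRIx64 "\n", (uint64_t)atomic_load(&pte));
	return (0);
}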