Lines matching refs:page in /freebsd-13-stable/sys/powerpc/aim/

162 #define  TLBIEL_INVAL_PAGE	0x000	/* invalidate a single page */
401 "Number of kernel page table pages allocated on bootup");
852 * Promotion to a 2MB (PDE) page mapping requires that the corresponding 4KB
853 * (PTE) page mappings have identical settings for the following fields:
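The promotion rule quoted above (hits 852–853) can be sketched as a standalone check: all 512 4KB PTEs inside one page table page must map 2MB-aligned, physically contiguous memory and agree on their attribute bits. This is a minimal illustrative model, not the kernel's code; the macro names and bit masks are assumptions, not the radix MMU's actual PTE layout.

```c
#include <stdbool.h>
#include <stdint.h>

#define NPTEPG        512
#define PAGE_SIZE     4096ULL
#define PG_FRAME      0x000ffffffffff000ULL  /* hypothetical physical-frame bits */
#define PG_ATTR_MASK  0x8000000000000fffULL  /* hypothetical attribute bits that must match */
#define L3_PAGE_SIZE  (NPTEPG * PAGE_SIZE)   /* 2MB */

/* Returns true if the 512 PTEs could be replaced by one 2MB mapping. */
static bool
promotion_eligible(const uint64_t pte[NPTEPG])
{
    uint64_t pa = pte[0] & PG_FRAME;
    uint64_t attrs = pte[0] & PG_ATTR_MASK;
    int i;

    /* The first frame must sit on a 2MB boundary. */
    if ((pa & (L3_PAGE_SIZE - 1)) != 0)
        return (false);
    for (i = 1; i < NPTEPG; i++) {
        /* Frames must be physically contiguous... */
        if ((pte[i] & PG_FRAME) != pa + (uint64_t)i * PAGE_SIZE)
            return (false);
        /* ...and the attribute fields identical. */
        if ((pte[i] & PG_ATTR_MASK) != attrs)
            return (false);
    }
    return (true);
}
```

A single mismatched attribute bit or a hole in the physical range is enough to make the set ineligible, which is why (per hit 2688 below) the real code walks every PTE before promoting.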
1022 * Returns TRUE if the given page is mapped individually or as part of
1135 * When the PDE has PG_PROMOTED set, the 2MB page mapping was created
1136 * by a promotion that did not invalidate the 512 4KB page mappings
1138 * may hold both 4KB and 2MB page mappings for the address range [va,
1141 * 4KB page mappings for the address range [va, va + L3_PAGE_SIZE), and so a
1142 * single INVLPG suffices to invalidate the 2MB page mapping from the
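Hits 1135–1142 describe the PG_PROMOTED optimization: a 2MB mapping created by promotion may coexist in the TLB with the 512 old 4KB entries, but those entries are consistent with it, so invalidating only "va" suffices; a 2MB mapping created any other way must have its whole range flushed. A tiny sketch of that decision, with an invented PG_PROMOTED bit value (the real flag and flush primitives differ):

```c
#include <stdint.h>

#define PG_PROMOTED   0x0800000000000000ULL  /* hypothetical bit position */
#define PAGE_SIZE     4096ULL
#define L3_PAGE_SIZE  (512 * PAGE_SIZE)      /* 2MB */

/*
 * Number of per-page invalidations needed after destroying a 2MB
 * mapping: one if it was created by promotion (the stale 4KB TLB
 * entries, if any, still describe the same translation), otherwise
 * one per 4KB page in the range.
 */
static unsigned
invalidations_needed(uint64_t oldpde)
{
    if (oldpde & PG_PROMOTED)
        return (1);
    return (L3_PAGE_SIZE / PAGE_SIZE);
}
```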
1247 * 2MB page mappings.
1273 * After demotion from a 2MB page mapping to 512 4KB page mappings,
1274 * destroy the pv entry for the 2MB page mapping and reinstantiate the pv
1275 * entries for each of the 4KB page mappings.
1295 * page's pv list. Once this transfer begins, the pv list lock
1322 ("pmap_pv_demote_pde: page %p is not managed", m));
1361 * allocate per-page pv entries until repromotion occurs, thereby
1440 * Destroy every non-wired, 4 KB page mapping in the chunk.
1482 /* Every freed mapping is for a 4 KB page. */
1530 /* Recycle a freed page table page. */
1663 * After promotion from 512 4KB page mappings to a single 2MB page mapping,
1664 * replace the many pv entries for the 4KB page mappings by a single pv entry
1665 * for the 2MB page mapping.
1681 * Transfer the first page's pv entry for this mapping to the 2mpage's
1707 * page mappings.
1720 * Conditionally create the PV entry for a 4KB page mapping if the required
1767 vm_paddr_t page;
1769 page = allocpages(1);
1770 pagezero(PHYS_TO_DMAP(page));
1771 return (page);
1778 vm_paddr_t page;
1786 page = alloc_pt_page();
1787 pde_store(pte, page);
1795 page = alloc_pt_page();
1796 pde_store(pte, page);
1805 page = alloc_pt_page();
1806 pde_store(pte, page);
1854 * Create page tables for first 128MB of KVA
1865 * the kernel page table pages need to be preserved in
1883 * preallocated kernel page table pages so that vm_page structures
2075 * Allocate a kernel stack with a guard page for thread0 and map it
2076 * into the kernel page map.
2107 * Reserve some special page table entries/VA space for temporary
2246 * The large page mapping was destroyed.
2252 * Unless the page mappings are wired, remove the
2253 * mapping to a single page so that a subsequent
2254 * access may repromote. Since the underlying page
2255 * table page is fully populated, this removal never
2256 * frees a page table page.
2282 * can be avoided by making the page
2366 "2MB page mapping counters");
2370 &pmap_l3e_demotions, 0, "2MB page demotions");
2374 &pmap_l3e_mappings, 0, "2MB page mappings");
2378 &pmap_l3e_p_failures, 0, "2MB page promotion failures");
2382 &pmap_l3e_promotions, 0, "2MB page promotions");
2385 "1GB page mapping counters");
2389 &pmap_l2e_demotions, 0, "1GB page demotions");
2404 ("pmap_clear_modify: page %p is not managed", m));
2409 * If the page is not PGA_WRITEABLE, then no PTEs can have PG_M set.
2410 * If the object containing the page is locked and the page is not
2440 * single page so that a subsequent
2475 " a 2mpage in page %p's pv list", m));
2570 ("pmap_copy: source page table page is unused"));
2615 * the freed page table pages.
2687 * Tries to promote the 512, contiguous 4KB page mappings that are within a
2688 * single page table page (PTP) to a single 2MB page mapping. For promotion
2689 * to occur, two conditions must be met: (1) the 4KB page mappings must map
2690 * aligned, contiguous physical memory and (2) the 4KB page mappings must have
2705 * either invalid, unused, or does not map the first 4KB physical page
2706 * within a 2MB page.
2728 * PTE maps an unexpected 4KB physical page or does not have identical
2761 * Save the page table page in its current state until the PDE
2768 ("pmap_promote_l3e: page table page is out of range"));
2770 ("pmap_promote_l3e: page table page's pindex is wrong"));
2846 * the page is unmanaged. We do not want to take a fault
2867 * In the case that a page table page is not
2881 * Here if the pte page isn't mapped, or if it has been
2896 panic("pmap_enter: invalid page directory va=%#lx", va);
2917 * are valid mappings in them. Hence, if a user page is wired,
2918 * the PT page will be also.
2926 * Remove the extra PT page reference.
2931 ("pmap_enter: missing reference to page table page,"
2936 * Has the physical page changed?
2964 * The physical page has changed. Temporarily invalidate
3084 * If both the page table page and the reservation are fully
3109 * Tries to create a read- and/or execute-only 2MB page mapping. Returns true
3110 * if successful. Returns false if (1) a page table page cannot be allocated
3138 * Tries to create the specified 2MB page mapping. Returns KERN_SUCCESS if
3142 * KERN_RESOURCE_SHORTAGE if PMAP_ENTER_NOSLEEP was specified and a page table
3143 * page allocation failed. Returns KERN_RESOURCE_SHORTAGE if
3182 * The reference to the PD page that was acquired by
3185 * a reserved PT page could be freed.
3219 * entries that refer to the freed page table
3244 * be any lingering 4KB page mappings in the TLB.)
3309 * In the case that a page table page is not
3317 * Calculate pagetable page index
3324 * Get the page directory entry
3329 * If the page table page is mapped, we just increment
3331 * attempt to allocate a page table page. If this
3374 * entries that refer to the freed page table
3441 * because the page table page is preserved by the
3546 static MALLOC_DEFINE(M_RADIX_PGD, "radix_pgd", "radix page table root directory");
3598 /* L1TF, reserve page @0 unconditionally */
3612 * Initialize the vm page array entries for the kernel pmap's
3613 * page table pages.
3620 ("pmap_init: page table page is out of range size: %lu",
3635 * Are large page mappings enabled?
3755 * Return whether or not the specified physical page was modified
3763 ("pmap_is_modified: page %p is not managed", m));
3767 * If the page is not busied then this check is racy.
3797 ("pmap_is_referenced: page %p is not managed", m));
3805 * Return a count of reference bits for a page, clearing those bits.
3810 * As an optimization, update the page's dirty field if a modified bit is
3837 ("pmap_ts_referenced: page %p is not managed", m));
3867 * Although "oldpde" is mapping a 2MB page, because
3868 * this function is called at a 4KB page granularity,
3869 * we only update the 4KB page under test.
3878 * physical page number, the virtual superpage number,
3879 * and the pmap address to select one 4KB page out of
3883 * same 4KB page for every 2MB page mapping.
3887 * subsequent page fault on a demoted wired mapping,
3890 * its reference bit won't affect page replacement.
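Hits 3867–3890 describe how pmap_ts_referenced handles a 2MB mapping at 4KB granularity: it hashes the physical page number, the virtual superpage number, and the pmap address so that each call tests only one of the 512 4KB pages, and different 2MB mappings do not all pick the same one. A hedged sketch of that selection (shift values assume the usual 4KB/2MB layout; the exact hash in the source may differ):

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT  12   /* 4KB pages */
#define PDR_SHIFT   21   /* 2MB superpages */
#define NPTEPG      512

/*
 * True if this 4KB page is the one "under test" for its enclosing
 * 2MB mapping: XOR the physical page number, the virtual superpage
 * number, and the pmap address, and keep only the low 9 bits.
 */
static bool
test_this_4k_page(uint64_t pa, uint64_t va, uintptr_t pmap_addr)
{
    return ((((pa >> PAGE_SHIFT) ^ (va >> PDR_SHIFT) ^
        (uint64_t)pmap_addr) & (NPTEPG - 1)) == 0);
}
```

Because the 512 physical page numbers inside one 2MB mapping are consecutive, exactly one of them satisfies the predicate, and the pmap address term keeps distinct address spaces from converging on the same offset.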
3899 ("inconsistent pv lock %p %p for page %p",
3935 ("pmap_ts_referenced: found a 2mpage in page %p's pv list",
3992 ("pmap_object_init_pt: invalid page %p", p));
3996 * Abort the mapping if the first page is not physically
3997 * aligned to a 2MB page boundary.
4004 * Skip the first page. Abort the mapping if the rest of
4012 ("pmap_object_init_pt: invalid page %p", p));
4026 * optimization. If a page directory page
4046 "to page directory page, va: 0x%lx", addr));
4065 ("pmap_page_exists_quick: page %p is not managed", m));
4179 * allocate the page directory page
4200 * This routine is called if the desired page table page does not exist.
4202 * If page table page allocation fails, this routine may sleep before
4205 * Note: If a page allocation fails at page table level two or three,
4218 * Allocate a page table page.
4229 * Indicate the need to retry. While waiting, the page table
4230 * page may have been allocated.
4238 * Map the pagetable page into the process address space, if
4246 /* Wire up a new PDPE page */
4257 /* Wire up a new l2e page */
4271 /* Add reference to l2e page */
4277 /* Now find the pdp page */
4288 /* Wire up a new PTE page */
4316 /* Add reference to the pd page */
4323 /* Now we know where the page directory page is */
4341 /* Add a reference to the pd page. */
4345 /* Allocate a pd page. */
4363 * Calculate pagetable page index
4368 * Get the page directory entry
4373 * This supports switching from a 2MB page to a
4374 * normal 4K page.
4379 * Invalidation of the 2MB page mapping may have caused
4380 * the deallocation of the underlying PD page.
4387 * If the page table page is mapped, we just increment the
4395 * Here if the pte page isn't mapped, or if it has been
4454 * lingering 4KB page mappings from the TLB.
4526 * Check for large page.
4530 * Are we protecting the entire large page? If not,
4539 * The large page mapping was destroyed.
4611 * the page table every time - but go for correctness for
4654 * Page table page management routines.....
4657 * Schedule the specified unused page table page to be freed. Specifically,
4658 * add the page to the specified list of pages that will be released to the
4674 * Inserts the specified page table page into the specified pmap's collection
4675 * of idle page table pages. Each of a pmap's page table pages is responsible
4688 * Removes the page table page mapping the specified virtual address from the
4689 * specified pmap's collection of idle page table pages, and returns it.
4690 * Otherwise, returns NULL if there is no page table page corresponding to the
4702 * Decrements a page table page's wire count, which is used to record the
4703 * number of valid page table entries within the page. If the wire count
4704 * drops to zero, then the page table page is unmapped. Returns TRUE if the
4705 * page table page was unmapped and FALSE otherwise.
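Hits 4702–4705 state the wire-count protocol for page table pages: the count records the number of valid PTEs in the page, and when it reaches zero the page is unmapped and freed. A minimal illustrative model of that invariant (struct and field names are invented; the kernel tracks this on the vm_page):

```c
#include <stdbool.h>

/* Hypothetical stand-in for a page table page's bookkeeping. */
struct ptp {
    unsigned wire_count;  /* number of valid entries in this PT page */
    bool mapped;
};

/*
 * Drop one valid-entry reference.  Returns true if that was the last
 * entry, in which case the PT page is unmapped and the caller would
 * schedule it to be freed.
 */
static bool
ptp_unwire(struct ptp *pt)
{
    if (--pt->wire_count == 0) {
        pt->mapped = false;
        return (true);
    }
    return (false);
}
```

This is why (per hit 2255 above) removing one 4KB mapping from a fully populated PT page can never free the PT page itself: the count only drops from 512 to 511.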
4725 * unmap the page table page
4728 /* PDP page */
4733 /* PD page */
4738 /* PTE page */
4760 * Put page on a list so that it is released after
4767 * After removing a page table entry, this routine is used to
4768 * conditionally free the page, and manage the hold/wire counts.
4792 ("pmap_release: pmap has reserved page table page(s)"));
4801 * Create the PV entry for a 2MB page mapping. Always returns true unless the
4828 * Fills a page table page with mappings to consecutive physical pages.
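Hit 4828 summarizes the fill step used during demotion: populate all 512 PTEs of a fresh page table page so they map the same physical range, with the same attributes, as the 2MB mapping being broken up. A sketch under assumed constants (the real routine also handles protection and cache bits carried over from the old PDE):

```c
#include <stdint.h>

#define NPTEPG    512
#define PAGE_SIZE 4096ULL

/*
 * Fill a page table page with mappings to consecutive physical
 * pages: each successive PTE is the previous one plus 4KB, so the
 * attribute bits in the low bits of "newpte" are replicated into
 * every entry unchanged.
 */
static void
fill_ptp(uint64_t pte[NPTEPG], uint64_t newpte)
{
    int i;

    for (i = 0; i < NPTEPG; i++) {
        pte[i] = newpte;
        newpte += PAGE_SIZE;
    }
}
```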
4873 ("pmap_demote_l3e: page table page for a wired mapping"
4877 * Invalidate the 2MB page mapping and return "failure" if the
4879 * page table page fails. If the 2MB page mapping belongs to
4881 * the page allocation request specifies the highest possible
4912 * If the page table page is new, initialize it.
4924 * If the mapping has changed attributes, update the page table
4936 * PV entry for the 2MB page mapping that is being demoted.
4975 panic("pmap_remove_kernel_pde: Missing pt page.");
4980 * Initialize the page table page.
5033 ("pmap_remove_l3e: pte page wire count error"));
5042 * pmap_remove_pte: do the things to unmap a page in a process
5076 * Remove a single page from a process address space
5105 * Removes the specified range of addresses from the page table page.
5172 * special handling of removing one page. a very
5205 * Calculate index for next page table.
5221 * Check for large page.
5225 * Are we removing the entire large page? If not,
5234 /* The large page mapping was destroyed. */
5242 * by the current page table page, or to the end of the
5275 ("pmap_remove_all: page %p is not managed", m));
5317 " a 2mpage in page %p's pv list", m));
5352 * entries, rather than searching the page table. Second, it doesn't
5353 * have to test and clear the page table entries atomically, because
5355 * particular, a page table entry's dirty bit won't change state once
5431 * regular page could be mistaken for
5498 ("pmap_remove_pages: pte page wire count error"));
5550 ("pmap_remove_write: page %p is not managed", m));
5577 ("inconsistent pv lock %p %p for page %p",
5598 ("pmap_remove_write: found a 2mpage in page %p's pv list",
5623 * The wired attribute of the page table entry is not a hardware
5667 * Are we unwiring the entire large page? If not,
5737 /* Compute the physical address of the 4KB page. */
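Hit 5737 computes the physical address of a particular 4KB page inside a 2MB mapping. The arithmetic is: take the 2MB-aligned frame from the PDE and OR in the virtual address's offset within the superpage, rounded down to a 4KB boundary. A sketch with assumed mask values:

```c
#include <stdint.h>

#define PG_PS_FRAME  0x000fffffffe00000ULL  /* hypothetical 2MB-aligned frame bits */
#define L3_PAGE_MASK 0x1fffffULL            /* offset within a 2MB page */
#define PAGE_MASK    0xfffULL               /* offset within a 4KB page */

/* Physical address of the 4KB page that "va" falls in, given its 2MB PDE. */
static uint64_t
l3e_4k_pa(uint64_t pde, uint64_t va)
{
    return ((pde & PG_PS_FRAME) | (va & L3_PAGE_MASK & ~PAGE_MASK));
}
```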
5859 * If "m" is a normal page, update its direct mapping. This update
5906 * Tries to demote a 1GB page mapping.
5936 * Initialize the page directory page.
5980 * because the page table page is preserved by the
6000 * Assume the page is cache inhibited and access is guarded unless
6142 * If the current 1GB page already has the required
6143 * memory type, then we need not demote this page. Just
6144 * increment tmpva to the next 1GB page frame.
6152 * If the current offset aligns with a 1GB page frame
6154 * we need not break down this page into 2MB pages.
6171 * If the current 2MB page already has the required
6172 * memory type, then we need not demote this page. Just
6173 * increment tmpva to the next 2MB page frame.
6181 * If the current offset aligns with a 2MB page frame
6183 * we need not break down this page into 4KB pages.
6405 db_printf("page %p(%lx)\n", m, m->phys_addr);