Lines Matching refs:page

122 static void		vm_page_free_prepare(vm_page_t	page);
129 * Associated with each page of user-allocatable memory is a
130 * page structure.
145 * (virtual memory object, offset) to page lookup, employs
208 * The virtual page size is currently implemented as a runtime
214 * All references to the virtual page size outside this
223 * Resident page structures are initialized from
254 * resident page structures that do not refer to
255 * real pages, for example to leave a page with
258 * These page structures are allocated the way
289 * we don't use a real physical page with that
295 * Resident page structures are also chained on
296 * queues that are used by the page replacement
340 * Several page replacement parameters are also
341 * shared with this module, so that page allocation
371 * Sets the page size, perhaps based upon the memory
372 * size. Must be called before any use of page-size
383 panic("vm_set_page_size: page size not a power of two");
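The panic above enforces that the runtime-selected virtual page size is a power of two. A minimal user-space sketch of that check (the helper name and the hard-coded value are illustrative, not the kernel's):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* A page size is valid only if it is a non-zero power of two. */
    static int page_size_is_power_of_two(uint64_t page_size)
    {
        return page_size != 0 && (page_size & (page_size - 1)) == 0;
    }

    int main(void)
    {
        uint64_t page_size = 4096;      /* typical value; the kernel selects this at boot */

        if (!page_size_is_power_of_two(page_size)) {
            fprintf(stderr, "page size not a power of two\n");
            exit(EXIT_FAILURE);         /* the kernel panics here instead */
        }
        printf("page size 0x%llx ok\n", (unsigned long long)page_size);
        return 0;
    }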
480 * Allocates memory for the page cells, and
481 * for the object/offset-to-page hash table headers.
482 * Each page cell is initialized and placed on the free list.
557 * Initialize the page queues.
649 printf("vm_page_bootstrap: WARNING -- strange page hash\n");
674 * Machine-dependent code allocates the resident page table.
675 * It uses vm_page_init to initialize the page frames.
794 * We calculate how many page frames we will have
795 * and then allocate the page structures in one chunk.
805 * Initialize the page frames.
841 if(fill) fillPage(vm_pages[i - 1].phys_page, fillval); /* Fill the page with a known value if requested at boot */
855 if(fill) fillPage(vm_pages[i - 1].phys_page, fillval); /* Fill the page with a known value if requested at boot */
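Filling each boot-time page with a known value, as the two lines above do via fillPage(), amounts to stamping a debug pattern over the whole page. A hedged user-space sketch, using a virtual address instead of the physical page number the kernel passes:

    #include <stdint.h>
    #include <stddef.h>

    #define SKETCH_PAGE_SIZE 4096      /* illustrative; the real size is chosen at boot */

    /* Stamp every 32-bit word of the page with the requested fill pattern. */
    static void fill_page_with_pattern(void *page_va, uint32_t fillval)
    {
        uint32_t *word = page_va;
        size_t nwords = SKETCH_PAGE_SIZE / sizeof(uint32_t);

        for (size_t i = 0; i < nwords; i++)
            word[i] = fillval;
    }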
990 * Inserts the given mem entry into the object/object-page
1018 "vm_page_insert, object 0x%X offset 0x%X page 0x%X\n",
1022 * we may not hold the page queue lock
1043 panic("vm_page_insert: page %p for (obj=%p,off=0x%llx) "
1056 * Record the object/offset pair in this page
1098 * Show that the object has one more resident page.
1118 * This page belongs to a purged VM object but hasn't
1137 * remove any existing page at the given offset in object.
1154 * we don't hold the page queue lock
1162 panic("vm_page_replace: page %p for (obj=%p,off=0x%llx) "
1168 * Record the object/offset pair in this page
1176 * replacing any page that might have been there.
1192 * Remove old page from hash list
1207 * insert new page at head of hash list
1215 * there was already a page at the specified
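The vm_page_replace() steps quoted above (remove any page already hashed at that object/offset, insert the new page at the head of the bucket, and hand back whatever was displaced) reduce to ordinary hash-bucket surgery. A simplified, self-contained sketch; the structure, hash, and bucket count are stand-ins, not the kernel's:

    #include <stdint.h>
    #include <stddef.h>

    #define NBUCKETS 1024              /* power of two, as the bootstrap code arranges */

    struct page {
        const void  *object;
        uint64_t     offset;
        struct page *hash_next;
    };

    static struct page *bucket[NBUCKETS];

    static size_t bucket_index(const void *object, uint64_t offset)
    {
        /* toy hash: mix the object pointer with the page-aligned offset */
        return ((uintptr_t)object + (uintptr_t)(offset >> 12)) & (NBUCKETS - 1);
    }

    /*
     * Insert new_page at (object, offset), unlinking and returning any page
     * already hashed there (NULL if the slot was empty).
     */
    static struct page *page_replace(struct page *new_page, const void *object, uint64_t offset)
    {
        size_t i = bucket_index(object, offset);
        struct page **prevp = &bucket[i];
        struct page *old = NULL;

        for (struct page *p = *prevp; p != NULL; prevp = &p->hash_next, p = *prevp) {
            if (p->object == object && p->offset == offset) {
                *prevp = p->hash_next;          /* remove old page from hash list */
                old = p;
                break;
            }
        }
        new_page->object = object;
        new_page->offset = offset;
        new_page->hash_next = bucket[i];        /* insert new page at head of hash list */
        bucket[i] = new_page;
        return old;                             /* caller deals with the displaced page */
    }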
1227 * Removes the given mem entry from the object/offset-page
1228 * table and the object page list.
1244 "vm_page_remove, object 0x%X offset 0x%X page 0x%X\n",
1254 * we don't hold the page queue lock
1296 * page.
1347 * Returns the page associated with the object/offset
1420 * guarantees that the page we're looking for can't exist
1437 * we don't hold the page queue lock
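Continuing the hash sketch above, the lookup path ("Returns the page associated with the object/offset") is a plain bucket walk; the same simplified stand-ins apply:

    /* Return the page hashed at (object, offset), or NULL if none is resident. */
    static struct page *page_lookup(const void *object, uint64_t offset)
    {
        for (struct page *p = bucket[bucket_index(object, offset)]; p != NULL; p = p->hash_next) {
            if (p->object == object && p->offset == offset)
                return p;
        }
        return NULL;
    }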
1479 * The encryption key is based on the page's memory object
1480 * (aka "pager") and paging offset. Moving the page to
1491 panic("vm_page_rename: page %p is encrypted\n", mem);
1495 "vm_page_rename, new object 0x%X, offset 0x%X page 0x%X\n",
1500 * Changes to mem->object require the page lock because
1514 * Initialize the fields in a new page.
1550 * once this page goes back into use
1560 * Remove a fictitious page from the free list.
1602 * Release a fictitious page to the zone pool
1624 * 1. we need to carve some page structures out of physical
1646 * Allocate a single page from the zone_map. Do not wait if no physical
1651 * If winner is not vm-privileged, then the page allocation will fail,
1663 * acquire a fictitious page (vm_page_grab_fictitious), fail,
1680 * No page was available. Drop the
1789 * first try to grab a page from the per-cpu free list...
1791 * a page is available, we're done...
1792 * if no page is available, grab the vm_page_queue_free_lock
1853 printf("mk: vm_page_grab(): high wired page count of %d\n",
1859 printf("mk: vm_page_grab(): high gobbled page count of %d\n",
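The grab path described above tries a cheap per-CPU free list first and falls back to the globally locked free list only when that is empty. A self-contained sketch of that two-level scheme, with invented names and a per-thread list standing in for the per-CPU one:

    #include <pthread.h>
    #include <stddef.h>

    struct fpage { struct fpage *next; };

    static __thread struct fpage *percpu_free_list;   /* per-thread stand-in for the per-cpu list */
    static struct fpage          *global_free_list;
    static pthread_mutex_t        global_free_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Try the cheap per-cpu list first; fall back to the locked global list. */
    static struct fpage *page_grab(void)
    {
        struct fpage *p = percpu_free_list;

        if (p != NULL) {                    /* a page is available: we're done */
            percpu_free_list = p->next;
            return p;
        }
        pthread_mutex_lock(&global_free_lock);   /* no per-cpu page: take the free-queue lock */
        p = global_free_list;
        if (p != NULL)
            global_free_list = p->next;
        pthread_mutex_unlock(&global_free_lock);
        return p;                           /* may be NULL: the caller must wait or fail */
    }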
1998 * Return a page to the free list.
2061 * Check if we should wake up someone waiting for a page.
2068 * all wakeup, the greedy thread runs first, grabs the page,
2069 * and waits for another page. It will be the first to run
2070 * when the next page is freed.
2073 * The thread we wake might not use the free page.
2075 * while the page goes unused. To forestall this,
2103 * Wait for a page to become available.
2107 * TRUE: There may be another page, try again
2119 * succeeds, the second fails. After the first page is freed,
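The wait protocol above (TRUE means "there may be another page, try again"; FALSE means the wait was interrupted) leads to the usual grab-and-retry loop in callers. A hedged sketch built on the page_grab() stand-in a few entries above, with a stub wait so it stays self-contained:

    /* Stub for the real blocking wait: a real implementation sleeps until a page
     * is freed.  Returns 1 for "try again", 0 for "interrupted, give up". */
    static int page_wait(int interruptible)
    {
        (void)interruptible;
        return 0;                           /* nothing frees pages in this sketch */
    }

    /* Keep grabbing until a page shows up or the wait is interrupted. */
    static struct fpage *page_grab_and_wait(int interruptible)
    {
        struct fpage *p;

        while ((p = page_grab()) == NULL) {
            if (!page_wait(interruptible))
                return NULL;                /* interrupted out of the wait: don't try again */
        }
        return p;
    }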
2206 * Allocate a fictitious page which will be used
2207 * as a guard page. The page will be inserted into
2234 * Removes page from any queue it may be on
2237 * Object and page queues must be locked prior to entry.
2258 panic("vm_page_free: freeing page on free list\n");
2265 * We may have to free a page while it's being laundered
2268 * the page from its VM object, so that we can remove it
2326 * Returns the given page to the free list,
2329 * Object and page queues must be locked prior to entry.
2368 * per page.
2420 * IMPORTANT: we can't set the page "free" here
2421 * because that would make the page eligible for
2425 * cause trouble because the page is not actually
2513 * Wake up one waiter per page we just released.
2527 * Mark this page as wired down by yet
2531 * The page's object and the page queues must be locked.
2545 * In theory, the page should be in an object before it
2547 * to update some fields in the page structure.
2549 * to wire a page before it gets inserted into an object.
2551 * that page and update it at the same time.
2582 * This page is not "re-usable" when it's
2604 * The page could be encrypted, but
2608 * The page will get decrypted in
2620 * Mark this page as consumed by the vm/ipc/xmm subsystems.
2646 * Release one wiring of this page, potentially
2649 * The page's object and the page queues must be locked.
2699 * Returns the given page to the inactive list,
2701 * to this page. [Used by the physical mapping system.]
2703 * The page queues must be locked.
2728 * This page is no longer very interesting. If it was
2745 * if this page is currently on the pageout queue, we can't do the
2749 * reference which is held on the object while the page is in the pageout queue...
2788 * Put the page on the cleaned queue, mark it cleaned, etc.
2790 * does ** NOT ** guarantee that the page is clean!
2811 * if this page is currently on the pageout queue, we can't do the
2815 * reference which is held on the object while the page is in the pageout queue...
2836 * Put the specified page on the active list (if appropriate).
2838 * The page queues must be locked.
2863 * if this page is currently on the pageout queue, we can't do the
2867 * reference which is held on the object while the page is in the pageout queue...
2910 * Put the specified page on the speculative list (if appropriate).
2912 * The page queues must be locked.
2930 * if this page is currently on the pageout queue, we can't do the
2934 * reference which is held on the object while the page is in the pageout queue...
3012 * The page queues must be locked.
3059 * if this page is currently on the pageout queue, we can't do the
3063 * reference which is held on the object while the page is in the pageout queue...
3110 * Transfer the entire throttled queue to the regular LRU page queues.
3132 * Adjust the global page counts.
3203 * Transfer the entire local queue to the regular LRU page queues.
3220 * Adjust the global page counts.
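Transferring an entire queue and adjusting the global counts, as the comments above describe, is a single list splice plus a counter update. A self-contained doubly-linked-list sketch with invented names:

    #include <stddef.h>

    struct pageq {
        struct pageq *next;
        struct pageq *prev;
    };

    /* Initialize an empty circular queue head. */
    static void pageq_init(struct pageq *head)
    {
        head->next = head;
        head->prev = head;
    }

    /*
     * Splice every element of src onto the tail of dst in one operation,
     * fold the source count into the destination count, and empty src.
     */
    static void pageq_transfer_all(struct pageq *dst, unsigned *dst_count,
                                   struct pageq *src, unsigned *src_count)
    {
        if (src->next == src)               /* source queue is already empty */
            return;

        src->prev->next = dst;              /* last source element points back at dst head */
        src->next->prev = dst->prev;        /* first source element follows the old dst tail */
        dst->prev->next = src->next;
        dst->prev = src->prev;

        *dst_count += *src_count;           /* adjust the global page counts */
        *src_count = 0;
        pageq_init(src);
    }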
3236 * Zero-fill a part of the page.
3248 * we don't hold the page queue lock
3282 * Zero-fill the specified page.
3289 "vm_page_zero_fill, object 0x%X offset 0x%X page 0x%X\n",
3293 * we don't hold the page queue lock
3306 * copy part of one page to another
3319 * we don't hold the page queue lock
3332 * Copy one page to another
3335 * The source page should not be encrypted. The caller should
3336 * make sure the page is decrypted first, if necessary.
3354 * we don't hold the page queue lock
3364 * The source page should not be encrypted at this point.
3365 * The destination page will therefore not contain encrypted
3369 panic("vm_page_copy: source page %p is encrypted\n", src_m);
3376 * We're copying a page from a code-signed object.
3377 * Whoever ends up mapping the copy page might care about
3378 * the original page's integrity, so let's validate the
3379 * source page now.
3396 * Propagate the cs_tainted bit to the copy page. Do not propagate
3523 panic("vm_page_verify_free_list(color=%u, npages=%u): page %p corrupted prev ptr %p instead of %p\n",
3526 panic("vm_page_verify_free_list(color=%u, npages=%u): page %p not busy\n",
3530 panic("vm_page_verify_free_list(color=%u, npages=%u): page %p wrong color %u instead of %u\n",
3533 panic("vm_page_verify_free_list(color=%u, npages=%u): page %p not free\n",
3543 printf("vm_page_verify_free_list(color=%u, npages=%u): page %p not found phys=%u\n",
3561 printf("vm_page_verify_free_list(color=%u, npages=%u): page %p found phys=%u\n",
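The panics and printfs above correspond to a straightforward invariant walk over one color queue of the free list: every element's prev pointer must point at the element just visited, and every page must be busy, free, and on the queue of its own color. A simplified, self-contained sketch of that walk:

    #include <stdio.h>
    #include <stdlib.h>

    struct vpage {
        struct vpage *next;
        struct vpage *prev;
        unsigned      color;
        unsigned      busy:1;
        unsigned      free:1;
    };

    /* Walk one circular color queue (sentinel head) and check its invariants. */
    static unsigned verify_free_list(struct vpage *head, unsigned color)
    {
        unsigned npages = 0;
        struct vpage *prev = head;

        for (struct vpage *m = head->next; m != head; prev = m, m = m->next, npages++) {
            if (m->prev != prev) {
                fprintf(stderr, "page %p corrupted prev ptr %p instead of %p\n",
                        (void *)m, (void *)m->prev, (void *)prev);
                abort();
            }
            if (!m->busy)
                abort();                    /* pages on the free list are kept busy */
            if (m->color != color)
                abort();                    /* page landed on the wrong color queue */
            if (!m->free)
                abort();                    /* page on the free list must be marked free */
        }
        return npages;
    }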
3637 * for a 'stealable' page... currently we are pretty conservative... if the page
3638 * meets this criteria and is physically contiguous to the previous page in the 'run'
3639 * we keep developing it. If we hit a page that doesn't fit, we reset our state
3650 * or if the state of the page behind the vm_object lock is no longer viable, we'll
3776 * page is in a transient state
3785 * page needs to be on one of our queues
3800 * sure that a non-free page is not busy and is
3827 * This page is not free.
3832 * move the contents of this page
3833 * into a substitute page.
3850 * page... thus the jump back to 'retry'
3862 * get stuck looking at the same page
3882 * reset our free page limit since we
3900 * start from the very first page.
3901 * Start again from the very first page.
3960 * Clear the "free" bit so that this page
3995 * so that the page list is created in the
4003 * page has already been removed from
4073 * of the page so that it is ready
4084 * this page was used...
4088 * now put the substitute page on the object
4102 * of the page so that it is ready
4131 * page in this run that was
4134 * and 1 more to bump back over this page
4147 * start from the very first page.
4148 * Start again from the very first page.
4161 * reset our free page limit since we
4274 * on the page... however, the majority of the work can be done
4281 * necessary work for each page... we will grab the busy bit on the page
4310 * successfully acquire the object lock of any candidate page
4348 * Add this page to our list of reclaimed pages,
4366 * we might disconnect the page, then someone might
4369 * page on that queue, which we don't want
4375 * this page has been touched since it got cleaned; let's activate it
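The run detection described throughout this section (keep developing the current run while each candidate is 'stealable' and physically contiguous with the previous page, otherwise reset the state and start over) boils down to a linear scan. A conservative, self-contained sketch; the page array, the fields, and the stealable test are stand-ins for the real checks:

    #include <stddef.h>
    #include <stdint.h>

    struct ppage {
        uint32_t phys_page;                 /* physical page number */
        int      free;
        int      busy;
        int      wired;
    };

    /* Very conservative stand-in for the 'stealable' test. */
    static int page_is_stealable(const struct ppage *p)
    {
        return p->free || (!p->busy && !p->wired);
    }

    /*
     * Scan for contig_pages stealable, physically contiguous pages.
     * Returns the starting index of the run, or (size_t)-1 if none exists.
     */
    static size_t find_contiguous_run(const struct ppage *pages, size_t npages,
                                      size_t contig_pages)
    {
        size_t run_start = 0;
        size_t run_len = 0;

        for (size_t i = 0; i < npages; i++) {
            if (!page_is_stealable(&pages[i])) {
                run_len = 0;                /* page doesn't fit: reset our state */
                continue;
            }
            if (run_len > 0 && pages[i].phys_page != pages[i - 1].phys_page + 1)
                run_len = 0;                /* not physically contiguous: restart the run here */
            if (run_len == 0)
                run_start = i;
            if (++run_len == contig_pages)
                return run_start;           /* developed a long enough run */
        }
        return (size_t)-1;
    }

As the surrounding comments note, the real code additionally revalidates each candidate under the vm_object lock, may move a page's contents into a substitute page, and restarts from the first page when things change underneath it; none of that is modeled here.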
4459 vm_page_set_offset(vm_page_t page, vm_object_offset_t offset)
4461 page->offset = offset;
4465 vm_page_get_next(vm_page_t page)
4467 return ((vm_page_t) page->pageq.next);
4471 vm_page_get_offset(vm_page_t page)
4473 return (page->offset);
4477 vm_page_get_phys_page(vm_page_t page)
4479 return (page->phys_page);
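The accessors above suggest the traversal idiom below. This is a kernel-context fragment rather than a standalone program; it assumes only the functions listed here plus the usual VM_PAGE_NULL and PAGE_SIZE definitions, and should be read as a sketch, not an exported API contract:

    /* Walk a chain of pages linked through pageq.next (what vm_page_get_next()
     * returns) and retag each one with consecutive offsets. */
    static void
    tag_page_chain(vm_page_t first, vm_object_offset_t base)
    {
        vm_object_offset_t offset = base;
        vm_page_t          page;

        for (page = first; page != VM_PAGE_NULL; page = vm_page_get_next(page)) {
            vm_page_set_offset(page, offset);
            offset += PAGE_SIZE;
            (void) vm_page_get_phys_page(page);  /* physical page number, if the caller needs it */
        }
    }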
4605 * the object associated with candidate page is
4615 * page queues lock, we can only 'try' for this one.
4642 * page is not to be cleaned
4675 * page is not to be cleaned
4733 * means this page can't be on the pageout queue so it's
4976 * Somebody is playing with this page.
5008 * If it's clean or purgeable we can discard the page on wakeup.
5041 /* No need to lock page queue for token delete, hibernate_vm_unlock()
5109 Bits zero in the bitmaps => page needs to be saved. All pages default to be saved,
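The bitmap convention stated above (a zero bit means the page still needs to be saved, and all pages default to being saved) implies starting from an all-zero map and setting bits only for pages that can be discarded. A self-contained sketch of that convention with illustrative helper names:

    #include <stdint.h>
    #include <string.h>

    #define BITMAP_WORDS(n)  (((n) + 31u) / 32u)

    /* All bits clear: by the convention above, every page will be saved. */
    static void bitmap_default_all_saved(uint32_t *bitmap, uint32_t npages)
    {
        memset(bitmap, 0, BITMAP_WORDS(npages) * sizeof(uint32_t));
    }

    /* Setting a page's bit marks it discardable: it will NOT be saved. */
    static void bitmap_mark_discardable(uint32_t *bitmap, uint32_t page)
    {
        bitmap[page / 32] |= 1u << (page % 32);
    }

    /* A zero bit => the page still needs to be saved. */
    static int page_needs_save(const uint32_t *bitmap, uint32_t page)
    {
        return (bitmap[page / 32] & (1u << (page % 32))) == 0;
    }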