/macosx-10.5.8/xnu-1228.15.4/osfmk/vm/

Lines Matching refs:page

63  *	The proverbial page-out daemon.
120 #ifndef VM_PAGEOUT_BURST_INACTIVE_THROTTLE /* maximum iterations of the inactive queue w/o stealing/cleaning a page */
141 #define VM_PAGEOUT_BURST_WAIT 30 /* milliseconds per page */
244 * must hold the page queues lock to
455 /* page. When the bit is on the upl commit code will */
456 /* respect the pageout bit in the target page over the */
457 /* caller's page list indication */
473 * Handle the "target" page(s). These pages are to be freed if
478 * adjacent page and conversion to a target.
490 * Revoke all access to the page. Since the object is
491 * locked, and the page is busy, this prevents the page
495 * Since the page is left "dirty" but "not modified", we
496 * can detect whether the page was redirtied during
520 * page, so make it active.
534 /* The page was busy so no extraneous activity */
545 /* alternate request page list, write to page_list */
546 /* case. Occurs when the original page was wired */
553 * Set the dirty state according to whether or not the page was
555 * NOT call pmap_clear_modify since the page is still mapped.
556 * If the page were to be dirtied between the 2 calls, this
575 * Wakeup any thread waiting for the page to be un-cleaning.
595 * Purpose: setup a page to be cleaned (made non-dirty), but not
596 * necessarily flushed from the VM page cache.
599 * The page must not be busy, and the object and page
616 "vm_pageclean_setup, obj 0x%X off 0x%X page 0x%X new 0x%X new_off 0x%X\n",
623 * Mark original page as cleaning in place.
630 * Convert the fictitious page to a private shadow of
631 * the real page.
649 * Causes the specified page to be initialized in
654 * The page is moved to a temporary object and paged out.
657 * The page in question must not be on any pageout queues.
659 * The page must be busy, but not hold a paging reference.
662 * Move this page to a completely new object.
674 "vm_pageout_initialize_page, page 0x%X\n",
679 * Verify that we really want to clean this page
700 * If there's no pager, then we can't clean the page. This should
713 /* set the page for future call to vm_fault_list_request */
754 * Given a page, queue it to the appropriate I/O thread,
755 * which will page it out and attempt to clean adjacent pages
758 * The page must be busy, and the object and queues locked. We will take a
763 * The page must not be on any pageout queue.
774 "vm_pageout_cluster, object 0x%X offset 0x%X page 0x%X\n",
778 * Only a certain kind of page is appreciated here.
791 * set the page for future call to vm_fault_list_request
792 * page should already be marked busy
819 * A page is back from laundry. See if there are some pages waiting to
822 * Object and page queues must be locked.
951 * A page is "zero-filled" if it was not paged in from somewhere,
953 * Recalculate the zero-filled page ratio. We use this to apportion
964 /* zf_ratio is the number of zf pages we victimize per normal page */
1040 * page queues lock, we can only 'try' for this one.
1056 * move page to end of active queue and continue
1081 * if the page is BUSY, then we pull it
1100 * Deactivate the page while holding the object
1101 * locked, so we know the page is still not busy.
1103 * and pmap_clear_reference. The page might be
1134 * the page queues lock
1151 * nobody is still waiting for a page.
1307 * blocked waiting for pages... we'll move one page for each of
1426 * Time for a zero-filled inactive page?
1437 * It's either a normal inactive page or nothing.
1462 * the object associated with candidate page is
1473 * page queues lock, we can only 'try' for this one.
1483 * Move page to end and continue.
1581 * inactive pool to page out in order to satisfy all memory
1586 * Move page to end and continue, hoping that
1588 * page out so that the thread which currently
1590 * Don't re-grant the ticket, the page should
1626 * Remove the page from its list.
1649 /* If the object is empty, the page must be reclaimed even if dirty or used. */
1650 /* If the page belongs to a volatile object, we stick it back on. */
1655 /* unmap the page */
1662 /* we saved the cost of cleaning this page ! */
1682 * if this page has already been picked up as part of a
1683 * page-out cluster, it will be busy because it is being
1692 * Somebody is already playing with this page.
1702 * If it's absent or in error, we can reclaim the page.
1738 * If already cleaning this page in place, convert from
1739 * "adjacent" to "target". We can leave the page mapped,
1761 * to make sure the page is unreferenced.
1775 * The page we pulled off the inactive list has
1790 * The page was being used, so put back on active list.
1817 "vm_pageout_scan, replace object 0x%X offset 0x%X page 0x%X\n",
1821 * we've got a candidate page to steal...
1876 * we've got a page that we can steal...
1879 * first take the page BUSY, so that no new
1888 * page was still mapped up to the pmap_disconnect
1891 * we also check for the page being referenced 'late'
1896 * Note that if 'pmapped' is FALSE then the page is not
1899 * have been set in anticipation of likely usage of the page.
1908 /* If m->reference is already set, this page must have
1926 * since the last page was 'stolen'
1931 * If it's clean and not precious, we can free the page.
1939 * The page may have been dirtied since the last check
1941 * if the page was clean then). With the dirty page
2128 * If there is no memory object for the page, create
2141 * Reactivate the page.
2174 * so there is nowhere for the page to go.
2175 * Just free the page... VM_PAGE_FREE takes
2551 * A page list structure, listing the physical pages
2562 * if a page list structure is present
2564 * page is not present, return a non-initialized
2567 * possible copies of the page. Leave pages busy
2568 * in the original object, if a page list structure
2569 * was specified. When a commit of the page list
2572 * If a page list structure is present, return
2573 * all mapped pages. Where a page does not exist
2575 * the original object. If a page list structure
2580 * page cache handling code, will never actually make a request
2761 * successfully acquire the object lock of any candidate page
2804 * page is on inactive list and referenced...
2818 * if we were the page stolen by vm_pageout_scan to be
2821 * then we only need to check for the page being dirty or
2829 * this is a request for a PAGEOUT cluster and this page
2833 * already filtered above based on whether this page is
2834 * currently on the inactive queue or it meets the page
2844 * the page... go on to the next one
2861 * page. We will have to wait.
2871 * Someone else already cleaning the page?
2881 * The caller is gathering this page and might
2883 * page before adding it to the UPL, so that
2893 * mark page as busy while decrypt
2912 * we've buddied up a page for a clustered pageout
2922 * all the pages we will page out that
2959 * Mark original page as cleaning
2979 * Record that this page has been
2995 * We want to deny access to the target page
3001 * vm_pageout_scan() to demote that page
3004 * this page during its scanning while we're
3012 * deny access to the target page
3084 * someone else is writing to the page... wait...
3098 * dump the fictitious page
3122 * physical page by asking the
3131 * need to allocate a page
3168 * successfully acquire the object lock of any candidate page
3205 * The page is going to be encrypted when we
3211 * Otherwise, the page will not contain
3219 panic("need corner case for fictitious page");
3224 * page. We will have to wait.
3258 * Mark original page as cleaning
3295 * deny access to the target page while
3304 * expect the page not to be used
3313 * expect the page to be used
3337 * we are working with a fresh page and we've
3344 * someone is explicitly grabbing this page...
3362 * successfully acquire the object lock of any candidate page
3499 * page to be written out whose offset is beyond the
3783 panic("vm_upl_map: page missing\n");
3787 * Convert the fictitious page to a private
3788 * shadow of the real page.
3795 * since m is a page in the upl it must
3798 * page to the alias
3810 * The virtual page ("m") has to be wired in some way
3811 * here or its physical page ("m->phys_page") could
3814 * get an encrypted page here. Since the encryption
3815 * key depends on the VM page's "pager" object and
3818 * sharing the same physical page: we could end up
3819 * encrypting with one key (via one VM page) and
3820 * decrypting with another key (via the alias VM page).
4008 * successfully acquire the object lock of any candidate page
4033 * No page list to get the code-signing info from !?
4095 * This page is no longer dirty
4116 * for this page.
4125 * releasing the BUSY bit on this page
4136 * This page is no longer dirty
4190 * This page is no longer dirty
4206 * page was re-dirtied after we started
4221 * page has been successfully cleaned
4259 * This page is no longer dirty
4279 * alternate request page list, write to
4281 * page was wired at the time of the list
4294 * between the vm page and the backing store
4318 * Clear the "busy" bit on this page before we
4324 * Wakeup any thread waiting for the page to be un-cleaning.
4344 * successfully acquire the object lock of any candidate page
4481 * successfully acquire the object lock of any candidate page
4551 * If the page was already encrypted,
4594 * reference this page... for
4615 * successfully acquire the object lock of any candidate page
4886 * If the page is encrypted, we need to decrypt it,
4887 * so force a soft page fault.
4927 * top-level placeholder page, if any.
4987 * can't substitute if the page is already wired because
5008 * want anyone refaulting this page in and using
5010 * to find the new page being substituted.
5036 * vm_page_grablo returned the page marked
5049 * Mark the page "busy" to block any future page fault
5050 * on this page. We'll also remove the mapping
5060 * expect the page to be used
5061 * page queues lock must be held to set 'reference'
5086 * someone is explicitly grabbing this page...
5114 * page faults will block.
5116 * can't be accessed without causing a page fault.
5247 * data, so we have to actually remove the encrypted pages from the page
5249 * locate the virtual page in its page table and will trigger a page
5250 * fault. We can then decrypt the page and enter it in the page table
5251 * again. Whenever we allow the user to access the contents of a page,
5264 * a physical page.
5326 * The page should also be kept busy to prevent
5332 vm_page_t page,
5346 if (page != VM_PAGE_NULL && *size == PAGE_SIZE) {
5347 assert(page->busy);
5350 * and just enter the VM page in the kernel address space
5370 /* found a space to map our page ! */
5397 * map the physical page to that virtual address.
5405 if (page->pmapped == FALSE) {
5406 pmap_sync_page_data_phys(page->phys_page);
5408 page->pmapped = TRUE;
5412 * and the actual use of the page by the kernel,
5418 page,
5420 ((int) page->object->wimg_bits &
5433 * addresses. Just map the page in the kernel
5477 * Enter the mapped pages in the page table now.
5482 * until after the kernel is done accessing the page(s).
5492 page = vm_page_lookup(object, offset + page_map_offset);
5493 if (page == VM_PAGE_NULL) {
5494 printf("vm_paging_map_object: no page !?");
5504 if (page->pmapped == FALSE) {
5505 pmap_sync_page_data_phys(page->phys_page);
5507 page->pmapped = TRUE;
5510 //assert(pmap_verify_free(page->phys_page));
5513 page,
5585 * have a different one for each page we encrypt, so that
5690 * Encrypt the given page, for secure paging.
5691 * The page might already be mapped at kernel virtual
5696 * The page's object is locked, but this lock will be released
5698 * The page is busy and not accessible by users (not entered in any pmap).
5702 vm_page_t page,
5720 assert(page->busy);
5721 assert(page->dirty || page->precious);
5723 if (page->encrypted) {
5730 ASSERT_PAGE_DECRYPTED(page);
5737 vm_object_paging_begin(page->object);
5741 * The page hasn't already been mapped in kernel space
5747 page,
5748 page->object,
5749 page->offset,
5755 "could not map page in kernel: 0x%x\n",
5771 * page to obfuscate the encrypted data a bit more and
5776 encrypt_iv.vm.pager_object = page->object->pager;
5778 page->object->paging_offset + page->offset;
5788 * Encrypt the page.
5799 * Unmap the page from the kernel's address space,
5804 vm_paging_unmap_object(page->object,
5813 * The page was kept busy and disconnected from all pmaps,
5819 pmap_clear_refmod(page->phys_page, VM_MEM_REFERENCED | VM_MEM_MODIFIED);
5821 page->encrypted = TRUE;
5823 vm_object_paging_end(page->object);
5829 * Decrypt the given page.
5830 * The page might already be mapped at kernel virtual
5835 * The page's VM object is locked but will be unlocked and relocked.
5836 * The page is busy and not accessible by users (not entered in any pmap).
5840 vm_page_t page,
5854 assert(page->busy);
5855 assert(page->encrypted);
5862 vm_object_paging_begin(page->object);
5866 * The page hasn't already been mapped in kernel space
5872 page,
5873 page->object,
5874 page->offset,
5880 "could not map page in kernel: 0x%x\n",
5893 * used to encrypt that page.
5896 decrypt_iv.vm.pager_object = page->object->pager;
5898 page->object->paging_offset + page->offset;
5908 * Decrypt the page.
5918 * Unmap the page from the kernel's address space,
5923 vm_paging_unmap_object(page->object,
5929 * After decryption, the page is actually clean.
5935 page->dirty = FALSE;
5936 if (page->cs_validated && !page->cs_tainted) {
5939 * This page is no longer dirty
5944 page->cs_validated = FALSE;
5947 pmap_clear_refmod(page->phys_page, VM_MEM_MODIFIED | VM_MEM_REFERENCED);
5949 page->encrypted = FALSE;
5952 * We've just modified the page's contents via the data cache and part
5954 * Since the page is now available and might get gathered in a UPL to
5958 pmap_sync_page_attributes_phys(page->phys_page);
5960 * Since the page is not mapped yet, some code might assume that it
5962 * that page. That code relies on "pmapped" being FALSE, so that the
5963 * caches get synchronized when the page is first mapped.
5965 assert(pmap_verify_free(page->phys_page));
5966 page->pmapped = FALSE;
5967 page->wpmapped = FALSE;
5969 vm_object_paging_end(page->object);
5991 vm_page_t page;
6041 page = vm_page_lookup(shadow_object,
6043 if (page == VM_PAGE_NULL) {
6045 "no page for (obj=%p,off=%lld+%d)!\n",
6051 * Disconnect the page from all pmaps, so that nobody can
6053 * accesses to this page will cause a page fault and block
6054 * while the page is busy being encrypted. After the
6056 * page fault and the page gets decrypted at that time.
6058 pmap_disconnect(page->phys_page);
6059 vm_page_encrypt(page, 0);
6088 __unused vm_page_t page,
6095 __unused vm_page_t page,