Lines Matching refs:page

397     // Add a page to the mapping run.  If this fails, the VmMappingCoalescer is
487 // if committing, then tell it to soft fault in a page
501 // iterate through the range, grabbing a page from the underlying object and
512 // no page to map
514 // fail when we can't commit every requested page
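
The matches at source lines 487-514 come from the range-mapping path: when committing, the mapping asks the backing object to soft fault in each page, walks the range grabbing a page at each offset, skips offsets where there is no page to map, and fails the whole operation if it cannot commit every requested page. Below is a minimal sketch of that loop; PagedObject, ArchMmu, MapRange, and kPageSize are hypothetical stand-ins (with stub bodies so the sketch compiles), not the real kernel interfaces.

// Sketch of the commit/map-range loop described by the matches above.
#include <cstdint>
#include <optional>

constexpr uint64_t kPageSize = 4096;

struct PagedObject {
  // Returns the physical address backing |offset|; when |commit| is true the
  // object is asked to soft fault the page in first. std::nullopt means there
  // is no page to map at this offset. Stub body for illustration only.
  std::optional<uint64_t> GetPage(uint64_t /*offset*/, bool /*commit*/) {
    return std::nullopt;
  }
};

struct ArchMmu {
  // Stub stand-in for the arch-specific MMU mapping call.
  bool Map(uint64_t /*va*/, uint64_t /*pa*/, uint32_t /*mmu_flags*/) {
    return true;
  }
};

// Map |len| bytes of |object| starting at |vmo_offset| into the address space
// at |va|. With |commit| set, every requested page must be provided.
bool MapRange(PagedObject& object, ArchMmu& aspace, uint64_t va,
              uint64_t vmo_offset, uint64_t len, uint32_t mmu_flags,
              bool commit) {
  for (uint64_t off = 0; off < len; off += kPageSize) {
    std::optional<uint64_t> pa = object.GetPage(vmo_offset + off, commit);
    if (!pa) {
      if (commit) {
        return false;  // fail when we can't commit every requested page
      }
      continue;  // no page to map here; leave the hole unmapped
    }
    if (!aspace.Map(va + off, *pa, mmu_flags)) {
      return false;
    }
  }
  return true;
}

In the real code this loop would also batch contiguous runs of pages (the VmMappingCoalescer mentioned at line 397) rather than mapping one page per call; the sketch omits that optimization.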
616 // user page fault on a non-user-mapped region
641 // via UnmapVmoRangeLocked(). Since we're responsible for that page, signal to ourselves to skip
647 // fault in or grab an existing page
649 vm_page_t* page;
650 zx_status_t status = object_->GetPageLocked(vmo_offset, pf_flags, nullptr, &page, &new_pa);
654 LTRACEF("ERROR: failed to fault in or grab existing page\n");
659 // if we read faulted, make sure we map or modify the page without any write permissions
661 // replace this page with a copy or a new one
674 LTRACEF("queried va, page at pa %#" PRIxPTR ", flags %#x is already there\n", pa,
677 // page was already mapped, are the permissions compatible?
678 // test that the page is already mapped with either the region's mmu flags
684 // assert that we're not accidentally marking the zero page writable
687 // same page, different permission
694 // some other page is mapped there already
695 LTRACEF("thread %s faulted on va %#" PRIxPTR ", different page was present\n",
699 // assert that we're not accidentally mapping the zero page writable
712 TRACEF("failed to map replacement page\n");
721 LTRACEF("mapping pa %#" PRIxPTR " to va %#" PRIxPTR " is zero page %d\n",
724 // assert that we're not accidentally mapping the zero page writable
730 TRACEF("failed to map page\n");
739 // TODO(abdulla): Correctly handle page fault for guest.
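
The remaining matches (source lines 616-739) fall in the page-fault handler: fault in or grab an existing page, drop write permission on a read fault, then check whether something is already mapped at the faulting address and either leave it alone, upgrade its permissions, or replace it, asserting along the way that the zero page never becomes writable. A hedged reconstruction of that control flow follows; Mapping, HandlePageFault, and the stubbed helpers are hypothetical stand-ins for the real VmMapping / arch-MMU / VMO interfaces, and the guest-fault TODO at line 739 is left out.

// Sketch of the fault-handling flow suggested by the comments above.
#include <cassert>
#include <cstdint>
#include <optional>

constexpr uint32_t kMmuWrite = 1u << 1;   // illustrative write-permission bit
constexpr uint64_t kZeroPagePa = 0;       // placeholder for the shared zero page

struct Mapping {
  // Stub bodies exist only so the sketch compiles; they are not the real API.
  std::optional<uint64_t> QueryPa(uint64_t /*va*/, uint32_t* flags_out) {
    *flags_out = 0;
    return std::nullopt;
  }
  bool Protect(uint64_t /*va*/, uint32_t /*mmu_flags*/) { return true; }
  bool Map(uint64_t /*va*/, uint64_t /*pa*/, uint32_t /*mmu_flags*/) { return true; }
  bool Unmap(uint64_t /*va*/) { return true; }
  std::optional<uint64_t> FaultInPage(uint64_t /*vmo_offset*/, uint32_t /*pf_flags*/) {
    return 0x1000;
  }
};

// Handle a fault at |va| backed by |vmo_offset| in some paged object.
bool HandlePageFault(Mapping& m, uint64_t va, uint64_t vmo_offset,
                     uint32_t pf_flags, uint32_t region_mmu_flags,
                     bool write_fault) {
  // fault in or grab an existing page
  std::optional<uint64_t> new_pa = m.FaultInPage(vmo_offset, pf_flags);
  if (!new_pa) {
    return false;  // failed to fault in or grab existing page
  }

  // if we read faulted, map or modify the page without any write permissions;
  // a later write fault can replace this page with a copy or a new one
  uint32_t mmu_flags = region_mmu_flags;
  if (!write_fault) {
    mmu_flags &= ~kMmuWrite;
  }

  uint32_t existing_flags = 0;
  std::optional<uint64_t> cur_pa = m.QueryPa(va, &existing_flags);
  if (cur_pa && *cur_pa == *new_pa) {
    // page was already mapped; are the permissions compatible?
    if ((existing_flags & mmu_flags) == mmu_flags) {
      return true;  // spurious fault, nothing to do
    }
    // same page, different permission: upgrade the existing mapping, but
    // assert we're not accidentally marking the zero page writable
    assert(*new_pa != kZeroPagePa || !(mmu_flags & kMmuWrite));
    return m.Protect(va, mmu_flags);
  }
  if (cur_pa) {
    // some other page is mapped there already; remove it before installing
    // the replacement page
    if (!m.Unmap(va)) {
      return false;
    }
  }

  // assert that we're not accidentally mapping the zero page writable
  assert(*new_pa != kZeroPagePa || !(mmu_flags & kMmuWrite));
  return m.Map(va, *new_pa, mmu_flags);
}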