Lines Matching defs:page
33 * Though, in most cases, page lock already protects this.
51 * device page from the process address space. Such a
52 * page is not CPU accessible and thus is mapped as
54 * count as a valid regular mapping for the page
55 * (and is accounted as such in the page's map count).
58 * page mapping, i.e. lock the CPU page table and return true.
85 * arbitrary page.
152 * @pvmw: pointer to struct page_vma_mapped_walk. page, vma, address and flags
155 * Returns true if the page is mapped in the vma. @pvmw->pmd and @pvmw->pte point
156 * to relevant page table entries. @pvmw->ptl is locked. @pvmw->address is
159 * If @pvmw->pmd is set but @pvmw->pte is not, you have found a PMD-mapped page
163 * For HugeTLB pages, @pvmw->pte is set to the relevant page table entry
164 * regardless of which page table level the page is mapped at. @pvmw->pmd is
167 * Returns false if there are no more page table entries for the page in
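The hits above describe the iterator contract of page_vma_mapped_walk(): the caller loops while it returns true, @pvmw->pte/@pvmw->pmd and the held @pvmw->ptl are only valid inside the loop, and a caller that exits the loop early must finish the walk explicitly (in the kernel, via page_vma_mapped_walk_done()). The following is a minimal userspace sketch of that contract, not kernel code: demo_walk, demo_walk_next and demo_walk_done are hypothetical stand-ins, and the "locked" flag merely models holding the page table lock.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for struct page_vma_mapped_walk: the walk keeps
 * its cursor and a "lock held" flag inside the state it hands back. */
struct demo_walk {
	const unsigned long *entries;	/* stand-in for page table entries */
	size_t nr;
	size_t next;
	bool locked;			/* models @pvmw->ptl being held */
	unsigned long pte;		/* stand-in for the current entry */
};

/* Returns true while a further entry exists, mirroring the contract
 * above: the state is only valid while this keeps returning true. */
static bool demo_walk_next(struct demo_walk *w)
{
	if (w->next >= w->nr) {
		w->locked = false;	/* walk finished: lock dropped */
		return false;
	}
	w->pte = w->entries[w->next++];
	w->locked = true;		/* caller now "holds the ptl" */
	return true;
}

/* Mirrors page_vma_mapped_walk_done(): required when leaving early. */
static void demo_walk_done(struct demo_walk *w)
{
	w->locked = false;
}
```

A caller would iterate with `while (demo_walk_next(&w)) { ... }` and, on an early break, call demo_walk_done() so the modeled lock is not leaked.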
294 /* Did we cross page table boundary? */
318 * page_mapped_in_vma - check whether a page is really mapped in a VMA
319 * @page: the page to test
322 * Returns 1 if the page is mapped into the page tables of the VMA, 0
323 * if the page is not mapped into the page tables of this VMA. Only
326 int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
329 .pfn = page_to_pfn(page),
335 pvmw.address = vma_address(page, vma);
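The last hits come from page_mapped_in_vma(), which wraps the walk in a yes/no predicate: initialize the walk state from the page and VMA, probe once, return 0 if nothing is mapped, otherwise finish the walk and return 1. A minimal userspace model of that shape follows; model_walk, model_walk_next, model_walk_done and model_page_mapped_in_vma are hypothetical names, and the "locked" flag again only models the page table lock, not any real kernel API.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical walk state for one VMA; see struct page_vma_mapped_walk. */
struct model_walk {
	const unsigned long *ptes;	/* stand-in page table entries */
	size_t nr;
	size_t next;
	bool locked;			/* models holding @pvmw->ptl */
};

static bool model_walk_next(struct model_walk *w)
{
	if (w->next >= w->nr) {
		w->locked = false;
		return false;
	}
	w->next++;
	w->locked = true;
	return true;
}

static void model_walk_done(struct model_walk *w)
{
	w->locked = false;		/* mirrors page_vma_mapped_walk_done() */
}

/* Shape of page_mapped_in_vma(): probe once; 0 means not mapped,
 * 1 means mapped, and a successful probe is finished immediately so
 * the modeled lock is never leaked to the caller. */
static int model_page_mapped_in_vma(struct model_walk *w)
{
	if (!model_walk_next(w))
		return 0;
	model_walk_done(w);
	return 1;
}
```

The design point the predicate illustrates is that the walk, not the caller, owns the locking: the caller only sees 0 or 1 and never exits holding the lock.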