Lines Matching refs:page
12 * or INVALIDATE DAT TABLE ENTRY, (2) alter bits 56-63 of a page
18 * This is true for the page protection bit as well.
22 * Pages used for the page tables are a different story. FIXME: more
28 struct page *page, bool delay_rmap, int page_size);
30 struct page *page, unsigned int nr_pages, bool delay_rmap);
42 * Release the page cache reference for a pte removed by
43 * tlb_ptep_clear_flush. In both flush modes the tlb for a page cache page
49 struct page *page, bool delay_rmap, int page_size)
53 free_page_and_swap_cache(page);
58 struct page *page, unsigned int nr_pages, bool delay_rmap)
61 encode_page(page, ENCODED_PAGE_BIT_NR_PAGES_NEXT),
66 VM_WARN_ON_ONCE(page_folio(page) != page_folio(page + nr_pages - 1));
79 * page table from the tlb.
96 * If the mm uses a two level page table the single pmd is freed
116 * If the mm uses a four level page table the single p4d is freed
134 * If the mm uses a three level page table the single pud is freed