Lines Matching refs:page

38 * For a HugeTLB page, there is more metadata to save in the struct page. But
39 * the head struct page cannot meet our needs, so we have to abuse other tail
40 * struct pages to store the metadata.
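A minimal sketch of that layout, assuming hypothetical index names (the real header defines its own enum of subpage indices): pointers to per-hugepage metadata are stashed in the ->private fields of tail struct pages.

	/* Sketch only: the names and index values below are illustrative assumptions. */
	enum {
		SUBPAGE_INDEX_SUBPOOL = 1,	/* tail page 1 holds the subpool pointer */
		SUBPAGE_INDEX_CGROUP  = 2,	/* tail page 2 holds the cgroup pointer  */
	};

	static inline struct hugepage_subpool *hugetlb_page_subpool(struct page *hpage)
	{
		/* read the pointer stashed in a tail struct page's ->private field */
		return (void *)page_private(hpage + SUBPAGE_INDEX_SUBPOOL);
	}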
83 * instantiated within the map. The from and to elements are huge page
136 struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
140 unsigned long, unsigned long, struct page *,
145 struct page *ref_page, zap_flags_t zap_flags);
217 * high-level pgtable page, but also PUD entry that can be unshared
227 * pgtable page can go away from under us! It can be done by a pmd
241 * a concurrent pmd unshare, but it makes sure the pgtable page is safe to
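The comment above implies a locking pattern along these lines (a sketch assembled from helpers declared in hugetlb.h; exact call sites vary): hold the hugetlb VMA lock across the walk so a concurrent pmd unshare cannot free the pgtable page, then take the pte lock before touching the entry.

	hugetlb_vma_lock_read(vma);
	pte = hugetlb_walk(vma, addr, huge_page_size(h));
	if (pte) {
		spinlock_t *ptl = huge_pte_lock(h, vma->vm_mm, pte);
		/* the pgtable page cannot be unshared or freed while both locks are held */
		spin_unlock(ptl);
	}
	hugetlb_vma_unlock_read(vma);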
461 unsigned long end, struct page *ref_page,
570 * hugetlb page specific state flags. These flags are located in page.private
571 * of the hugetlb head page. Functions created via the below macros should be
574 * HPG_restore_reserve - Set when a hugetlb page consumes a reservation at
575 * allocation time. Cleared when page is fully instantiated. Free
578 * the only reference to the page, i.e. after allocation but before use
579 * or when the page is being freed.
580 * HPG_migratable - Set after a newly allocated page is added to the page
581 * cache and/or page tables. Indicates the page is a candidate for
583 * Synchronization: Initially set after new page allocation with no
586 * HPG_temporary - Set on a page that is temporarily allocated from the buddy
588 * are available in the pool. The hugetlb free page path will
590 * Synchronization: Can be set after huge page allocation from buddy when
593 * HPG_freed - Set when page is on the free lists.
595 * HPG_vmemmap_optimized - Set when the vmemmap pages of the page are freed.
596 * HPG_raw_hwp_unreliable - Set when the hugetlb page has a hwpoison sub-page
611 * hugetlb specific page flags.
620 static inline int HPage##uname(struct page *page) \
621 { return test_bit(HPG_##flname, &(page->private)); }
629 static inline void SetHPage##uname(struct page *page) \
630 { set_bit(HPG_##flname, &(page->private)); }
638 static inline void ClearHPage##uname(struct page *page) \
639 { clear_bit(HPG_##flname, &(page->private)); }
645 static inline int HPage##uname(struct page *page) \
652 static inline void SetHPage##uname(struct page *page) \
659 static inline void ClearHPage##uname(struct page *page) \
669 * Create functions associated with hugetlb page flags
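For a flag declared through these macros (e.g. HPAGEFLAG(Temporary, temporary)), the generated helpers can be used roughly as in this sketch; note that the flags live in page->private of the head page only.

	/* Sketch: 'head' is assumed to be a hugetlb head page. */
	static void example_flag_usage(struct page *head)
	{
		SetHPageTemporary(head);		/* set_bit(HPG_temporary, ...)   */
		if (HPageTemporary(head))		/* test_bit(HPG_temporary, ...)  */
			ClearHPageTemporary(head);	/* clear_bit(HPG_temporary, ...) */
	}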
681 /* Defines one hugetlb page size */
714 int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list);
898 * It determines whether a huge page should be placed in a
899 * movable zone. Movability of any huge page should be
900 * required only if the huge page size is supported for migration.
901 * There won't be any reason for the huge page to be movable if
903 * page should be large enough to be placed within a movable zone
907 * So even though large huge page sizes like the gigantic ones
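Taken together, the comment suggests a check of roughly this shape (a sketch consistent with the comment, not necessarily the exact source): migration support is a precondition, and gigantic sizes are excluded from movable placement.

	static inline bool hugepage_movable_supported(struct hstate *h)
	{
		/* a page that cannot migrate has no reason to be movable */
		if (!hugepage_migration_supported(h))
			return false;
		/* gigantic pages are migratable but too large for a movable zone */
		if (hstate_is_gigantic(h))
			return false;
		return true;
	}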
1028 * Check if a given raw @page in a hugepage is HWPOISON.
1030 bool is_raw_hwpoison_page_in_hugepage(struct page *page);
1046 static inline int isolate_or_dissolve_huge_page(struct page *page,