Lines matching refs:reserve

690  * Add the huge page range represented by [f, t) to the reserve
754 * Examine the existing reserve map and determine how many
757 * call to region_add that will actually modify the reserve
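The two fragments above (690, 754-757) describe hugetlb's two-phase reservation pattern: region_chg() examines the existing reserve map and reports how many pages in [f, t) are not yet covered, and only a later region_add() call actually modifies the map. A minimal userspace sketch of the "count what is missing" step, assuming the reserve map is a sorted list of non-overlapping [from, to) ranges; names here are illustrative, not the kernel's struct file_region API:

#include <stdio.h>

struct range { long from, to; };

/* region_chg-like helper: pages in [f, t) not yet in the reserve map. */
static long uncovered_pages(const struct range *map, int n, long f, long t)
{
    long missing = t - f;

    for (int i = 0; i < n; i++) {
        long lo = map[i].from > f ? map[i].from : f;
        long hi = map[i].to   < t ? map[i].to   : t;
        if (hi > lo)
            missing -= hi - lo;   /* overlap is already reserved */
    }
    return missing;
}

int main(void)
{
    struct range map[] = { { 2, 4 }, { 8, 10 } };

    /* Adding [0, 6): pages 2-3 are covered, so 4 new pages are needed. */
    printf("new pages needed: %ld\n", uncovered_pages(map, 2, 0, 6));
    return 0;
}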
819 * Delete the specified range [f, t) from the reserve map. If the
824 * Returns the number of huge pages deleted from the reserve map.
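A similar sketch for the region_del() behaviour just described: removing [f, t) may trim a range, drop it, or split one range in two, and the return value is the number of huge pages actually removed. Again plain C with illustrative names, not the kernel implementation:

#include <stdio.h>

struct range { long from, to; };

/* Delete [f, t) from map[]; returns pages removed, updates *n in place. */
static long region_del_sketch(struct range *map, int *n, long f, long t)
{
    long removed = 0;

    for (int i = 0; i < *n; i++) {
        long lo = map[i].from > f ? map[i].from : f;
        long hi = map[i].to   < t ? map[i].to   : t;

        if (hi <= lo)
            continue;                 /* no overlap with this range */
        removed += hi - lo;

        if (map[i].from < f && map[i].to > t) {
            /* Hole in the middle: split into [from, f) and [t, to). */
            map[*n] = (struct range){ t, map[i].to };
            map[i].to = f;
            (*n)++;
        } else if (map[i].from < f) {
            map[i].to = f;            /* trim the tail */
        } else if (map[i].to > t) {
            map[i].from = t;          /* trim the head */
        } else {
            map[i].from = map[i].to;  /* fully covered: empty range */
        }
    }
    return removed;
}

int main(void)
{
    struct range map[8] = { { 0, 10 } };
    int n = 1;

    printf("removed %ld pages\n", region_del_sketch(map, &n, 3, 6));
    for (int i = 0; i < n; i++)
        printf("[%ld, %ld)\n", map[i].from, map[i].to);
    return 0;
}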
928 * the reserve map region for a page. The huge page itself was freed
930 * usage count, and the global reserve count if needed. By incrementing
931 * these counts, the reserve map entry which could not be deleted will
956 * Count and return the number of huge pages in the reserve map
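The fragment at 956 names region_count(), the counting counterpart of the routines above: the number of huge pages in [f, t) covered by the reserve map is the summed overlap with its sorted, non-overlapping ranges. A self-contained sketch under the same toy representation:

#include <stdio.h>

struct range { long from, to; };

/* region_count-like helper: pages in [f, t) the reserve map covers. */
static long region_count_sketch(const struct range *map, int n, long f, long t)
{
    long covered = 0;

    for (int i = 0; i < n; i++) {
        long lo = map[i].from > f ? map[i].from : f;
        long hi = map[i].to   < t ? map[i].to   : t;
        if (hi > lo)
            covered += hi - lo;
    }
    return covered;
}

int main(void)
{
    struct range map[] = { { 1, 3 }, { 5, 9 } };

    /* [0, 8) overlaps [1, 3) and [5, 8): 2 + 3 = 5 pages reserved. */
    printf("covered: %ld\n", region_count_sketch(map, 2, 0, 8));
    return 0;
}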
1040 * the reserve counters are updated with the hugetlb_lock held. It is safe
1204 * - For MAP_PRIVATE mappings, this is the reserve map which does
1248 /* Returns true if the VMA has associated reserve pages */
1255 * reserve count remains after releasing inode, because this
1293 * call to vma_needs_reserves(). The reserve map for
2776 * is not the case is if a reserve map was changed between calls. It
2785 * vma_del_reservation is used in error paths where an entry in the reserve
2862 * Subtle - The reserve map for private mappings has the
2864 * entry is in the reserve map, it means a reservation exists.
2865 * If an entry exists in the reserve map, it means the
2868 * value returned from reserve map manipulation routines above.
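The "Subtle" comment at 2862-2868 appears here with its non-matching lines elided by the search; the full comment in mm/hugetlb.c explains that the reserve map for private mappings has the opposite meaning of that for shared mappings: if NO entry is present a reservation exists, and if an entry exists the reservation has already been consumed, so the routine's return value inverts what the reserve map manipulation routines report. A toy illustration of that inversion (hypothetical names, not the kernel API):

#include <stdbool.h>
#include <stdio.h>

enum map_type { SHARED, PRIVATE };

/* entry_exists: does the reserve map contain an entry for this page? */
static bool needs_new_reserve(enum map_type type, bool entry_exists)
{
    if (type == SHARED)
        return !entry_exists;  /* entry present => reservation available */
    return entry_exists;       /* entry present => reservation consumed */
}

int main(void)
{
    printf("shared, entry present  -> needs reserve? %d\n",
           needs_new_reserve(SHARED, true));
    printf("private, entry present -> needs reserve? %d\n",
           needs_new_reserve(PRIVATE, true));
    return 0;
}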
2916 * not set. However, alloc_hugetlb_folio always updates the reserve map.
2919 * global reserve count. But, free_huge_folio does not have enough context
2921 * mappings. Adjust the reserve map here to be consistent with global
2922 * reserve count adjustments to be made by free_huge_folio. Make sure the
2923 * reserve map indicates there is a reservation present.
2925 * In case 2, simply undo reserve map modifications done by alloc_hugetlb_folio.
2935 * Rare out of memory condition in reserve map
2937 * that global reserve count will not be incremented
2943 * accounting of reserve counts.
2953 * This indicates there is an entry in the reserve map
2965 * hugetlb_restore_reserve so that the reserve
2967 * is freed. This reserve will be consumed
2977 * reserve map.
2987 * on the folio so reserve count will be
2988 * incremented when freed. This reserve will
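The fragments from 2916 through 2988 sketch restore_reserve_on_error(): when a freshly allocated huge page cannot be mapped, the code first tries to make the reserve map consistent again, and only in the rare out-of-memory case falls back to setting hugetlb_restore_reserve on the folio so the global reserve count is fixed up when the folio is eventually freed. A toy model of that deferred fixup, with hypothetical names:

#include <stdbool.h>
#include <stdio.h>

struct toy_folio { bool restore_reserve; };

static long global_resv_pages = 8;

/* Pretend reserve-map fixup that can fail under memory pressure. */
static bool reinstate_map_entry(bool oom) { return !oom; }

static void restore_reserve_sketch(struct toy_folio *folio, bool oom)
{
    if (reinstate_map_entry(oom))
        return;                  /* map and counts stay consistent */
    /* Map entry lost: make the eventual free re-credit the reserve. */
    folio->restore_reserve = true;
}

static void free_folio_sketch(struct toy_folio *folio)
{
    if (folio->restore_reserve)
        global_resv_pages++;     /* deferred reserve-count fixup */
}

int main(void)
{
    struct toy_folio folio = { false };

    global_resv_pages--;         /* reservation consumed at alloc time */
    restore_reserve_sketch(&folio, true);  /* error path hit under OOM */
    free_folio_sketch(&folio);
    printf("global reserve pages: %ld\n", global_resv_pages);
    return 0;
}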
3150 * Examine the region/reserve map to determine if the process
3164 * reserves as indicated by the region/reserve map. Check
3175 * Even though there was no reservation in the region/reserve
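The fragments at 3150-3175 describe the allocation-time check: the fault path consults the region/reserve map to see whether a reservation covers this page, and without one it may still take a free huge page as long as it does not eat into pages reserved by others. A sketch of that decision under the usual "free minus reserved" invariant (toy counters, not the dequeue_hugetlb_folio_vma() internals):

#include <stdbool.h>
#include <stdio.h>

static long free_pages = 4, resv_pages = 2;

static bool dequeue_sketch(bool covered_by_reserve_map)
{
    /* Without a reservation, do not consume pages others reserved. */
    if (!covered_by_reserve_map && free_pages - resv_pages <= 0)
        return false;

    free_pages--;
    if (covered_by_reserve_map)
        resv_pages--;            /* a reservation is being consumed */
    return true;
}

int main(void)
{
    printf("reserved fault:   %s\n", dequeue_sketch(true)  ? "ok" : "fail");
    printf("unreserved fault: %s\n", dequeue_sketch(false) ? "ok" : "fail");
    return 0;
}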
5210 unsigned long reserve, start, end;
5222 reserve = (end - start) - region_count(resv, start, end);
5224 if (reserve) {
5226 * Decrement reserve counts. The global reserve count may be
5229 gbl_reserve = hugepage_subpool_put_pages(spool, reserve);
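The hunk at 5210-5229 is the private-mapping teardown: the pages reserved but never faulted are (end - start) minus what region_count() reports as consumed, and hugepage_subpool_put_pages() may let the subpool retain some of them toward its minimum size, so only the returned gbl_reserve is subtracted from the global reserve count. A self-contained sketch of that accounting (toy subpool, hypothetical names):

#include <stdio.h>

struct toy_subpool { long min_hpages, rsv_hpages; };

/* Return how many of 'delta' released pages should go back globally. */
static long subpool_put_pages_sketch(struct toy_subpool *sp, long delta)
{
    long ret = delta;

    if (sp->min_hpages != -1) {  /* minimum size accounting */
        if (sp->rsv_hpages + delta <= sp->min_hpages)
            ret = 0;             /* subpool absorbs everything */
        else
            ret = sp->rsv_hpages + delta - sp->min_hpages;
        sp->rsv_hpages += delta;
        if (sp->rsv_hpages > sp->min_hpages)
            sp->rsv_hpages = sp->min_hpages;
    }
    return ret;
}

int main(void)
{
    struct toy_subpool spool = { .min_hpages = 3, .rsv_hpages = 1 };
    long start = 0, end = 10, consumed = 6;  /* region_count() result */
    long reserve = (end - start) - consumed;

    if (reserve) {
        long gbl_reserve = subpool_put_pages_sketch(&spool, reserve);
        printf("unused reserve: %ld, global decrement: %ld\n",
               reserve, gbl_reserve);
    }
    return 0;
}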
5486 /* Do not use reserve as it's privately owned */
5766 * reserve restored. Keep in mind that vma_needs_reservation() changes
5860 * mapping it owns the reserve page for. The intention is to unmap the page
5992 * page is used to determine if the reserve at this address was
7064 * to reserve the full area even if read-only as mprotect() may be
7137 * pages in this range were added to the reserve
7140 * the subpool and reserve counts modified above
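The fragments at 7137-7140 come from the hugetlb_reserve_pages() error path: pages already added to the reserve map and charged to the subpool and global counts must be backed out if a later step fails. A toy version of that charge-then-unwind pattern (hypothetical counters, not the kernel's cleanup code):

#include <stdbool.h>
#include <stdio.h>

static long subpool_used, global_resv, global_avail = 4;

static bool reserve_pages_sketch(long npages)
{
    subpool_used += npages;      /* charge the subpool first */

    if (global_resv + npages > global_avail) {
        /* Global reservation failed: undo the subpool charge. */
        subpool_used -= npages;
        return false;
    }
    global_resv += npages;
    return true;
}

int main(void)
{
    printf("reserve 3: %s\n", reserve_pages_sketch(3) ? "ok" : "fail");
    printf("reserve 3: %s\n", reserve_pages_sketch(3) ? "ok" : "fail");
    printf("subpool_used=%ld global_resv=%ld\n", subpool_used, global_resv);
    return 0;
}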
7777 pr_info("hugetlb_cma: reserve %lu MiB, up to %lu MiB per node\n",