Lines Matching refs:map

548 hugetlb_resv_map_add(struct resv_map *map, struct list_head *rg, long from,
555 nrg = get_file_region_entry_from_cache(map, from, to);
556 record_hugetlb_cgroup_uncharge_info(cg, h, map, nrg);
558 coalesce_file_region(map, nrg);
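
These hits show hugetlb_resv_map_add() taking a pre-allocated file_region from the map's entry cache and then merging it with adjacent entries. Below is a minimal userspace sketch of that coalescing step, assuming a sorted singly linked list of half-open [from, to) ranges; the struct, helpers, and list are illustrative, not the kernel's (the real coalesce_file_region() also merges with the previous entry, and only when cgroup ownership matches).

#include <stdio.h>
#include <stdlib.h>

/* Toy reserve map: a sorted, singly linked list of [from, to) ranges. */
struct region {
    long from, to;
    struct region *next;
};

static struct region *mkregion(long from, long to, struct region *next)
{
    struct region *rg = malloc(sizeof(*rg));
    if (!rg)
        exit(1);
    rg->from = from;
    rg->to = to;
    rg->next = next;
    return rg;
}

/* Merge rg with successors that abut it exactly, freeing the victims. */
static void coalesce(struct region *rg)
{
    while (rg->next && rg->to == rg->next->from) {
        struct region *victim = rg->next;
        rg->to = victim->to;
        rg->next = victim->next;
        free(victim);
    }
}

int main(void)
{
    struct region *head = mkregion(10, 20,
                          mkregion(20, 30,
                          mkregion(30, 40, NULL)));
    coalesce(head);
    printf("[%ld, %ld)\n", head->from, head->to);   /* prints [10, 40) */
    free(head);
    return 0;
}
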
691 * map. Regions will be taken from the cache to fill in this range.
699 * Return the number of new huge pages added to the map. This number is greater
754 * Examine the existing reserve map and determine how many
758 * map to add the specified range [f, t). region_chg does
760 * map. A number of new file_region structures are added to the cache as a
769 * reservation map for the range [f, t). This number is greater than or equal to
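
The comments above (691-769) describe the two-phase protocol: region_chg() examines the map and predicts how many pages a later region_add() over the same [f, t) will add, without changing what the map represents (it only stocks the entry cache), so the caller can charge quota and cgroup accounting before committing. Here is a toy model of just the counting semantics, using a bitmap instead of the kernel's linked list and cache; all names are illustrative.

#include <stdio.h>
#include <stdbool.h>

#define NPAGES 16
static bool map[NPAGES];   /* map[i] == true: page i is in the reserve map */

/* Phase 1: how many pages in [f, t) are NOT yet represented? No mutation. */
static long region_chg(long f, long t)
{
    long i, chg = 0;
    for (i = f; i < t; i++)
        if (!map[i])
            chg++;
    return chg;
}

/* Phase 2: actually add [f, t); returns how many pages were newly added. */
static long region_add(long f, long t)
{
    long i, add = 0;
    for (i = f; i < t; i++)
        if (!map[i]) {
            map[i] = true;
            add++;
        }
    return add;
}

int main(void)
{
    region_add(4, 8);                       /* pre-existing reservation */
    long chg = region_chg(6, 12);           /* would add pages 8..11 */
    /* ...charge quota/cgroup for 'chg' pages here; this may fail... */
    long add = region_add(6, 12);
    printf("chg=%ld add=%ld\n", chg, add);  /* prints chg=4 add=4 */
    return 0;
}
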
819 * Delete the specified range [f, t) from the reserve map. If the
824 * Returns the number of huge pages deleted from the reserve map.
845 * may be a "placeholder" entry in the map which is of the form
928 * the reserve map region for a page. The huge page itself was freed
931 * these counts, the reserve map entry which could not be deleted will
956 * Count and return the number of huge pages in the reserve map
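
The region_del() comments above (819-931) include the one genuinely tricky case: when [f, t) falls strictly inside a single existing entry, that entry must be split in two, which requires allocating a new file_region and can therefore fail; that is why a reserve map entry can temporarily outlive its page, with the counts adjusted as lines 928-931 describe. A userspace sketch of just that split case, with illustrative names:

#include <stdio.h>
#include <stdlib.h>

struct region {
    long from, to;
    struct region *next;
};

/* Delete [f, t) assuming it lies strictly inside *rg.  Returns pages
 * removed, or -1 if the required split allocation fails (the kernel
 * returns -ENOMEM and the caller must cope with the leftover entry). */
static long del_middle(struct region *rg, long f, long t)
{
    struct region *tail = malloc(sizeof(*tail));
    if (!tail)
        return -1;
    tail->from = t;         /* right-hand remainder [t, old to) */
    tail->to = rg->to;
    tail->next = rg->next;
    rg->to = f;             /* left-hand remainder [old from, f) */
    rg->next = tail;
    return t - f;
}

int main(void)
{
    struct region rg = { 0, 10, NULL };
    long removed = del_middle(&rg, 3, 7);
    printf("removed=%ld, left=[%ld,%ld) right=[%ld,%ld)\n",
           removed, rg.from, rg.to, rg.next->from, rg.next->to);
    free(rg.next);
    return 0;
}
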
1027 * bits of the reservation map pointer, which are always clear due to
1045 * manner to a shared mapping. A shared mapping has a region map associated
1046 * with the underlying file; this region map represents the backing file
1048 * after the page is instantiated. A private mapping has a region map
1050 * reference it; this region map represents those offsets which have consumed
1121 /* Clear out any active regions before we release the map. */
1163 static void set_vma_resv_map(struct vm_area_struct *vma, struct resv_map *map)
1168 set_vma_private_data(vma, (unsigned long)map);
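
Line 1027 explains why the assignment at line 1168 is safe: a kmalloc'd resv_map pointer is at least word aligned, so its low bits are always clear and can double as the HPAGE_RESV_* ownership flags stored in the same vm_private_data word. A standalone sketch of the packing trick; the flag names mirror the kernel's, everything else is illustrative.

#include <stdio.h>
#include <stdlib.h>

#define HPAGE_RESV_OWNER    (1UL << 0)
#define HPAGE_RESV_UNMAPPED (1UL << 1)
#define HPAGE_RESV_MASK     (HPAGE_RESV_OWNER | HPAGE_RESV_UNMAPPED)

struct resv_map { long dummy; };

static unsigned long private_data;  /* stand-in for vma->vm_private_data */

static void set_resv_map(struct resv_map *map)
{
    private_data = (unsigned long)map;  /* an aligned pointer: bits 0-1 clear */
}

static void set_resv_flags(unsigned long flags)
{
    private_data |= flags;              /* flags ride in the low bits */
}

static struct resv_map *get_resv_map(void)
{
    return (struct resv_map *)(private_data & ~HPAGE_RESV_MASK);
}

int main(void)
{
    struct resv_map *map = malloc(sizeof(*map));
    set_resv_map(map);
    set_resv_flags(HPAGE_RESV_OWNER);
    printf("pointer intact: %d, owner flag: %lu\n",
           get_resv_map() == map, private_data & HPAGE_RESV_OWNER);
    free(map);
    return 0;
}
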
1204 * - For MAP_PRIVATE mappings, this is the reserve map which does
1271 * be a region map for all pages. The only situation where
1272 * there is no region map is if a hole was punched via
1293 * call to vma_needs_reserves(). The reserve map for
2718 * to the associated reservation map.
2775 * to add the page to the reservation map. If the page allocation fails,
2781 * is not the case is if a reserve map was changed between calls. It
2791 * map was created during huge page allocation and must be removed. It is to
2820 * 1 page, and that adding a 1-page entry to the resv map can only
2867 * Subtle - The reserve map for private mappings has the
2869 * entry is in the reserve map, it means a reservation exists.
2870 * If an entry exists in the reserve map, it means the
2873 * value returned from reserve map manipulation routines above.
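
Lines 2867-2873 flag the inversion: for a shared mapping an entry in the reserve map means a reservation exists for that offset, while for a private mapping an entry records a reservation that has already been consumed, so it is the absence of an entry that indicates an available reservation. Reusing the bitmap toy model, a hedged sketch of the resulting check (names illustrative):

#include <stdio.h>
#include <stdbool.h>

/* Does offset idx still have an unconsumed reservation? */
static bool reservation_available(const bool *map, long idx, bool shared)
{
    bool entry = map[idx];
    /* shared: an entry means a reservation exists;
     * private: an entry means the reservation was already consumed. */
    return shared ? entry : !entry;
}

int main(void)
{
    bool map[4] = { true, false, true, false };
    printf("shared, idx 1:  %d\n", reservation_available(map, 1, true));  /* 0 */
    printf("private, idx 1: %d\n", reservation_available(map, 1, false)); /* 1 */
    return 0;
}
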
2921 * not set. However, alloc_hugetlb_folio always updates the reserve map.
2925 * to adjust the reservation map. This case deals primarily with private
2926 * mappings. Adjust the reserve map here to be consistent with global
2928 * reserve map indicates there is a reservation present.
2930 * In case 2, simply undo reserve map modifications done by alloc_hugetlb_folio.
2940 * Rare out of memory condition in reserve map
2958 * This indicates there is an entry in the reserve map
2982 * reserve map.
2984 * For shared mappings, no entry in the map indicates
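
Lines 2921-2984 describe the cleanup run when a freshly allocated folio is never actually mapped: either it consumed a reservation (case 1, where the map must be brought back in line with the global reserve count that free_huge_folio() will adjust) or it did not (case 2, where the map update made by alloc_hugetlb_folio() is simply undone). A decision skeleton only, with illustrative names; the real routine also handles cgroup accounting and the rare allocation failure noted at line 2940.

#include <stdio.h>

enum error_case {
    CASE_CONSUMED_RESERVATION,  /* case 1: folio consumed a reservation */
    CASE_NO_RESERVATION,        /* case 2: map entry added at alloc time */
};

static void restore_reserve_on_error(enum error_case c)
{
    switch (c) {
    case CASE_CONSUMED_RESERVATION:
        /* Make the reserve map consistent with the global reserve
         * count adjustment free_huge_folio() will make: ensure the
         * map once again indicates a reservation is present. */
        printf("case 1: re-instate reservation in the map\n");
        break;
    case CASE_NO_RESERVATION:
        /* Simply undo the reserve map modification made by
         * alloc_hugetlb_folio(). */
        printf("case 2: undo the allocation-time map update\n");
        break;
    }
}

int main(void)
{
    restore_reserve_on_error(CASE_CONSUMED_RESERVATION);
    restore_reserve_on_error(CASE_NO_RESERVATION);
    return 0;
}
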
3155 * Examine the region/reserve map to determine if the process
3169 * reserves as indicated by the region/reserve map. Check
3181 * map, there could be reservations associated with the
3243 * The page was added to the reservation map between
5179 * This new VMA should share its sibling's reservation map if present.
5180 * The VMA will only ever have a valid reservation map pointer where
5182 * has a reference to the reservation map it cannot disappear until
7187 * When the last VMA disappears, the region map says how much
7192 * consumed reservations are stored in the map. Hence, nothing
7204 * map between region_chg and region_add. This
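
The final hits (7187-7204) cover teardown and the race window the two-phase protocol leaves open: the reserve map can change between region_chg() and region_add(), in which case the commit adds fewer pages than were charged and the caller must give back the excess. Extending the earlier bitmap toy model into a self-contained sketch of that check (illustrative only; in the kernel the excess is returned to the sub-pool and hstate accounting):

#include <stdio.h>
#include <stdbool.h>

#define NPAGES 16
static bool map[NPAGES];

static long region_chg(long f, long t)           /* phase 1: count only */
{
    long i, n = 0;
    for (i = f; i < t; i++)
        if (!map[i])
            n++;
    return n;
}

static long region_add(long f, long t)           /* phase 2: commit */
{
    long i, n = 0;
    for (i = f; i < t; i++)
        if (!map[i]) {
            map[i] = true;
            n++;
        }
    return n;
}

int main(void)
{
    long charged = region_chg(0, 8);   /* quota charged for 8 pages */
    region_add(2, 4);                  /* racing thread adds 2 entries */
    long add = region_add(0, 8);       /* commit only finds 6 to add */
    if (charged > add)                 /* race detected: return excess */
        printf("returning %ld pages of quota\n", charged - add);
    return 0;
}
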