Lines matching the identifier `zones` (the leading number on each line is its line number in the source file):
167 * `lru_pages' represents the number of on-LRU pages in all the zones which
959 * try to reclaim pages from zones which will satisfy the caller's allocation
965 * b) The zones may be over pages_high but they must go *over* pages_high to
973 static unsigned long shrink_zones(int priority, struct zone **zones,
980 for (i = 0; zones[i] != NULL; i++) {
981 struct zone *zone = zones[i];
1014 unsigned long try_to_free_pages(struct zone **zones, gfp_t gfp_mask)
1033 for (i = 0; zones[i] != NULL; i++) {
1034 struct zone *zone = zones[i];
1047 nr_reclaimed += shrink_zones(priority, zones, &sc);
1081 * Now that we've scanned all the zones at this priority level, note
1089 for (i = 0; zones[i] != NULL; i++) {
1090 struct zone *zone = zones[i];
1101 * For kswapd, balance_pgdat() will work across all this node's zones until
1106 * There is special handling here for zones which are full of pinned pages.
1114 * kswapd scans the zones in the highmem->normal->dma direction. It skips
1115 * zones which have free_pages > pages_high, but once a zone is found to have
1116 * free_pages <= pages_high, we scan that zone and the lower zones regardless
1117 * of the number of free pages in the lower zones. This interoperates with
1119 * across the zones.
1196 * cause too much scanning of the lower zones.
1239 * another pass across the zones.
1666 * Note that shrink_slab will free memory on all zones and may
1720 * Only run zone reclaim on the local zone or on zones that do not