Lines Matching defs:reclaiming

282 	 * overestimating the reclaimed amount (potentially under-reclaiming).
284 * Only count such pages for global reclaim to prevent under-reclaiming
1182 * Before reclaiming the folio, try to relocate
1681 * this disrupts the LRU order when reclaiming for lower zones but
1948 * pressure reclaiming all the clean cache. And in some cases,
2397 * proportional to the cost of reclaiming each list, as
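The hit at 2397 refers to putting pressure on each LRU list in (inverse) proportion to the cost of reclaiming it. Below is a minimal standalone C model of that idea, with invented names and weighting; it only sketches the proportional split, whereas the kernel's get_scan_count() handles many more cases.

#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative, standalone model of splitting one scan target between the
 * anon and file LRU lists so that the cheaper-to-reclaim list gets the
 * larger share. Names and weighting are invented for this sketch.
 */
static void split_scan_target(uint64_t anon_size, uint64_t file_size,
			      uint64_t anon_cost, uint64_t file_cost,
			      uint64_t nr_to_scan,
			      uint64_t *anon_scan, uint64_t *file_scan)
{
	/*
	 * Weight each list by its size and inversely by its own cost;
	 * multiplying by the other list's cost keeps this in integer math.
	 */
	uint64_t anon_weight = anon_size * (file_cost + 1);
	uint64_t file_weight = file_size * (anon_cost + 1);
	uint64_t denom = anon_weight + file_weight;

	if (!denom) {
		*anon_scan = *file_scan = 0;
		return;
	}
	*anon_scan = nr_to_scan * anon_weight / denom;
	*file_scan = nr_to_scan - *anon_scan;
}

int main(void)
{
	uint64_t anon, file;

	/* File pages are cheap to reclaim here, so they take most of the scan. */
	split_scan_target(1 << 20, 1 << 20, 900, 100, 4096, &anon, &file);
	printf("scan anon=%llu file=%llu\n",
	       (unsigned long long)anon, (unsigned long long)file);
	return 0;
}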
3124 static int folio_inc_gen(struct lruvec *lruvec, struct folio *folio, bool reclaiming)
3144 if (reclaiming)
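The hits at 3124 and 3144 show folio_inc_gen() taking a bool reclaiming flag. The following is a simplified userspace sketch of that pattern: atomically bump a generation field packed into a flags word and, only when called from reclaim, set an extra reclaim bit. The bit layout, MAX_GENS and FLAG_RECLAIM below are invented stand-ins, not the kernel's encoding.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define GEN_SHIFT	8
#define GEN_MASK	(0xfUL << GEN_SHIFT)
#define MAX_GENS	4
#define FLAG_RECLAIM	(1UL << 0)	/* models a PG_reclaim-style bit */

struct folio_model {
	_Atomic unsigned long flags;
};

static int folio_inc_gen_model(struct folio_model *folio, bool reclaiming)
{
	unsigned long old_flags = atomic_load(&folio->flags);
	unsigned long new_flags;
	int new_gen;

	do {
		int old_gen = (old_flags & GEN_MASK) >> GEN_SHIFT;

		new_gen = (old_gen + 1) % MAX_GENS;
		new_flags = (old_flags & ~GEN_MASK) |
			    ((unsigned long)new_gen << GEN_SHIFT);
		/* Only a reclaiming caller marks the folio for writeback completion. */
		if (reclaiming)
			new_flags |= FLAG_RECLAIM;
	} while (!atomic_compare_exchange_weak(&folio->flags, &old_flags,
					       new_flags));

	return new_gen;
}

int main(void)
{
	struct folio_model folio = { .flags = 0 };

	printf("new gen %d, flags %#lx\n",
	       folio_inc_gen_model(&folio, true),
	       (unsigned long)atomic_load(&folio.flags));
	return 0;
}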
5663 * Global reclaiming within direct reclaim at DEF_PRIORITY is a normal
5668 * reclaiming implies that kswapd is not keeping up and it is best to
5701 * stop reclaiming one LRU and reduce the amount scanning
5775 * It will give up earlier than that if there is difficulty reclaiming pages.
5819 * inactive lists are large enough, continue reclaiming
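The hits at 5775 and 5819 describe the shrink loop giving up early once enough pages have been reclaimed, or continuing while the per-list scan targets still have work in them. Here is a hypothetical skeleton of such a loop; the struct, BATCH and the stubbed take_batch() scanner are invented for the sketch, not kernel interfaces.

#include <stdbool.h>
#include <stdio.h>

enum { NR_LISTS = 4, BATCH = 32 };

struct scan_state {
	unsigned long nr_to_scan[NR_LISTS];	/* per-list scan targets */
	unsigned long nr_to_reclaim;		/* overall goal */
	unsigned long nr_reclaimed;
};

/* Stand-in scanner: pretend half of each scanned batch was freed. */
static unsigned long take_batch(int lru, unsigned long nr)
{
	(void)lru;
	return nr / 2;
}

static void shrink_lists(struct scan_state *sc)
{
	bool work_left = true;

	while (work_left && sc->nr_reclaimed < sc->nr_to_reclaim) {
		work_left = false;

		for (int lru = 0; lru < NR_LISTS; lru++) {
			unsigned long nr = sc->nr_to_scan[lru];

			if (!nr)
				continue;
			if (nr > BATCH)
				nr = BATCH;
			sc->nr_to_scan[lru] -= nr;
			sc->nr_reclaimed += take_batch(lru, nr);
			work_left = true;
		}
	}
}

int main(void)
{
	struct scan_state sc = {
		.nr_to_scan = { 512, 512, 2048, 2048 },
		.nr_to_reclaim = 256,
	};

	shrink_lists(&sc);
	printf("reclaimed %lu of a %lu-page goal\n",
	       sc.nr_reclaimed, sc.nr_to_reclaim);
	return 0;
}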
6097 * Take care memory controller reclaiming has small influence
6223 * If we're getting trouble reclaiming, start doing
6756 * Returns the order kswapd finished reclaiming at.
6816 * then consider reclaiming from all zones. This has a dual
6871 * referenced before reclaiming. All pages are rotated
6877 * If we're getting trouble reclaiming, start doing writepage
6916 * progress in reclaiming pages
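The hits at 6223 and 6877 both refer to escalating behaviour when reclaim is having trouble making progress. One way to model that escalation is a priority counter that drops on each unproductive pass and, past a threshold, allows dirty pages to be written back; the constant, struct and threshold below are stand-ins, not kernel definitions.

#include <stdbool.h>
#include <stdio.h>

#define DEF_PRIORITY_MODEL 12

struct scan_control_model {
	int priority;		/* counts down as reclaim struggles */
	bool may_writepage;
};

static void maybe_enable_writepage(struct scan_control_model *sc)
{
	/* Getting trouble reclaiming: start doing writepage. */
	if (sc->priority < DEF_PRIORITY_MODEL - 2)
		sc->may_writepage = true;
}

int main(void)
{
	struct scan_control_model sc = {
		.priority = DEF_PRIORITY_MODEL,
		.may_writepage = false,
	};

	while (sc.priority-- > 0) {
		maybe_enable_writepage(&sc);
		printf("priority %2d may_writepage %d\n",
		       sc.priority, sc.may_writepage);
	}
	return 0;
}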
6977 * Return the order kswapd stopped reclaiming at as
7156 * reclaim fails then kswapd falls back to reclaiming for
7158 * for the order it finished reclaiming at (reclaim_order)
7178 * pgdat. It will wake up kcompactd after reclaiming memory. If kswapd reclaim
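The hits at 6756, 6977, 7156, 7158 and 7178 all concern the order kswapd finished reclaiming at. Below is a schematic, userspace-only model of that flow, with every helper invented for the sketch: balance for the requested order, fall back toward order-0 when that proves too hard, and base the compaction wakeup on the order actually finished at rather than the request.

#include <stdio.h>

/* Pretend balancing succeeds for small orders only; otherwise fall back to 0. */
static int balance_node(int order)
{
	return order <= 3 ? order : 0;
}

static void wake_compaction(int order)
{
	printf("wake compaction for order %d\n", order);
}

int main(void)
{
	int requests[] = { 0, 3, 9 };	/* orders passed in by wakers */
	const int nreq = (int)(sizeof(requests) / sizeof(requests[0]));

	for (int i = 0; i < nreq; i++) {
		int alloc_order = requests[i];
		int reclaim_order = balance_node(alloc_order);

		/* Sleep/compact for the order finished at, not the request. */
		wake_compaction(reclaim_order);
		printf("requested order %d, finished at order %d\n",
		       alloc_order, reclaim_order);
	}
	return 0;
}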
7233 * LRU order by reclaiming preferentially