Lines Matching refs:end

387 		 * at the end of the enqueue.
681 * and end up with a larger lag than we started with.
1271 * Mark the end of the wait period if dequeueing a
2186 * end try selecting ourselves (current == env->p) as a swap candidate.
3207 unsigned long start, end;
3373 end = ALIGN(start + (pages << PAGE_SHIFT), HPAGE_SIZE);
3374 end = min(end, vma->vm_end);
3375 nr_pte_updates = change_prot_numa(vma, start, end);
3386 pages -= (end - start) >> PAGE_SHIFT;
3387 virtpages -= (end - start) >> PAGE_SHIFT;
3389 start = end;
3394 } while (end != vma->vm_end);
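The hits at 3373–3394 above outline the chunked VMA scan in `task_numa_work()`: advance through `[start, vma->vm_end)` in `HPAGE_SIZE`-aligned steps, clamp each step to the VMA end, and charge the pages touched against a scan budget. Below is a minimal standalone sketch of that loop shape; `scan_range()` is a hypothetical stand-in for `change_prot_numa()`, and the constants are illustrative, not the kernel's definitions.

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SHIFT 12
#define HPAGE_SIZE (1UL << 21)                       /* 2 MiB, illustrative */
#define ALIGN(x, a) (((x) + (a) - 1) & ~((unsigned long)(a) - 1))

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

/* Hypothetical stand-in for change_prot_numa(): report pages covered. */
static unsigned long scan_range(unsigned long start, unsigned long end)
{
	return (end - start) >> PAGE_SHIFT;
}

/*
 * Walk [start, vm_end) in HPAGE_SIZE-aligned chunks, clamping each
 * chunk to the VMA end and debiting the page budget -- the same loop
 * shape as the fair.c hits at 3373-3394 above.
 */
static unsigned long scan_vma(unsigned long start, unsigned long vm_end,
			      long pages)
{
	unsigned long end, scanned = 0;

	do {
		end = ALIGN(start + ((unsigned long)pages << PAGE_SHIFT),
			    HPAGE_SIZE);
		end = min_ul(end, vm_end);
		scanned += scan_range(start, end);
		pages -= (end - start) >> PAGE_SHIFT;
		start = end;
		if (pages <= 0)
			break;
	} while (end != vm_end);

	return scanned;
}
```

With a 4-hugepage VMA and an ample budget the whole range is covered in one clamped step; with a smaller budget the loop stops once the budget is exhausted, leaving `start` positioned for the next scan period.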
3419 * It is possible to reach the end of the VMA list but the last few
3831 * than up-to-date one, we do the update at the end of the
5905 /* end evaluation on encountering a throttled cfs_rq */
5922 /* end evaluation on encountering a throttled cfs_rq */
6165 * Are we near the end of the current quota period?
6783 /* end evaluation on encountering a throttled cfs_rq */
6803 /* end evaluation on encountering a throttled cfs_rq */
6861 /* end evaluation on encountering a throttled cfs_rq */
6893 /* end evaluation on encountering a throttled cfs_rq */
10149 * the spare capacity which is more stable but it can end up
10526 * compare the utilization which is more stable but it can end
10744 * capacity. This might end up creating spare capacity
10853 /******* sched_balance_find_src_group() helpers end here *********************/
10995 * otherwise we might end up to just move the imbalance
12195 * Start with the next CPU after this_cpu so we will end with this_cpu and let a