Lines Matching refs:tasks

72  * Minimal preemption granularity for CPU-bound tasks:
431 * both tasks until we find their ancestors who are siblings of common
867 * 2) from those tasks that meet 1), we select the one
1019 * Tasks are initialized with full load to be seen as heavy tasks until
1031 * With new tasks being created, their initial util_avgs are extrapolated
1040 * To solve this problem, we also cap the util_avg of successive tasks to
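
The three hits above (1019-1040) come from the comment over the code that seeds util_avg
for newly created tasks: start them heavy, extrapolate the initial value from what is
already running, and cap it so a burst of forks cannot claim more than the CPU has left.
A minimal userspace sketch of that capping idea; the names and capacity constant are
illustrative assumptions, and the halving rule is borrowed from the hit at 6812-6813
further down, not from the kernel's exact code:

    #include <stdio.h>

    #define CPU_CAPACITY 1024UL     /* one CPU's full capacity, SCHED_CAPACITY_SCALE style */

    /*
     * Seed a new task's util_avg from the spare capacity left on its CPU,
     * so successive new tasks get ever smaller seeds and their sum stays
     * below the CPU's capacity.
     */
    static unsigned long seed_util_avg(unsigned long cfs_util_avg)
    {
            unsigned long spare = cfs_util_avg < CPU_CAPACITY ?
                                  CPU_CAPACITY - cfs_util_avg : 0;

            return spare / 2;       /* half of the remaining spare capacity */
    }

    int main(void)
    {
            unsigned long cfs_util = 0;

            /* Fork a burst of tasks: each later one is seeded lighter. */
            for (int i = 0; i < 5; i++) {
                    unsigned long seed = seed_util_avg(cfs_util);

                    cfs_util += seed;
                    printf("task %d: seed=%lu, cfs util_avg=%lu\n", i, seed, cfs_util);
            }
            return 0;
    }
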
1066 * For !fair tasks do:
1253 * Are we enqueueing a waiting task? (for current tasks
1333 * threshold. Above this threshold, individual tasks may be contending
1335 * approximation as the number of running tasks may not be related to
1343 * tasks that remain local when the destination is lightly loaded.
1355 * calculated based on the task's virtual memory size and
1373 spinlock_t lock; /* nr_tasks, tasks */
1634 * of nodes, and move tasks towards the group with the most
1670 * larger multiplier, in order to group tasks together that are almost
1940 /* The node has spare capacity that can be used to run more tasks. */
1943 * The node is fully used and the tasks don't compete for more CPU
1944 * cycles. Nevertheless, some tasks might wait before running.
1949 * tasks.
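
The hits at 1940-1949 describe the three-way classification NUMA balancing applies to a
node: spare capacity, fully used, or overloaded (the group-level hits at 8803-8833 later
in this list describe the same split at sched-group granularity). A sketch of that
classification; the helper and its thresholds are illustrative assumptions rather than
the kernel's own classification code:

    /* Possible states a node can be in, as the comments above describe. */
    enum numa_type {
            node_has_spare,         /* spare capacity, can take more tasks */
            node_fully_busy,        /* fully used; tasks may wait, but don't compete */
            node_overloaded,        /* more runnable tasks than the node can serve */
    };

    static enum numa_type classify_node(unsigned int nr_running,
                                        unsigned int nr_cpus)
    {
            if (nr_running < nr_cpus)
                    return node_has_spare;
            if (nr_running == nr_cpus)
                    return node_fully_busy;
            return node_overloaded;
    }
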
2159 * be improved if the source task was migrated to the target dst_cpu taking
2219 * be incurred if the tasks were swapped.
2221 * If dst and source tasks are in the same NUMA group, or not
2227 * Do not swap within a group or between tasks that have
2239 * tasks within a group over tiny differences.
2287 * of tasks and also hurt performance due to cache
2367 * more running tasks that the imbalance is ignored as the
2390 * than swapping tasks around, check if a move is possible.
2432 * imbalance and would be the first to start moving tasks about.
2434 * And we want to avoid any moving of tasks about, as that would create
2435 * random movement of tasks -- counter the numa conditions we're trying
2656 * Most memory accesses are shared with other tasks.
2658 * since other tasks may just move the memory elsewhere.
2749 * tasks from numa_groups near each other in the system, and
2852 * Normalize the faults_from, so all tasks in a group
3336 * Scanning the VMAs of short-lived tasks adds more overhead. So
3431 * Make sure tasks use at least 32x as much time to run other code
3952 * on an 8-core system with 8 tasks each runnable on one CPU shares has
4003 * number includes things like RT tasks.
4248 * Imagine a rq with 2 tasks that each are runnable 2/3 of the time. Then the
4250 * runnable section of these tasks overlap (or not). If they were to perfectly
4362 * assuming all tasks are equally runnable.
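
The hits at 4248-4362 are from the comment that motivates the runnable average: a
runqueue with two tasks, each runnable a fraction of the time, is itself runnable
somewhere between that fraction (their runnable sections perfectly overlap) and the
capped sum (they overlap as little as possible). A small worked illustration of those
bounds; this is arithmetic only, not kernel code:

    #include <stdio.h>

    int main(void)
    {
            double p = 2.0 / 3.0;   /* each of the two tasks runnable 2/3 of the time */
            double lower = p;                               /* perfect overlap */
            double upper = 2 * p < 1.0 ? 2 * p : 1.0;       /* minimal overlap, clamped at 1 */

            printf("rq runnable fraction lies in [%.3f, %.3f]\n", lower, upper);
            return 0;
    }
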
4568 * avg. The immediate corollary is that all (fair) tasks must be attached.
4607 * a lot of tasks with the rounding problem between 2 updates of
4800 * tasks cannot exit without having gone through wake_up_new_task() ->
5187 * adding tasks with positive lag, or removing tasks with negative lag
5189 * other tasks.
5264 * When joining the competition, the existing tasks will be,
5265 * on average, halfway through their slice, as such start tasks
5450 * when there are only lesser-weight tasks around):
6633 * CFS operations on tasks:
6728 /* Runqueue only has SCHED_IDLE tasks enqueued */
6812 * Since new tasks are assigned an initial util_avg equal to
6813 * half of the spare capacity of their CPU, tiny tasks have the
6817 * for the first enqueue operation of new tasks during the
6821 * the PELT signals of tasks to converge before taking them
6902 /* balance early to pull high priority tasks */
6941 * The load of a CPU is defined by the load of tasks currently enqueued on that
6942 * CPU as well as tasks which are currently sleeping after an execution on that
6998 * Only decay a single time; tasks that have less than 1 wakeup per
7060 * interrupt intensive workload could force all tasks onto one
7584 * kworker thread and the task's previous CPUs are the same.
7669 * cpu_util() - Estimates the amount of CPU capacity used by CFS tasks.
7678 * CPU utilization is the sum of running time of runnable tasks plus the
7679 * recent utilization of currently non-runnable tasks on that CPU.
7680 * It represents the amount of CPU capacity currently used by CFS tasks in
7686 * runnable tasks on that CPU. It preserves a utilization "snapshot" of
7687 * previously-executed tasks, which helps better deduce how busy a CPU will
7692 * CPU contention for CFS tasks can be detected by CPU runnable > CPU
7699 * of rounding errors as well as task migrations or wakeups of new tasks.
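
The block at 7669-7699 is the cpu_util() documentation: utilization counts both what is
running and what recently ran, the estimate keeps a "snapshot" so a CPU about to re-run
a big sleeper does not look idle, and runnable > util signals contention. A userspace
sketch of the blend-and-clamp step those lines describe; the names and the capacity
constant are illustrative assumptions, not the kernel's exact code:

    #define CPU_CAPACITY 1024UL

    static unsigned long estimate_cpu_util(unsigned long util_avg,
                                           unsigned long util_est)
    {
            /*
             * Take the max of the decaying PELT signal and the estimate
             * snapshot of previously-executed tasks.
             */
            unsigned long util = util_avg > util_est ? util_avg : util_est;

            /*
             * Utilization can overshoot capacity because of rounding
             * errors, migrations or wakeups of new tasks; clamp it.
             */
            return util < CPU_CAPACITY ? util : CPU_CAPACITY;
    }
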
7789 * The utilization of a CPU is defined by the utilization of tasks currently
7790 * enqueued on that CPU as well as tasks which are currently sleeping after an
7902 * NOTE: in case RT tasks are running, by default the
7969 * small tasks on a CPU in order to let other CPUs go into deeper idle states,
7985 * bias new tasks towards specific types of CPUs first, or to try to infer
8385 /* Idle tasks are by definition preempted by non-idle tasks. */
8391 * Batch and idle tasks do not preempt non-idle tasks (their preemption
8706 * We then move tasks around to minimize the imbalance. In the continuous
8735 * Coupled with a limit on how many tasks we can migrate every balance pass,
8803 /* The group has spare capacity that can be used to run more tasks. */
8806 * The group is fully used and the tasks don't compete for more CPU
8807 * cycles. Nevertheless, some tasks might wait before running.
8827 * The tasks' affinity constraints previously prevented the scheduler
8833 * tasks.
8875 struct list_head tasks;
8993 * We do not migrate tasks that are:
9016 * meet load balance goals by pulling other tasks on src_cpu.
9117 * detach_tasks() -- tries to detach up to imbalance load/util/tasks from
9120 * Returns number of detached tasks if successful and 0 otherwise.
9124 struct list_head *tasks = &env->src_rq->cfs_tasks;
9143 while (!list_empty(tasks)) {
9160 /* take a breather every nr_migrate tasks */
9167 p = list_last_entry(tasks, struct task_struct, se.group_node);
9175 * Depending on the number of CPUs and tasks and the
9179 * detaching up to loop_max tasks.
9222 list_add(&p->se.group_node, &env->tasks);
9238 * load/util/tasks.
9245 list_move(&p->se.group_node, tasks);
9285 * attach_tasks() -- attaches all tasks detached by detach_tasks() to their
9290 struct list_head *tasks = &env->tasks;
9297 while (!list_empty(tasks)) {
9298 p = list_first_entry(tasks, struct task_struct, se.group_node);
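
The run of hits from 9117 to 9298 covers detach_tasks()/attach_tasks(): walk the source
runqueue's task list, detach tasks until the load/util/task budget ("imbalance") is
spent, park them on a list in the balance environment, then attach them all on the
destination. The userspace sketch below mirrors that shape only; the types, fields and
list handling are simplified assumptions, not kernel API:

    #include <stdio.h>

    struct task {
            const char *name;
            long load;
            struct task *next;
    };

    struct lb_env {
            struct task *detached;  /* pulled off the source, not yet attached */
            long imbalance;         /* remaining load budget to move */
    };

    /* Pop tasks off @src until the imbalance budget is exhausted. */
    static int detach_tasks(struct task **src, struct lb_env *env)
    {
            int n = 0;

            while (*src && env->imbalance > 0) {
                    struct task *p = *src;

                    *src = p->next;             /* unlink from the source list */
                    p->next = env->detached;    /* park on the env's private list */
                    env->detached = p;
                    env->imbalance -= p->load;
                    n++;
            }
            return n;
    }

    /* Attach everything previously detached onto @dst. */
    static void attach_tasks(struct task **dst, struct lb_env *env)
    {
            while (env->detached) {
                    struct task *p = env->detached;

                    env->detached = p->next;
                    p->next = *dst;
                    *dst = p;
            }
    }

    int main(void)
    {
            struct task a = { "a", 300, NULL };
            struct task b = { "b", 500, &a };
            struct task c = { "c", 200, &b };
            struct task *src = &c, *dst = NULL;
            struct lb_env env = { .detached = NULL, .imbalance = 600 };

            printf("detached %d tasks\n", detach_tasks(&src, &env));
            attach_tasks(&dst, &env);
            return 0;
    }
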
9519 unsigned int sum_nr_running; /* Nr of all tasks running in the group */
9520 unsigned int sum_h_nr_running; /* Nr of CFS tasks running in the group */
9689 * Imagine a situation of two groups of 4 CPUs each and 4 tasks each with a
9696 * If we were to balance group-wise we'd place two tasks in the first group and
9697 * two tasks in the second group. Clearly this is undesired as it will overload
9702 * moving tasks due to affinity constraints.
9721 * be used by some tasks.
9724 * available capacity for CFS tasks.
9726 * account the variance of the tasks' load and to return true if the available
9749 * group_is_overloaded returns true if the group has more tasks than it can
9752 * with the exact right number of tasks, has no more spare capacity but is not
9876 * to a CPU that doesn't have multiple tasks sharing its CPU capacity.
10050 * Don't try to pull misfit tasks we can't help.
10109 * group because tasks have all compute capacity that they need
10148 * and highest number of running tasks. We could also compare
10151 * CPUs which means less opportunity to pull tasks.
10164 * per-CPU capacity. Migrating tasks to less capable CPUs may harm
10416 /* There is no idlest group to push tasks to */
10461 * idlest group don't try and push any tasks.
10501 * and improve locality if the number of running tasks
10653 * Indicate that the child domain of the busiest group prefers tasks
10692 /* Set imbalance to allow misfit tasks to be balanced. */
10717 /* Reduce number of tasks sharing CPU capacity */
10771 * When prefer sibling, evenly spread running tasks on
10796 /* Number of tasks to move to restore balance */
10817 * busiest group don't try to pull any tasks.
10829 * load, don't try to pull any tasks.
10898 /* There is no busy sibling group to pull tasks from */
10904 /* Misfit tasks should be dealt with regardless of the avg load */
10927 * don't try and pull any tasks.
10934 * between tasks.
10939 * busiest group don't try to pull any tasks.
10949 * Don't pull any tasks if this group is already above the
10965 * Try to move all excess tasks to a sibling domain of the busiest
11004 * busiest doesn't have any tasks waiting to run
11041 * - regular: there are !numa tasks
11042 * - remote: there are numa tasks that run on the 'wrong' node
11045 * In order to avoid migrating ideally placed numa tasks,
11050 * queue by moving tasks around inside the node.
11054 * allow migration of more tasks.
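
The hits at 11041-11054 describe how candidate runqueues are typed when picking a
busiest queue, so that ideally placed NUMA tasks are not migrated: queues with non-NUMA
tasks, queues with NUMA tasks on the 'wrong' node, and everything else. A sketch of
that ordering; the filter below is an illustrative assumption, not the kernel's exact
selection code:

    /* Queue types, most preferable to pull from first. */
    enum fbq_type {
            regular,        /* has !numa tasks */
            remote,         /* has numa tasks running on the 'wrong' node */
            all,            /* nothing to distinguish, consider every queue */
    };

    /*
     * A queue qualifies when its type does not exceed what the balance
     * pass allows: when 'regular' is requested, only queues that hold
     * non-NUMA tasks are considered, so well-placed NUMA tasks stay put.
     */
    static int fbq_eligible(enum fbq_type wanted, enum fbq_type rq_type)
    {
            return rq_type <= wanted;
    }
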
11079 * Make sure we only pull tasks from a CPU of lower priority
11146 * For ASYM_CPUCAPACITY domains with misfit tasks we
11163 * Max backoff if we encounter pinned tasks. Pretty arbitrary value, but
11172 * ASYM_PACKING needs to force migrate tasks from busy but lower
11173 * priority CPUs in order to pack all tasks in the highest priority
11192 * The imbalanced case includes the case of pinned tasks preventing a fair
11251 * However, we bail out if we already have tasks or a wakeup pending,
11302 * tasks if there is an imbalance.
11323 .tasks = LIST_HEAD_INIT(env.tasks),
11360 * Attempt to move tasks. If sched_balance_find_src_group has found
11378 * We've detached some tasks from busiest_rq. Every
11381 * that nobody can manipulate the tasks in parallel.
11396 /* Stop if we tried all running tasks */
11402 * Revisit (affine) tasks on src_cpu that couldn't be moved to
11448 /* All tasks on this runqueue were pinned by CPU affinity */
11535 * constraints. Clear the imbalance flag only if other tasks got
11547 * We reach balance because all tasks are pinned at this level so
11620 * running tasks off the busiest CPU onto idle CPUs. It requires at
11637 * CPUs can become inactive. We should not move tasks from or to
11802 * state even if we migrated tasks. Update it.
11968 * currently idle; in which case, kick the ILB to move tasks
11996 * ensure tasks have enough CPU capacity.
12012 * like this LLC domain has tasks we could move.
12158 * task movement depending on flags.
12349 * idle. Attempts to pull tasks from other CPUs.
12352 * < 0 - we released the lock and there are !fair tasks present
12353 * 0 - failed, no new tasks
12354 * > 0 - success, new (fair) tasks present
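
The hits at 12349-12354 document the three-way return convention of the newly-idle
balance pass. A small sketch of how a pick-next path could act on that convention; the
function names are hypothetical stand-ins used only for illustration:

    /* Stand-in for the newly-idle balance pass; always reports "nothing pulled". */
    static int newidle_balance_stub(void)
    {
            return 0;
    }

    static const char *act_on_newidle(int ret)
    {
            if (ret < 0)
                    return "lock was dropped and !fair tasks appeared: restart the pick";
            if (ret == 0)
                    return "no new tasks: go idle";
            return "new fair tasks pulled: pick one of them";
    }
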
12382 * Do not pull tasks towards !active CPUs...
12438 * Stop searching for tasks to pull if there are
12439 * now runnable tasks on this rq.
12568 * tasks on this CPU and the forced idle CPU. Ideally, we should
12571 * MIN_NR_TASKS_DURING_FORCEIDLE - 1 tasks and use that to check
12622 * Find an se in the hierarchy for tasks a and b, such that the se's
13176 * Time slice is 0 for SCHED_OTHER tasks that are on an otherwise