Lines Matching refs:tasks
(only in /netgear-R7000-V1.0.7.12_1.2.5/components/opensource/linux/linux-2.6.36/kernel/)

87 /* These global variables are needed to hold prev, next tasks to log context
126 * default timeslice is 100 msecs (used only for SCHED_RR tasks).
335 struct list_head tasks;
350 * leaf cfs_rqs are those that hold tasks (lowest schedulable entity in
362 * the part of load.weight contributed by tasks
479 /* capture load from *all* tasks on this cpu: */
618 * Return the group to which this tasks belongs.
801 * Number of tasks to iterate in a single balance run.
837 * part of the period that we allow rt tasks to run in us.
1375 * of tasks with abnormal "nice" values across CPUs the contribution that
1377 * scheduling class and "nice" value. For SCHED_NORMAL tasks this is just a
1426 /* Time spent by the tasks of the cpu accounting group executing in ... */
1629 * If there are currently no tasks on the cpu pretend there
1878 * SCHED_IDLE tasks get minimal weight:
1966 * be boosted by RT tasks, or might be boosted by
1974 * If we are RT tasks or we were boosted to RT priority,
2252 * Don't tell them about moving exiting tasks or
2487 * changing the task state if and only if any tasks are woken up.
2704 * prepare_task_switch - prepare to switch tasks
3429 * Note that the thread group might have other running tasks as well,
3431 * running tasks might have.
3854 * Optimization: we know that if all tasks are in
4118 * number) then we wake all the non-exclusive tasks and one exclusive task.
4146 * changing the task state if and only if any tasks are woken up.
4188 * changing the task state if and only if any tasks are woken up.
4227 * changing the task state if and only if any tasks are woken up.
4247 * changing the task state if and only if any tasks are woken up.
4646 * RT tasks are offset by -200. Normal tasks are centered
4764 * Allow unprivileged RT tasks to decrease priority:
4781 * Like positive nice levels, don't allow tasks to
4816 * Do not allow realtime tasks into groups that have no runtime
5170 * This function yields the current CPU to other tasks. If there are no
5458 * Only show locks if all tasks are dumped:
5516 * The idle tasks have their own, simple scheduling class:
5754 * While a dead CPU has no uninterruptible tasks queued at this point,
5756 * for performance reasons the counter is not strictly tracking tasks to
5773 /* Run through task list and migrate tasks from the dead cpu. */
5859 /* release_task() removes task from tasklist, so we won't find dead tasks. */
5878 * remove the tasks which were accounted by rq from calc_load_tasks.
6628 * should be one that prevents unnecessary balancing, but also spreads tasks
7756 INIT_LIST_HEAD(&cfs_rq->tasks);
7939 * system cpu resource is divided among the tasks of
7944 * In other words, if init_task_group has 10 tasks of weight
7950 * We achieve this by letting init_task_group's tasks sit
8108 * Only normalize user tasks:
8123 * tasks back to 0:
8624 * Ensure we don't starve existing RT tasks.
8773 /* Don't accept realtime tasks when there is no way for them to run */
8790 * There's always some RT tasks in the root group
8882 /* We don't support RT-tasks being in separate groups */
9015 /* track cpu usage of a group of tasks and its child groups */
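Many of the matches above center on the per-runqueue task list: line 335 declares `struct list_head tasks;` inside the CFS runqueue, and line 7756 initializes it with `INIT_LIST_HEAD(&cfs_rq->tasks)`. The sketch below is a minimal userspace re-creation of that intrusive-list pattern, not the kernel code itself: the struct names `toy_cfs_rq` and `toy_task` are invented for illustration, the macros mirror `<linux/list.h>` rather than including it, and the iteration macro takes the element type explicitly instead of using `typeof()` as the kernel does.

```c
#include <stdio.h>
#include <stddef.h>

/* Minimal re-creation of the kernel's intrusive list (<linux/list.h>). */
struct list_head {
	struct list_head *next, *prev;
};

/* Point the head at itself, i.e. an empty list (cf. line 7756). */
#define INIT_LIST_HEAD(head) \
	do { (head)->next = (head); (head)->prev = (head); } while (0)

/* Link a new node in just before the head, i.e. at the tail. */
static void list_add_tail(struct list_head *new, struct list_head *head)
{
	new->prev = head->prev;
	new->next = head;
	head->prev->next = new;
	head->prev = new;
}

/* Recover the enclosing struct from a pointer to its embedded list_head. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Walk every element on the list; type is passed explicitly here,
 * unlike the kernel macro, to stay in plain standard C. */
#define list_for_each_entry(pos, head, type, member)               \
	for (pos = container_of((head)->next, type, member);       \
	     &pos->member != (head);                                \
	     pos = container_of(pos->member.next, type, member))

/* Toy stand-ins (hypothetical names) for the runqueue and its tasks. */
struct toy_task {
	int pid;
	struct list_head run_list;   /* embedded node, one per task */
};

struct toy_cfs_rq {
	struct list_head tasks;      /* like cfs_rq->tasks at line 335 */
};

int main(void)
{
	struct toy_cfs_rq rq;
	struct toy_task a = { .pid = 1 }, b = { .pid = 2 };
	struct toy_task *t;

	INIT_LIST_HEAD(&rq.tasks);               /* empty runqueue list */
	list_add_tail(&a.run_list, &rq.tasks);   /* enqueue task 1 */
	list_add_tail(&b.run_list, &rq.tasks);   /* enqueue task 2 */

	/* Iterate over every task currently linked on the list. */
	list_for_each_entry(t, &rq.tasks, struct toy_task, run_list)
		printf("task pid=%d\n", t->pid);

	return 0;
}
```

The design point the listing reflects is that the list node is embedded in the task-side structure rather than allocated separately, so adding or removing a task from the runqueue's list needs no memory allocation and `container_of` recovers the owning structure from the node.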