Results limited to /netgear-R7000-V1.0.7.12_1.2.5/components/opensource/linux/linux-2.6.36/kernel/

Lines Matching defs:works

21  * one extra for works which are better served by workers which are
61 GCWQ_HIGHPRI_PENDING = 1 << 4, /* highpri works on queue */
67 WORKER_PREP = 1 << 3, /* preparing to run works */
138 struct list_head scheduled; /* L: scheduled works */
150 * and all works are queued and processed here regardless of their
155 struct list_head worklist; /* L: list of pending works */
189 /* L: nr of in_flight works */
190 int nr_active; /* L: nr of active works */
191 int max_active; /* L: max active works */
192 struct list_head delayed_works; /* L: delayed works */
883 * other HIGHPRI works; otherwise, at the end of the queue. This
885 * there are HIGHPRI works pending.
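The fragments at lines 883–885 describe a queue-position rule: a high-priority work is inserted right after any other HIGHPRI works already at the head of the list, while a normal work goes to the tail. A minimal user-space sketch of that rule, using a plain flag array instead of the kernel's `struct list_head` (the names `insert_pos` and `highpri` are illustrative, not from the source):

```c
#include <stdbool.h>
#include <stddef.h>

/* Return the index at which a new work should be inserted into a queue
 * of n entries, where highpri[i] means entry i is high priority.  High
 * priority entries always form a prefix of the queue, mirroring the
 * rule quoted above. */
static size_t insert_pos(const bool *highpri, size_t n, bool new_is_highpri)
{
    if (!new_is_highpri)
        return n;               /* normal work: append at the tail */

    size_t i = 0;
    while (i < n && highpri[i])
        i++;                    /* skip the existing HIGHPRI prefix */
    return i;                   /* slot right after the last HIGHPRI work */
}
```

In the kernel this decision is paired with setting `GCWQ_HIGHPRI_PENDING` (line 61) so a worker fetching the next item knows a high-priority work is waiting.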
945 * lying around lazily while there are works to be processed.
1122 * reentrance detection for delayed works.
1226 * idle state or fetches works without dropping lock, it can guarantee
1496 * sent to all rescuers with works scheduled on @gcwq to resolve
1598 * The caller can safely start processing works on false return. On
1641 * move_linked_works - move linked works to a list
1642 * @work: start of series of works to be scheduled
1646 * Schedule linked works starting from @work to @head. Work series to
1674 * multiple works to the scheduled queue, the next position
1707 /* ignore uncolored works */
1726 /* are there still in-flight works? */
1815 * CPU intensive works don't participate in concurrency
1870 * process_scheduled_works - process scheduled works
1873 * Process all scheduled works. Please note that the scheduled list
1895 * these per each cpu. These workers process all works regardless of
1896 * their specific target workqueue. The only exception is works which
1985 * developing into deadlock if some works currently on the same queue
1990 * workqueues which have works queued on the gcwq and let them process
1991 * those works so that forward progress can be guaranteed.
2028 * Slurp in all works issued via this workqueue and
2106 /* there can already be other linked works, inherit and set */
2195 * We sleep until all works which were queued on entry have been handled,
2621 struct work_struct __percpu *works;
2623 works = alloc_percpu(struct work_struct);
2624 if (!works)
2630 struct work_struct *work = per_cpu_ptr(works, cpu);
2637 flush_work(per_cpu_ptr(works, cpu));
2640 free_percpu(works);
3020 * gcwq which make migrating pending and scheduled works very
3022 * gcwqs serve mix of short, long and very long running works making
3046 * knows that there will be no new works on the worklist
3184 * may be frozen works in freezeable cwqs. Don't declare
3219 * Either all works have been scheduled and cpu is down, or
3235 * currently scheduled works by scheduling the rebind_work.
3347 * the ones which are still executing works from
3443 * freezeable workqueues will queue new works to their frozen_works
3530 * frozen works are transferred to their respective gcwq worklists.