Lines Matching defs:workers

66 * While associated (!DISASSOCIATED), all workers are bound to the
70 * While DISASSOCIATED, the cpu may be offline and all workers have
83 POOL_DISASSOCIATED = 1 << 2, /* cpu can't serve workers */
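
The three matches above describe the pool (dis)association life cycle. As a minimal sketch of how the flag is consumed (the field and flag names are the kernel's; the helper itself is hypothetical):

	/* Hypothetical helper: a per-cpu pool serves its CPU only while
	 * associated; while DISASSOCIATED the CPU may be offline and
	 * workers are no longer bound to it. */
	static bool pool_is_associated(struct worker_pool *pool)
	{
		return !(pool->flags & POOL_DISASSOCIATED);
	}
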
120 * Rescue workers are used only in emergencies and shared by
202 int nr_workers; /* L: total number of workers */
203 int nr_idle; /* L: currently idle workers */
205 struct list_head idle_list; /* L: list of idle workers */
209 struct timer_list mayday_timer; /* L: SOS timer for workers */
211 /* a worker is either on busy_hash or idle_list, or the manager */
213 /* L: hash of busy workers */
216 struct list_head workers; /* A: attached workers */
217 struct list_head dying_workers; /* A: workers about to die */
218 struct completion *detach_completion; /* all workers detached */
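
Read together, the matches at 202-218 outline the bookkeeping inside struct worker_pool: "L:" fields are protected by pool->lock and "A:" fields by wq_pool_attach_mutex. A condensed excerpt reconstructed from just these matches (member order, omitted fields, and the busy_hash declaration details are assumptions, not the verbatim definition):

	struct worker_pool {
		int			nr_workers;	/* L: total number of workers */
		int			nr_idle;	/* L: currently idle workers */
		struct list_head	idle_list;	/* L: list of idle workers */
		struct timer_list	mayday_timer;	/* L: SOS timer for workers */

		/* a worker is either on busy_hash or idle_list, or the manager */
		DECLARE_HASHTABLE(busy_hash, BUSY_WORKER_HASH_ORDER);
							/* L: hash of busy workers */

		struct list_head	workers;	/* A: attached workers */
		struct list_head	dying_workers;	/* A: workers about to die */
		struct completion	*detach_completion; /* all workers detached */
	};
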
243 PWQ_STAT_REPATRIATED, /* unbound workers brought back into scope */
571 * for_each_pool_worker - iterate through all workers of a worker_pool
573 * @pool: worker_pool to iterate workers of
581 list_for_each_entry((worker), &(pool)->workers, node) \
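
The macro body at 581 is a thin wrapper around list_for_each_entry() over the "A:"-protected workers list, so callers are expected to hold wq_pool_attach_mutex. A minimal usage sketch (the affinity call mirrors the rebind paths further down; it is an illustration, not a quoted call site):

	struct worker *worker;

	/* Visit every worker attached to @pool, e.g. to re-pin affinity.
	 * Assumes wq_pool_attach_mutex is held by the caller. */
	for_each_pool_worker(worker, pool)
		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
						  pool->attrs->cpumask) < 0);
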
935 * running workers.
937 * Note that, because unbound workers never contribute to nr_running, this
946 /* Can I start working? Called from busy but !running workers. */
952 /* Do I need to keep working? Called from currently running workers. */
964 /* Do we have too many workers and should some go away? */
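
These three predicates drive pool sizing. A hedged sketch of the last one, built from the nr_workers/nr_idle counters matched above (the ratio constant and exact thresholds are assumptions about the shape of the check, not the verbatim body):

	#define MAX_IDLE_WORKERS_RATIO	4	/* assumed busy : idle ratio */

	/* Keep a couple of spare idle workers, but report "too many" once
	 * the idle surplus outgrows a fixed fraction of the busy count. */
	static bool too_many_workers(struct worker_pool *pool)
	{
		int nr_idle = pool->nr_idle;
		int nr_busy = pool->nr_workers - nr_idle;

		return nr_idle > 2 && (nr_idle - 2) * MAX_IDLE_WORKERS_RATIO >= nr_busy;
	}
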
1195 * A single work shouldn't be executed concurrently by multiple workers.
1198 * @work is not executed concurrently by multiple workers from the same
1270 * now. If this becomes pronounced, we can skip over workers which are
1435 * workers, also reach here, let's not access anything before
1528 * to sleep. It's used by psi to identify aggregation workers during
2730 * details. BH workers are, while per-CPU, always DISASSOCIATED.
2742 list_add_tail(&worker->node, &pool->workers);
2770 if (list_empty(&pool->workers) && list_empty(&pool->dying_workers))
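
Lines 2742 and 2770 are the two halves of the attach/detach protocol: attach appends the worker to pool->workers under wq_pool_attach_mutex, and the last worker to leave both lists signals detach_completion so a waiter such as put_unbound_pool() can proceed. A sketch of the detach side (the lock and fields are the kernel's; the function body is illustrative):

	static void detach_worker_sketch(struct worker *worker,
					 struct worker_pool *pool)
	{
		struct completion *detach_completion = NULL;

		mutex_lock(&wq_pool_attach_mutex);
		list_del(&worker->node);
		if (list_empty(&pool->workers) && list_empty(&pool->dying_workers))
			detach_completion = pool->detach_completion;
		mutex_unlock(&wq_pool_attach_mutex);

		if (detach_completion)
			complete(detach_completion);
	}
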
2932 * idle_worker_timeout - check if some idle workers can now be deleted.
2970 * idle_cull_fn - cull workers that have been idle for too long.
2971 * @work: the pool's work for handling these idle workers
2973 * This goes through a pool's idle workers and gets rid of those that have been
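
The timer at 2932 only decides whether culling is due; the actual teardown happens in idle_cull_fn. A sketch of the timeout decision, assuming idle_list is kept in LIFO order so its tail entry has been idle longest (IDLE_WORKER_TIMEOUT, last_active, idle_timer, and idle_cull_work are real kernel names; the body is illustrative):

	if (too_many_workers(pool)) {
		/* idle_list is LIFO, so the last entry has idled longest */
		struct worker *worker = list_entry(pool->idle_list.prev,
						   struct worker, entry);
		unsigned long expires = worker->last_active + IDLE_WORKER_TIMEOUT;

		if (time_before(jiffies, expires))
			mod_timer(&pool->idle_timer, expires);	/* not yet */
		else
			queue_work(system_unbound_wq, &pool->idle_cull_work);
	}
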
3159 * interaction with other workers on the same cpu, queueing and
3223 * workers such as the UNBOUND and CPU_INTENSIVE ones.
3366 * The worker thread function. All workers belong to a worker_pool -
3367 * either a per-cpu one or dynamic unbound one. These workers process all
3488 * pwq(s) queued. This can happen by non-rescuer workers consuming
3557 * Leave this pool. Notify regular workers; otherwise, we end up
3635 bh_worker(list_first_entry(&pool->workers, struct worker, node));
3662 bh_worker(list_first_entry(&pool->workers, struct worker, node));
4635 INIT_LIST_HEAD(&pool->workers);
4798 * Become the manager and destroy all workers. This prevents
4799 * @pool's workers from blocking on attach_mutex. We're the last
4832 if (!list_empty(&pool->workers) || !list_empty(&pool->dying_workers))
5284 * with a cpumask spanning multiple pods, the workers which were already
6203 pr_cont(" hung=%lus workers=%d", hung, pool->nr_workers);
6338 * We've blocked all attach/detach operations. Make all workers
6339 * unbound and set DISASSOCIATED. Before this, all workers
6356 * are served by workers tied to the pool.
6377 * rebind_workers - rebind all workers of a pool to the associated CPU
6380 * @pool->cpu is coming online. Rebind all workers to the CPU.
6389 * Restore CPU affinity of all workers. As all idle workers should
6392 * of all workers first and then clear UNBOUND. As we're called
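
The ordering that 6389-6392 insists on (restore every worker's affinity first, then clear UNBOUND under the pool lock) can be sketched as below; the real rebind_workers() also handles the REBOUND handshake, which this omits:

	struct worker *worker;

	/* Phase 1: re-pin all workers to the pool's CPU(s). */
	for_each_pool_worker(worker, pool)
		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
						  pool->attrs->cpumask) < 0);

	/* Phase 2: only after affinity is restored, let concurrency
	 * management resume by clearing WORKER_UNBOUND. */
	raw_spin_lock_irq(&pool->lock);
	for_each_pool_worker(worker, pool)
		worker->flags &= ~WORKER_UNBOUND;
	raw_spin_unlock_irq(&pool->lock);
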
6433 * restore_unbound_workers_cpumask - restore cpumask of unbound workers
6440 * online CPU before, cpus_allowed of all its workers should be restored.
6519 /* unbinding per-cpu workers should happen on the local CPU */
6856 * nice RW int : nice value of the workers
6857 * cpumask RW mask : bitmask of allowed CPUs for the workers
7338 * Show workers that might prevent the processing of pending work items.
7339 * The only candidates are CPU-bound workers in the running state.
7375 pr_info("Showing backtraces of running workers in stalled CPU-bound worker pools:\n");
7719 * workers and enable future kworker creations.
7751 * Create the initial workers. A BH pool has one pseudo worker that
7753 * affected by hotplug events. Create the BH pseudo workers for all