Lines matching references to "nodes":

1591  * between these nodes are slowed down, to allow things to settle down.
1600 /* Handle placement on systems where not all nodes are directly connected. */
1608 * All nodes are directly connected, and the same distance
1618 * which should be OK given the number of nodes rarely exceeds 8.
1625 * The furthest away nodes in the system are not interesting
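
The fragments at 1600-1625 come from placement code that must cope with NUMA topologies where not all nodes are one hop apart. As a rough, standalone illustration (not the kernel's actual code) of how the three topology classes can be told apart from the distance table alone: the matrix, the classify() helper, and the TOPO_* names below are invented for the example, while the "intermediary node" test mirrors the kernel's approach.

    #include <stdio.h>

    #define NR_NODES        4
    #define LOCAL_DISTANCE 10

    /*
     * Toy distance table standing in for the kernel's node_distance().
     * This one describes a backplane-style box: nodes 0-1 and 2-3 are
     * close, the two pairs are far apart.
     */
    static const int distance[NR_NODES][NR_NODES] = {
        { 10, 20, 40, 40 },
        { 20, 10, 40, 40 },
        { 40, 40, 10, 20 },
        { 40, 40, 20, 10 },
    };

    enum topo { TOPO_DIRECT, TOPO_GLUELESS_MESH, TOPO_BACKPLANE };

    static enum topo classify(void)
    {
        int a, b, c, max_dist = 0, min_remote = 0;

        for (a = 0; a < NR_NODES; a++)
            for (b = 0; b < NR_NODES; b++) {
                int d = distance[a][b];
                if (d > max_dist)
                    max_dist = d;
                if (a != b && (!min_remote || d < min_remote))
                    min_remote = d;
            }

        /* Only one remote distance level: all nodes directly connected. */
        if (max_dist == min_remote)
            return TOPO_DIRECT;

        /*
         * For a pair at maximum distance, look for an intermediary node
         * closer to both. If one exists, traffic can bounce through it
         * (glueless mesh); if not, the far pair sits on opposite sides
         * of a backplane interconnect.
         */
        for (a = 0; a < NR_NODES; a++)
            for (b = 0; b < NR_NODES; b++) {
                if (distance[a][b] < max_dist)
                    continue;
                for (c = 0; c < NR_NODES; c++)
                    if (distance[a][c] < max_dist &&
                        distance[b][c] < max_dist)
                        return TOPO_GLUELESS_MESH;
                return TOPO_BACKPLANE;
            }
        return TOPO_DIRECT;
    }

    int main(void)
    {
        printf("topology: %d\n", classify());  /* 2 == TOPO_BACKPLANE */
        return 0;
    }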
1633 * of nodes, and move tasks towards the group with the most
1634 * memory accesses. When comparing two nodes at distance
1635 * "hoplimit", only nodes closer by than "hoplimit" are part
1636 * of each group. Skip other nodes.
1641 /* Add up the faults from nearby nodes. */
1649 * no fixed "groups of nodes". Instead, nodes that are not
1651 * nodes; a numa_group can occupy any set of nodes.
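
Fragments 1608-1651 describe how a node's score folds in faults from nearby nodes, with the backplane and glueless-mesh cases handled differently. Below is a hedged, self-contained model of that scoring; the fault table, distance matrix, and score_nearby() helper are stand-ins rather than kernel API, and the distance-scaling formula follows the behaviour the comments describe.

    #include <stdio.h>

    #define NR_NODES        4
    #define LOCAL_DISTANCE 10
    #define MAX_DISTANCE   40   /* stand-in for the system's max NUMA distance */

    enum topo { TOPO_DIRECT, TOPO_GLUELESS_MESH, TOPO_BACKPLANE };

    /* Toy inputs: per-node hinting-fault counts and a distance table. */
    static const unsigned long faults[NR_NODES] = { 100, 60, 10, 5 };
    static const int distance[NR_NODES][NR_NODES] = {
        { 10, 20, 40, 40 },
        { 20, 10, 40, 40 },
        { 40, 40, 10, 20 },
        { 40, 40, 20, 10 },
    };

    /*
     * Score node @nid by adding up faults from nearby nodes: on a
     * backplane topology only nodes closer than @hoplimit join the
     * group; on a glueless mesh every node counts, scaled by distance.
     */
    static unsigned long score_nearby(enum topo type, int nid, int hoplimit)
    {
        unsigned long score = 0;
        int node;

        /* All nodes equidistant: nearby-node scores add no information. */
        if (type == TOPO_DIRECT)
            return 0;

        for (node = 0; node < NR_NODES; node++) {
            int dist = distance[nid][node];
            unsigned long f;

            /* The furthest away nodes are not interesting. */
            if (node == nid || dist == MAX_DISTANCE)
                continue;

            /* Backplane: skip nodes outside the "hoplimit" group. */
            if (type == TOPO_BACKPLANE && dist > hoplimit)
                continue;

            f = faults[node];

            /* Glueless mesh: nearer nodes weigh more. */
            if (type == TOPO_GLUELESS_MESH) {
                f *= (MAX_DISTANCE - dist);
                f /= (MAX_DISTANCE - LOCAL_DISTANCE);
            }

            score += f;
        }
        return score;
    }

    int main(void)
    {
        printf("%lu\n", score_nearby(TOPO_BACKPLANE, 0, 20));  /* node 1 only: 60 */
        return 0;
    }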
1670 * evenly spread out among NUMA nodes.
1833 * Cannot migrate to memoryless nodes.
2469 * Look at other nodes in these cases:
2472 * multiple NUMA nodes; in order to better consolidate the group,
2488 /* Only consider nodes where both the task and the group benefit */
2502 * If the task is part of a workload that spans multiple NUMA nodes,
2503 * and is migrating into one of the workload's active nodes, remember
2565 * Find out how many nodes the workload is actively running on. Do this by
2566 * tracking the nodes from which NUMA hinting faults are triggered. This can
2567 * be different from the set of nodes where the workload's memory is currently
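
Fragments 2565-2567 describe deriving the workload's active node set from where hinting faults are triggered, rather than from where its memory sits. A small sketch of that counting follows; the 1-in-ACTIVE_NODE_FRACTION threshold matches the ratio the kernel uses for this purpose, while the array layout and count_active_nodes() are invented for the example.

    #include <stdio.h>

    #define NR_NODES             4
    #define ACTIVE_NODE_FRACTION 3   /* same ratio the kernel uses */

    /* Toy per-node counts of NUMA hinting faults *triggered from* each node. */
    static const unsigned long faults_cpu[NR_NODES] = { 90, 40, 25, 2 };

    /*
     * A node counts as "active" when the workload triggers more than
     * 1/ACTIVE_NODE_FRACTION of the busiest node's hinting faults there.
     * This tracks where the faults come from (where the tasks run), not
     * where the faulted-on memory currently lives.
     */
    static int count_active_nodes(void)
    {
        unsigned long max_faults = 0;
        int nid, active = 0;

        for (nid = 0; nid < NR_NODES; nid++)
            if (faults_cpu[nid] > max_faults)
                max_faults = faults_cpu[nid];

        for (nid = 0; nid < NR_NODES; nid++)
            if (faults_cpu[nid] * ACTIVE_NODE_FRACTION > max_faults)
                active++;

        return active;
    }

    int main(void)
    {
        printf("active nodes: %d\n", count_active_nodes());  /* prints 2 */
        return 0;
    }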
2717 nodemask_t nodes;
2720 /* Direct connections between all NUMA nodes. */
2726 * scores nodes according to the number of NUMA hinting faults on
2727 * both the node itself, and on nearby nodes.
2751 * inside the highest scoring group of nodes. The nodemask tricks
2754 nodes = node_states[N_CPU];
2760 /* Are there nodes at this distance from each other? */
2764 for_each_node_mask(a, nodes) {
2770 for_each_node_mask(b, nodes) {
2774 node_clear(b, nodes);
2790 /* Next round, evaluate the nodes within max_group. */
2793 nodes = max_group;
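
Fragments 2717-2793 come from the backplane-topology search for a preferred node: start from all CPU-bearing nodes, group them by distance, keep the highest scoring group, and repeat at a smaller distance. Here is a standalone model of that narrowing loop, using a uint64_t bitmask in place of nodemask_t; the fault numbers, distance table, and preferred_node() helper are illustrative only.

    #include <stdio.h>
    #include <stdint.h>

    #define NR_NODES        4
    #define LOCAL_DISTANCE 10

    static const unsigned long group_faults[NR_NODES] = { 10, 15, 50, 40 };
    static const int distance[NR_NODES][NR_NODES] = {
        { 10, 20, 40, 40 },
        { 20, 10, 40, 40 },
        { 40, 40, 10, 20 },
        { 40, 40, 20, 10 },
    };

    static int preferred_node(void)
    {
        uint64_t nodes = (1ull << NR_NODES) - 1;   /* all nodes */
        int dist, nid = 0;

        for (dist = 40 /* max distance */; dist > LOCAL_DISTANCE; dist--) {
            unsigned long max_faults = 0;
            uint64_t max_group = 0, rem = nodes;
            int a, b;

            for (a = 0; a < NR_NODES; a++) {
                unsigned long f = 0;
                uint64_t this_group = 0;

                if (!(rem & (1ull << a)))
                    continue;

                /* Pull every node closer than "dist" into a's group. */
                for (b = 0; b < NR_NODES; b++) {
                    if ((rem & (1ull << b)) && distance[a][b] < dist) {
                        f += group_faults[b];
                        this_group |= 1ull << b;
                        rem &= ~(1ull << b);
                    }
                }

                /* Remember the top-scoring group and a node inside it. */
                if (f > max_faults) {
                    max_faults = f;
                    max_group = this_group;
                    nid = a;
                }
            }

            if (!max_faults)
                break;
            /* Next round, evaluate only the nodes within max_group. */
            nodes = max_group;
        }
        return nid;
    }

    int main(void)
    {
        printf("preferred node: %d\n", preferred_node());  /* 2: the {2,3} side wins */
        return 0;
    }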
3121 * If a workload spans multiple NUMA nodes, a shared fault that
3122 * occurs wholly within the set of nodes that the workload is
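
Finally, fragments 3121-3122 note that a shared fault landing entirely inside the workload's active node set is counted as local, which spares the page and task migration machinery. A tiny sketch of that predicate; the active[] table and fault_is_local() are invented for illustration.

    #include <stdbool.h>
    #include <stdio.h>

    #define NR_NODES 4

    /* Toy workload state: which nodes the group actively runs on. */
    static const bool active[NR_NODES] = { true, true, false, false };
    static const int active_nodes = 2;

    /*
     * Treat a shared fault as local when the workload spans several
     * nodes and both the faulting CPU's node and the memory's node sit
     * inside the workload's active set: migrating such pages back and
     * forth would only churn the migration code.
     */
    static bool fault_is_local(bool priv, int cpu_node, int mem_node)
    {
        if (cpu_node == mem_node)
            return true;
        return !priv && active_nodes > 1 &&
               active[cpu_node] && active[mem_node];
    }

    int main(void)
    {
        printf("%d\n", fault_is_local(false, 0, 1));  /* 1: counted as local */
        return 0;
    }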