Each CPU has a "base" scheduling domain (struct sched_domain). These are
accessed via the cpu_sched_domain(i) and this_sched_domain() macros. The
domain hierarchy is built from these base domains via the ->parent pointer,
which MUST be NULL terminated; domain structures should be per-CPU as they
are locklessly updated.

Each scheduling domain spans a number of CPUs (stored in the ->span field).
A domain's span MUST be a superset of its child's span (this restriction
could be relaxed if the need arises), and a base domain for CPU i MUST span
at least CPU i. The top domain for each CPU will generally span all CPUs in
the system, although strictly it doesn't have to; if it does not, some CPUs
may never be given tasks to run unless the CPUs allowed mask is explicitly
set. A sched domain's span means "balance process load among these CPUs".

Each scheduling domain must have one or more CPU groups (struct sched_group)
which are organised as a circular one way linked list from the ->groups
pointer. The union of cpumasks of these groups MUST be the same as the
domain's span. The intersection of cpumasks from any two of these groups
MUST be the empty set. The group pointed to by the ->groups pointer MUST
contain the CPU to which the domain belongs. Groups may be shared among
CPUs as they contain read-only data after they have been set up.

Balancing within a sched domain occurs between groups. That is, each group
is treated as one entity. The load of a group is defined as the sum of the
load of each of its member CPUs, and only when the load of a group becomes
out of balance are tasks moved between groups.

In kernel/sched.c, rebalance_tick is run periodically on each CPU. This
function takes its CPU's base sched domain and checks to see whether it has
reached its rebalance interval. If so, then it will run load_balance on that
domain. rebalance_tick then checks the parent sched_domain (if it exists),
then the parent of the parent, and so forth.

*** Implementing sched domains ***
The "base" domain will "span" the first level of the hierarchy. In the case
of SMT, you'll span all siblings of the physical CPU, with each group being
a single virtual CPU.

In SMP, the parent of the base domain will span all physical CPUs in the
node, with each group being a single physical CPU. Then with NUMA, the
parent of the SMP domain will span the entire machine, with each group
having the cpumask of a node. Alternatively, you could do multi-level NUMA;
Opteron, for example, might have just one domain covering its one NUMA
level.

The implementor should read comments in include/linux/sched.h:
struct sched_domain fields, SD_FLAG_*, SD_*_INIT to get an idea of
the specifics and what to tune.

For SMT, the architecture must define CONFIG_SCHED_SMT and provide a
cpumask_t cpu_sibling_map[NR_CPUS], where cpu_sibling_map[i] is the mask of
all of "i"'s siblings as well as "i" itself.

Architectures may override the default SD_*_INIT flags while using the
generic domain builder in kernel/sched.c if they wish to retain the
traditional SMT->SMP->NUMA topology (or some subset of that). This can be
done by #define'ing ARCH_HAS_SCHED_TUNE.

Alternatively, the architecture may completely override the generic domain
builder by #define'ing ARCH_HAS_SCHED_DOMAIN, and exporting its
arch_init_sched_domains function. This function will attach domains to all
CPUs using cpu_attach_domain.
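
To make the override path concrete, here is a minimal sketch of an
arch_init_sched_domains() for a two-level SMT+SMP topology. It assumes the
2.6-era interfaces described in this document (the ->span/->parent fields,
cpu_sibling_map, cpu_online_map, the SD_SIBLING_INIT/SD_CPU_INIT
initialisers from include/linux/sched.h, and cpu_attach_domain()); it is
illustrative only, elides the ->groups construction, and is not code taken
from any real architecture:

	/*
	 * Sketch only: two-level (SMT + SMP) domain setup under the
	 * assumptions above.  Group setup is elided; see the generic
	 * builder in kernel/sched.c for the full construction.
	 */
	static struct sched_domain cpu_domains[NR_CPUS];  /* base (SMT) */
	static struct sched_domain phys_domains[NR_CPUS]; /* parent (SMP) */

	void __init arch_init_sched_domains(void)
	{
		int i;

		for (i = 0; i < NR_CPUS; i++) {
			struct sched_domain *cpu_sd = &cpu_domains[i];
			struct sched_domain *phys_sd = &phys_domains[i];

			/* Base domain: CPU i plus its SMT siblings. */
			*cpu_sd = SD_SIBLING_INIT;
			cpu_sd->span = cpu_sibling_map[i];
			cpu_sd->parent = phys_sd; /* hierarchy via ->parent */

			/* Parent domain: every online physical CPU. */
			*phys_sd = SD_CPU_INIT;
			phys_sd->span = cpu_online_map;
			phys_sd->parent = NULL;	/* NULL terminated */

			/* ->groups lists (one group per virtual/physical
			 * CPU, first group containing CPU i) would be
			 * built here. */
		}

		for (i = 0; i < NR_CPUS; i++)
			cpu_attach_domain(&cpu_domains[i], i);
	}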
Implementors should change the line
#undef SCHED_DOMAIN_DEBUG
to
#define SCHED_DOMAIN_DEBUG
in kernel/sched.c as this enables an error-checking parse of the sched
domains which should catch most possible errors (described above). It also
prints out the domain structure in a visual format.
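
For reference, the group invariants stated above can be expressed directly
in code. The following is a minimal sketch of that kind of check, not the
actual debug code in kernel/sched.c; sanity_check_domain() is a hypothetical
name, and the cpumask helpers used (cpus_and, cpus_or, cpus_equal,
cpus_empty, cpu_isset) are the 2.6-era ones:

	/*
	 * Sketch only: verify a domain's groups against the rules in
	 * this document.  Returns 0 if the domain looks sane.
	 */
	static int sanity_check_domain(int cpu, struct sched_domain *sd)
	{
		struct sched_group *group = sd->groups;
		cpumask_t covered = CPU_MASK_NONE;

		/* The first group MUST contain the domain's own CPU. */
		if (!cpu_isset(cpu, group->cpumask))
			return -1;

		do {
			cpumask_t overlap;

			/* Groups MUST be pairwise disjoint. */
			cpus_and(overlap, covered, group->cpumask);
			if (!cpus_empty(overlap))
				return -1;

			cpus_or(covered, covered, group->cpumask);
			group = group->next;
		} while (group != sd->groups); /* circular list */

		/* The union of group masks MUST equal the span. */
		if (!cpus_equal(covered, sd->span))
			return -1;

		return 0;
	}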