#
bad677be |
|
04-Aug-2022 |
Augustin Cavalier <waddlesplash@gmail.com> |
kernel/scheduler: Remove "inline" attribute from PeekThread. This will allow it to be invoked outside scheduler_cpu.cpp, and GCC should automatically inline this function within that file anyway. No functional change intended.
|
#
e632208b |
|
10-Sep-2020 |
Jérôme Duval <jerome.duval@gmail.com> |
kernel/scheduler: enable cpu load tracking after boot. When the cpufreq module is loaded, we let the scheduler update its policy. Improve the assert report: CoreEntry::GetLoad() could return more than kMaxLoad. Change-Id: I127f9b3e8062b5996872aae30b4021b9904fa179 Reviewed-on: https://review.haiku-os.org/c/haiku/+/3216 Reviewed-by: Jérôme Duval <jerome.duval@gmail.com>
|
#
a57a7a8c |
|
09-Mar-2014 |
Pawel Dziepak <pdziepak@quarnos.org> |
scheduler: Fix load update on idle cores To make sure that load statistics are accurate on idle cores, each time an idle thread is scheduled a timer is set to update the load when the current load measurement interval elapses. However, core load is defined as the average load during the last measurement interval, and an idle core may still be considered busy if it was not idle during the entire measurement interval. Since the load update timer is a one-shot timer, that information will not be updated until the core becomes active again. To mitigate that issue, the load update timer is set to fire after two load measurement intervals have elapsed.
|
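The two-interval timer arithmetic described above can be sketched as follows; the constant value and function name are illustrative, not Haiku's actual identifiers:

```cpp
#include <cassert>
#include <stdint.h>

// Hypothetical value; Haiku's actual kLoadMeasureInterval differs.
const int64_t kLoadMeasureInterval = 1000; // microseconds (illustrative)

// Arm the one-shot load update timer two measurement intervals out: by the
// time it fires, at least one full interval has passed with the core idle,
// so the recomputed average load correctly reflects the idle state.
int64_t
NextLoadUpdateTime(int64_t now)
{
	return now + 2 * kLoadMeasureInterval;
}
```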
#
667b23dd |
|
04-Feb-2014 |
Pawel Dziepak <pdziepak@quarnos.org> |
scheduler: Always update core heaps after thread migration The main purpose of this patch is to eliminate the delay between thread migration and the result of that migration becoming visible in load statistics. Such a delay, in certain circumstances, may cause some cores to become overloaded, because the scheduler migrates too many threads to them before the effect of migration becomes apparent.
|
#
230d1fcf |
|
03-Feb-2014 |
Pawel Dziepak <pdziepak@quarnos.org> |
scheduler: Update load of idle cores In order to keep the scheduler tickless, core load is computed and updated only during various scheduler events (i.e. thread enqueue, reschedule, etc). The problem this creates is that if a core becomes idle, its load may remain outdated for an extended period of time, resulting in suboptimal thread migration decisions. The solution to this problem is to add a timer each time an idle thread is scheduled which, after kLoadMeasureInterval, fires and forces a load update.
|
#
f116370e |
|
30-Jan-2014 |
Pawel Dziepak <pdziepak@quarnos.org> |
scheduler: Relax penalty cancellation requirements Priority penalties were made more strict in order to prevent a situation in which two or more high priority threads use up all available CPU time in such a manner that they do not receive a penalty but starve low priority threads. However, a significant change to thread priorities has been made since, and now the priority of all non real time threads varies in a range from 1 to the static priority minus the penalty. This means that the scheduler is able to prevent thread starvation without any complex penalty policies.
|
#
6155ab7b |
|
30-Jan-2014 |
Pawel Dziepak <pdziepak@quarnos.org> |
scheduler: Provide more stable core load statistics Originally, core load was a sum of the estimated loads of all currently running or ready threads on a given core. Such a value changes very rapidly, preventing the thread migration logic from making any reasonable decisions. This patch changes the way core load is computed to make it more stable, thus improving the quality of the decisions made by the thread migration logic. Currently, core load is a sum of the estimated loads of all threads that have been ready during the last load measurement interval and haven't been migrated or killed.
|
#
5d79095e |
|
21-Jan-2014 |
Pawel Dziepak <pdziepak@quarnos.org> |
scheduler: Do not update load of disabled cores
|
#
a2634874 |
|
08-Jan-2014 |
Pawel Dziepak <pdziepak@quarnos.org> |
scheduler: Estimate the load a thread is able to produce The previous implementation, based on the actual load of each core and the share each thread has in that load, turned out to be very problematic when balancing load on very heavily loaded systems (i.e. more threads consuming all available CPU time than there are logical CPUs). The new approach is to estimate how much load a thread would produce if it had all CPU time for itself alone. Summing such load estimations for each thread assigned to a given core, we get a rank that contains much more information than the simple actual core load.
|
#
d36098e0 |
|
07-Jan-2014 |
Pawel Dziepak <pdziepak@quarnos.org> |
scheduler: Keep track of the number of the ready threads
|
#
9c465cc8 |
|
07-Jan-2014 |
Pawel Dziepak <pdziepak@quarnos.org> |
scheduler: Improve recognition of CPU bound threads
|
#
a5f45afa |
|
05-Jan-2014 |
Pawel Dziepak <pdziepak@quarnos.org> |
scheduler: Remove unnecessary check against disabled CPU
|
#
8235bbc9 |
|
05-Jan-2014 |
Pawel Dziepak <pdziepak@quarnos.org> |
scheduler: Improve thread creation performance
|
#
cb66faef |
|
04-Jan-2014 |
Pawel Dziepak <pdziepak@quarnos.org> |
scheduler: Work around GCC2 limitations in function inlining GCC2 won't inline a function if it is used before its definition.
|
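The workaround is purely a matter of source ordering; a minimal illustration (the functions here are made up for the example):

```cpp
#include <cassert>

// GCC2 will not inline a function that is used before its definition, so
// the fix is simply to place helper definitions before their callers.

static inline int
squared(int x)
{
	// Defined before its caller below, so even GCC2 can inline it.
	return x * x;
}

int
sum_of_squares(int a, int b)
{
	return squared(a) + squared(b);
}
```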
#
e4ea6372 |
|
03-Jan-2014 |
Pawel Dziepak <pdziepak@quarnos.org> |
scheduler: Disable load tracking when not needed
|
#
26592750 |
|
30-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
scheduler: Protect per CPU run queue with its own lock
|
#
1524fbf7 |
|
29-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
scheduler: Fix divide error in _RequestPerformanceLevel
|
#
ef8e55a1 |
|
28-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
scheduler: Use single ended heap for CPU heap
|
#
335c6055 |
|
26-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
scheduler: Remove CPUEntry::IncreaseActiveTime()
|
#
96dcc73b |
|
26-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
scheduler: Add scheduler profiler A bit hackish implementation of a profiler for the scheduler. SCHEDULER_ENTER_FUNCTION at the beginning of each function isn't nice, and the usage of __PRETTY_FUNCTION__ isn't any better (both gcc and clang support it, though), but it was quick to implement and doesn't lose information on inlined functions. It's just a tool, not an integral part of the kernel anyway.
|
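A hedged sketch of the technique the message describes (the names below are illustrative, not Haiku's actual profiler): an object constructed at the top of every profiled function records the call, keyed by __PRETTY_FUNCTION__, so functions keep their own identity in the profile even after being inlined.

```cpp
#include <cassert>
#include <map>
#include <string>

// Per-function call counts; a real profiler would also record timings.
static std::map<std::string, int> sCallCounts;

struct ProfileScope {
	ProfileScope(const char* name) { sCallCounts[name]++; }
};

// Placed at the beginning of each profiled function. __PRETTY_FUNCTION__
// is a gcc/clang extension that expands to the enclosing function's
// signature, so the key survives inlining.
#define SCHEDULER_ENTER_FUNCTION() \
	ProfileScope _profileScope(__PRETTY_FUNCTION__)

static void
Reschedule()
{
	SCHEDULER_ENTER_FUNCTION();
	// ... actual scheduling work would happen here ...
}

void
RunTwice()
{
	Reschedule();
	Reschedule();
}
```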
#
b24ea642 |
|
23-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
scheduler: Encapsulate ThreadData fields
|
#
a08b40d4 |
|
23-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
scheduler: Encapsulate CPUEntry fields
|
#
e1e7235c |
|
23-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
scheduler: Encapsulate CoreEntry fields
|
#
60e198f2 |
|
22-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
scheduler: Encapsulate PackageEntry fields Apart from the refactoring, this commit takes the opportunity to remove unnecessary read locks when choosing a package and a core from the idle lists. The data structures are accessed in a thread safe way, and it does not really matter whether the obtained data becomes outdated just when we release the lock or during our search for the appropriate package/core.
|
#
9116eec2 |
|
19-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
scheduler: Allow calling UpdatePriority() for disabled CPU
|
#
c08ed2db |
|
19-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
scheduler: Try to keep thread on the same logical CPU Some SMT implementations (e.g. recent AMD microarchitectures) have separate L1d cache for each SMT thread (which AMD decides to call "cores"). This means that we shouldn't move threads to another logical processor too often even if it belongs to the same core. We aren't very strict about this as it would complicate load balancing, but we try to reduce unnecessary migrations.
|
#
ad6b9a1d |
|
19-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
scheduler: Use sequential locks instead of atomic 64 bit access
|
#
b258298c |
|
19-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Protect cpu_ent::active_time with sequential lock atomic_{get, set}64() are problematic on architectures without 64 bit compare and swap. Also, using a sequential lock instead of atomic access ensures that any reads from cpu_ent::active_time won't require any writes to shared memory.
|
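A minimal single-writer sequence lock sketch of the idea (illustrative only; Haiku's real implementation differs, and production code needs memory barriers around these accesses): the writer makes the sequence odd while the value is being modified, and a reader retries whenever it observes an odd or changed sequence, never writing to shared memory itself.

```cpp
#include <cassert>
#include <stdint.h>

struct SeqLock64 {
	volatile uint32_t sequence;
	int64_t value; // e.g. cpu_ent::active_time

	// Single writer only; concurrent writers would need their own lock.
	void Write(int64_t newValue)
	{
		sequence++;          // odd: update in progress
		value = newValue;
		sequence++;          // even: update complete
	}

	int64_t Read() const
	{
		uint32_t before, after;
		int64_t result;
		do {
			before = sequence;
			result = value;   // may be torn; the retry loop catches it
			after = sequence;
		} while (before != after || (before & 1) != 0);
		// Unlike a CAS-based 64 bit read fallback, this path performs no
		// writes to shared memory.
		return result;
	}
};
```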
#
1b06228f |
|
17-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Propagate scheduler modes to cpu{freq, idle} modules
|
#
d287274d |
|
05-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
scheduler: Code refactoring
|