#
76681bd9 |
|
22-Sep-2023 |
Augustin Cavalier <waddlesplash@gmail.com> |
kernel: Rewrite B_DEBUG_SPINLOCK_CONTENTION. * Replace count_low/count_high with bigtime_t fields plus an int32. sizeof(spinlock) is now 32 bytes with the debug option enabled. * Adjust and clean up all spinlock code to use the new fields. * Fold DEBUG_SPINLOCK_LATENCIES into the new code. Remove the bootloader option and other flags for it (these were not compiled in by default). The new code should be much easier to understand and also more powerful. However, the information transmitted to userland isn't as useful now; the KDL command output will have the interesting information. (Things could be reworked to transmit more interesting information to userland again if desired, but this code clearly hadn't been compiled in many years, as it referred to global spinlocks that have been gone for a very long time.) Change-Id: I2cb34078bfdc7604f288a297b6cd1aa7ff9cc512 Reviewed-on: https://review.haiku-os.org/c/haiku/+/6943 Reviewed-by: waddlesplash <waddlesplash@gmail.com> Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
|
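The reworked debug fields can be pictured roughly like this. A minimal user-space sketch, not Haiku's actual definition: the field names and the exact statistics tracked here are assumptions, and bigtime_t (Haiku's 64-bit microsecond type) is emulated with int64_t.

```cpp
#include <atomic>
#include <cassert>
#include <chrono>
#include <cstdint>

typedef int64_t bigtime_t;  // stand-in for Haiku's microsecond time type

// Hypothetical layout matching "bigtime_t fields plus an int32":
// the lock word plus contention statistics for the KDL command.
struct debug_spinlock {
	std::atomic<int32_t> lock{0};
	bigtime_t total_wait{0};      // accumulated time spent spinning
	bigtime_t last_acquired{0};   // when the lock was last taken
	int32_t failed_attempts{0};   // contended acquisition attempts
};

static bigtime_t system_time() {
	using namespace std::chrono;
	return duration_cast<microseconds>(
		steady_clock::now().time_since_epoch()).count();
}

void acquire_spinlock(debug_spinlock* lock) {
	bigtime_t start = system_time();
	int32_t expected = 0;
	while (!lock->lock.compare_exchange_weak(expected, 1)) {
		expected = 0;
		// Note: updated without holding the lock -- racy, but this is
		// only a sketch of where such statistics would be gathered.
		lock->failed_attempts++;
	}
	lock->last_acquired = system_time();
	lock->total_wait += lock->last_acquired - start;
}

void release_spinlock(debug_spinlock* lock) {
	lock->lock.store(0, std::memory_order_release);
}
```

In the real kernel the statistics would only be compiled in with the debug option enabled; here they are unconditional for brevity.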
#
8be37ed4 |
|
23-Nov-2021 |
Augustin Cavalier <waddlesplash@gmail.com> |
kernel/smp: Avoid casting spinlocks, which are structures. The lock entry is the first thing in the struct, so this is a no-op change, but it is safer to do in case of changes, of course. Spinlocks have been structures for quite a long time, so this was probably just missed in the conversion.
|
#
026c8b9c |
|
10-Sep-2020 |
Jérôme Duval <jerome.duval@gmail.com> |
kernel/smp: add call_single_cpu() to call a function on the target cpu. An early-boot mechanism is not available. Change-Id: I9d049e618c319c59729d1ab53fb313b748f82315 Reviewed-on: https://review.haiku-os.org/c/haiku/+/3212 Reviewed-by: Axel Dörfler <axeld@pinc-software.de>
|
#
9c8119e0 |
|
03-Jul-2017 |
Alexander von Gluck IV <kallisti5@unixzen.com> |
kernel/smp: Add a comment for some obscure knowledge * I was ready to rip this out until PulkoMandy set me straight. * Add a comment so others understand the impact here.
|
#
9a633145 |
|
04-Nov-2014 |
Michael Lotz <mmlr@mlotz.ch> |
kernel: Fix build with KDEBUG_LEVEL < 2. The lock caller info isn't available in such a configuration.
|
#
eac94f5d |
|
01-Nov-2014 |
Michael Lotz <mmlr@mlotz.ch> |
kernel: Also push lock caller in acquire_spinlock_nocheck.
|
#
41418981 |
|
01-Nov-2014 |
Michael Lotz <mmlr@mlotz.ch> |
kernel: Sync panic messages across acquire_spinlock versions. * Always include last caller and lock value on both UP and MP path. * Change lock value printing to hex format, as 0xdeadbeef is more obvious than its decimal counterpart.
|
#
3ed7ce75 |
|
25-Feb-2014 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Relax atomic loads in SMP code The main purpose of using atomic_get() was the necessity of a compiler barrier to prevent the compiler from optimizing the loads out of busy loops. However, each such loop contains in its body at least one statement that acts as a compiler barrier (namely, cpu_wait() or cpu_pause()), making atomic_get() redundant (well, atomic_get() is stronger - it also issues a load barrier - but in these particular cases we do not need that).
|
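The compiler-barrier point can be illustrated in user space. A sketch, not Haiku's code: cpu_pause() here is an empty asm statement whose "memory" clobber is a compiler barrier (GCC/Clang syntax), standing in for the kernel's cpu_pause()/cpu_wait().

```cpp
#include <atomic>
#include <cassert>
#include <thread>

static int sFlag = 0;  // deliberately a plain load in the loop, not atomic

// Stand-in for cpu_pause(): emits no instructions, but the "memory"
// clobber tells the compiler memory may have changed, so sFlag is
// re-read on every iteration instead of being hoisted out of the loop.
static inline void cpu_pause() {
	__asm__ __volatile__("" : : : "memory");
}

// Busy-wait until another CPU sets sFlag. Formally this is a data race
// in C++; the kernel relies on hardware cache coherence, as the commit
// message describes -- no load barrier is needed here.
void busy_wait() {
	while (sFlag == 0)
		cpu_pause();
}
```

Without the barrier in the loop body, an optimizer may read sFlag once and spin forever, which is exactly why atomic_get() had been used before.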
#
e31212e4 |
|
24-Feb-2014 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Fix acquire_read_spinlock() acquire checks If the initial attempt to acquire a read spinlock fails, we use a more relaxed loop (which doesn't require the CPU to lock the bus). However, the check in that loop incorrectly didn't allow the lock to be acquired while there was at least one other reader.
|
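A minimal sketch of the pattern being fixed. Assumed layout (not Haiku's exact code): one writer bit plus a reader count in a single 32-bit word; the relaxed loop must wait only for the writer bit, since other readers holding the lock are no reason to keep spinning.

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

static const uint32_t kWriteLocked = 1u << 31;

struct rw_spinlock {
	std::atomic<uint32_t> lock{0};
};

void acquire_read_spinlock(rw_spinlock* lock) {
	// Fast path: one bus-locking add; succeeds unless a writer is active.
	uint32_t previous = lock->lock.fetch_add(1);
	if ((previous & kWriteLocked) == 0)
		return;
	lock->lock.fetch_sub(1);  // undo, a writer holds the lock

	// Relaxed path: spin with plain loads (no bus-locking instruction).
	// The fixed check tests only the writer bit; testing the whole word
	// against zero would wrongly block while other readers are in.
	while (true) {
		uint32_t value = lock->lock.load(std::memory_order_relaxed);
		if ((value & kWriteLocked) != 0)
			continue;  // writer active, keep spinning
		if (lock->lock.compare_exchange_weak(value, value + 1))
			return;
	}
}

void release_read_spinlock(rw_spinlock* lock) {
	lock->lock.fetch_sub(1);
}
```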
#
82bcd89b |
|
23-Jan-2014 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Add CPUSet::{Clear, Set}BitAtomic() functions
|
#
4ca31ac9 |
|
06-Jan-2014 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Fix ABA problem in try_acquire_read_spinlock()
|
#
8cf8e537 |
|
05-Jan-2014 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel/x86: Inline atomic functions and memory barriers
|
#
b258298c |
|
19-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Protect cpu_ent::active_time with sequential lock atomic_{get, set}64() are problematic on architectures without 64 bit compare and swap. Also, using a sequential lock instead of atomic access ensures that reads from cpu_ent::active_time won't require any writes to shared memory.
|
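The sequential-lock pattern referred to here, in a brief generic sketch (not Haiku's implementation): the single writer bumps a sequence counter before and after the update, and readers retry whenever the counter was odd (write in progress) or changed during the read.

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

struct seqlock {
	std::atomic<uint32_t> sequence{0};
	int64_t active_time{0};  // the protected 64-bit value
};

// Single-writer update: the sequence is odd while the write is in flight.
void write_active_time(seqlock* lock, int64_t value) {
	lock->sequence.fetch_add(1);  // now odd: write in progress
	lock->active_time = value;
	lock->sequence.fetch_add(1);  // even again: write done
}

// Readers never write shared memory -- they only retry on a torn read.
int64_t read_active_time(const seqlock* lock) {
	while (true) {
		uint32_t before = lock->sequence.load(std::memory_order_acquire);
		if (before & 1)
			continue;  // writer active, retry
		int64_t value = lock->active_time;
		std::atomic_thread_fence(std::memory_order_acquire);
		if (lock->sequence.load() == before)
			return value;  // no write raced with the read
	}
}
```

The read side touching no shared cache line for writing is exactly the property the commit message highlights, and no 64-bit compare-and-swap is needed.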
#
e3d001ff |
|
19-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86: Implement multicast ICIs
|
#
3106f832 |
|
06-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel/smp: Fix warning
|
#
3e0e3be7 |
|
06-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
boot, kernel: Replace MAX_BOOT_CPUS with SMP_MAX_CPUS
|
#
7629d527 |
|
05-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Use CPUSet in ICI code instead of cpu_mask_t
|
#
52b442a6 |
|
05-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: smp_cpu_rendezvous(): Use counter instead of bitmap
|
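The counter-based rendezvous can be sketched like this (a generic sketch, not Haiku's code): each participating CPU atomically increments a shared counter and spins until the counter reaches the participant count, replacing the per-CPU bitmap.

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Each of cpuCount participants calls this with the same counter;
// nobody returns until everyone has arrived.
void cpu_rendezvous(std::atomic<uint32_t>* counter, uint32_t cpuCount) {
	counter->fetch_add(1);
	while (counter->load(std::memory_order_relaxed) < cpuCount)
		;  // spin (a real kernel would insert cpu_pause() here)
}
```

A counter needs only one atomic add per CPU and one word of storage regardless of SMP_MAX_CPUS, whereas a bitmap needs a per-CPU bit and a full-mask comparison.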
#
3514fd77 |
|
28-Nov-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Reduce lock contention when processing ICIs
|
#
e736a456 |
|
28-Nov-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Forbid implicit casts between spinlock and int32
|
#
7db89e8d |
|
25-Nov-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Rework cpuidle module * Create new interface for cpuidle modules (similar to the cpufreq interface) * Generic cpuidle module is no longer needed * Fix and update Intel C-State module
|
#
cec16c2d |
|
24-Nov-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
spinlock: Fix panic messages Thanks Jérôme for pointing this out.
|
#
024541a4 |
|
20-Nov-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Improve rw_spinlock implementation * Add more debug checks * Reduce the number of executed instructions that lock the bus.
|
#
288a2664 |
|
12-Nov-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
scheduler: Remove sSchedulerInternalLock * pin idle threads to their specific CPUs * allow scheduler to implement SMP_MSG_RESCHEDULE handler * scheduler_set_thread_priority() reworked * at reschedule: enqueue old thread after dequeueing the new one
|
#
defee266 |
|
06-Nov-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Add read write spinlock implementation
|
#
273f2f38 |
|
05-Nov-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Improve spinlock implementation atomic_or() and atomic_and() are not supported by x86 and need to be emulated using CAS. Use atomic_get_and_set() and atomic_set() instead.
|
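The replacement can be sketched generically: acquiring with an atomic exchange (a single xchg on x86) rather than an atomic_or() that must be emulated with a compare-and-swap loop. A sketch, not Haiku's code:

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

struct spinlock {
	std::atomic<int32_t> lock{0};
};

void acquire_spinlock(spinlock* lock) {
	// atomic_get_and_set(): one xchg instruction on x86, no CAS loop.
	// We own the lock once the previous value was 0 (unlocked).
	while (lock->lock.exchange(1, std::memory_order_acquire) != 0)
		;  // spin
}

bool try_acquire_spinlock(spinlock* lock) {
	return lock->lock.exchange(1, std::memory_order_acquire) == 0;
}

void release_spinlock(spinlock* lock) {
	// atomic_set(): a plain release store suffices to unlock.
	lock->lock.store(0, std::memory_order_release);
}
```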
#
077c84eb |
|
05-Nov-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: atomic_*() functions rework * No need for the atomically changed variables to be declared as volatile. * Drop support for atomically getting and setting unaligned data. * Introduce atomic_get_and_set[64]() which works the same as atomic_set[64]() used to. atomic_set[64]() does not return the previous value anymore.
|
#
4824f763 |
|
04-Nov-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Add sequential lock implementation
|
#
7ea42e7a |
|
20-Oct-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Remove invoke_scheduler_if_idle
|
#
146f9669 |
|
15-Jun-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Fixed a mistake in 11d3892, which changed a parameter type to addr_t that shouldn't have been changed.
|
#
11d3892d |
|
14-Jun-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Changed ICI data argument types from uint32 to addr_t. Since ICI arguments are used to send addresses in some places, uint32 is not sufficient on x86_64. addr_t still refers to the same type as uint32 (unsigned long) on other platforms, so this change only really affects x86_64.
|
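The problem is ordinary pointer truncation; a sketch with uintptr_t playing the role of Haiku's addr_t (the function names are hypothetical):

```cpp
#include <cassert>
#include <cstdint>

// An ICI-style argument is a machine word. Passing a pointer through
// uint32 truncates it on x86_64, where pointers are 64-bit; uintptr_t
// (Haiku: addr_t) is pointer-sized on every target, so on 32-bit
// platforms this change is a no-op.
static_assert(sizeof(uintptr_t) == sizeof(void*),
	"uintptr_t can round-trip any pointer");

uintptr_t pointer_to_ici_arg(void* pointer) {
	return reinterpret_cast<uintptr_t>(pointer);
}

void* ici_arg_to_pointer(uintptr_t arg) {
	return reinterpret_cast<void*>(arg);
}
```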
#
0e88a887 |
|
13-Jun-2012 |
Alex Smith <alex@alex-smith.me.uk> |
First round of 64-bit safety fixes in the kernel. * Most of these are incorrect printf format strings. Changed all strings causing errors to use the B_PRI* format string definitions, which means the strings should be correct across all platforms. * Some other fixes for errors, required casts, etc.
|
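Haiku's B_PRI* definitions follow the same pattern as C99's <inttypes.h> macros, which this generic sketch uses instead (the function is a hypothetical example, not from the commit):

```cpp
#include <cassert>
#include <cinttypes>
#include <cstdint>
#include <cstdio>
#include <string>

std::string format_offset(int64_t offset) {
	char buffer[64];
	// "%ld" would be wrong where long is 32-bit; the PRId64 macro
	// (Haiku: B_PRId64) expands to the right length modifier for
	// int64_t on every platform, so one format string works everywhere.
	snprintf(buffer, sizeof(buffer), "offset: %" PRId64, offset);
	return buffer;
}
```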
#
920e575c |
|
20-Aug-2011 |
Jérôme Duval <korli@users.berlios.de> |
As suggested by Ingo, revert r42648 and apply patch from Alex Smith provided in #7872. Thanks! git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@42650 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
dc3ba981 |
|
13-Jun-2011 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Added try_acquire_spinlock(). git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@42180 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
07655104 |
|
26-Nov-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Eliminated _acquire_spinlock(). Since the macro is defined after acquire_spinlock_inline(), there's actually no undesired recursion. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@39647 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
91897430 |
|
17-Aug-2010 |
Axel Dörfler <axeld@pinc-software.de> |
* Style cleanup. * Made an enum out of the mailbox type. * Rearranged some code to get rid of CID 1328 which was not a bug, though. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@38209 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
45bd7bb3 |
|
25-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Removed unnecessary inclusions of <boot/kernel_args.h> in private kernel headers and respectively added includes in source files. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37259 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
bd4454cb |
|
11-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Simplified smp_trap_non_boot_cpus() and smp_wake_up_non_boot_cpus(): We don't need a spinlock per CPU; a single variable suffices. * Extended call_all_cpus[_sync]() to work before smp_wake_up_non_boot_cpus() (even before smp_init()). git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37105 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
7f987e49 |
|
30-Apr-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Added a rendez-vous variable parameter to smp_trap_non_boot_cpus() and made the boot CPU wait until all other CPUs are ready to wait. This solves a theoretical problem in main(): The boot CPU could run fully through the early initialization and reset sCpuRendezvous2 before the other CPUs left smp_cpu_rendezvous(). It's very unlikely on real hardware that the non-boot CPUs are so much slower, but it might be a concern in emulation. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36558 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
4d6b1f03 |
|
30-Apr-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Documented smp_cpu_rendezvous(). git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36556 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
3fb2a94d |
|
11-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Inline {acquire,release}_spinlock(), when spinlock debugging is disabled. * Use atomic_{and,or}() instead of atomic_set(), as there are no built-ins for the latter. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35021 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
3533b659 |
|
10-Dec-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Reintroduced the SMP_MSG_RESCHEDULE_IF_IDLE ICI message. This time implemented by means of an additional member in cpu_ent. * Removed thread::keep_scheduled and the related functions. The feature wasn't used yet and wouldn't have worked as implemented anyway. * Resurrected an older, SMP aware version of our simple scheduler and made it the default instead of the affine scheduler. The latter is in no state to be used yet. It causes enormous latencies (I've seen up to 0.1s) even when six or seven CPUs were idle at the same time, totally killing parallelism. That's also the reason why a -j8 build was slower than a -j2. This is no longer the case. On my machine the -j2 build takes about 10% less time now and the -j8 build saves another 20%. The latter is not particularly impressive (compared with Linux), but that seems to be due to lock contention. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34615 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
6242fd59 |
|
08-Nov-2009 |
Axel Dörfler <axeld@pinc-software.de> |
* Added the possibility to debug latency issues with spinlocks. * When DEBUG_SPINLOCK_LATENCIES is 1, the system will panic if any spinlock is held longer than DEBUG_LATENCY microseconds (currently 200). If your system doesn't boot anymore, a new safemode setting can disable the panic. * Besides some problems during boot when the MTRRs are set up, 200 usecs work fine here if all debug output is turned off (the output stuff is definitely problematic, though I don't have a good idea on how to improve upon it a lot). * Renamed the formerly BeOS compatible safemode settings to look better; there is no need to be compatible there. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@33953 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
29bd9bfd |
|
21-Aug-2009 |
Michael Lotz <mmlr@mlotz.ch> |
Remove SMP_MSG_RESCHEDULE_IF_IDLE as it is not used anymore. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@32574 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
66dde0a8 |
|
20-Aug-2009 |
Rene Gollent <anevilyak@gmail.com> |
Fix build with TRACE_SMP enabled. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@32550 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
152132f0 |
|
18-Aug-2009 |
Michael Lotz <mmlr@mlotz.ch> |
mmlr+anevilyak: * Keep track of the currently running threads. * Make use of that info to decide if a thread that becomes ready should preempt the running thread. * If we should preempt we send the target CPU a reschedule message. * This preemption strategy makes keeping track of idle CPUs by means of a bitmap superfluous and it is therefore removed. * Right now only other CPUs are preempted though, not the current one. * Add missing initialization of the quantum tracking code. * Do not extend the quantum of the idle thread based on quantum tracking as we want it to not run longer than necessary. Once the preemption works completely adding a quantum timer for the idle thread will become unnecessary though. * Fix thread stealing code, it missed the last thread in the run queue. * When stealing, try to steal the highest priority thread that is currently waiting by taking priorities into account when finding the target run queue. * Simplify stealing code a bit as well. * Minor cleanups. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@32503 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
671a2442 |
|
31-Jul-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
More work towards making our double fault handler less triple fault prone: * SMP: - Added smp_send_broadcast_ici_interrupts_disabled(), which is basically equivalent to smp_send_broadcast_ici(), but is only called with interrupts disabled and gets the CPU index, so it doesn't have to use smp_get_current_cpu() (which dereferences the current thread). - Added cpu index parameter to smp_intercpu_int_handler(). * x86: - arch_int.c -> arch_int.cpp - Set up an IDT per CPU. We were using a single IDT for all CPUs, but that can't work, since we need different tasks for the double fault interrupt vector. - Set the per CPU double fault task gates correctly. - Renamed set_intr_gate() to set_interrupt_gate() and set_system_gate() to set_trap_gate() and documented them a bit. - Renamed double_fault_exception() to x86_double_fault_exception() and fixed it not to use smp_get_current_cpu(). Instead we have the new x86_double_fault_get_cpu() that deduces the CPU index from the used stack. - Fixed the double_fault interrupt handler: It no longer calls int_bottom to avoid accessing the current thread. * debug.cpp: - Introduced explicit debug_double_fault() to enter the kernel debugger from a double fault handler. - Avoid using smp_get_current_cpu(). - Don't use kprintf() before sDebuggerOnCPU is set. Otherwise acquire_spinlock() is invoked by arch_debug_serial_puts(). Things look a bit better when the current thread pointer is broken -- we run into kernel_debugger_loop() and successfully print the "Welcome to KDL" message -- but we still dereference the thread pointer afterwards, so that we don't get a usable kernel debugger yet. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@32050 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
880d0bde |
|
26-Mar-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
acquire_spinlock[_nocheck]() now panic() when they can't acquire the spinlock for a long time. That should help to analyze system "freezes" involving spinlocks. In VMware on a Core 2 Duo 2.2 GHz the panic() is triggered after 20-30 seconds. The time will be shorter on faster machines. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@29732 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
59dbd26f |
|
20-Oct-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Moved more debug macros to kernel_debug_config.h. * Turned the checks for all those macros to "#if"s instead of "#ifdef"s. * Introduced macro KDEBUG_LEVEL which serves as a master setting. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@28248 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
05fd6d79 |
|
20-Oct-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Fixed bug introduced in r28223: The counter whose modulo was used as an index into the sLastCaller array is vint32, so after overflowing, the modulo operation would yield negative indices. This would cause the 256 bytes before the array to be overwritten. Might also be the cause of #2866. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@28245 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
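This bug class is easy to reproduce: in C and C++ the remainder takes the sign of the dividend, so once a signed counter overflows to a negative value, `counter % size` indexes before the array. A sketch of the bug and the usual fix (hypothetical names, not the actual sLastCaller code):

```cpp
#include <cassert>
#include <cstdint>

const int kTableSize = 256;

// Buggy: once the counter wraps negative, the remainder is negative
// too, so writes land up to kTableSize - 1 elements *before* the table.
int32_t buggy_index(int32_t counter) {
	return counter % kTableSize;  // e.g. -3 % 256 == -3
}

// Fixed: take the modulo on the unsigned representation, which can
// never be negative.
int32_t fixed_index(int32_t counter) {
	return (uint32_t)counter % kTableSize;
}
```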
#
7ab39de9 |
|
17-Oct-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Removed unused SMP_MSG_RESCHEDULE ICI message. * Introduced flag "invoke_scheduler" in the per CPU structure. It is evaluated in hardware_interrupt() (x86 only ATM). * Introduced SMP_MSG_RESCHEDULE_IF_IDLE message, which enters the scheduler when the CPU currently runs an idle thread. * Don't do dprintf() "CPU x halted!" when handling a SMP_MSG_CPU_HALT ICI message. It uses nested spinlocks and could thus potentially deadlock itself (acquire_spinlock() processes ICI messages, so it could already hold one of the locks). This is a pretty likely scenario on machines with more than two CPUs, but is also possible when the panic()ing thread holds the threads spinlock. Probably fixes #2572. * Reworked the way the kernel debugger is entered and added a "cpu" command that allows switching the CPU once in KDL. It is thus possible to get a stack trace of a thread not on the panic()ing CPU. * When a thread is added to the run queue, we now check if another CPU is idle and ask it to reschedule if it is. Before this change, the CPU was continuing to idle until the quantum of the idle thread expired. Speeds up the libbe.so build about 8% on my machine (haven't tested the full Haiku image build yet). * When spinlock debugging is enabled (DEBUG_SPINLOCKS) we also record the spinlock acquirer on non-smp machines. Added "spinlock" debugger command to get the info. * Added debugger commands "ici" and "ici_message", printing info on all pending ICI messages and on a given one, respectively. * Process not only a single ICI message in acquire_spinlock() and other places, but all pending ones. * Also process ICI messages when waiting for a free one -- avoids a potential deadlock. * Mask out non-existing CPUs in send_multicast_ici(). panic() instead of just returning when there's no target CPU left. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@28223 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
78c90d44 |
|
17-Oct-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Moved definition of the PAUSE macro to <cpu.h>, respectively <arch/cpu.h>. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@28221 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
6bbe7eb8 |
|
07-Oct-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* smp.c -> smp.cpp * Added smp_send_multicast_ici(), which sends the message to all CPUs specified via a mask. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@27910 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
9a6331459f21e0d4ac4e9fa4e951390159011039 |
|
04-Nov-2014 |
Michael Lotz <mmlr@mlotz.ch> |
kernel: Fix build with KDEBUG_LEVEL < 2. The lock caller info isn't available in such a configuration.
|
#
eac94f5db9c4fc3e20c9b92ad2a6d2c66a80eef6 |
|
01-Nov-2014 |
Michael Lotz <mmlr@mlotz.ch> |
kernel: Also push lock caller in acquire_spinlock_nocheck.
|
#
41418981f43f24600cead1a6b4cbc7cfb90bde9a |
|
01-Nov-2014 |
Michael Lotz <mmlr@mlotz.ch> |
kernel: Sync panic messages across acquire_spinlock versions. * Always include last caller and lock value on both UP and MP path. * Change lock value printing to hex format, as 0xdeadbeef is more obvious than its decimal counterpart.
|
#
3ed7ce75b3e4b16ed2506508579839990309878e |
|
25-Feb-2014 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Relax atomic loads in SMP code The main purpose of using atomic_get() was the necessity of a compiler barrier to prevent the compiler from optimizing busy loops. However, each such loop contains in its body at least one statement that acts as a compiler barrier (namely, cpu_wait() or cpu_pause()) making atomic_get() redundant (well, atomic_get() is stronger - it also issues a load barrier but in these particular cases we do not need it).
|
#
e31212e4d7b5e2f623e2295fe64d5c9b07d0c55d |
|
24-Feb-2014 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Fix acquire_read_spinlock() acquire checks If the initial attempt to acquire read spinlock fails we use more relaxed loop (which doesn't require CPU to lock the bus). However, check in that loop, incorrectly, didn't allow a lock to be acquired when there was at least one other reader.
|
#
82bcd89b92f9c7934845782a1e34f433d51d2f9c |
|
23-Jan-2014 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Add CPUSet::{Clear, Set}BitAtomic() functions
|
#
4ca31ac964da8520e1804b7cfb1f4d4479a80497 |
|
06-Jan-2014 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Fix ABA problem in try_acquire_read_spinlock()
|
#
8cf8e537740789b1b103f0aa0736dbfcf55359c2 |
|
05-Jan-2014 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel/x86: Inline atomic functions and memory barriers
|
#
b258298c70249e60ea7c65c60bd5ee1250609921 |
|
19-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Protect cpu_ent::active_time with sequential lock atomic_{get, set}64() are problematic on architectures without 64 bit compare and swap. Also, using sequential lock instead of atomic access ensures that any reads from cpu_ent::active_time won't require any writes to shared memory.
|
#
e3d001ff02e087a2392c2c46a7ac2d78d3bc12f6 |
|
19-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86: Implement multicast ICIs
|
#
3106f832a9d4438498de6a606de4c040edb8addb |
|
06-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel/smp: Fix warning
|
#
3e0e3be7604ed12ab61b58789c44bc6d7333f48b |
|
06-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
boot, kernel: Replace MAX_BOOT_CPUS with SMP_MAX_CPUS
|
#
7629d527c5ee0f402c5a16d0f42c2b79a5571b07 |
|
05-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Use CPUSet in ICI code instead of cpu_mask_t
|
#
52b442a687680ddd6a55478baeaa42ec87077f49 |
|
05-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: smp_cpu_rendezvous(): Use counter instead of bitmap
|
#
3514fd77f702359e815201419aebded48f032ad8 |
|
28-Nov-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Reduce lock contention when processing ICIs
|
#
e736a456ba4d12621654a3f95c394d11fcaac243 |
|
28-Nov-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Forbid implicit casts between spinlock and int32
|
#
7db89e8dc395db73368479fd9817b2b67899f3f6 |
|
25-Nov-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Rework cpuidle module * Create new interface for cpuidle modules (similar to the cpufreq interface) * Generic cpuidle module is no longer needed * Fix and update Intel C-State module
|
#
cec16c2dcfb0bddb0d9dc11fb63793c4ca9a53e0 |
|
24-Nov-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
spinlock: Fix panic messages Thanks Jérôme for pointing this out.
|
#
024541a4c8d35f9b3c5e27995b8f07be68d7c09a |
|
20-Nov-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Improve rw_spinlock implementation * Add more debug checks * Reduce the number of executed instructions that lock the bus.
|
#
288a2664a2de429f159d746beaab87373184cd3d |
|
12-Nov-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
scheduler: Remove sSchedulerInternalLock * pin idle threads to their specific CPUs * allow scheduler to implement SMP_MSG_RESCHEDULE handler * scheduler_set_thread_priority() reworked * at reschedule: enqueue old thread after dequeueing the new one
|
#
defee266db232f7477d62a5ff8f10a0a498cad1e |
|
06-Nov-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Add read write spinlock implementation
|
#
273f2f38cd4b219ac8197888962d0710c149d606 |
|
05-Nov-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Improve spinlock implementation atomic_or() and atomic_and() are not supported by x86 are need to be emulated using CAS. Use atomic_get_and_set() and atomic_set() instead.
|
#
077c84eb27b25430428d356f3d13afabc0cc0d13 |
|
05-Nov-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: atomic_*() functions rework * No need for the atomically changed variables to be declared as volatile. * Drop support for atomically getting and setting unaligned data. * Introduce atomic_get_and_set[64]() which works the same as atomic_set[64]() used to. atomic_set[64]() does not return the previous value anymore.
|
#
4824f7630b2ca9c5750f93c4daa837dfcac3059e |
|
04-Nov-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Add sequential lock implementation
|
#
7ea42e7addccef196b840aa5e4921adfe13be44d |
|
20-Oct-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Remove invoke_scheduler_if_idle
|
#
146f966921d878727c3895b18dbee0ab3314bffc |
|
15-Jun-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Fixed a mistake in 11d3892, changed a parameter type to addr_t that shouldn't have been changed.
|
#
11d3892d285a72e161f5b13365dcce6e05a32374 |
|
14-Jun-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Changed ICI data argument types from uint32 to addr_t. Since ICI arguments are used to send addresses in some places, uint32 is not sufficient on x86_64. addr_t still refers to the same type as uint32 (unsigned long) on other platforms, so this change only really affects x86_64.
|
#
0e88a887b4a9ecaaf1062078d9ca9bfca78fcf3a |
|
13-Jun-2012 |
Alex Smith <alex@alex-smith.me.uk> |
First round of 64-bit safety fixes in the kernel. * Most of this is incorrect printf format strings. Changed all strings causing errors to use the B_PRI* format string definitions, which means the strings should be correct across all platforms. * Some other fixes for errors, casts required, etc.
|
#
920e575c03b4817d93424a4ed7bc46a5ed288660 |
|
20-Aug-2011 |
Jérôme Duval <korli@users.berlios.de> |
As suggested by Ingo, revert r42648 and apply patch from Alex Smith provided in #7872. Thanks! git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@42650 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
dc3ba981d47a9ed00cf02cffee5c9ae08a752696 |
|
13-Jun-2011 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Added try_acquire_spinlock(). git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@42180 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
07655104d58ba54837514d07eae7e9d9a651368b |
|
26-Nov-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Eliminated _acquire_spinlock(). Since the macro is defined after acquire_spinlock_inline(), there's actually no undesired recursion. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@39647 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
9189743037890932704bb1a489d7449d1716c4a8 |
|
17-Aug-2010 |
Axel Dörfler <axeld@pinc-software.de> |
* Style cleanup. * Made an enum out of the mailbox type. * Rearranged some code to get rid of CID 1328 which was not a bug, though. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@38209 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
45bd7bb3db9d9e4dcb02b89a3e7c2bf382c0a88c |
|
25-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Removed unnecessary inclusions of <boot/kernel_args.h> in private kernel headers and respectively added includes in source files. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37259 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
bd4454cb956c1488c3a76b50e9e7d3f2bf7f6dac |
|
11-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Simplified smp_trap_non_boot_cpus() and smp_wake_up_non_boot_cpus(): We don't need a spinlock per CPU; a single variable suffices. * Extended call_all_cpus[_sync]() to work before smp_wake_up_non_boot_cpus() (even before smp_init()). git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37105 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
7f987e49d77f20a4a98a4f88f1e007f838ea2975 |
|
30-Apr-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Added a rendez-vous variable parameter to smp_trap_non_boot_cpus() and make boot CPU wait until all other CPUs are ready to wait. This solves a theoretical problem in main(): The boot CPU could run fully through the early initialization and reset sCpuRendezvous2 before the other CPUs left smp_cpu_rendezvous(). It's very unlikely on real hardware that the non-boot CPUs are so much slower, but it might be a concern in emulation. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36558 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
4d6b1f03da9dace5f967919509deb106e37113ed |
|
30-Apr-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Documented smp_cpu_rendezvous(). git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36556 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
3fb2a94dfb528d9fc0640e60cf632da1f7d8e354 |
|
11-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Inline {acquire,release}_spinlock(), when spinlock debugging is disabled. * Use atomic_{and,or}() instead of atomic_set(), as there are no built-ins for the latter. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35021 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
3533b6597db4ad65493632da8a92c1f6ea3de149 |
|
10-Dec-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Reintroduced the SMP_MSG_RESCHEDULE_IF_IDLE ICI message. This time implemented by means of an additional member in cpu_ent. * Removed thread::keep_scheduled and the related functions. The feature wasn't used yet and wouldn't have worked as implemented anyway. * Resurrected an older, SMP aware version of our simple scheduler and made it the default instead of the affine scheduler. The latter is in no state to be used yet. It causes enormous latencies (I've seen up to 0.1s) even when six or seven CPUs were idle at the same time, totally killing parallelism. That's also the reason why a -j8 build was slower than a -j2. This is no longer the case. On my machine the -j2 build takes about 10% less time now and the -j8 build saves another 20%. The latter is not particularly impressive (compared with Linux), but that seems to be due to lock contention. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34615 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
6242fd597772185600b922845b95c5223f8b3336 |
|
08-Nov-2009 |
Axel Dörfler <axeld@pinc-software.de> |
* Added the possibility to debug latency issues with spinlocks. * When DEBUG_SPINLOCK_LATENCIES is 1, the system will panic if any spinlock is held longer than DEBUG_LATENCY micro seconds (currently 200). If your system doesn't boot anymore, a new safemode setting can disable the panic. * Besides some problems during boot when the MTRRs are set up, 200 usecs work fine here if all debug output is turned off (the output stuff is definitely problematic, though I don't have a good idea on how to improve upon it a lot). * Renamed the formerly BeOS compatible safemode settings to look better; there is no need to be compatible there. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@33953 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
29bd9bfd7df23f442fcd3ab4f5ba484cf35dfef3 |
|
21-Aug-2009 |
Michael Lotz <mmlr@mlotz.ch> |
Remove SMP_MSG_RESCHEDULE_IF_IDLE as it is not used anymore. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@32574 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
66dde0a85d1ad58173447492472eb95a3c145965 |
|
20-Aug-2009 |
Rene Gollent <anevilyak@gmail.com> |
Fix build with TRACE_SMP enabled. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@32550 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
152132f08ac3f84228ea7d94513a28eb137229a3 |
|
18-Aug-2009 |
Michael Lotz <mmlr@mlotz.ch> |
mmlr+anevilyak: * Keep track of the currently running threads. * Make use of that info to decide if a thread that becomes ready should preempt the running thread. * If we should preempt, we send the target CPU a reschedule message. * This preemption strategy makes keeping track of idle CPUs by means of a bitmap superfluous, so it is removed. * Right now only other CPUs are preempted though, not the current one. * Add missing initialization of the quantum tracking code. * Do not extend the quantum of the idle thread based on quantum tracking, as we want it to not run longer than necessary. Once the preemption works completely, adding a quantum timer for the idle thread will become unnecessary though. * Fix thread stealing code: it missed the last thread in the run queue. * When stealing, try to steal the highest-priority thread that is currently waiting by taking priorities into account when finding the target run queue. * Simplify the stealing code a bit as well. * Minor cleanups. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@32503 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
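The preemption decision described above boils down to: find a CPU whose running thread has a lower priority than the newly ready thread, preferring the lowest, and send that CPU a reschedule ICI. A minimal sketch of that selection, with a hypothetical `choose_cpu_to_preempt()` helper (the real code works on the tracked running-thread structures, not a plain vector):

```cpp
#include <cassert>
#include <vector>

// Hypothetical helper: given the priority of the thread currently running on
// each CPU, return the index of the CPU running the lowest-priority thread
// that is strictly lower than the new thread's priority, or -1 if no CPU
// should be preempted. (In BeOS/Haiku, 0 is the idle priority, higher numbers
// mean higher priority.)
static int choose_cpu_to_preempt(const std::vector<int>& runningPriority,
	int newPriority)
{
	int target = -1;
	int lowest = newPriority;   // only preempt strictly lower priorities
	for (int i = 0; i < (int)runningPriority.size(); i++) {
		if (runningPriority[i] < lowest) {
			lowest = runningPriority[i];
			target = i;
		}
	}
	return target;
}
```

A CPU running the idle thread (priority 0) wins over one running a priority-5 thread; if every CPU runs something at least as important, nothing is preempted.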
#
671a2442d93f46c5343ef34e01306befa760c16a |
|
31-Jul-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
More work towards making our double fault handler less triple-fault prone: * SMP: - Added smp_send_broadcast_ici_interrupts_disabled(), which is basically equivalent to smp_send_broadcast_ici(), but is only called with interrupts disabled and gets the CPU index, so it doesn't have to use smp_get_current_cpu() (which dereferences the current thread). - Added a cpu index parameter to smp_intercpu_int_handler(). * x86: - arch_int.c -> arch_int.cpp - Set up an IDT per CPU. We were using a single IDT for all CPUs, but that can't work, since we need different tasks for the double fault interrupt vector. - Set the per-CPU double fault task gates correctly. - Renamed set_intr_gate() to set_interrupt_gate() and set_system_gate() to set_trap_gate() and documented them a bit. - Renamed double_fault_exception() to x86_double_fault_exception() and fixed it not to use smp_get_current_cpu(). Instead we have the new x86_double_fault_get_cpu() that deduces the CPU index from the stack in use. - Fixed the double_fault interrupt handler: It no longer calls int_bottom, to avoid accessing the current thread. * debug.cpp: - Introduced an explicit debug_double_fault() to enter the kernel debugger from a double fault handler. - Avoid using smp_get_current_cpu(). - Don't use kprintf() before sDebuggerOnCPU is set. Otherwise acquire_spinlock() is invoked by arch_debug_serial_puts(). Things look a bit better when the current thread pointer is broken -- we run into kernel_debugger_loop() and successfully print the "Welcome to KDL" message -- but we still dereference the thread pointer afterwards, so we don't get a usable kernel debugger yet. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@32050 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
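The trick behind x86_double_fault_get_cpu() -- deriving the CPU index from the stack rather than from the (possibly corrupt) thread pointer -- works because each CPU gets its own double fault stack. If those stacks are carved out of one contiguous region, the index is simple pointer arithmetic. A sketch under that assumption (`kStackBase`/`kStackSize` are made-up constants, not the kernel's actual layout):

```cpp
#include <cassert>
#include <cstdint>

// Assumed layout: per-CPU double fault stacks allocated back to back,
// CPU i's stack occupying [kStackBase + i * kStackSize,
// kStackBase + (i + 1) * kStackSize).
static const uintptr_t kStackBase = 0x100000;
static const uintptr_t kStackSize = 0x1000;

// Deduce the CPU index from any stack address within the CPU's
// double fault stack -- no access to the current thread needed.
static int double_fault_get_cpu(uintptr_t stackPointer)
{
	return (int)((stackPointer - kStackBase) / kStackSize);
}
```

Because this only reads the stack pointer, it stays usable even when the per-CPU "current thread" pointer is the very thing that is broken.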
#
880d0bde5ac04aa7897a1aeec53e82b76e644a84 |
|
26-Mar-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
acquire_spinlock[_nocheck]() now panic() when they cannot acquire the spinlock for a long time. That should help analyze system "freezes" involving spinlocks. In VMware on a Core 2 Duo 2.2 GHz the panic() is triggered after 20-30 seconds. The time will be shorter on faster machines. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@29732 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
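The mechanism is to count spin iterations and panic once a threshold is exceeded, turning a silent freeze into a debuggable panic. A minimal sketch; the threshold here is tiny for demonstration (the real one corresponds to tens of seconds of spinning, hence the machine-speed dependence noted above), and `try_acquire`/`panic` are illustrative stand-ins:

```cpp
#include <cassert>
#include <cstdint>

static const uint64_t kSpinPanicThreshold = 1000;  // illustrative; real limit is far higher
static bool sPanicked = false;
static void panic(const char*) { sPanicked = true; }

// Spin until try_acquire() succeeds; panic() if the lock cannot be
// obtained within kSpinPanicThreshold iterations.
static bool acquire_spinlock_sketch(bool (*try_acquire)())
{
	uint64_t count = 0;
	while (!try_acquire()) {
		if (++count == kSpinPanicThreshold) {
			panic("acquire_spinlock(): spun too long");
			return false;
		}
	}
	return true;
}
```

An immediately free lock is acquired without incident; a permanently held one trips the panic instead of hanging forever.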
#
59dbd26f5f41a6c1272f6cac9c8cda4b19b79097 |
|
20-Oct-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Moved more debug macros to kernel_debug_config.h. * Turned the checks for all those macros to "#if"s instead of "#ifdef"s. * Introduced macro KDEBUG_LEVEL which serves as a master setting. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@28248 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
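The point of switching from "#ifdef" to "#if" is that every debug macro must then always be defined -- to 0 or 1 -- which a master KDEBUG_LEVEL setting can drive. A kernel_debug_config.h-style sketch (the macro names and level thresholds are illustrative, not the file's exact contents):

```cpp
// Master setting: higher levels enable more debug checks.
#ifndef KDEBUG_LEVEL
#	define KDEBUG_LEVEL 2
#endif

// Each macro is always defined to 0 or 1, so "#if DEBUG_SPINLOCKS" works
// reliably; "#ifdef" would treat "#define DEBUG_SPINLOCKS 0" as enabled.
#if KDEBUG_LEVEL >= 1
#	define KDEBUG 1
#else
#	define KDEBUG 0
#endif

#if KDEBUG_LEVEL >= 2
#	define DEBUG_SPINLOCKS 1
#else
#	define DEBUG_SPINLOCKS 0
#endif
```

Individual macros can still be overridden before including the header; the level only supplies defaults.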
#
05fd6d79fecc6159551570fdf2e72e50303fd7fd |
|
20-Oct-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Fixed bug introduced in r28223: The counter whose modulo was used as an index into the sLastCaller array is vint32, so after overflowing, the modulo operation would yield negative indices. This would cause the 256 bytes before the array to be overwritten. Might also be the cause of #2866. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@28245 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
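The bug is easy to reproduce in miniature: in C++, `%` on a negative signed operand yields a negative result, so once the int32 counter wraps past INT32_MAX its modulo becomes a negative array index. A sketch of the broken and fixed index computation (the function names and the `size` parameter are illustrative; the real code indexes the fixed-size sLastCaller array):

```cpp
#include <cassert>
#include <cstdint>

// The buggy pattern: a signed counter used directly with "%".
// For a wrapped (negative) counter this returns a negative index,
// so writes land in the memory *before* the array.
static int32_t buggy_index(int32_t counter, int32_t size)
{
	return counter % size;
}

// The fix: do the modulo in unsigned arithmetic, which always
// yields an index in [0, size).
static int32_t fixed_index(int32_t counter, int32_t size)
{
	return (int32_t)((uint32_t)counter % (uint32_t)size);
}
```

With a wrapped counter of -5 and a 64-entry array, the buggy version indexes entry -5 (out of bounds), while the unsigned version maps it back into range.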
#
7ab39de9895775d10669a1a85ce3ff60b1ca7b55 |
|
17-Oct-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Removed the unused SMP_MSG_RESCHEDULE ICI message. * Introduced flag "invoke_scheduler" in the per-CPU structure. It is evaluated in hardware_interrupt() (x86 only ATM). * Introduced the SMP_MSG_RESCHEDULE_IF_IDLE message, which enters the scheduler when the CPU currently runs an idle thread. * Don't dprintf() "CPU x halted!" when handling a SMP_MSG_CPU_HALT ICI message. It uses nested spinlocks and could thus potentially deadlock itself (acquire_spinlock() processes ICI messages, so it could already hold one of the locks). This is a pretty likely scenario on machines with more than two CPUs, but is also possible when the panic()ing thread holds the threads spinlock. Probably fixes #2572. * Reworked the way the kernel debugger is entered and added a "cpu" command that allows switching the CPU once in KDL. It is thus possible to get a stack trace of a thread not on the panic()ing CPU. * When a thread is added to the run queue, we now check whether another CPU is idle and ask it to reschedule if so. Before this change, the CPU continued to idle until the quantum of the idle thread expired. Speeds up the libbe.so build by about 8% on my machine (haven't tested the full Haiku image build yet). * When spinlock debugging is enabled (DEBUG_SPINLOCKS), we also record the spinlock acquirer on non-SMP machines. Added a "spinlock" debugger command to get the info. * Added debugger commands "ici" and "ici_message", printing info on pending ICI messages and on a given one, respectively. * Process not only a single ICI message in acquire_spinlock() and other places, but all pending ones. * Also process ICI messages when waiting for a free one -- avoids a potential deadlock. * Mask out non-existing CPUs in send_multicast_ici(). panic() instead of just returning when there's no target CPU left. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@28223 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
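The "process all pending ICI messages, not just one" change above is, structurally, switching a single dequeue into a drain loop. A sketch with a hypothetical `process_pending_ici()` (the real queue holds full ICI message structures with handlers, sender bookkeeping, and locking, none of which is shown here):

```cpp
#include <cassert>
#include <deque>
#include <vector>

// Simplified stand-in for the kernel's ICI message structure.
struct ici_message {
	int type;
};

// Drain the queue until it is empty, handling every pending message.
// (Before r28223, paths like acquire_spinlock() handled only a single
// message per check, leaving the rest pending.)
static std::vector<int> process_pending_ici(std::deque<ici_message>& queue)
{
	std::vector<int> handled;
	while (!queue.empty()) {
		ici_message msg = queue.front();
		queue.pop_front();
		handled.push_back(msg.type);   // stand-in for dispatching the message
	}
	return handled;
}
```

Draining also helps while waiting for a free message slot: servicing pending requests there is what avoids the deadlock mentioned in the log.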
#
78c90d44cad4b0e03bdd9d0590525d07dafb3bc4 |
|
17-Oct-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Moved the definition of the PAUSE macro to <cpu.h> and <arch/cpu.h>, respectively. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@28221 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
6bbe7eb8caad128659bb2edc617d0ccb2f2d89dd |
|
07-Oct-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* smp.c -> smp.cpp * Added smp_send_multicast_ici(), which sends the message to all CPUs specified via a mask. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@27910 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
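Target selection for a mask-based multicast ICI amounts to iterating the set bits of the CPU mask, normally skipping the sending CPU itself (and, per the later r28223 change, masking out CPUs that don't exist). A sketch with a hypothetical `multicast_targets()` helper; the real smp_send_multicast_ici() sends the message rather than returning a list, and this sketch assumes fewer than 32 CPUs:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Return the CPU indices a multicast ICI with the given mask would target.
// Assumes cpuCount < 32 so the 32-bit mask arithmetic below is valid.
static std::vector<int> multicast_targets(uint32_t cpuMask, int currentCPU,
	int cpuCount)
{
	cpuMask &= ~(1u << currentCPU);      // never message ourselves
	cpuMask &= (1u << cpuCount) - 1;     // mask out non-existing CPUs

	std::vector<int> targets;
	for (int i = 0; i < cpuCount; i++) {
		if (cpuMask & (1u << i))
			targets.push_back(i);
	}
	return targets;
}
```

For example, mask 0b1011 sent from CPU 0 on a 4-CPU machine targets CPUs 1 and 3; a mask with every bit set, sent from CPU 1 on a 2-CPU machine, targets only CPU 0.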