History log of /seL4-camkes-master/projects/musllibc/src/thread/__lock.c
Revision Date Author Comments
# e803829e 20-Sep-2013 Rich Felker <dalias@aerifal.cx>

fix potential deadlock bug in libc-internal locking logic

if a multithreaded program became non-multithreaded (i.e. all other
threads exited) while one thread held an internal lock, the remaining
thread would fail to release the lock. if the program then became
multithreaded again at a later time, any further attempts to obtain
the lock would deadlock permanently.

the underlying cause is that the value of libc.threads_minus_1 at
unlock time might not match the value at lock time. one solution
would be to return a flag to the caller indicating whether the lock
was taken and needs to be unlocked, but there is a simpler solution:
using
the lock itself as such a flag.

note that this flag is not needed anyway for correctness; if the lock
is not held, the unlock code is harmless. however, the memory
synchronization properties associated with a_store are costly on some
archs, so it's best to avoid executing the unlock code when it is
unnecessary.
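
A sketch of the resulting logic, for illustration rather than as the
verbatim commit (a_swap, a_store, __wait and __wake are musl's
internal atomic and futex-wait helpers; l[0] is the lock word and
l[1] a waiter count):

    void __lock(volatile int *l)
    {
        /* while single-threaded, skip taking the lock entirely */
        if (libc.threads_minus_1)
            while (a_swap(l, 1)) __wait(l, l+1, 1, 1);
    }

    void __unlock(volatile int *l)
    {
        /* test the lock word itself rather than threads_minus_1:
         * a lock taken while multithreaded is then always released,
         * even if the process has since become single-threaded */
        if (l[0]) {
            a_store(l, 0);
            if (l[1]) __wake(l, 1, 1);
        }
    }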


# 4750cf42 24-Apr-2012 Rich Felker <dalias@aerifal.cx>

ditch the priority inheritance locks; use malloc's version of lock

i did some testing trying to switch malloc to use the new internal
lock with priority inheritance, and my malloc contention test got
20-100 times slower. if priority inheritance futexes are this slow,
it's simply too high a price to pay for avoiding priority inversion.
maybe we can consider them somewhere down the road once the kernel
folks get their act together on this (and preferably don't link it to
glibc's inefficient lock API)...

as such, i've switched __lock to use malloc's implementation of
lightweight locks, and updated all the users of the code to use an
array with a waiter count for their locks. this should give optimal
performance in the vast majority of cases, and it's simple.
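
As an illustration of that convention, a caller might look like the
following (the caller and its lock name are hypothetical):

    /* hypothetical user of the internal lock: a two-int array whose
     * second slot is the waiter count used by the wait/wake helpers */
    static volatile int map_lock[2];

    void locked_operation(void)
    {
        __lock(map_lock);
        /* ... critical section ... */
        __unlock(map_lock);
    }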

malloc is still using its own internal copy of the lock code because
it seems to yield measurably better performance with -O3 when it's
inlined (20% or more difference in the contention stress test).


# e7655ed3 24-Apr-2012 Rich Felker <dalias@aerifal.cx>

internal locks: new owner of contended lock must set waiters flag

this bug probably would have gone unnoticed since the affected code
is only used as a fallback on systems where priority-inheritance
locking fails. unfortunately the approach taken (having the new owner
of a contended lock re-set the waiters flag unconditionally) results
in one spurious wake syscall on the final unlock, when there are no
waiters remaining. the alternative (possibly better) would be to use
broadcast wakes instead of reflagging the waiter unconditionally, and
let each waiter reflag itself; this saves one syscall at the expense
of invoking the "thundering herd" effect (worse performance
degradation) when there are many waiters.
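
A self-contained sketch of the protocol this describes, using GCC
atomic builtins and a raw futex wrapper in place of musl's internal
primitives (all names here are illustrative): the lock word is 0 when
unlocked, 1 when locked, 2 when locked with waiters, and the
contended path always stores 2 so the waiters flag survives a change
of owner.

    #include <linux/futex.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static void futex_op(int *addr, int op, int val)
    {
        syscall(SYS_futex, addr, op, val, 0, 0, 0);
    }

    static void lock(int *l)
    {
        int expect = 0;
        /* uncontended fast path: 0 -> 1 */
        if (__atomic_compare_exchange_n(l, &expect, 1, 0,
            __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
            return;
        /* contended: the new owner stores 2, not 1, because it
         * cannot tell whether other waiters remain */
        while (__atomic_exchange_n(l, 2, __ATOMIC_ACQUIRE))
            futex_op(l, FUTEX_WAIT_PRIVATE, 2);
    }

    static void unlock(int *l)
    {
        /* wake one waiter only if the flag was set; the final
         * unlock still issues one spurious wake, as noted above */
        if (__atomic_exchange_n(l, 0, __ATOMIC_RELEASE) == 2)
            futex_op(l, FUTEX_WAKE_PRIVATE, 1);
    }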

ideally we would be able to update all of our locks to use an array of
two ints rather than a single int, and use a separate counter system
like proper mutexes use; then we could avoid all spurious wake calls
without resorting to broadcasts. however, it's not clear to me that
priority inheritance futexes support this usage. the kernel sets the
waiters flag for them (just like we're doing now) and i can't tell if
it's safe to bypass the kernel when unlocking just because we know
(from private data, the waiter count) that there are no waiters. this
is something that could be explored in the future.


# f34d0ea5 24-Apr-2012 Rich Felker <dalias@aerifal.cx>

new internal locking primitive; drop spinlocks

we use priority inheritance futexes if possible so that the library
cannot hit internal priority inversion deadlocks in the presence of
realtime priority scheduling (full support to be added later).
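
A rough sketch of the priority-inheritance fast path (FUTEX_LOCK_PI
and FUTEX_UNLOCK_PI are the real kernel operations, but the
surrounding code is an assumed illustration, not musl's
implementation): the lock word holds the owner's TID, and on
contention the kernel queues waiters and boosts the owner's priority.

    #include <linux/futex.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* lock word: 0 when free, otherwise the owner's TID, with
     * FUTEX_WAITERS ORed in by the kernel when threads are queued */
    static void pi_lock(int *l)
    {
        int expect = 0;
        int tid = syscall(SYS_gettid);
        /* uncontended fast path: 0 -> our tid, no syscall */
        if (__atomic_compare_exchange_n(l, &expect, tid, 0,
            __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
            return;
        /* contended: the kernel queues us with priority inheritance */
        syscall(SYS_futex, l, FUTEX_LOCK_PI, 0, 0, 0, 0);
    }

    static void pi_unlock(int *l)
    {
        int tid = syscall(SYS_gettid);
        /* the fast path succeeds only if no waiters bit is set */
        if (__atomic_compare_exchange_n(l, &tid, 0, 0,
            __ATOMIC_RELEASE, __ATOMIC_RELAXED))
            return;
        syscall(SYS_futex, l, FUTEX_UNLOCK_PI, 0, 0, 0, 0);
    }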


# 813d3783 16-Sep-2011 Rich Felker <dalias@aerifal.cx>

use a_swap rather than old name a_xchg


# 6232b96f 13-Jun-2011 Rich Felker <dalias@aerifal.cx>

minor locking optimizations


# c2cd25bf 06-Apr-2011 Rich Felker <dalias@aerifal.cx>

consistency: change all remaining syscalls to use SYS_ rather than __NR_ prefix


# 685e40bb 19-Mar-2011 Rich Felker <dalias@aerifal.cx>

syscall overhaul part two - unify public and internal syscall interface

with this patch, the syscallN() functions are no longer needed; a
variadic syscall() macro allows syscalls with anywhere from 0 to 6
arguments to be made with a single macro name. also, manually casting
each non-integer argument with (long) is no longer necessary; the
casts are hidden in the macros.
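
The dispatch can be illustrated with the standard argument-counting
trick (all names below are hypothetical stand-ins, not musl's actual
macros): the count of arguments after the syscall number is pasted
onto a base name to pick a fixed-arity helper, and those helpers
supply the (long) casts.

    #include <stdio.h>

    #define NARGS_X(a,b,c,d,e,f,g,h,n,...) n
    /* yields the number of arguments after the syscall number */
    #define NARGS(...) NARGS_X(__VA_ARGS__,7,6,5,4,3,2,1,0,)
    #define CONCAT_X(a,b) a##b
    #define CONCAT(a,b) CONCAT_X(a,b)
    /* MY_SYSCALL(n, ...) expands to my_syscallK(n, ...) */
    #define MY_SYSCALL(...) \
        CONCAT(my_syscall, NARGS(__VA_ARGS__))(__VA_ARGS__)
    /* the fixed-arity helpers hide the (long) casts that callers
     * previously had to write by hand */
    #define my_syscall1(n,a)   my_raw(n, (long)(a), 0)
    #define my_syscall2(n,a,b) my_raw(n, (long)(a), (long)(b))

    static long my_raw(long n, long a, long b)
    {
        printf("syscall %ld(%ld, %ld)\n", n, a, b);
        return 0;
    }

    int main(void)
    {
        MY_SYSCALL(1, 10);      /* dispatches to my_syscall1 */
        MY_SYSCALL(2, 10, 20);  /* dispatches to my_syscall2 */
        return 0;
    }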

some source files which depended on being able to define the old macro
SYSCALL_RETURNS_ERRNO have been modified to directly use __syscall()
instead of syscall(). references to SYSCALL_SIGSET_SIZE and SYSCALL_LL
have also been changed.

x86_64 has not been tested, and may need a follow-up commit to fix any
minor bugs/oversights.


# 0b44a031 11-Feb-2011 Rich Felker <dalias@aerifal.cx>

initial check-in, version 0.5.0