History log of /seL4-camkes-master/projects/musllibc/src/thread/__wait.c
Revision Date Author Comments
# 339cc250 09-Feb-2015 Szabolcs Nagy <nsz@port70.net>

use the internal macro name FUTEX_PRIVATE in __wait

the name was recently added for the setxid/synccall rework,
so use the name now that we have it.


# f5fb20b0 25-Aug-2014 Rich Felker <dalias@aerifal.cx>

refrain from spinning on locks when there is already a waiter

if there is already a waiter for a lock, spinning on the lock is
essentially an attempt to steal it from whichever waiter would obtain
it via any priority rules in place, and is therefore undesirable. in
the current implementation, there is always an inherent race window at
unlock during which a newly-arriving thread may steal the lock from
the existing waiters, but we should aim to keep this window minimal
rather than enlarging it.


# b8a9c90e 25-Aug-2014 Rich Felker <dalias@aerifal.cx>

sanitize number of spins in userspace before futex wait

the previous spin limit of 10000 was utterly unreasonable.
empirically, it could consume up to 200000 cycles, whereas a failed
futex wait (EAGAIN) typically takes 1000 cycles or less, and even a
true wait/wake round seems much less expensive.

the new counts (100 for general wait, 200 in barrier) were simply
chosen to be in the range of what's reasonable without having adverse
effects on casual micro-benchmark tests I have been running. they may
still be too high, from a standpoint of not wasting cpu cycles, but at
least they're a lot better than before. rigorous testing across
different archs and cpu models should be performed at some point to
determine whether further adjustments should be made.
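Taken together, the two entries above govern the userspace spin phase of __wait: spin only a bounded number of times, and not at all once another thread has already registered as a waiter. A rough sketch of that phase, using the internal names (addr, waiters, val, a_spin) as they appear in this file; the exact code may differ, and the futex-wait half is sketched after the bc09d58c entry below:

    int spins = 100;                        /* bounded spin before the futex wait (b8a9c90e) */
    /* skip spinning entirely if a waiter is already registered (f5fb20b0) */
    while (spins-- && (!waiters || !*waiters)) {
        if (*addr == val) a_spin();         /* cpu-relax hint, then re-check the futex word */
        else return;                        /* value changed while spinning: no need to sleep */
    }
    /* fall through: register as a waiter and block in the kernel */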


# b8ca9eb5 22-Aug-2014 Rich Felker <dalias@aerifal.cx>

fix fallback checks for kernels without private futex support

for unknown futex commands, the kernel produces ENOSYS, not EINVAL.


# bc09d58c 15-Aug-2014 Rich Felker <dalias@aerifal.cx>

make futex operations use private-futex mode when possible

private-futex uses the virtual address of the futex int directly as
the hash key rather than requiring the kernel to resolve the address
to an underlying backing for the mapping in which it lies. for certain
usage patterns it improves performance significantly.

in many places, the code using futex __wake and __wait operations was
already passing a correct fixed zero or nonzero flag for the priv
argument, so no change was needed at the site of the call, only in the
__wake and __wait functions themselves. in other places, especially
where the process-shared attribute for a synchronization object was
not previously tracked, additional new code is needed. for mutexes,
the only place to store the flag is in the type field, so additional
bit masking logic is needed for accessing the type.

for non-process-shared condition variable broadcasts, the futex
requeue operation is unable to requeue from a private futex to a
process-shared one in the mutex structure, so requeue is simply
disabled in this case by waking all waiters.

for robust mutexes, the kernel always performs a non-private wake when
the owner dies. in order not to introduce a behavioral regression in
non-process-shared robust mutexes (when the owning thread dies), they
are simply forced to be treated as process-shared for now, giving
correct behavior at the expense of performance. this can be fixed by
adding explicit code to pthread_exit to do the right thing for
non-shared robust mutexes in userspace rather than relying on the
kernel to do it, and will be fixed in this way later.

since not all supported kernels have private futex support, the new
code detects EINVAL from the futex syscall and falls back to making
the call without the private flag. no attempt to cache the result is
made; caching it and using the cached value efficiently is somewhat
difficult, and not worth the complexity when the benefits would be
seen only on ancient kernels which have numerous other limitations and
bugs anyway.
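In __wait itself, the visible effect of this change, combined with the ENOSYS correction from b8ca9eb5 above, is roughly the following pattern around the blocking call. This is a hedged sketch rather than the verbatim source; at the time of this commit the private flag was still a literal constant, and the FUTEX_PRIVATE name comes from the 339cc250 entry at the top:

    if (priv) priv = FUTEX_PRIVATE;      /* callers pass a fixed zero or nonzero priv flag */
    if (waiters) a_inc(waiters);         /* tell spinners a waiter now exists */
    while (*addr == val) {
        /* try the private form first; on ENOSYS (old kernel without
           private-futex support) retry without the private flag */
        if (__syscall(SYS_futex, addr, FUTEX_WAIT|priv, val, 0) == -ENOSYS)
            __syscall(SYS_futex, addr, FUTEX_WAIT, val, 0);
    }
    if (waiters) a_dec(waiters);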


# eca335fc 06-Jan-2014 Rich Felker <dalias@aerifal.cx>

eliminate explicit (long) casts when making syscalls

this practice came from very early, before internal/syscall.h defined
macros that could accept pointer arguments directly and handle them
correctly. aside from being ugly and unnecessary, it looks like it
will be problematic when we add support for 32-bit ABIs on archs where
registers (and syscall arguments) are 64-bit, e.g. x32 and mips n32.
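For a futex call like the ones above, the change is purely at the call site; a hypothetical before/after, assuming the macros in internal/syscall.h now cast pointer arguments themselves:

    /* before: pointer argument cast to long by hand */
    __syscall(SYS_futex, (long)addr, FUTEX_WAIT, val, 0);
    /* after: the syscall macros accept the pointer directly */
    __syscall(SYS_futex, addr, FUTEX_WAIT, val, 0);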


# 77f15d10 06-May-2011 Rich Felker <dalias@aerifal.cx>

reduce some ridiculously large spin counts

these should be tweaked according to testing. offhand i know 1000 is
too low and 5000 is likely to be sufficiently high. consider trying to
add futexes to file locking, too...


# c2cd25bf 06-Apr-2011 Rich Felker <dalias@aerifal.cx>

consistency: change all remaining syscalls to use SYS_ rather than __NR_ prefix


# 685e40bb 19-Mar-2011 Rich Felker <dalias@aerifal.cx>

syscall overhaul part two - unify public and internal syscall interface

with this patch, the syscallN() functions are no longer needed; a
variadic syscall() macro allows syscalls with anywhere from 0 to 6
arguments to be made with a single macro name. also, manually casting
each non-integer argument with (long) is no longer necessary; the
casts are hidden in the macros.

some source files which depended on being able to define the old macro
SYSCALL_RETURNS_ERRNO have been modified to directly use __syscall()
instead of syscall(). references to SYSCALL_SIGSET_SIZE and SYSCALL_LL
have also been changed.

x86_64 has not been tested, and may need a follow-up commit to fix any
minor bugs/oversights.
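As a usage illustration of the unified interface (hypothetical call sites, not taken from a specific file): one variadic macro name covers any argument count, pointer arguments need no manual cast, and __syscall() keeps the raw-return behavior that SYSCALL_RETURNS_ERRNO used to select:

    int futex_word = 0;
    /* errno-setting form: the same macro handles 0 to 6 arguments */
    syscall(SYS_futex, &futex_word, FUTEX_WAKE, 1);
    /* raw form: returns a negative errno value on failure instead of setting errno */
    long r = __syscall(SYS_futex, &futex_word, FUTEX_WAKE, 1);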


# 0b44a031 11-Feb-2011 Rich Felker <dalias@aerifal.cx>

initial check-in, version 0.5.0