History log of /seL4-test-master/projects/musllibc/src/thread/pthread_mutex_unlock.c
Revision Date Author Comments
# 384d103d 27-Jun-2016 Rich Felker <dalias@aerifal.cx>

fix failure to obtain EOWNERDEAD status for process-shared robust mutexes

Linux's documentation (robust-futex-ABI.txt) claims that, when a
process dies with a futex on the robust list, bit 30 (0x40000000) is
set to indicate the status. however, what actually happens is that
bits 0-30 are replaced with the value 0x40000000, i.e. bits 0-29
(containing the old owner tid) are cleared at the same time bit 30 is
set.

our userspace-side code for robust mutexes was written based on that
documentation, assuming that kernel would never produce a futex value
of 0x40000000, since the low (owner) bits would always be non-zero.
commit d338b506e39b1e2c68366b12be90704c635602ce introduced this
assumption explicitly while fixing another bug in how non-recoverable
status for robust mutexes was tracked. presumably the tests conducted
at that time only checked non-process-shared robust mutexes, which are
handled in pthread_exit (which implemented the documented kernel
protocol, not the actual one) rather than by the kernel.

change pthread_exit robust list processing to match the kernel
behavior, clearing bits 0-29 while setting bit 30, and use the value
0x7fffffff instead of 0x40000000 to encode non-recoverable status. the
choice of value here is arbitrary; any value with at least one of bits
0-29 set should work just as well.
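
the difference between the documented and actual kernel behavior, and
the new encoding, can be sketched as follows (the macro and function
names here are illustrative only, not musl's internals):

    #define OWNER_DEAD      0x40000000  /* bit 30: owner-died flag */
    #define TID_MASK        0x3fffffff  /* bits 0-29: owner tid */
    #define NOT_RECOVERABLE 0x7fffffff  /* new non-recoverable value */

    /* what robust-futex-ABI.txt claims happens on owner death: */
    static int documented_owner_died(int lockval)
    {
        return lockval | OWNER_DEAD;    /* old tid kept in bits 0-29 */
    }

    /* what the kernel actually does: */
    static int actual_owner_died(int lockval)
    {
        (void)lockval;
        return OWNER_DEAD;              /* bits 0-29 cleared as well */
    }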


# f08ab9e6 10-Apr-2015 Rich Felker <dalias@aerifal.cx>

redesign and simplify vmlock system

this global lock allows certain unlock-type primitives to exclude
mmap/munmap operations which could change the identity of virtual
addresses while references to them still exist.

the original design mistakenly assumed mmap/munmap would conversely
need to exclude the same operations which exclude mmap/munmap, so the
vmlock was implemented as a sort of 'symmetric recursive rwlock'. this
turned out to be unnecessary.

commit 25d12fc0fc51f1fae0f85b4649a6463eb805aa8f already shortened the
interval during which mmap/munmap held their side of the lock, but
left the inappropriate lock design and some inefficiency.

the new design uses a separate function, __vm_wait, which does not
hold any lock itself and only waits for lock users which were already
present when it was called to release the lock. this is sufficient
because of the way operations that need to be excluded are sequenced:
the "unlock-type" operations using the vmlock need only block
mmap/munmap operations that are precipitated by (and thus sequenced
after) the atomic-unlock they perform while holding the vmlock.

this allows for a spectacular lack of synchronization in the __vm_wait
function itself.
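
a sketch of the resulting design, using musl's internal atomics
(a_inc, a_fetch_add) and futex wrappers (__wait, __wake); details may
differ slightly from the actual vmlock.c:

    static volatile int vmlock[2]; /* [0]=user count, [1]=waiters */

    /* taken by unlock-type primitives around their atomic unlock */
    void __vm_lock(void)
    {
        a_inc(vmlock);
    }

    void __vm_unlock(void)
    {
        if (a_fetch_add(vmlock, -1) == 1 && vmlock[1])
            __wake(vmlock, -1, 1);
    }

    /* called by mmap/munmap: holds nothing itself, and returns once
     * the users already present when it was called are gone; users
     * arriving later are irrelevant, since they are sequenced after
     * the mapping change being protected */
    void __vm_wait(void)
    {
        int tmp;
        while ((tmp = vmlock[0]))
            __wait(vmlock, vmlock+1, tmp, 1);
    }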


# df7d0dfb 31-Aug-2014 Jens Gustedt <Jens.Gustedt@inria.fr>

use weak symbols for the POSIX functions that will be used by C threads

The intent of this is to avoid name space pollution of the C threads
implementation.

This has two sides to it. First, we have to provide symbols that the
C threads implementation can use without polluting the POSIX name
space. Second, we have to clean up some internal uses of POSIX
functions so that they don't implicitly drag in such symbols.
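
The pattern, roughly: the implementation moves to a name in the
implementation-reserved namespace, and the POSIX name becomes a weak
alias that the C threads code never references (weak_alias is musl's
existing macro; the function body here is elided):

    #include <pthread.h>

    #define weak_alias(old, new) \
        extern __typeof(old) new __attribute__((__weak__, __alias__(#old)))

    int __pthread_mutex_unlock(pthread_mutex_t *m)
    {
        /* ... actual unlock logic ... */
        return 0;
    }

    /* mtx_unlock can call __pthread_mutex_unlock directly, so linking
     * it drags in no pthread_* symbol */
    weak_alias(__pthread_mutex_unlock, pthread_mutex_unlock);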


# de7e99c5 16-Aug-2014 Rich Felker <dalias@aerifal.cx>

make pointers used in robust list volatile

when manipulating the robust list, the order of stores matters,
because the code may be asynchronously interrupted by a fatal signal
and the kernel will then access the robust list in what is essentially
an async-signal context.

previously, aliasing considerations made it seem unlikely that a
compiler could reorder the stores, but proving that they could not be
reordered incorrectly would have been extremely difficult. instead
I've opted to make all the pointers used as part of the robust list,
including those in the robust list head and in the individual mutexes,
volatile.

in addition, the format of the robust list has been changed to point
back to the head at the end, rather than ending with a null pointer.
this is to match the documented kernel robust list ABI. the null
pointer, which was previously used, only worked because faults during
access terminate the robust list processing.
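
the head layout this results in, sketched with the kernel-ABI field
roles spelled out (musl's actual struct is a member of struct
pthread):

    struct robust_list_head_sketch {
        /* first entry; the last entry points back here, and an empty
         * list points here as well (circular, per the kernel ABI) */
        volatile void *volatile head;
        /* offset from a list entry to its futex (lock) word */
        long off;
        /* entry currently being locked/unlocked, so the kernel can
         * recover if death occurs mid-operation */
        volatile void *volatile pending;
    };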


# d338b506 16-Aug-2014 Rich Felker <dalias@aerifal.cx>

fix robust mutex unrecoverable status, and related clean-up

a robust mutex should not enter the unrecoverable status until it is
unlocked without first being marked consistent. previously, flag 8 in the type
was used as an indication of unrecoverable, but only honored after
successful locking; this resulted in a race window where the
unrecoverable mutex could appear to a second thread as locked/busy
again while the first thread was in the process of observing it as
unrecoverable.

now, flag 8 is used to mean that the mutex is in the process of being
recovered, but not yet marked consistent. the flag only takes effect
in pthread_mutex_unlock, where it causes the value 0x40000000 (owner
dead flag, with old owner tid 0, an otherwise impossible state) to be
stored in the lock. subsequent lock attempts will interpret this state
as unrecoverable.
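
the unlock-side effect, as a self-contained sketch (a_swap here
stands in for musl's atomic exchange; field names are musl-like but
simplified):

    struct mutex_sketch { volatile int lock; int type; };

    static int a_swap(volatile int *p, int v)
    {
        return __atomic_exchange_n(p, v, __ATOMIC_SEQ_CST);
    }

    static void unlock_store(struct mutex_sketch *m)
    {
        int new = 0;
        /* flag 8: lock was acquired with EOWNERDEAD, but the owner
         * never called pthread_mutex_consistent */
        if (m->type & 8)
            new = 0x40000000; /* owner-died with tid 0 */
        a_swap(&m->lock, new); /* atomic release */
        /* later lock attempts see 0x40000000 and report the mutex
         * unrecoverable */
    }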


# fffc5cda 16-Aug-2014 Rich Felker <dalias@aerifal.cx>

fix false ownership of mutexes due to tid reuse, using robust list

per the resolution of Austin Group issue 755, the POSIX requirement
that ownership be enforced for recursive and error-checking mutexes
does not allow a random new thread to acquire ownership of an orphaned
mutex just because it happened to be assigned the same tid as the
original owner that exited with the mutex locked.

one possible fix for this issue would be to prevent the kernel
thread from terminating when it exits with mutexes held, permanently
reserving the tid against reuse. however, this does not solve the
problem for
process-shared mutexes where lifetime cannot be controlled, so it was
not used.

the alternate approach I've taken is to reuse the robust mutex system
for non-robust recursive and error-checking mutexes. when a thread
exits, the kernel (or the new userspace robust-list code added in
commit b092f1c5fa9c048e12d002c7b972df5ecbe96d1d) will set the
owner-died bit for these orphaned mutexes, but since the mutex-type is
not robust, pthread_mutex_trylock will not allow a new owner to
acquire them. instead, they remain in a state of being permanently
locked, as desired.
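
on the trylock side, the effect is roughly the following check,
reusing the mutex_sketch type from the entry above (flag 4 marking a
robust type is musl-like but simplified):

    #include <errno.h>

    static int trylock_check(struct mutex_sketch *m)
    {
        int old = m->lock;
        if (old & 0x40000000) {     /* previous owner died */
            if (!(m->type & 4))     /* not robust: deny ownership; */
                return EBUSY;       /* stays permanently locked */
            /* robust: acquisition may proceed with EOWNERDEAD */
        }
        return 0;
    }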


# bc09d58c 15-Aug-2014 Rich Felker <dalias@aerifal.cx>

make futex operations use private-futex mode when possible

private-futex uses the virtual address of the futex int directly as
the hash key rather than requiring the kernel to resolve the address
to an underlying backing for the mapping in which it lies. for certain
usage patterns it improves performance significantly.

in many places, the code using futex __wake and __wait operations was
already passing a correct fixed zero or nonzero flag for the priv
argument, so no change was needed at the site of the call, only in the
__wake and __wait functions themselves. in other places, especially
where the process-shared attribute for a synchronization object was
not previously tracked, additional new code is needed. for mutexes,
the only place to store the flag is in the type field, so additional
bit masking logic is needed for accessing the type.

for non-process-shared condition variable broadcasts, the futex
requeue operation is unable to requeue from a private futex to a
process-shared one in the mutex structure, so requeue is simply
disabled in this case by waking all waiters.

for robust mutexes, the kernel always performs a non-private wake when
the owner dies. in order not to introduce a behavioral regression in
non-process-shared robust mutexes (when the owning thread dies), they
are simply forced to be treated as process-shared for now, giving
correct behavior at the expense of performance. this can be fixed by
adding explicit code to pthread_exit to do the right thing for
non-shared robust mutexes in userspace rather than relying on the
kernel to do it, and will be fixed in this way later.

since not all supported kernels have private futex support, the new
code detects EINVAL from the futex syscall and falls back to making
the call without the private flag. no attempt to cache the result is
made; caching it and using the cached value efficiently is somewhat
difficult, and not worth the complexity when the benefits would be
seen only on ancient kernels which have numerous other limitations and
bugs anyway.
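
the fallback in the __wake/__wait wrappers then looks roughly like
this (FUTEX_PRIVATE is musl's name for the kernel's
FUTEX_PRIVATE_FLAG; __syscall is musl's raw syscall wrapper, which
returns a negated errno value on failure):

    static void wake_sketch(volatile int *addr, int cnt, int priv)
    {
        if (priv) priv = FUTEX_PRIVATE;
        /* try the private operation; if the kernel predates private
         * futexes it fails with EINVAL, so retry as a shared op */
        if (__syscall(SYS_futex, addr, FUTEX_WAKE|priv, cnt) == -EINVAL)
            __syscall(SYS_futex, addr, FUTEX_WAKE, cnt);
    }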


# df15168c 10-Jun-2014 Rich Felker <dalias@aerifal.cx>

replace all remaining internal uses of pthread_self with __pthread_self

prior to version 1.1.0, the difference between pthread_self (the
public function) and __pthread_self (the internal macro or inline
function) was that the former would lazily initialize the thread
pointer if it was not already initialized, whereas the latter would
crash in this case. since lazy initialization is no longer supported,
use of pthread_self no longer makes sense; it simply generates larger,
slower code.


# da8d0fc4 17-Aug-2012 Rich Felker <dalias@aerifal.cx>

fix extremely rare but dangerous race condition in robust mutexes

if new shared mappings of files/devices/shared memory can be made
between the time a robust mutex is unlocked and its subsequent removal
from the pending slot in the robust-list header, the kernel can
inadvertently corrupt data in the newly-mapped pages when the process
terminates. I am fixing the bug by using the same global vm lock
mechanism that was used to fix the race condition with unmapping
barriers after pthread_barrier_wait returns.
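
the resulting ordering in the unlock path, sketched with
current-tree function names:

    self->robust_list.pending = &m->_m_next; /* kernel recovery point */
    __vm_lock();                    /* mmap/munmap now wait for us */
    /* ... unlink m from the thread's robust list ... */
    cont = a_swap(&m->_m_lock, new);          /* atomic release */
    self->robust_list.pending = 0;            /* window closed */
    __vm_unlock();                  /* mappings may change again */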


# b6f9974a 02-Oct-2011 Rich Felker <dalias@aerifal.cx>

simplify robust mutex unlock code path

right now it's questionable whether this change is an improvement or
not, but if we later want to support priority inheritance mutexes, it
will be important to have the code paths unified like this to avoid
major code duplication.


# b8688ff8 02-Oct-2011 Rich Felker <dalias@aerifal.cx>

fix crash if pthread_mutex_unlock is called without ever locking

this is valid for error-checking mutexes; otherwise it invokes UB and
would be justified in crashing.


# 7fe58d35 02-Oct-2011 Rich Felker <dalias@aerifal.cx>

use count=0 instead of 1 for recursive mutex with only one lock reference

this simplifies the code paths slightly, but perhaps what's nicer is
that it makes recursive mutexes fully reentrant, i.e. locking and
unlocking from a signal handler works even if the interrupted code was
in the middle of locking or unlocking.
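
the point is that no operation ever needs to update the lock word and
the count together; a sketch of the owner-side paths only (names
simplified):

    /* relock by the current owner: the count alone changes */
    if (m->lock == tid) {
        m->count++;        /* count==0 already means "held once" */
        return 0;
    }

    /* unlock by the owner */
    if (m->count) {        /* still held after this call returns */
        m->count--;
        return 0;
    }
    a_swap(&m->lock, 0);   /* count==0: release the lock word */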


# c68de0be 02-Aug-2011 Rich Felker <dalias@aerifal.cx>

avoid accessing mutex memory after atomic unlock

this change is needed to fix a race condition and ensure that it's
possible to unlock and destroy or unmap the mutex as soon as
pthread_mutex_lock succeeds. POSIX explicitly gives such an example in
the rationale and requires an implementation to allow such usage.


# a1eb8cb5 30-Mar-2011 Rich Felker <dalias@aerifal.cx>

avoid crash on stupid but allowable usage of pthread_mutex_unlock

unlocking an unlocked mutex is not UB for robust or error-checking
mutexes, so we must avoid calling __pthread_self (which might crash
due to lack of thread-register initialization) until after checking
that the mutex is locked.
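
i.e. the order of operations has to be:

    /* a lock word of 0 means the mutex is not locked; report EPERM
     * without touching the thread pointer at all */
    if (!m->_m_lock)
        return EPERM;
    /* safe now: a locked mutex implies some thread has already
     * initialized its thread register by locking it */
    self = __pthread_self();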


# 02084109 30-Mar-2011 Rich Felker <dalias@aerifal.cx>

streamline mutex unlock to remove a useless branch, use a_store to unlock

this roughly halves the cost of pthread_mutex_unlock, at least for
non-robust, normal-type mutexes.

the a_store change is in preparation for future support of archs which
require a memory barrier or special atomic store operation, and also
should prevent the possibility of the compiler misordering writes.
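
for a non-robust, normal-type mutex the unlock path then reduces to
roughly this (a_store is musl's barrier-including atomic store; the
priv argument is as in the private-futex entry above):

    a_store(&m->_m_lock, 0);       /* release, with memory barrier */
    if (m->_m_waiters)             /* wake one waiter if any exist */
        __wake(&m->_m_lock, 1, priv);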


# 047e434e 17-Mar-2011 Rich Felker <dalias@aerifal.cx>

implement robust mutexes

some of this code should be cleaned up, e.g. using macros for some of
the bit flags, masks, etc. nonetheless, the code is believed to be
working and correct at this point.


# 18c7ea80 17-Mar-2011 Rich Felker <dalias@aerifal.cx>

avoid function call to pthread_self in mutex unlock

if the mutex was previously locked, we can assume pthread_self was
already called at the time of locking, and thus that the thread
pointer is initialized.


# b1c43161 16-Mar-2011 Rich Felker <dalias@aerifal.cx>

unify lock and owner fields of mutex structure

this change is necessary to free up one slot in the mutex structure so
that we can use doubly-linked lists in the implementation of robust
mutexes.


# 4820f926 08-Mar-2011 Rich Felker <dalias@aerifal.cx>

fix and optimize non-default-type mutex behavior

problem 1: mutex type from the attribute was being ignored by
pthread_mutex_init, so recursive/errorchecking mutexes were never
being used at all.

problem 2: ownership of recursive mutexes was not being enforced at
unlock time.


# e8827563 17-Feb-2011 Rich Felker <dalias@aerifal.cx>

reorganize pthread data structures and move the definitions to alltypes.h

this allows sys/types.h to provide the pthread types, as required by
POSIX. this design also facilitates forcing ABI-compatible sizes in
the arch-specific alltypes.h, while eliminating the need for
developers changing the internals of the pthread types to poke around
with arch-specific headers they may not be able to test.


# 0b44a031 11-Feb-2011 Rich Felker <dalias@aerifal.cx>

initial check-in, version 0.5.0