History log of /freebsd-9.3-release/lib/libthr/thread/thr_mutex.c
Revision Date Author Comments
# 267654 19-Jun-2014 gjb

Copy stable/9 to releng/9.3 as part of the 9.3-RELEASE cycle.

Approved by: re (implicit)
Sponsored by: The FreeBSD Foundation

# 236275 30-May-2012 davidxu

MFC r236135:

Return EBUSY for PTHREAD_MUTEX_ADAPTIVE_NP too when the mutex could not
be acquired.

PR: 168317
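A minimal caller-side sketch of the behaviour this fix standardizes (illustrative, not code from the commit): with an adaptive mutex, pthread_mutex_trylock() now reports contention as EBUSY just like the other mutex types. The mutex is assumed to have been initialized elsewhere as PTHREAD_MUTEX_ADAPTIVE_NP.

    #include <errno.h>
    #include <pthread.h>

    /* Sketch only: assumes 'm' was initialized as PTHREAD_MUTEX_ADAPTIVE_NP. */
    static int
    try_enter(pthread_mutex_t *m)
    {
        int rc = pthread_mutex_trylock(m);

        if (rc == 0) {
            /* ... critical section ... */
            pthread_mutex_unlock(m);
        } else if (rc == EBUSY) {
            /* Held by another thread; back off or do other work. */
        }
        return (rc);
    }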


# 225736 22-Sep-2011 kensmith

Copy head to stable/9 as part of 9.0-RELEASE release cycle.

Approved by: re (implicit)


# 217047 06-Jan-2011 davidxu

Return 0 instead of a garbage value.

Found by: clang static analyzer


# 216687 24-Dec-2010 davidxu

Always clear the PMUTEX_FLAG_DEFERED flag when unlocking, as it is only
significant for the lock owner.


# 216641 22-Dec-2010 davidxu

MFp4:

- Add the flags CVWAIT_ABSTIME and CVWAIT_CLOCKID for the umtx kernel-based
condition variable; this should eliminate an extra system call to get the
current time.

- Add the sub-function UMTX_OP_NWAKE_PRIVATE to wake up N channels in a single
system call. Create a userland sleep queue for condition variables; in most
cases a thread will wait in this queue, and pthread_cond_signal will defer the
thread wakeup until the mutex is unlocked. This avoids an extra system call
and an extra context switch in the window between pthread_cond_signal and
pthread_mutex_unlock.

The changes are part of the process-shared mutex project.


# 214410 27-Oct-2010 davidxu

Remove the locking and unlocking in pthread_mutex_destroy, because it cannot
fix race conditions in application code; as a result, the problem described
in PR threads/151767 is avoided.


# 213257 29-Sep-2010 davidxu

Check for an invalid mutex in _mutex_cv_unlock.


# 213241 28-Sep-2010 davidxu

In the current code, a statically initialized object and a destroyed object
have the same null value, so the code cannot distinguish between them. To fix
the problem, a destroyed object is now assigned a non-null value and will be
rejected by some pthread functions.
PTHREAD_ADAPTIVE_MUTEX_INITIALIZER_NP is changed to the number 1, so that
adaptive mutexes can be statically initialized correctly.
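A hedged illustration of the user-visible effect described above (not from the commit): the adaptive static initializer now works, and a destroyed mutex is left holding a distinct sentinel, so later use can be rejected (typically with EINVAL) instead of being mistaken for a statically initialized mutex.

    #include <pthread.h>

    /* Statically initialized adaptive mutex; works once the initializer is 1. */
    static pthread_mutex_t adaptive = PTHREAD_ADAPTIVE_MUTEX_INITIALIZER_NP;

    static void
    demo(void)
    {
        pthread_mutex_t m;

        pthread_mutex_lock(&adaptive);
        pthread_mutex_unlock(&adaptive);

        pthread_mutex_init(&m, NULL);
        pthread_mutex_destroy(&m);
        /*
         * The destroyed mutex no longer looks like a statically
         * initialized (null) one, so further use can be rejected
         * rather than silently re-initialized.
         */
        if (pthread_mutex_lock(&m) != 0)
            ; /* expected: the destroyed mutex is refused */
    }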


# 212077 01-Sep-2010 davidxu

Change the atfork lock from a mutex to an rwlock, and make the mutexes used by
the malloc() module private-type; when a private-type mutex is locked or
unlocked, a thread critical region is entered or left. These changes make
fork() async-signal safe, as required by POSIX. Note that the user's atfork
handlers still need to be async-signal safe, but that is not libthr's problem;
it is the user's responsibility.
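A small usage sketch consistent with the note above (illustrative, not from the commit): atfork handlers that only take and release a lock around fork(), keeping the application's own state consistent in the child. The handlers themselves must remain async-signal safe.

    #include <pthread.h>

    static pthread_mutex_t app_lock = PTHREAD_MUTEX_INITIALIZER;

    static void prepare(void) { pthread_mutex_lock(&app_lock); }
    static void parent(void)  { pthread_mutex_unlock(&app_lock); }
    static void child(void)   { pthread_mutex_unlock(&app_lock); }

    static void
    install_fork_handlers(void)
    {
        /* Registered handlers run around every fork(). */
        pthread_atfork(prepare, parent, child);
    }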


# 179970 24-Jun-2008 davidxu

Add two commands to the _umtx_op system call to allow a simple mutex to be
locked and unlocked completely in userland. Locking and unlocking the mutex in
userland reduces the total time a mutex is held by a thread. In some
application code a mutex only protects a small piece of code whose execution
time is shorter than a simple system call, yet if lock contention happens, the
current implementation forces the lock holder to extend its hold time and
enter the kernel to unlock the mutex. The change avoids this disadvantage: the
unlocking thread first sets the mutex to the free state and only then enters
the kernel to wake one waiter up. This improves performance dramatically in
some sysbench mutex tests.

Tested by: kris
Sounds great: jeff
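The unlock path described above can be sketched generically with C11 atomics. This is an illustration of the idea only, not the libthr/umtx code; kernel_wake_one() is a hypothetical stand-in for the kernel's wake-one operation.

    #include <stdatomic.h>

    enum { UNLOCKED, LOCKED, CONTESTED };

    /* Hypothetical stand-in for the kernel's wake-one-waiter operation. */
    extern void kernel_wake_one(atomic_int *lock_word);

    static void
    sketch_unlock(atomic_int *lock_word)
    {
        /* Release the lock entirely in userland first... */
        int old = atomic_exchange(lock_word, UNLOCKED);

        /* ...then enter the kernel only if someone is actually waiting,
         * so the lock is already free while the wakeup is in flight. */
        if (old == CONTESTED)
            kernel_wake_one(lock_word);
    }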


# 179411 29-May-2008 davidxu

- Reduce function call overhead for uncontended case.
- Remove unused flags MUTEX_FLAGS_* and their code.
- Check validity of the timeout parameter in mutex_self_lock().


# 178587 26-Apr-2008 kris

Increase the default MUTEX_ADAPTIVE_SPINS to 2000; further testing showed
that 200 was too short to give good adaptive performance.

Reviewed by: jeff
MFC after: 1 week


# 177600 25-Mar-2008 ru

Fixed mis-implementation of pthread_mutex_get{spin,yield}loops_np().

Reviewed by: davidxu


# 176275 14-Feb-2008 des

_pthread_mutex_isowned_np(): use a more reliable method; the current code
will work in simple cases, but may fail in more complicated ones.

Reviewed by: davidxu


# 176059 06-Feb-2008 des

Remove unnecessary prototype.


# 176049 06-Feb-2008 des

Per discussion on -threads, rename _islocked_np() to _isowned_np().


# 175969 04-Feb-2008 des

After careful consideration (and a brief discussion with attilio@), change
the semantics of pthread_mutex_islocked_np() to return true if and only if
the mutex is held by the current thread.

Obviously, change the regression test to match.

MFC after: 2 weeks


# 175958 03-Feb-2008 des

Add pthread_mutex_islocked_np(), a cheap way to verify that a mutex is
locked. This is intended primarily to support the userland equivalent
of the various *_ASSERT_LOCKED() macros we have in the kernel.

MFC after: 2 weeks
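A usage sketch in the spirit of the kernel *_ASSERT_LOCKED() macros mentioned above. The function was later renamed to pthread_mutex_isowned_np() (r176049 above), which is the name used here; on FreeBSD it is declared in <pthread_np.h>.

    #include <assert.h>
    #include <pthread.h>
    #include <pthread_np.h>

    static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;

    static void
    update_state_locked(void)
    {
        /* Caller must hold state_lock. */
        assert(pthread_mutex_isowned_np(&state_lock));
        /* ... modify the shared state ... */
    }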


# 174696 17-Dec-2007 davidxu

Add function prototypes.


# 174585 14-Dec-2007 davidxu

1. Add the function pthread_mutex_setspinloops_np to tune a mutex's spin
loop count.
2. Add the function pthread_mutex_setyieldloops_np to tune a mutex's yield
loop count (see the usage sketch below).
3. Make the environment variables PTHREAD_SPINLOOPS and PTHREAD_YIELDLOOPS
apply only to tuning PTHREAD_MUTEX_ADAPTIVE_NP mutexes.

# 174535 11-Dec-2007 davidxu

Enclose all code for the macro ENQUEUE_MUTEX in a do-while statement, and
add missing brackets.

MFC after: 1 day


# 174001 27-Nov-2007 jasone

Fix pointer dereferencing problems in _pthread_mutex_init_calloc_cb() that
were obscured by pseudo-opaque pthreads API pointer casting.


# 173967 27-Nov-2007 jasone

Add _pthread_mutex_init_calloc_cb() to libthr and libkse, so that malloc(3)
(part of libc) can use pthreads mutexes without causing infinite recursion
during initialization.


# 173803 21-Nov-2007 davidxu

Convert the ceiling type to an unsigned integer before comparing; this fixes
compiler warnings.


# 173208 30-Oct-2007 davidxu

Avoid adaptive spinning for priority-protected mutexes; the current
implementation always locks them in the kernel.


# 173207 30-Oct-2007 davidxu

Don't do adaptive spinning when running on a UP kernel.


# 173206 30-Oct-2007 davidxu

Restore revision 1.55, kris's adaptive mutex type.


# 173174 30-Oct-2007 kris

Adaptive mutexes should have the same deadlock detection properties that
default (errorcheck) mutexes do.

Noticed by: davidxu


# 173173 30-Oct-2007 davidxu

Add my recent work on adaptive spin mutexes. Two environment variables tune
pthread mutex performance:
1. LIBPTHREAD_SPINLOOPS
If a pthread mutex is being locked by another thread, this environment
variable sets the total number of spin loops before the current thread
sleeps in the kernel; this saves a syscall overhead if the mutex will be
unlocked very soon (well-written application code).
2. LIBPTHREAD_YIELDLOOPS
If a pthread mutex is being locked by other threads, this environment
variable sets the total number of sched_yield() loops before the current
thread sleeps in the kernel. If the mutex is still locked, the current
thread gives up the CPU but does not sleep in the kernel; this means the
current thread does not set the contention bit in the mutex, but lets the
lock owner run again if the owner is on the kernel's run queue. When the
lock owner unlocks the mutex, it then does not need to enter the kernel and
do lots of work to resume the mutex waiters; in some cases this saves a lot
of syscall overhead for the mutex owner. (A sketch of the spin/yield/sleep
strategy follows this entry.)

In my experience LIBPTHREAD_YIELDLOOPS can sometimes improve performance far
more than LIBPTHREAD_SPINLOOPS; it depends on the application. These two
environment variables are global to all pthread mutexes; there is no
interface to set them per mutex. The default values are zero, which means
spinning is turned off by default.
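The spin/yield/sleep strategy described above can be sketched generically as follows. This is an illustration, not the libthr code; try_acquire() and sleep_in_kernel() are hypothetical stand-ins for the userland CAS attempt and the umtx-style blocking wait.

    #include <sched.h>
    #include <stdatomic.h>
    #include <stdbool.h>

    extern bool try_acquire(atomic_int *lock);      /* one userland attempt */
    extern void sleep_in_kernel(atomic_int *lock);  /* blocking wait        */

    static void
    sketch_lock(atomic_int *lock, int spinloops, int yieldloops)
    {
        for (int i = 0; i < spinloops; i++)         /* LIBPTHREAD_SPINLOOPS */
            if (try_acquire(lock))
                return;
        for (int i = 0; i < yieldloops; i++) {      /* LIBPTHREAD_YIELDLOOPS */
            if (try_acquire(lock))
                return;
            /* Give up the CPU without setting the contention bit, so the
             * owner can release the mutex without entering the kernel. */
            sched_yield();
        }
        while (!try_acquire(lock))
            sleep_in_kernel(lock);  /* finally block; the owner must wake us */
    }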


# 173154 29-Oct-2007 kris

Add a new "non-portable" mutex type, PTHREAD_MUTEX_ADAPTIVE_NP. This
is also implemented in glibc and is used by a number of existing
applications (mysql, firefox, etc).

This mutex type is a default mutex with the additional property that
it spins briefly when attempting to acquire a contested lock, doing
trylock operations in userland before entering the kernel to block if
eventually unsuccessful.

The expectation is that applications requesting this mutex type know
that the mutex is likely to be only held for very brief periods, so it
is faster to spin in userland and probably succeed in acquiring the
mutex, than to enter the kernel and sleep, only to be woken up almost
immediately. This can help significantly in certain cases when
pthread mutexes are heavily contended and held for brief durations
(such as mysql).

Spin up to 200 times before entering the kernel, which represents only
a few us on modern CPUs. No performance degradation was observed with
this value and it is sufficient to avoid a large performance drop in
mysql performance in the heavily contended pthread mutex case.

The libkse implementation is a NOP.

Reviewed by: jeff
MFC after: 3 days
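A minimal sketch of requesting the new type through the standard attribute interface (illustrative; PTHREAD_MUTEX_ADAPTIVE_NP is a non-portable extension).

    #include <pthread.h>

    static int
    init_adaptive(pthread_mutex_t *m)
    {
        pthread_mutexattr_t attr;
        int rc;

        pthread_mutexattr_init(&attr);
        /* Spin briefly in userland on contention before sleeping in kernel. */
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ADAPTIVE_NP);
        rc = pthread_mutex_init(m, &attr);
        pthread_mutexattr_destroy(&attr);
        return (rc);
    }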


# 169413 09-May-2007 davidxu

Back out the experimental adaptive spinning mutex code; it is not ready for
production use.


# 165791 05-Jan-2007 davidxu

Insert the mutex at the tail if it has the highest ceiling.


# 165790 05-Jan-2007 davidxu

Oops, don't corrupt the list.


# 165789 05-Jan-2007 davidxu

Check whether the PP mutex is recursive and we have already locked it; place
the mutex in the right order, sorted by priority ceiling.


# 165370 20-Dec-2006 davidxu

Check the environment variable PTHREAD_ADAPTIVE_SPIN; if it is set, use it
as the default spin cycle count.


# 165206 14-Dec-2006 davidxu

Create the inline function _thr_umutex_trylock2 to try only one atomic
operation; if it fails, we call the syscall directly. This saves one atomic
operation per lock contention.


# 164178 11-Nov-2006 davidxu

Move code calculating new inherited priority into single function.


# 162143 08-Sep-2006 davidxu

Use the return value of _thr_umutex_lock instead of zero.


# 161681 28-Aug-2006 davidxu

Use the umutex APIs to implement pthread_mutex. The member pp_mutexq is added
to the pthread structure to keep track of locked PTHREAD_PRIO_PROTECT mutexes.
No real mutex code is changed; the mutex locking and unlocking code should
have the same performance as before.


# 161069 08-Aug-2006 davidxu

Axe unused member field.


# 160426 17-Jul-2006 delphij

Unexpand two TAILQ_FOREACH_SAFE cases.

Ok'ed by: davidxu


# 159165 02-Jun-2006 davidxu

Remove unused member field m_queue.


# 157591 08-Apr-2006 davidxu

Do not check the validity of the timeout if the mutex can be acquired
immediately. Completely drop a recursive mutex in pthread_cond_wait and
restore the recursion count after resumption. Reorganize code to make gcc
generate better code.


# 157457 04-Apr-2006 davidxu

WARNS level 4 cleanup.


# 157194 27-Mar-2006 davidxu

Remove the priority mutex code because it does not work correctly. To make
it work, a turnstile-like mechanism supporting priority propagation and other
realtime scheduling options in the kernel would have to be available to
userland mutexes. For the moment, I just want to keep libthr a simple and
efficient thread library.

Discussed with: deischen, julian


# 156102 28-Feb-2006 davidxu

Reimplement mutex_init to get rid of compile warning.


# 154422 16-Jan-2006 davidxu

Eliminate unused code.


# 154350 14-Jan-2006 davidxu

Enable the mutex inheritance code in mutex_fork; I forgot to turn it on.
While here, add some comments about process-shared mutexes.


# 153595 21-Dec-2005 davidxu

Let _mutex_cv_lock call the internal function mutex_lock_common.


# 153334 12-Dec-2005 davidxu

Remove unused _get_curthread() call.


# 149298 19-Aug-2005 stefanf

- Prefix MUTEX_TYPE_MAX with PTHREAD_ to avoid namespace pollution.
- Remove the macros MUTEX_TYPE_FAST and MUTEX_TYPE_COUNTING_FAST.

OK'ed by: deischen


# 144518 01-Apr-2005 davidxu

Import my recent 1:1 threading work. Improved features include:
1. fast simple-type mutexes.
2. __thread TLS works.
3. asynchronous cancellation works (using a signal).
4. thread synchronization is fully based on umtx; notably, condition
variables and other synchronization objects were rewritten to use
umtx directly. Those objects could be shared between processes via
shared memory, but that requires an ABI change which has not happened yet.
5. the default stack size is increased to 1M on 32-bit platforms and 2M on
64-bit platforms.
As a result, some mysql super-smack benchmarks show massively improved
performance.

Okayed by: jeff, mtm, rwatson, scottl


# 135579 22-Sep-2004 mtm

Remove vestiges of libthr's signal-mangling past. This fixes the last known
problem with mysql on libthr: not being able to kill mysqld.


# 135575 22-Sep-2004 mtm

The SUSv3 specification says that the affected functions MAY FAIL if the
specified mutex is invalid. In spec parlance 'MAY FAIL' means it's
up to the implementor. So, remove the check for NULL pointers, for two
reasons:
1. A mutex may be invalid without necessarily being NULL.
2. If the pointer to the mutex is NULL, core-dumping in the
vicinity of the problem is much, much, much better than failing
in some other part of the code (especially when the application
doesn't check the return value of the function that you oh so
helpfully set to EINVAL).


# 132890 30-Jul-2004 mtm

o Add assertions to verify that stuff that shouldn't happen is not happening.
o In the rwlock code: move a duplicated check inside an if..else to after
the if...else clause.
o When initializing a static rwlock, move the initialization check
inside the lock.
o In thr_setschedparam.c: when breaking out of the trylock...retry-if-busy
loop, make sure to reset the mtx pointer to NULL if the mutex is no longer
in a queue.


# 131431 01-Jul-2004 marcel

Change the thread ID (thr_id_t) used for 1:1 threading from being a
pointer to the corresponding struct thread to the thread ID (lwpid_t)
assigned to that thread. The primary reason for this change is that
libthr now internally uses the same ID as the debugger and the kernel
when referring to a kernel thread. This allows us to implement the
support for debugging without additional translations and/or mappings.

To preserve the ABI, the 1:1 threading syscalls, including the umtx
locking API have not been changed to work on a lwpid_t. Instead the
1:1 threading syscalls operate on long and the umtx locking API has
not been changed except for the contested bit. Previously this was
the least significant bit. Now it's the most significant bit. Since
the contested bit should not be tested by userland, this change is
not expected to be visible. Just to be sure, UMTX_CONTESTED has been
removed from <sys/umtx.h>.

Reviewed by: mtm@
ABI preservation tested on: i386, ia64


# 129484 20-May-2004 mtm

Make libthr async-signal-safe without costly signal masking. The guidelines I
followed are: Only 3 functions (pthread_cancel, pthread_setcancelstate,
pthread_setcanceltype) are required to be async-signal-safe by POSIX. None of
the rest of the pthread api is required to be async-signal-safe. This means
that only the three mentioned functions are safe to use from inside
signal handlers.
However, there are certain system/libc calls that are
cancellation points that a caller may call from within a signal handler,
and since they are cancellation points calls have to be made into libthr
to test for cancellation and exit the thread if necessary. So, the
cancellation test and thread exit code paths must be async-signal-safe
as well. A summary of the changes follows:

o Almost all of the code paths that masked signals, as well as locking the
pthread structure now lock only the pthread structure.
o Signals are masked (and left that way) as soon as a thread enters
pthread_exit().
o The active and dead threads locks now explicitly require that signals
are masked.
o Access to the isdead field of the pthread structure is protected by both
the active and dead list locks for writing. Either one is sufficient for
reading.
o The thread state and type fields have been combined into one three-state
switch to make it easier to read without requiring a lock. It doesn't need
a lock for writing (and therefore for reading either) because only the
current thread can write to it and it is an integer value.
o The thread state field of the pthread structure has been eliminated. It
was an unnecessary field that mostly duplicated the flags field, but
required additional locking that would make a lot more code paths require
signal masking. Any truly unique values (such as PS_DEAD) have been
reborn as separate members of the pthread structure.
o Since the mutex and condvar pthread functions are not async-signal-safe
there is no need to muck about with the wait queues when handling
a signal ...
o ... which also removes the need for wrapping signal handlers and sigaction(2).
o The condvar and mutex async-cancellation code had to be revised as a result
of some of these changes, which resulted in semi-unrelated changes which
would have been difficult to work on as a separate commit, so they are
included as well.

The only part of the changes I am worried about is related to locking for
the pthread joining fields. But, I will take a closer look at them once this
mega-patch is committed.


# 129483 20-May-2004 mtm

Forced commit for rev. 1.26

Bugfix: recursive mutex reference counting.

Noticed by: Michael Bretterklieber <mbretter@inode.at>
Partly submitted by: deischen


# 129482 20-May-2004 mtm



# 127561 29-Mar-2004 mtm

The thread suspend function now returns ETIMEDOUT, not EAGAIN.


# 127485 27-Mar-2004 mtm

Stop using signals for synchronizing threads. The performance penalty
was too much.


# 127454 26-Mar-2004 mtm

o The mutex locking functions aren't normally cancellation points, but
we still have to DTRT when an asynchronously cancellable thread is
cancelled while waiting for a mutex.
o While dequeueing threads waiting on a mutex, don't skip a thread if it
has a cancel pending. Only skip it if it is also async cancellable.


# 125966 18-Feb-2004 mtm

o Refactor and, among other things, get rid of insane nesting levels.
o Fix the mutex priority protocols. Keep separate counts of priority
inheritance and priority protection mutexes to make things easier.
This will not have much effect since this is only the
userland side, and the rest involves kernel scheduling.


# 124719 19-Jan-2004 mtm

Refactor _pthread_mutex_init
o Simplify the logic by removing a lot of unnecessary nesting
o Reduce the number of local variables
o Zero out the allocated structure and get rid of
all the unnecessary setting to 0 and NULL

Refactor _pthread_mutex_destroy
o Simplify the logic by removing a lot of unnecessary nesting
o No need to check the pointer that the mutex attributes point
to; checking the passed-in pointer is enough.


# 123987 30-Dec-2003 mtm

o Implement pthread_mutex_timedlock(), which does not block indefinitely on
a mutex locked by another thread.
o document it: pthread_mutex_timedlock(3)
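A usage sketch of the new interface (illustrative): the timeout is an absolute time, so callers derive it from the current clock.

    #include <errno.h>
    #include <pthread.h>
    #include <time.h>

    static int
    lock_with_timeout(pthread_mutex_t *m, time_t seconds)
    {
        struct timespec abstime;
        int rc;

        clock_gettime(CLOCK_REALTIME, &abstime);
        abstime.tv_sec += seconds;

        rc = pthread_mutex_timedlock(m, &abstime);
        if (rc == ETIMEDOUT) {
            /* The mutex could not be acquired before the deadline. */
        }
        return (rc);
    }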


# 123986 30-Dec-2003 mtm

Make it possible for the library to specify a timeout value when
waiting on a locked mutex. This involves passing a struct timespec
from the pthread mutex locking interfaces all the way down to the
function that suspends the thread until the mutex is released.
The timeout is assumed to be an absolute time (i.e. not relative to
the current time).

Also, in _thread_suspend() make the passed in timespec const.


# 123350 09-Dec-2003 mtm

Fix the wrapper function around signals so that a signal handling
thread on one of the mutex or condition variable queues is removed
from those queues before the real signal handler is called.


# 117277 06-Jul-2003 mtm

Change all instances of THR_LOCK/UNLOCK, etc. to UMTX_*.
It is a more accurate description of the locks they
operate on.


# 117196 03-Jul-2003 mtm

_pthread_mutex_trylock() is another internal libc function that must block
signals.


# 117145 02-Jul-2003 mtm

Begin making libthr async signal safe.

Create a private, single underscore, version of pthread_mutex_unlock for libc.
pthread_mutex_lock already has one. These versions are different from the
ones that applications will link against because they block all signals
from the time a call to lock the mutex is made until it is successfully
unlocked.


# 117127 01-Jul-2003 mtm

Do not attempt to requeue a thread on a mutex queue. A thread may receive
a spurious wakeup from sigtimedwait(), so make sure that the call to the
queueing code is made only once, before entering the loop (not in the loop).
This should fix some fatal errors people are seeing with messages stating
that the thread is already on the mutex queue. These errors may still be
triggered from signal handlers, however, since that part of the code is not
locked down yet.


# 117073 30-Jun-2003 mtm

Catchup with _thread_suspend() changes.


# 117049 29-Jun-2003 mtm

Sweep through pthread locking and use the new locking primitives for
libthr.


# 115692 02-Jun-2003 mtm

Consolidate static_init() and static_init_private into one function.
The behaviour of this function is controlled by the argument: private.


# 115442 31-May-2003 mtm

I botched one of my commits in the last round. Fix it.


# 115390 29-May-2003 mtm

Make the mutex static initializers look more like the one for
condition variables. Cosmetic.

Explicitly compare against PTHREAD_MUTEX_INITIALIZER. We shouldn't
encourage calls to the mutex functions with null pointers to mutexes.

Approved by: re/jhb


# 115260 23-May-2003 mtm

Make WARNS2 clean. The fixes mostly included:
o removed unused variables
o explicit inclusion of header files
o prototypes for externally defined functions

Approved by: re/blanket libthr


# 115198 21-May-2003 mtm

Insert a debugging aid:
When in either the mutex or cond queue we notice that the thread
is already on one of the queues, don't just simply abort(). Print
out the thread's identifiers and what queue it was on.

Approved by: markm/mentor, re/blanket libthr


# 114940 12-May-2003 mtm

Forced commit, for previous revision.

Make state transitions of a thread on a mutex queue
atomic (with respect to other threads and signal handlers).
This includes:
o Introduce two functions to implement atomicity with respect
to other threads and signal handlers. Basically,
_thread_critical_enter() locks the calling thread and blocks
signals. _thread_critical_exit() unblocks signals and unlocks
the thread.

o Introduce two new functions:
get_muncontested() locks a mutex that is not owned by
another thread.
get_mcontested() places a thread on a contested mutex's
queue, taking care to use the _thread_critical_enter/exit
functions to protect thread state.

o Modify mutex_unlock_common() to also protect state transitions.
In this case it needs the cooperation of mutex_queue_deq(), which
must return with the thread locked and signals disabled *before*
it takes the thread off the queue.

Combine _pthread_mutex_lock() and _pthread_mutex_trylock()
into one function: mutex_lock_common(), that can handle
both cases. Its behaviour is controlled by an argument,
int nonblock, which if not zero means do not attempt
to acquire a contested mutex if the uncontested case fails.

BTW, when I write about contested and uncontested mutexes, I'm writing
about it from the application's point of view. I'm not writing about
internal locking of pthread_mutex->lock, which is achieved differently.

While internal mutex locking is mostly done, there's still a bit more
work left in this area.

Approved by: markm/mentor, re/blanket libthr
Reviewed by: jeff (slightly diff. revision)


# 114938 12-May-2003 mtm

msg1


# 114772 06-May-2003 mtm

o Correct a debug message that referred to the wrong function
o Remove an unnecessary if clause

Approved by: markm (mentor)(implicit)
Reviewed by: jeff


# 112965 02-Apr-2003 jeff

- Define curthread as _get_curthread() and remove all direct calls to
_get_curthread(). This is similar to the kernel's curthread. Doing
this saves stack overhead and is more convenient to the programmer.
- Pass the pointer to the newly created thread to _thread_init().
- Remove _get_curthread_slow().


# 112958 01-Apr-2003 jeff

- Restore old mutex code from libc_r. It is more standards compliant.
This was changed because originally we were blocking on the umtx and
allowing the kernel to do the queueing. It was decided that the
lib should queue and start the threads in the order it decides and the
umtx code would just be used like spinlocks.


# 112918 01-Apr-2003 jeff

- Add libthr but don't hook it up to the regular build yet. This is an
adaptation of libc_r for the thr system call interface. This is beta
quality code.