History log of /haiku/src/system/libroot/posix/pthread/pthread_barrier.cpp
# 93d7d1c5 12-Jun-2023 Augustin Cavalier <waddlesplash@gmail.com>

user_mutex: Per-team contexts.

This requires the introduction of the flag B_USER_MUTEX_SHARED, and then
actually using the SHARED flags in pthread structures to determine when
it should be passed through.

This commit still uses wired memory even for per-team contexts.
That will change in the next commit.

GLTeapot FPS seems about the same.

Change-Id: I749a00dcea1531e113a65299b6d6610f57511fcc
Reviewed-on: https://review.haiku-os.org/c/haiku/+/6602
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>


# 496e4113 12-Jun-2023 Augustin Cavalier <waddlesplash@gmail.com>

pthread_barrier: Ensure the barrier is "idle" during destruction.

If the serial thread tries to destroy the barrier immediately, not all
threads may have exited it yet. This takes care of that case.


# ca458a2b 12-Jun-2023 Augustin Cavalier <waddlesplash@gmail.com>

user_mutex: Adjust semantics of B_USER_MUTEX_UNBLOCK_ALL.

It is no longer required that the mutex be unlocked when
B_USER_MUTEX_UNBLOCK_ALL is used. This was also the behavior before the
recent refactor, though it wasn't documented anywhere; now it is the
behavior once more.

Should fix #18445.


# 6f3f29c7 06-Jun-2023 Augustin Cavalier <waddlesplash@gmail.com>

user_mutex: Refactor locking and unblocking mechanism.

Suppose the following scenario:

1. Thread A holds a mutex.

2. Thread B goes to acquire the mutex, winds up in kernel waiting.

3. Thread A unlocks: it first unsets the LOCKED flag.
As WAITING is set, it calls the kernel; but before the kernel can
process this, the thread is suspended for some reason (locks, a
reschedule, etc.)

4. Thread B hits a timeout, or a signal. It then unblocks in the kernel,
which causes the WAITING flag to be unset.

5. Thread C goes to acquire the lock. It sets the LOCKED flag.
It sees the WAITING flag is not set, so it returns at once,
having successfully acquired the lock.

6. Thread A, suspended back in step 3, resumes.

Now we encounter the problem. Under the previous code, the following
would occur.

7. Thread A sees that no threads are waiting. It thus unsets the LOCKED
flag, and returns from the kernel. Now we have a mutex theoretically
held by thread C but which (illegally) has no LOCKED flag set!

8. Some other thread tries to acquire the lock, and succeeds, for LOCKED
is not set. We now have one lock owned by two separate threads.
That's very bad!

The solution, in this commit, is to (1) switch from locking mutexes
with "atomic_or" to locking them with "atomic_test_and_set", and (2)
mandate that _kern_unblock_mutex must be invoked with the mutex
already unlocked.
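
To make the difference concrete, here is a minimal sketch of the two
userland fast paths. This is not the actual libroot code: it uses Haiku's
atomic_or() and atomic_test_and_set() from SupportDefs.h, but the flag
values and wait_in_kernel() are illustrative stand-ins for the real
flags and kernel wait path.

#include <SupportDefs.h>
	// for atomic_or(), atomic_test_and_set(), int32

// Stand-ins for the real B_USER_MUTEX_LOCKED / B_USER_MUTEX_WAITING flags.
static const int32 kLocked = 1;
static const int32 kWaiting = 2;

// Stand-in for the syscall that blocks the caller until it can take the
// lock; the contended path is elided in this sketch.
static void
wait_in_kernel(int32* mutex)
{
}

// Before: OR in LOCKED and inspect the previous value. Once an unblocked
// waiter has cleared WAITING (step 4 above), this cannot tell "I just
// acquired the lock" apart from "someone else already holds it".
static void
lock_before(int32* mutex)
{
	int32 oldValue = atomic_or(mutex, kLocked);
	if ((oldValue & (kLocked | kWaiting)) != 0)
		wait_in_kernel(mutex);
}

// After: acquisition succeeds only if LOCKED was atomically flipped from
// unset to set with no other flags present; any other state defers to
// the kernel, which performs its checks under its own lock.
static void
lock_after(int32* mutex)
{
	int32 oldValue = atomic_test_and_set(mutex, kLocked, 0);
	if (oldValue != 0)
		wait_in_kernel(mutex);
}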

Trying to solve the problem with (2) but without (1) produces other
complications and would overall be messier. For instance, all existing
userland code expected that it would set LOCKED, and then check
LOCKED|WAITING. If _kern_mutex_unlock does not unset LOCKED, then
whichever thread sets LOCKED when it was previously unset is now the
mutex's undisputed owner, and if it fails to notice this, it will
deadlock.

That could have been solved with extra checks at all lock points, but
that would mean locks would not be acquired "fairly": it would be
possible for any thread to race with an unlocking thread and acquire
the lock before the kernel had a chance to wake anyone up.

Given how fast atomics can be, and how slow invoking the kernel is
comparatively, that would probably make our mutexes extremely "unfair."
This would not violate the POSIX specification, but it does seem like
a dangerous choice to make in implementing these APIs.

Linux's "futex" API, which our API bears some similarities to, requires
at least one atomic test-and-set for an uncontended acquisition,
and multiple atomics more for even the simplest case of contended
acquisition. If it works for them, it should work for us, too.

Fixes #18436.

Change-Id: Ib8c28acf04ce03234fe738e41aa0969ca1917540
Reviewed-on: https://review.haiku-os.org/c/haiku/+/6537
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Reviewed-by: Adrien Destugues <pulkomandy@pulkomandy.tk>
Reviewed-by: waddlesplash <waddlesplash@gmail.com>


# 4785ffe7 06-Jun-2023 Augustin Cavalier <waddlesplash@gmail.com>

pthread_barrier: Add casts to appease GCC2.


# 505fdd61 06-Jun-2023 Augustin Cavalier <waddlesplash@gmail.com>

pthread_barrier: Rewrite critical section.

The previous implementation was prone to deadlocks when the next round
of threads tried to enter the barrier before the prior round exited it.
This new version takes care of that problem, and also removes some
other contention.

Basic design (a rough sketch in code follows below):

* waiter_count is now atomic, which means that only the "serial" thread
(or, in case of contention, threads that raced) needs to acquire the mutex.

* The mutex remains locked during thread wakeup, at which point waiter_count
is negative. It is only unlocked when the count reaches 0 in the last-woken
thread. This protects against the races that lead to deadlocks.

* Remove usage of _kern_mutex_switch_lock. This was used incorrectly;
if it returned EINTR, the first lock would be unlocked but the second
would not be acquired, creating further races. Instead, we leave
the barrier lock in the "LOCKED" state at all times except when we
actually want to wake threads up, at which point it is left "unlocked"
(and "unlocked" again by each successive exiting thread, just in case).

Fixes #15736.
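
A rough sketch of the counting scheme described above, using standard
C++ primitives as stand-ins (the real code instead blocks on the
barrier's user_mutex word in the kernel, keeping it locked except
during wakeup); all names here are illustrative:

#include <condition_variable>
#include <mutex>

struct barrier_sketch {
	explicit barrier_sketch(int count) : threshold(count) {}

	std::mutex lock;
	std::condition_variable wake;
	const int threshold;	// threads per round
	int waiter_count = 0;	// > 0: threads arriving, < 0: threads leaving
};

// Returns true for exactly one ("serial") thread per round.
bool
barrier_wait_sketch(barrier_sketch& b)
{
	std::unique_lock<std::mutex> guard(b.lock);

	// Threads of the previous round may still be on their way out
	// (waiter_count < 0); wait until the barrier is idle again.
	b.wake.wait(guard, [&] { return b.waiter_count >= 0; });

	if (++b.waiter_count == b.threshold) {
		// Last to arrive: flip the count negative and wake everyone.
		b.waiter_count = -b.threshold;
		b.wake.notify_all();
	} else {
		b.wake.wait(guard, [&] { return b.waiter_count < 0; });
	}

	// Each leaving thread counts back toward zero; the last one to
	// leave makes the barrier idle again and lets the next round in.
	bool serial = (b.waiter_count == -b.threshold);
	if (++b.waiter_count == 0)
		b.wake.notify_all();
	return serial;
}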


# 4f10ef40 21-Feb-2017 Jérôme Duval <jerome.duval@gmail.com>

pthread: check parameters for pthread_barrierattr_*pshared().

* fixes #13323.


# 0e0f49e7 27-Dec-2016 Dmytro Shynkevych <dm.shynk@gmail.com>

libroot: Implemented pthread barriers

This is an implementation of pthread barriers pursuant to the relevant specification.

Barriers are essentially a special case of condition variables:
all threads waiting on one are woken up when the number of waiters
reaches a count provided at barrier initialization.
In view of that, this implementation mimics the implementation of
pthread_cond, except it is more specialized and self-contained.
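
For reference, a minimal usage example of the API implemented here
(standard POSIX calls; the thread count and output are arbitrary):

#include <pthread.h>
#include <stdio.h>

#define THREAD_COUNT 4

static pthread_barrier_t sBarrier;

static void*
worker(void* arg)
{
	long index = (long)arg;
	printf("thread %ld: before barrier\n", index);

	// All threads block here until THREAD_COUNT of them have arrived;
	// exactly one of them is handed PTHREAD_BARRIER_SERIAL_THREAD.
	if (pthread_barrier_wait(&sBarrier) == PTHREAD_BARRIER_SERIAL_THREAD)
		printf("thread %ld: serial thread for this round\n", index);

	printf("thread %ld: after barrier\n", index);
	return NULL;
}

int
main()
{
	pthread_t threads[THREAD_COUNT];
	pthread_barrier_init(&sBarrier, NULL, THREAD_COUNT);

	for (long i = 0; i < THREAD_COUNT; i++)
		pthread_create(&threads[i], NULL, worker, (void*)i);
	for (long i = 0; i < THREAD_COUNT; i++)
		pthread_join(threads[i], NULL);

	pthread_barrier_destroy(&sBarrier);
	return 0;
}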

Signed-off-by: Jérôme Duval <jerome.duval@gmail.com>