Searched +hist:2 +hist:c588b03 (Results 1 - 5 of 5) sorted by last modified time

/haiku/src/system/kernel/
thread.cpp    diff 2d43eebc Fri Aug 04 11:53:58 MDT 2023 Augustin Cavalier <waddlesplash@gmail.com> kernel: Store "next" pointer before notifying B_EVENT_INVALID.

Once B_EVENT_INVALID has been set, the select routine is free to
delete the select_info at any time. We therefore cannot access
the "next" pointer after notify_select_events returns.

May fix a KDL seen by trungnt2910.
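
For illustration, a minimal sketch of the pattern this commit describes (not the actual Haiku sources; the select_info layout and the notify_select_events() signature are assumed from the message):

    struct select_info {
        select_info*  next;
        uint16        selected_events;
        // ... sync object, event fields, etc.
    };

    // Declared elsewhere in the kernel; once B_EVENT_INVALID has been
    // delivered, the select routine may delete the select_info at any time.
    status_t notify_select_events(select_info* info, uint16 events);

    void
    notify_all_invalid(select_info* list)
    {
        for (select_info* info = list; info != NULL; ) {
            // Save "next" first: after notify_select_events() returns,
            // "info" may already have been freed.
            select_info* next = info->next;
            notify_select_events(info, B_EVENT_INVALID);
            info = next;
        }
    }
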
diff 29536a23 Sun Jan 10 06:57:51 MST 2021 Jérôme Duval <jerome.duval@gmail.com> kernel/thread: restore signal mask just before returning to userland

* otherwise the signal to be handled might be blocked. fixes #15193
* also remove automatic syscall restart on _kern_select, to match Linux and
BSD behavior: this fixes parallel builds with newer GNU make, which happens
to use pselect.
* also remove automatic syscall restart on _kern_poll.

from https://man7.org/linux/man-pages/man7/signal.7.html
"The following interfaces are never restarted after being
interrupted by a signal handler, regardless of the use of
SA_RESTART; they always fail with the error EINTR when
interrupted by a signal handler: ...
select(2), and pselect(2)."
from https://notes.shichao.io/unp/ch6/
"Berkeley-derived kernels never automatically restart select."

Change-Id: I3e9488f60c966b38d427f992f06e6e2217d4adc5
Reviewed-on: https://review.haiku-os.org/c/haiku/+/3636
Reviewed-by: Jérôme Duval <jerome.duval@gmail.com>
Reviewed-by: Axel Dörfler <axeld@pinc-software.de>
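
To see why the mask must stay swapped until the handler has run, and why select() must not be restarted, consider the classic userland pselect() idiom (roughly what newer GNU make relies on). This is illustrative POSIX code, not Haiku kernel code:

    #include <errno.h>
    #include <signal.h>
    #include <sys/select.h>

    static volatile sig_atomic_t sChildExited = 0;

    static void
    sigchld_handler(int)
    {
        sChildExited = 1;
    }

    // Wait until either "fd" becomes readable or a child exits.
    int
    wait_for_input_or_child(int fd)
    {
        sigset_t blockChild, origMask;
        sigemptyset(&blockChild);
        sigaddset(&blockChild, SIGCHLD);
        sigprocmask(SIG_BLOCK, &blockChild, &origMask);
            // SIGCHLD is now blocked everywhere except inside pselect()

        struct sigaction action = {};
        action.sa_handler = sigchld_handler;
        sigaction(SIGCHLD, &action, NULL);

        while (!sChildExited) {
            fd_set readSet;
            FD_ZERO(&readSet);
            FD_SET(fd, &readSet);

            // pselect() atomically installs origMask and sleeps. If SIGCHLD
            // arrives, its handler must run before the blocking mask is
            // restored, and pselect() must fail with EINTR rather than be
            // restarted; otherwise the loop either never runs the handler
            // or never rechecks sChildExited.
            int result = pselect(fd + 1, &readSet, NULL, NULL, NULL, &origMask);
            if (result < 0 && errno == EINTR)
                continue;
            if (result > 0)
                return 0;    // fd is readable
        }
        return 1;    // a child exited
    }
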
diff 837f4f48 Sun Jan 10 06:57:51 MST 2021 Jérôme Duval <jerome.duval@gmail.com> kernel/thread: restore signal mask just before returning to userland

Change-Id: I7f86d221eae1ad93d8a308a75581d2c30a369c9e
Reviewed-on: https://review.haiku-os.org/c/haiku/+/3627
Reviewed-by: Adrien Destugues <pulkomandy@gmail.com>
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
diff 2c588b03 Mon Aug 05 20:31:02 MDT 2019 Augustin Cavalier <waddlesplash@gmail.com> kernel: Properly separate and handle THREAD_BLOCK_TYPE_USER.

Consider this scenario:
* A userland thread puts its ID into some structure so that it
can be woken up later, sets its wait_status to initiate the
wait, and then calls _user_block_thread.
* A second thread finishes whatever task the first thread
intended to wait for, reads the ID almost immediately
after it was written, and calls _user_unblock_thread.
* _user_unblock_thread was called so soon that the first
thread is not yet blocked on the _user_block_thread block,
but is instead blocked on e.g. the thread's main mutex.
* The first thread's thread_block() call returns B_OK.
As in this example it was inside mutex_lock, it thinks
that it now owns the mutex.
* But it doesn't own the mutex, and so (until yesterday)
all sorts of mayhem ensue and a random crash occurs, or
(after yesterday) an assertion failure is tripped because
the thread does not own the mutex it expected to.

The above scenario is not a hypothetical, but is in fact the
exact scenario behind the strange panics in #15211.

The solution is to only have _user_unblock_thread actually
unblock threads that were blocked by _user_block_thread,
so I've introduced a new BLOCK_TYPE to differentiate these.
While I'm at it, remove the BLOCK_TYPE_USER_BASE, which was
never used (and now never will be). If we want to differentiate
different consumers of _user_block_thread for debugging
purposes, we should use the currently-unused "object"
argument to thread_block, instead of cluttering the
relatively-clean block type debugging code with special
types.

One final note: The race condition which was the cause of
this bug does not, in fact, imply a deadlock on the part
of the rw_lock here. The wait_status is protected by the
thread's mutex, which is acquired by both _user_block_thread
and _user_unblock_thread, and so if _user_unblock_thread
succeeds faster than _user_block_thread can initiate
the block, it will just see that wait_status is already
<= 0 and return immediately.

Fixes #15211.
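
A condensed sketch of the fix described above, with stand-in types; the locking helper and field names (MutexLocker, wait_type, thread_unblock) are approximations, not the exact Haiku definitions:

    enum {
        THREAD_BLOCK_TYPE_MUTEX = 1,
        THREAD_BLOCK_TYPE_USER  = 2    // new dedicated type for _user_block_thread
    };

    struct Thread {
        mutex     lock;          // the "thread's mutex" protecting wait_status
        int32     wait_type;     // what thread_block() is currently blocked on
        status_t  wait_status;   // > 0 while a _user_block_thread wait is armed
    };

    // Illustrative helper: wakes the thread's current thread_block() wait.
    void thread_unblock(Thread* thread, status_t status);

    status_t
    user_unblock_thread(Thread* thread, status_t status)
    {
        MutexLocker locker(thread->lock);

        if (thread->wait_status > 0) {
            thread->wait_status = status;

            // The fix: only interrupt a wait that _user_block_thread() itself
            // started. A thread that is momentarily blocked on an unrelated
            // mutex keeps waiting and never sees a spurious B_OK from its
            // thread_block() call.
            if (thread->wait_type == THREAD_BLOCK_TYPE_USER)
                thread_unblock(thread, status);
        }
        return B_OK;
    }
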
diff 2b7ea4cd Fri Nov 29 11:31:10 MST 2013 Pawel Dziepak <pdziepak@quarnos.org> kernel: Remove Thread::next_state
diff 2eb2b522 Mon Jul 01 17:55:04 MDT 2013 Ingo Weinhold <ingo_weinhold@gmx.de> Enforce team and thread limits

Also fixes incorrect team accounting in case of error when creating
a team. The previously incremented sUsedTeams wasn't decremented again.
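
The accounting half of this change is the usual undo-on-error pattern; schematically (the limit value and helper names below are illustrative, not the real call chain):

    static int32 sMaxTeams = 2048;   // illustrative limit
    static int32 sUsedTeams = 1;     // number of live teams, incremented up front

    status_t create_team_struct_and_threads();   // illustrative helper

    status_t
    team_create()
    {
        // Enforce the limit; Haiku's atomic_add() returns the previous value.
        if (atomic_add(&sUsedTeams, 1) >= sMaxTeams) {
            atomic_add(&sUsedTeams, -1);
            return B_NO_MORE_TEAMS;
        }

        status_t status = create_team_struct_and_threads();
        if (status != B_OK) {
            // Previously missing: on any later failure the counter has to be
            // decremented again, or it stays inflated forever.
            atomic_add(&sUsedTeams, -1);
            return status;
        }
        return B_OK;
    }
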
diff 4535495d Mon Jan 10 14:54:38 MST 2011 Ingo Weinhold <ingo_weinhold@gmx.de> Merged the signals branch into trunk, with these changes:
* The team and thread kernel structures have been renamed to Team and Thread
respectively and moved into the new BKernel namespace.
* Several (kernel add-on) sources have been converted from C to C++ since
private kernel headers are included that are no longer C compatible.

Changes after merging:
* Fixed gcc 2 build (warnings mainly in the scary firewire bus manager).


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@40196 a95241bf-73f2-0310-859d-f6bbb57e9c96
/haiku/src/system/kernel/scheduler/
scheduler_tracing.cpp    diff 2c588b03 Mon Aug 05 20:31:02 MDT 2019 Augustin Cavalier <waddlesplash@gmail.com> kernel: Properly separate and handle THREAD_BLOCK_TYPE_USER.

/haiku/src/bin/debug/time_stats/
scheduling_analysis.cpp    diff 2c588b03 Mon Aug 05 20:31:02 MDT 2019 Augustin Cavalier <waddlesplash@gmail.com> kernel: Properly separate and handle THREAD_BLOCK_TYPE_USER.

/haiku/src/apps/debuganalyzer/model/
Model.cpp    diff 2c588b03 Mon Aug 05 20:31:02 MDT 2019 Augustin Cavalier <waddlesplash@gmail.com> kernel: Properly separate and handle THREAD_BLOCK_TYPE_USER.

/haiku/headers/private/system/
thread_defs.h    diff 2c588b03 Mon Aug 05 20:31:02 MDT 2019 Augustin Cavalier <waddlesplash@gmail.com> kernel: Properly separate and handle THREAD_BLOCK_TYPE_USER.


Completed in 90 milliseconds