History log of /freebsd-11-stable/sys/sys/lock.h
Revision Date Author Comments
# 331722 29-Mar-2018 eadler

Revert r330897:

This was intended to be a non-functional change. It wasn't. The commit
message was thus wrong. In addition it broke arm, and merged crypto
related code.

Revert with prejudice.

This revert skips files touched in r316370 since that commit has since been
MFCed. This revert also skips files that require $FreeBSD$ property
changes.

Thank you to those who helped me get out of this mess including but not
limited to gonzo, kevans, rgrimes.

Requested by: gjb (re)


# 330897 14-Mar-2018 eadler

Partial merge of the SPDX changes

These changes are incomplete but are making it difficult
to determine what other changes can/should be merged.

No objections from: pfg


# 327478 02-Jan-2018 mjg

MFC r324335,r327393,r327397,r327401,r327402:

locks: take the number of readers into account when waiting

Previous code would always spin once before checking the lock. But a lock
with e.g. 6 readers is not going to become free in the duration of one spin
even if they start draining immediately.

Conservatively perform one spin for each reader.

Note that the total number of allowed spins is still extremely small and is
subject to change later.
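
A minimal userland sketch of the idea, assuming a hypothetical lock-word
layout and made-up names rather than the actual rwlock/sx code:

/*
 * Illustration only, not the FreeBSD rw/sx code: a hypothetical lock word
 * keeps the reader count in its upper bits, and the waiter spins roughly
 * once per observed reader before re-reading the word.
 */
#include <stdatomic.h>
#include <stdint.h>

#define READERS_SHIFT   4
#define READERS(v)      ((v) >> READERS_SHIFT)

static void
cpu_spinwait(void)
{
        /* stand-in for the kernel's CPU pause hint */
}

static void
spin_on_readers(_Atomic uintptr_t *lockword, unsigned limit)
{
        unsigned spins = 0;

        for (;;) {
                uintptr_t v = atomic_load_explicit(lockword,
                    memory_order_acquire);
                if (READERS(v) == 0 || spins >= limit)
                        return;
                for (uintptr_t i = 0; i < READERS(v); i++)
                        cpu_spinwait();
                /* '>=' above: the counter can jump past the limit. */
                spins += READERS(v);
        }
}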

=============

rwlock: tidy up __rw_runlock_hard similarly to r325921

=============

sx: read the SX_NOADAPTIVE flag and Giant ownership only once

These used to be read multiple times when waiting for the lock to become
free, which had the potential to issue completely avoidable traffic.

=============

locks: re-check the reason to go to sleep after locking sleepq/turnstile

In both rw and sx locks we always go to sleep if the lock owner is not
running.

We do spin for some time if the lock is read-locked.

However, if we decide to go to sleep because the lock owner is off CPU, and
by the time the sleepq/turnstile gets acquired the lock is read-locked, we
should fall back to the aforementioned spinning wait.
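
The shape of the change can be sketched as follows; the queue helpers are
stand-ins, not the kernel sleepqueue(9)/turnstile(9) API:

/*
 * Sketch with stand-in helpers: after taking the queue lock the lock word
 * is read again, and if it turned out to be read-locked the thread backs
 * out and spins instead of blocking on an owner that is off CPU.  (The
 * actual acquisition attempt is omitted for brevity.)
 */
#include <stdatomic.h>
#include <stdint.h>

#define LW_READERS_MASK 0xf0UL          /* hypothetical reader-count bits */

static void queue_lock(void) { }
static void queue_unlock(void) { }
static void queue_block(void) { }       /* placeholder for going to sleep */
static void spin_on_readers(_Atomic uintptr_t *lw) { (void)lw; }

static void
wait_for_lock(_Atomic uintptr_t *lockword)
{
        for (;;) {
                uintptr_t v;

                queue_lock();
                v = atomic_load_explicit(lockword, memory_order_acquire);
                if ((v & LW_READERS_MASK) != 0) {
                        /* The owner situation changed: spin, don't sleep. */
                        queue_unlock();
                        spin_on_readers(lockword);
                        continue;
                }
                queue_block();          /* sleep until an unlock wakes us */
                return;
        }
}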

=============

sx: fix up non-smp compilation after r327397

=============

locks: adjust loop limit check when waiting for readers

The check was for the exact value, but since the counter started being
incremented by the number of readers, it could have jumped over the limit.

=============

Return a non-NULL owner only if the lock is exclusively held in owner_sx().

Fix some whitespace bugs while here.


# 327413 31-Dec-2017 mjg

MFC r320561,r323236,r324041,r324314,r324609,r324613,r324778,r324780,r324787,
r324803,r324836,r325469,r325706,r325917,r325918,r325919,r325920,r325921,
r325922,r325925,r325963,r326106,r326107,r326110,r326111,r326112,r326194,
r326195,r326196,r326197,r326198,r326199,r326200,r326237:

rwlock: perform the typically false td_rw_rlocks check later

Check if the lock is available first instead.

=============

Sprinkle __read_frequently on a few obvious places.

Note that some of the annotated variables should probably change their types
to something smaller, preferably bit-sized.

=============

mtx: drop the tid argument from _mtx_lock_sleep

tid must be equal to curthread, and the target routine was already reading
it anyway, so this is not a problem. Not passing it as a parameter allows for
a little bit shorter code in callers.

=============

locks: partially tidy up waiting on readers

Spin first instead of instantly re-reading, and don't re-read after
spinning is finished - the state is already known.

Note the code is subject to significant changes later.

=============

locks: take the number of readers into account when waiting

Previous code would always spin once before checking the lock. But a lock
with e.g. 6 readers is not going to become free in the duration of one spin
even if they start draining immediately.

Conservatively perform one spin for each reader.

Note that the total number of allowed spins is still extremely small and is
subject to change later.

=============

mtx: change MTX_UNOWNED from 4 to 0

The value is spread all over the kernel and zeroing a register is
cheaper/shorter than setting it up to an arbitrary value.

Reduces amd64 GENERIC-NODEBUG .text size by 0.4%.

=============

mtx: fix up owner_mtx after r324609

Now that MTX_UNOWNED is 0 the test was always false.

=============

mtx: clean up locking spin mutexes

1) shorten the fast path by pushing the lockstat probe to the slow path
2) test for kernel panic only after it turns out we will have to spin,
in particular test only after we know we are not recursing

=============

mtx: stop testing SCHEDULER_STOPPED in kabi funcs for spin mutexes

There is nothing panic-breaking to do in the unlock case and the lock
case will fall back to the slow path, which already does the check.

=============

rwlock: reduce lockstat branches in the slowpath

=============

mtx: fix up UP build after r324778

=============

mtx: implement thread lock fastpath

=============

rwlock: fix up compilation without KDTRACE_HOOKS after r324787

=============

rwlock: use fcmpset for setting RW_LOCK_WRITE_SPINNER

=============

sx: avoid branches in the slow path if lockstat is disabled

=============

rwlock: avoid branches in the slow path if lockstat is disabled

=============

locks: pull up PMC_SOFT_CALLs out of slow path loops

=============

mtx: unlock before traversing threads to wake up

This shortens the lock hold time while not affecting correctness.
All the woken up threads end up competing and can lose the race against
a completely unrelated thread getting the lock anyway.

=============

rwlock: unlock before traversing threads to wake up

While here perform a minor cleanup of the unlock path.

=============

sx: perform a minor cleanup of the unlock slowpath

No functional changes.

=============

mtx: add missing parts of the diff in r325920

Fixes build breakage.

=============

locks: fix compilation issues without SMP or KDTRACE_HOOKS

=============

locks: remove the file + line argument from internal primitives when not used

The pair is of use only in debug or LOCKPROF kernels, but was passed (zeroed)
for many locks even in production kernels.

While here whack the tid argument from wlock hard and xlock hard.

There is no kbi change of any sort - "external" primitives still accept the
pair.

=============

locks: pass the found lock value to unlock slow path

This avoids an explicit read later.

While here whack the cheaply obtainable 'tid' argument.

=============

rwlock: don't check for curthread's read lock count in the fast path

=============

rwlock: unbreak WITNESS builds after r326110

=============

sx: unbreak debug after r326107

An assertion was modified to use the found value, but it was not updated to
handle a race where blocked threads appear after the entrance to the func.

Move the assertion down to the area protected with sleepq lock where the
lock is read anyway. This does not affect coverage of the assertion and
is consistent with what rw locks are doing.

=============

rwlock: stop re-reading the owner when going to sleep

=============

locks: retry turnstile/sleepq loops on failed cmpset

In order to go to sleep threads set waiter flags, but that can spuriously
fail e.g. when a new reader arrives. Instead of unlocking everything and
looping back, re-evaluate the new state while still holding the lock necessary
to go to sleep.
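
A rough sketch of the pattern using C11 atomics and a made-up waiter flag,
not the kernel turnstile/sleepq code:

/*
 * Sketch only: a made-up waiter flag and queue helpers.  A failed
 * compare-and-set leaves the freshly observed value in 'v', which is
 * re-evaluated without dropping the queue lock.
 */
#include <stdatomic.h>
#include <stdint.h>

#define WAITER_FLAG     0x1UL

static void queue_lock(void) { }
static void queue_unlock(void) { }
static void queue_block(void) { }       /* placeholder for going to sleep */

static void
sleep_on_lock(_Atomic uintptr_t *lockword)
{
        uintptr_t v;

        queue_lock();
        v = atomic_load_explicit(lockword, memory_order_relaxed);
        for (;;) {
                if (v == 0) {
                        /* Released meanwhile: no reason to sleep anymore. */
                        queue_unlock();
                        return;
                }
                /* Setting the waiter flag can race with a new reader. */
                if (atomic_compare_exchange_weak_explicit(lockword, &v,
                    v | WAITER_FLAG, memory_order_acquire,
                    memory_order_relaxed)) {
                        queue_block();
                        return;
                }
                /* cmpset failed: 'v' already holds the new state, so loop
                   and re-check it while still holding the queue lock. */
        }
}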

=============

sx: change sunlock to wake waiters up if it locked sleepq

sleepq is only locked if the curthread is the last reader. By the time
the lock gets acquired, new readers could have arrived. The previous code
would unlock and loop back, which results in spurious relocking of sleepq.

This is a step towards xadd-based unlock routine.

=============

rwlock: add __rw_try_{r,w}lock_int

=============

rwlock: fix up compilation of the previous change

committed the wrong version of the patch

=============

Convert in-kernel thread_lock_flags calls to thread_lock when debug is disabled

The flags argument is not used in this case.

=============

Add the missing lockstat check for thread lock.

=============

rw: fix runlock_hard when new readers show up

When waiters/writer spinner flags are set no new readers can show up unless
they already have a different rw lock read-locked. The change in r326195 failed
to take that into account - in the presence of new readers it would spin until
they all drain, which could lead to trouble if e.g. they go off CPU and
cannot get scheduled because of this thread.


# 315394 16-Mar-2017 mjg

MFC r313855,r313865,r313875,r313877,r313878,r313901,r313908,r313928,r313944,r314185,r314476,r314187

locks: let primitives for modules unlock without always going to the slow path

It is only needed if LOCK_PROFILING is enabled. It has to always check whether
the lock is about to be released, which requires an avoidable read if the option
is not specified.

==

sx: fix compilation on UP kernels after r313855

sx primitives use inlines as opposed to macros. Change the tested condition
to LOCK_DEBUG which covers the case, but is slightly overzealous.

commit a39b839d16cd72b1df284ccfe6706fcdf362706e
Author: mjg <mjg@ccf9f872-aa2e-dd11-9fc8-001c23d0bc1f>
Date: Sat Feb 18 22:06:03 2017 +0000

locks: clean up trylock primitives

In particular this reduces accesses of the lock itself.

git-svn-id: svn+ssh://svn.freebsd.org/base/head@313928 ccf9f872-aa2e-dd11-9fc8-001c23d0bc1f

commit 013560e742a5a276b0deef039bc18078d51d6eb0
Author: mjg <mjg@ccf9f872-aa2e-dd11-9fc8-001c23d0bc1f>
Date: Sat Feb 18 01:52:10 2017 +0000

mtx: plug the 'opts' argument when not used

git-svn-id: svn+ssh://svn.freebsd.org/base/head@313908 ccf9f872-aa2e-dd11-9fc8-001c23d0bc1f

commit 9a507901162fb476b9809da2919905735cd605af
Author: mjg <mjg@ccf9f872-aa2e-dd11-9fc8-001c23d0bc1f>
Date: Fri Feb 17 22:09:55 2017 +0000

sx: fix mips build after r313855

The namespace in this file really needs cleaning up. In the meantime
let inline primitives be defined as long as LOCK_DEBUG is not enabled.

Reported by: kib

git-svn-id: svn+ssh://svn.freebsd.org/base/head@313901 ccf9f872-aa2e-dd11-9fc8-001c23d0bc1f

commit aa6243a5124b9ceb3b1683ea4dbb0a133ce70095
Author: mjg <mjg@ccf9f872-aa2e-dd11-9fc8-001c23d0bc1f>
Date: Fri Feb 17 15:40:24 2017 +0000

mtx: get rid of file/line args from slow paths if they are unused

This denotes changes which went in by accident in r313877.

On most production kernels both said parameters are zeroed and have nothing
reading them in either __mtx_lock_sleep or __mtx_unlock_sleep. Thus this change
stops passing them for internal consumers where this is the case.

Kernel modules use _flags variants which are not affected kbi-wise.

git-svn-id: svn+ssh://svn.freebsd.org/base/head@313878 ccf9f872-aa2e-dd11-9fc8-001c23d0bc1f

commit 688545a6af7ed0972653d6e2c6ca406ac511f39d
Author: mjg <mjg@ccf9f872-aa2e-dd11-9fc8-001c23d0bc1f>
Date: Fri Feb 17 15:34:40 2017 +0000

mtx: restrict r313875 to kernels without LOCK_PROFILING

git-svn-id: svn+ssh://svn.freebsd.org/base/head@313877 ccf9f872-aa2e-dd11-9fc8-001c23d0bc1f

commit bbe6477138713da2d503f93cb5dd602e14152a08
Author: mjg <mjg@ccf9f872-aa2e-dd11-9fc8-001c23d0bc1f>
Date: Fri Feb 17 14:55:59 2017 +0000

mtx: microoptimize lockstat handling in __mtx_lock_sleep

This saves a function call and multiple branches after the lock is acquired.


# 315339 16-Mar-2017 mjg

MFC r312890,r313386,r313390:

Sprinkle __read_mostly on backoff and lock profiling code.

==

locks: change backoff to exponential

Previous implementation would use a random factor to spread readers and
reduce the chances of starvation. This visibly reduces the effectiveness of the
mechanism.

Switch to the more traditional exponential variant. Try to limit starvation
by imposing an upper limit of spins after which spinning is half of what
other threads get. Note the mechanism is turned off by default.
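
A simplified sketch of the exponential variant, with hypothetical names and
parameters rather than the kernel's lock_delay() implementation:

/*
 * Simplified exponential backoff, not the kernel's lock_delay() code.
 * The delay doubles per failed attempt up to a cap; the additional
 * starvation limit described above (spinning half of what other threads
 * get once a threshold is crossed) is left out of this sketch.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

static void cpu_spinwait(void) { }      /* stand-in for the CPU pause hint */

static bool
try_acquire(_Atomic uintptr_t *lockword, uintptr_t tid)
{
        uintptr_t expected = 0;

        return (atomic_compare_exchange_strong_explicit(lockword, &expected,
            tid, memory_order_acquire, memory_order_relaxed));
}

static void
lock_exponential_backoff(_Atomic uintptr_t *lockword, uintptr_t tid,
    unsigned base, unsigned max)
{
        unsigned delay = (base > 0 ? base : 1);

        while (!try_acquire(lockword, tid)) {
                for (unsigned i = 0; i < delay; i++)
                        cpu_spinwait();
                delay <<= 1;
                if (delay > max)
                        delay = max;    /* capped exponential growth */
        }
}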

==

locks: follow up r313386

An unfinished diff was committed by accident. The loop in lock_delay
was changed to decrement, but the loop iterator was still incrementing.


# 303953 11-Aug-2016 mjg

MFC r303562,r303563,r303584,r303643,r303652,r303655,r303707:

rwlock: s/READER/WRITER/ in wlock lockstat annotation

==

sx: increment spin_cnt before cpu_spinwait in xlock

The change is a no-op only done for consistency with the rest of the file.

==

locks: change sleep_cnt and spin_cnt types to u_int

Both variables are uint64_t, but they only count spins or sleeps.
All reasonable values which we can get here comfortably fit in the 32-bit range.

==

Implement trivial backoff for locking primitives.

All current spinning loops retry an atomic op the first chance they get,
which leads to performance degradation under load.

One classic solution to the problem consists of delaying the test to an
extent. This implementation has a trivial linear increment and a random
factor for each attempt.

For simplicity, this first-touch implementation only modifies spinning
loops where the lock owner is running. Spin mutexes and thread lock were
not modified.

Current parameters are autotuned on boot based on mp_cpus.

Autotune factors are very conservative and are subject to change later.
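
For comparison with the exponential sketch above, a similarly hedged sketch
of the linear-plus-random scheme this entry describes:

/*
 * Sketch of the linear-increment-plus-random-factor scheme; names and
 * parameters are made up and the boot-time autotuning is omitted.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

static void cpu_spinwait(void) { }      /* stand-in for the CPU pause hint */

static bool
try_acquire(_Atomic uintptr_t *lockword, uintptr_t tid)
{
        uintptr_t expected = 0;

        return (atomic_compare_exchange_strong_explicit(lockword, &expected,
            tid, memory_order_acquire, memory_order_relaxed));
}

static void
lock_linear_backoff(_Atomic uintptr_t *lockword, uintptr_t tid,
    unsigned step, unsigned max)
{
        unsigned base = 0;

        while (!try_acquire(lockword, tid)) {
                if (base < max)
                        base += step;   /* trivial linear increment */
                /* Random factor to spread out competing waiters. */
                unsigned delay = base + (unsigned)rand() % (base + 1);
                for (unsigned i = 0; i < delay; i++)
                        cpu_spinwait();
        }
}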

==

locks: fix up ifdef guards introduced in r303643

Both sx and rwlocks had copy-pasted ADAPTIVE_MUTEXES instead of the correct
define.

==

locks: fix compilation for KDTRACE_HOOKS && !ADAPTIVE_* case

==

locks: fix sx compilation on mips after r303643

The kernel.h header is required for the SYSINIT macro, which apparently
was present on amd64 by accident.

Approved by: re (gjb)