#
341100 |
|
27-Nov-2018 |
vangyzen |
MFC r340409
Make no assertions about lock state when the scheduler is stopped.
Change the assert paths in rm, rw, and sx locks to match the lock and unlock paths. I did this for mutexes in r306346.
Reported by: Travis Lane <tlane@isilon.com> Sponsored by: Dell EMC Isilon |
#
334437 |
|
31-May-2018 |
mjg |
MFC r329276,r329451,r330294,r330414,r330415,r330418,r331109,r332394,r332398, r333831:
rwlock: diff-reduction of runlock compared to sx sunlock
==
Undo LOCK_PROFILING pessimisation after r313454 and r313455
With the option compiled into the kernel, both sx and rw shared ops would always go to the slow path, which added avoidable overhead even when the facility was disabled.
Furthermore the increased time spent doing uncontested shared lock acquire would be bogusly added to total wait time, somewhat skewing the results.
Restore old behaviour of going there only when profiling is enabled.
This change is a no-op for kernels without LOCK_PROFILING (which is the default).
==
sx: fix adaptive spinning broken in r327397
The condition was flipped.
In particular heavy multithreaded kernel builds on zfs started suffering due to nested sx locks.
For instance make -s -j 128 buildkernel:
before: 3326.67s user 1269.62s system 6981% cpu 1:05.84 total
after:  3365.55s user 911.27s system 6871% cpu 1:02.24 total
==
locks: fix a corner case in r327399
If there were exactly rowner_retries/asx_retries (by default: 10) transitions between read and write state and the waiters still did not get the lock, the next owner -> reader transition would result in the code correctly falling back to turnstile/sleepq where it would incorrectly think it was waiting for a writer and decide to leave turnstile/sleepq to loop back. From this point it would take ts/sq trips until the lock gets released.
The bug sometimes manifested itself in stalls during -j 128 package builds.
Refactor the code to fix the bug; while here, remove some of the gratuitous differences between rw and sx locks.
==
sx: don't do an atomic op in upgrade if it cannot succeed
The code already pays the cost of reading the lock to obtain the waiters flag. Checking whether there is more than one reader is not a problem and avoids dirtying the line.
This also fixes a small corner case: if waiters were to show up between reading the flag and upgrading the lock, the operation would fail even though it should not. No correctness change here though.
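A minimal sketch of the shape of this check (the macro and field names follow the sx implementation, but treat the snippet as illustrative rather than the committed diff):

    uintptr_t x = SX_READ_VALUE(sx);
    /*
     * The value is already in hand for the waiters flag; if more than
     * one sharer is present, the upgrade cannot succeed, so skip the
     * atomic op and avoid dirtying the cache line.
     */
    if (SX_SHARERS(x) > 1)
        return (0);
    /* Exactly one sharer (us): attempt the actual upgrade. */
    return (atomic_cmpset_acq_ptr(&sx->sx_lock, x,
        (uintptr_t)curthread | (x & SX_LOCK_EXCLUSIVE_WAITERS)));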
==
mtx: tidy up recursion handling in thread lock
Normally, after grabbing the lock it has to be verified we got the right one to begin with. However, if we are recursing, it must not have changed, thus the check can be avoided. In particular this avoids a lock read for the non-recursing case which found out the lock was changed.
While here, avoid an IRQ trip if this happens.
==
locks: slightly depessimize lockstat
The slow path is always taken when lockstat is enabled. This induces rdtsc (or other) calls to get the cycle count even when there was no contention.
Still go to the slow path to not mess with the fast path, but avoid the heavy lifting unless necessary.
This reduces sys and real time during -j 80 buildkernel:
before:   3651.84s user 1105.59s system 5394% cpu 1:28.18 total
after:    3685.99s user 975.74s system 5450% cpu 1:25.53 total
disabled: 3697.96s user 411.13s system 5261% cpu 1:18.10 total
So note this is still a significant hit.
LOCK_PROFILING results are not affected.
==
rw: whack avoidable re-reads in try_upgrade
==
locks: extend speculative spin waiting for readers to drain
Now that 10 years have passed since the original limit of 10000 was committed, bump it a little bit.
Spinning waiting for writers is semi-informed in the sense that we always know if the owner is running and base the decision to spin on that. However, no such information is provided for read-locking. In particular this means that it is possible for a write-spinner to completely waste cpu time waiting for the lock to be released, while the reader holding it was preempted and is now waiting for the spinner to go off cpu.
Nonetheless, in the majority of cases it is an improvement to spin instead of instantly giving up and going to sleep.
The current approach is pretty simple: snatch the number of current readers and perform that many pauses before checking again. The total number of pauses to execute is limited to 10k. If the lock is still not free by that time, go to sleep.
Given the previously noted problem of not knowing whether spinning makes any sense to begin with, the new limit has to remain rather conservative. But at the very least it should also be related to the machine. Waiting for writers uses parameters selected based on the number of activated hardware threads. The upper limit of pause instructions to be executed in-between re-reads of the lock is typically 16384 or 32768; it was selected as the limit of total spins. The lower bound is set to the already-present 10000 so as not to change it for smaller machines.
Bumping the limit reduces system time by a few percent during benchmarks like buildworld, buildkernel and others. Tested on 2 and 4 socket machines (Broadwell, Skylake).
Figuring out how to make a more informed decision while not pessimizing the fast path is left as an exercise for the reader.
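A sketch of the loop shape described above (the limit and helper names are illustrative):

    u_int spins = 0;
    while (spins < spin_limit) {        /* ~10000..32768, machine-scaled */
        uintptr_t v = RW_READ_VALUE(rw);
        if (!(v & RW_LOCK_READ))
            break;                      /* readers drained; retry the lock */
        u_int readers = RW_READERS(v);  /* pause once per current reader */
        spins += readers;
        while (readers-- > 0)
            cpu_spinwait();
    }
    /* If still read-locked after the budget, fall back to sleeping. */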
==
fix uninitialized variable warning in reader locks
Approved by: re (marius) |
#
329380 |
|
16-Feb-2018 |
mjg |
MFC r327875,r327905,r327914:
mtx: use fcmpset to cover setting MTX_CONTESTED
===
rwlock: try regular read unlock even in the hard path
Saves on turnstile trips if the lock got more readers.
===
sx: retry hard shared unlock just like in r327905 for rwlocks |
#
327478 |
|
02-Jan-2018 |
mjg |
MFC r324335,r327393,r327397,r327401,r327402:
locks: take the number of readers into account when waiting
Previous code would always spin once before checking the lock. But a lock with e.g. 6 readers is not going to become free in the duration of one spin even if they start draining immediately.
Conservatively perform one spin for each reader.
Note that the total number of allowed spins is still extremely small and is subject to change later.
=============
rwlock: tidy up __rw_runlock_hard similarly to r325921
=============
sx: read the SX_NOADAPTIVE flag and Giant ownership only once
These used to be read multiple times when waiting for the lock to become free, which had the potential to issue completely avoidable traffic.
=============
locks: re-check the reason to go to sleep after locking sleepq/turnstile
In both rw and sx locks we always go to sleep if the lock owner is not running.
We do spin for some time if the lock is read-locked.
However, if we decide to go to sleep due to the lock owner being off cpu, and after the sleepq/turnstile gets acquired the lock is read-locked, we should fall back to the aforementioned wait.
=============
sx: fix up non-smp compilation after r327397
=============
locks: adjust loop limit check when waiting for readers
The check was for the exact value, but since the counter started being incremented by the number of readers it could have jumped past it.
=============
Return a non-NULL owner only if the lock is exclusively held in owner_sx().
Fix some whitespace bugs while here. |
#
327413 |
|
31-Dec-2017 |
mjg |
MFC r320561,r323236,r324041,r324314,r324609,r324613,r324778,r324780,r324787, r324803,r324836,r325469,r325706,r325917,r325918,r325919,r325920,r325921, r325922,r325925,r325963,r326106,r326107,r326110,r326111,r326112,r326194, r326195,r326196,r326197,r326198,r326199,r326200,r326237:
rwlock: perform the typically false td_rw_rlocks check later
Check if the lock is available first instead.
=============
Sprinkle __read_frequently on a few obvious places.
Note that some of the annotated variables should probably change their types to something smaller, preferably bit-sized.
=============
mtx: drop the tid argument from _mtx_lock_sleep
tid must be equal to curthread and the target routine was already reading it anyway, which is not a problem. Not passing it as a parameter allows for a little bit shorter code in callers.
=============
locks: partially tidy up waiting on readers
Spin first instead of instantly re-reading, and don't re-read after spinning is finished - the state is already known.
Note the code is subject to significant changes later.
=============
locks: take the number of readers into account when waiting
Previous code would always spin once before checking the lock. But a lock with e.g. 6 readers is not going to become free in the duration of one spin even if they start draining immediately.
Conservatively perform one spin for each reader.
Note that the total number of allowed spins is still extremely small and is subject to change later.
=============
mtx: change MTX_UNOWNED from 4 to 0
The value is spread all over the kernel and zeroing a register is cheaper/shorter than setting it up to an arbitrary value.
Reduces amd64 GENERIC-NODEBUG .text size by 0.4%.
=============
mtx: fix up owner_mtx after r324609
Now that MTX_UNOWNED is 0 the test was always false.
=============
mtx: clean up locking spin mutexes
1) Shorten the fast path by pushing the lockstat probe to the slow path.
2) Test for kernel panic only after it turns out we will have to spin; in particular, test only after we know we are not recursing.
=============
mtx: stop testing SCHEDULER_STOPPED in kabi funcs for spin mutexes
There is nothing panic-breaking to do in the unlock case, and the lock case will fall back to the slow path, which does the check already.
=============
rwlock: reduce lockstat branches in the slowpath
=============
mtx: fix up UP build after r324778
=============
mtx: implement thread lock fastpath
=============
rwlock: fix up compilation without KDTRACE_HOOKS after r324787
=============
rwlock: use fcmpset for setting RW_LOCK_WRITE_SPINNER
=============
sx: avoid branches in the slow path if lockstat is disabled
=============
rwlock: avoid branches in the slow path if lockstat is disabled
=============
locks: pull up PMC_SOFT_CALLs out of slow path loops
=============
mtx: unlock before traversing threads to wake up
This shortens the lock hold time while not affecting correctness. All the woken-up threads end up competing and can lose the race against a completely unrelated thread getting the lock anyway.
=============
rwlock: unlock before traversing threads to wake up
While here perform a minor cleanup of the unlock path.
=============
sx: perform a minor cleanup of the unlock slowpath
No functional changes.
=============
mtx: add missing parts of the diff in r325920
Fixes build breakage.
=============
locks: fix compilation issues without SMP or KDTRACE_HOOKS
=============
locks: remove the file + line argument from internal primitives when not used
The pair is of use only in debug or LOCKPROF kernels, but was passed (zeroed) for many locks even in production kernels.
While here whack the tid argument from wlock hard and xlock hard.
There is no kbi change of any sort - "external" primitives still accept the pair.
=============
locks: pass the found lock value to unlock slow path
This avoids an explicit read later.
While here whack the cheaply obtainable 'tid' argument.
=============
rwlock: don't check for curthread's read lock count in the fast path
=============
rwlock: unbreak WITNESS builds after r326110
=============
sx: unbreak debug after r326107
An assertion was modified to use the found value, but it was not updated to handle a race where blocked threads appear after the entrance to the func.
Move the assertion down to the area protected with sleepq lock where the lock is read anyway. This does not affect coverage of the assertion and is consistent with what rw locks are doing.
=============
rwlock: stop re-reading the owner when going to sleep
=============
locks: retry turnstile/sleepq loops on failed cmpset
In order to go to sleep threads set waiter flags, but that can spuriously fail e.g. when a new reader arrives. Instead of unlocking everything and looping back, re-evaluate the new state while still holding the lock necessary to go to sleep.
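Sketched for the sx exclusive-waiter case (sleepq_lock()/sleepq_release() and atomic_fcmpset_ptr() are real KPIs; the surrounding logic is illustrative):

    sleepq_lock(&sx->lock_object);
    for (;;) {
        uintptr_t x = SX_READ_VALUE(sx);
        if (should_retry_instead_of_sleeping(x)) {  /* hypothetical predicate */
            sleepq_release(&sx->lock_object);
            break;                  /* loop back to the spin/acquire path */
        }
        /*
         * Setting the waiters flag can fail spuriously, e.g. when a new
         * reader arrives.  On failure x holds the fresh value, so
         * re-evaluate it while still holding the sleepq lock instead of
         * unlocking everything and starting over.
         */
        if (atomic_fcmpset_ptr(&sx->sx_lock, &x,
            x | SX_LOCK_EXCLUSIVE_WAITERS)) {
            /* Flag is set; safe to sleepq_add() and go to sleep. */
            break;
        }
    }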
=============
sx: change sunlock to wake waiters up if it locked sleepq
sleepq is only locked if the curthread is the last reader. By the time the lock gets acquired new ones could have arrived. The previous code would unlock and loop back. This results in spurious relocking of sleepq.
This is a step towards xadd-based unlock routine.
=============
rwlock: add __rw_try_{r,w}lock_int
=============
rwlock: fix up compilation of the previous change
committed the wrong version of the patch
=============
Convert in-kernel thread_lock_flags calls to thread_lock when debug is disabled
The flags argument is not used in this case.
=============
Add the missing lockstat check for thread lock.
=============
rw: fix runlock_hard when new readers show up
When waiters/writer spinner flags are set no new readers can show up unless they already have a different rw lock read-locked. The change in r326195 failed to take that into account - in the presence of new readers it would spin until they all drain, which would lead to trouble if e.g. they go off cpu and cannot get scheduled because of this thread. |
#
327409 |
|
31-Dec-2017 |
mjg |
MFC r323235,r323236,r324789,r324863:
Introduce __read_frequently
While __read_mostly groups variables together, their placement is not specified. In particular, 2 frequently used variables can end up in different cache lines.
This annotation is only expected to be used for variables read all the time, e.g. on each syscall entry.
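Usage mirrors __read_mostly; a hypothetical example:

    /*
     * Read on every syscall entry; __read_frequently packs such variables
     * into a dedicated section so they share as few cache lines as possible.
     */
    static bool hot_feature_enabled __read_frequently = true;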
=============
Sprinkle __read_frequently on a few obvious places.
Note that some of the annotated variables should probably change their types to something smaller, preferably bit-sized.
=============
Mark kdb_active as __read_frequently and switch to bool to eat less space.
=============
Change kdb_active type to u_char.
Fixes warnings from gcc and keeps the small size. Perhaps nesting should be moved to another variable. |
#
326533 |
|
04-Dec-2017 |
markj |
MFC r326175, r326176: Lockstat fixes for sx locks. |
#
320241 |
|
22-Jun-2017 |
markj |
MFC r320124: Fix the !TD_IS_IDLETHREAD(curthread) locking assertions.
Approved by: re (kib) |
#
315394 |
|
16-Mar-2017 |
mjg |
MFC r313855,r313865,r313875,r313877,r313878,r313901,r313908,r313928,r313944,r314185,r314476,r314187:
locks: let primitives for modules unlock without always going to the slow path
It is only needed if LOCK_PROFILING is enabled. It has to always check if the lock is about to be released, which requires an avoidable read if the option is not specified.
==
sx: fix compilation on UP kernels after r313855
sx primitives use inlines as opposed to macros. Change the tested condition to LOCK_DEBUG which covers the case, but is slightly overzealous.
==
locks: clean up trylock primitives
In particular this reduces accesses of the lock itself.
==
mtx: plug the 'opts' argument when not used
==
sx: fix mips build after r313855
The namespace in this file really needs cleaning up. In the meantime let inline primitives be defined as long as LOCK_DEBUG is not enabled.
Reported by: kib
==
mtx: get rid of file/line args from slow paths if they are unused
This denotes changes which went in by accident in r313877.
On most production kernels both said parameters are zeroed and have nothing reading them in either __mtx_lock_sleep or __mtx_unlock_sleep. Thus this change stops passing them for internal consumers where this is the case.
Kernel modules use _flags variants which are not affected kbi-wise.
==
mtx: restrict r313875 to kernels without LOCK_PROFILING
==
mtx: microoptimize lockstat handling in __mtx_lock_sleep
This saves a function call and multiple branches after the lock is acquired. |
#
315386 |
|
16-Mar-2017 |
mjg |
MFC r313853,r313859:
locks: remove SCHEDULER_STOPPED checks from primitives for modules
They all fall back to the slow path if necessary, and the check is there.
This means a panicked kernel executing code from modules will be able to succeed in doing the actual lock/unlock, but this was already the case for core code which has said primitives inlined.
==
Introduce SCHEDULER_STOPPED_TD for use when the thread pointer was already read
Sprinkle it in a few places. |
#
315382 |
|
16-Mar-2017 |
mjg |
MFC r313467:
locks: tidy up unlock fallback paths
Update comments to note these functions are reachable if lockstat is enabled.
Check if the lock has any bits set before attempting unlock, which saves an unnecessary atomic operation. |
#
315381 |
|
16-Mar-2017 |
mjg |
MFC r313455:
sx: implement slock/sunlock fast path
See r313454. |
#
315378 |
|
16-Mar-2017 |
mjg |
MFC r313275,r313280,r313282,r313335:
mtx: move lockstat handling out of inline primitives
Lockstat requires checking if it is enabled and if so, calling a 6 argument function. Further, determining whether to call it on unlock requires pre-reading the lock value.
This is problematic in at least 3 ways:
- more branches in the hot path than necessary
- additional cacheline ping pong under contention
- bigger code
Instead, check first if lockstat handling is necessary and if so, just fall back to regular locking routines. For this purpose a new macro is introduced (LOCKSTAT_PROFILE_ENABLED).
LOCK_PROFILING uninlines all primitives. Fold the current inline lock variant into _mtx_lock_flags to retain the support. With this change the inline variants are not used when LOCK_PROFILING is defined and thus can ignore its existence.
This results in:
    text    data     bss      dec     hex filename
22259667 1303208 4994976 28557851 1b3c21b kernel.orig
21797315 1303208 4994976 28095499 1acb40b kernel.patched
i.e. about 3% reduction in text size.
A remaining action is to remove spurious arguments for internal kernel consumers.
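The resulting fast-path shape, sketched (the probe name and the slow-path entry point are illustrative):

    static __inline void
    mtx_lock_sketch(struct mtx *m)
    {
        uintptr_t tid = (uintptr_t)curthread;

        /*
         * If a lockstat probe is enabled, skip the inline entirely and
         * let the slow path do the probe bookkeeping.
         */
        if (LOCKSTAT_PROFILE_ENABLED(adaptive__acquire) ||
            !atomic_cmpset_acq_ptr(&m->mtx_lock, MTX_UNOWNED, tid))
            mtx_lock_slow_path(m, tid);     /* hypothetical slow-path entry */
    }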
==
sx: move lockstat handling out of inline primitives
See r313275 for details.
==
rwlock: move lockstat handling out of inline primitives
See r313275 for details.
One difference here is that recursion handling was removed from the fallback routine. As it is, it was never supposed to see a recursed lock in the first place. Future changes will move it out of inline variants, but right now there is no easy way to test if the lock is recursed without reading additional words.
==
locks: fix recursion support after recent changes
When a relevant lockstat probe is enabled the fallback primitive is called with a constant signifying a free lock. This works fine for typical cases but breaks with recursion, since it checks if the passed value is that of the executing thread.
Read the value if necessary. |
#
315377 |
|
16-Mar-2017 |
mjg |
MFC r313269,r313270,r313271,r313272,r313274,r313278,r313279,r313996,r314474
mtx: switch to fcmpset
The found value is passed to locking routines in order to reduce cacheline accesses.
mtx_unlock grows an explicit check for regular unlock. On ll/sc architectures the routine can fail even if the lock could have been handled by the inline primitive.
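The change in loop shape, sketched with the atomic(9) KPIs:

    /* cmpset: on failure the caller must explicitly re-read the lock word. */
    for (;;) {
        if (atomic_cmpset_acq_ptr(&m->mtx_lock, MTX_UNOWNED, tid))
            break;
        v = m->mtx_lock;    /* extra cacheline access */
        /* inspect v, spin, etc. */
    }

    /*
     * fcmpset: on failure the found value is written back into v,
     * so no separate re-read is needed.
     */
    v = MTX_UNOWNED;
    while (!atomic_fcmpset_acq_ptr(&m->mtx_lock, &v, tid)) {
        /* v already holds the freshly observed lock word */
        /* inspect v, spin, etc. */
    }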
==
rwlock: switch to fcmpset
==
sx: switch to fcmpset
==
sx: uninline slock/sunlock
Shared locking routines explicitly read the value and test it. If the change attempt fails, they fall back to a regular function which would retry in a loop.
The problem is that with many concurrent readers the risk of failure is pretty high and even the value returned by fcmpset is very likely going to be stale by the time the loop in the fallback routine is reached.
Uninline said primitives. It gives a throughput increase when doing concurrent slocks/sunlocks with 80 hardware threads from ~50 mln/s to ~56 mln/s.
Interestingly, rwlock primitives are already not inlined.
==
sx: add witness support missed in r313272
==
mtx: fix up _mtx_obtain_lock_fetch usage in thread lock
Since _mtx_obtain_lock_fetch no longer sets the argument to MTX_UNOWNED, callers have to do it on their own.
==
mtx: fixup r313278, the assignment was supposed to go inside the loop
==
mtx: fix spin mutexes interaction with failed fcmpset
While doing so move recursion support down to the fallback routine.
==
locks: ensure proper barriers are used with atomic ops when necessary
Unclear how, but the locking routine for mutexes was using the *release* barrier instead of acquire. This must have been either a copy-pasto or bad completion.
Going through other uses of atomics shows no barriers in:
- upgrade routines (addressed in this patch)
- sections protected with turnstile locks - this should be fine as necessary barriers are in the worst case provided by turnstile unlock
I would like to thank Mark Millard and andreast@ for reporting the problem and testing previous patches before the issue got identified. |
#
315341 |
|
16-Mar-2017 |
mjg |
MFC r311172,r311194,r311226,r312389,r312390:
mtx: reduce lock accesses
Instead of spuriously re-reading the lock value, read it once.
This change also has a side effect of fixing a performance bug: on failed _mtx_obtain_lock, it was possible that re-read would find the lock is unowned, but in this case the primitive would make a trip through turnstile code.
This is diff reduction to a variant which uses atomic_fcmpset.
==
Reduce lock accesses in thread lock similarly to r311172
==
mtx: plug open-coded mtx_lock access missed in r311172
==
rwlock: reduce lock accesses similarly to r311172
==
sx: reduce lock accesses similarly to r311172 |
#
315339 |
|
16-Mar-2017 |
mjg |
MFC r312890,r313386,r313390:
Sprinkle __read_mostly on backoff and lock profiling code.
==
locks: change backoff to exponential
Previous implementation would use a random factor to spread readers and reduce chances of starvation. This visibly reduces effectiveness of the mechanism.
Switch to the more traditional exponential variant. Try to limit starvation by imposing an upper limit of spins after which spinning is half of what other threads get. Note the mechanism is turned off by default.
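A minimal sketch of the exponential variant (the production code lives in lock_delay(); the names here are illustrative):

    u_int delay = base_delay;
    for (;;) {
        if (try_acquire(lock))      /* hypothetical acquire attempt */
            break;
        for (u_int i = 0; i < delay; i++)
            cpu_spinwait();
        if (delay < max_delay)
            delay <<= 1;            /* exponential growth, capped */
    }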
==
locks: follow up r313386
Unfinished diff was committed by accident. The loop in lock_delay was changed to decrement, but the loop iterator was still incrementing. |
#
303953 |
|
11-Aug-2016 |
mjg |
MFC r303562,303563,r303584,r303643,r303652,r303655,r303707:
rwlock: s/READER/WRITER/ in wlock lockstat annotation
==
sx: increment spin_cnt before cpu_spinwait in xlock
The change is a no-op only done for consistency with the rest of the file.
==
locks: change sleep_cnt and spin_cnt types to u_int
Both variables are uint64_t, but they only count spins or sleeps. All reasonable values which we can get here comfortably fit in 32-bit range.
==
Implement trivial backoff for locking primitives.
All current spinning loops retry an atomic op the first chance they get, which leads to performance degradation under load.
One classic solution to the problem consists of delaying the test to an extent. This implementation has a trivial linear increment and a random factor for each attempt.
For simplicity, this first-touch implementation only modifies spinning loops where the lock owner is running. Spin mutexes and thread lock were not modified.
Current parameters are autotuned on boot based on mp_cpus.
Autotune factors are very conservative and are subject to change later.
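A sketch of that first-touch scheme (the pseudorandom source and variable names are illustrative):

    /* Trivial linear increment plus a random factor to spread the spinners. */
    delay += backoff_step;
    spins = delay + (prng_next(&seed) % delay);    /* hypothetical PRNG */
    while (spins-- > 0)
        cpu_spinwait();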
==
locks: fix up ifdef guards introduced in r303643
Both sx and rwlocks had copy-pasted ADAPTIVE_MUTEXES instead of the correct define.
==
locks: fix compilation for KDTRACE_HOOKS && !ADAPTIVE_* case
==
locks: fix sx compilation on mips after r303643
The kernel.h header is required for the SYSINIT macro, which apparently was present on amd64 by accident.
Approved by: re (gjb) |
#
302408 |
|
08-Jul-2016 |
gjb |
Copy head@r302406 to stable/11 as part of the 11.0-RELEASE cycle. Prune svn:mergeinfo from the new branch, as nothing has been merged here.
Additional commits post-branch will follow.
Approved by: re (implicit) Sponsored by: The FreeBSD Foundation |
#
301157 |
|
01-Jun-2016 |
mjg |
Microoptimize locking primitives by avoiding unnecessary atomic ops.
Inline versions of primitives do an atomic op, and if it fails they fall back to the actual primitives, which immediately retry the atomic op.
The obvious optimisation is to check if the lock is free and only then proceed to do an atomic op.
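This is the classic test-and-test-and-set shape; a sketch (the slow-path entry point is hypothetical):

    /* Before: the inline always dirties the cache line with an atomic op. */
    if (!atomic_cmpset_acq_ptr(&m->mtx_lock, MTX_UNOWNED, tid))
        mtx_lock_slow_path(m, tid);

    /* After: read first; only attempt the atomic op when it can succeed. */
    if (m->mtx_lock != MTX_UNOWNED ||
        !atomic_cmpset_acq_ptr(&m->mtx_lock, MTX_UNOWNED, tid))
        mtx_lock_slow_path(m, tid);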
Reviewed by: jhb, vangyzen
|
#
286166 |
|
02-Aug-2015 |
markj |
Don't modify curthread->td_locks unless INVARIANTS is enabled.
This field is only used in a KASSERT that verifies that no locks are held when returning to user mode. Moreover, the td_locks accounting is only correct when LOCK_DEBUG > 0, which is implied by INVARIANTS.
Reviewed by: jhb MFC after: 1 week Differential Revision: https://reviews.freebsd.org/D3205
|
#
285704 |
|
19-Jul-2015 |
markj |
Consistently use a reader/writer flag for lockstat probes in rwlock(9) and sx(9), rather than using the probe function name to determine whether a given lock is a read lock or a write lock. Update lockstat(1) accordingly.
|
#
285703 |
|
19-Jul-2015 |
markj |
Implement the lockstat provider using SDT(9) instead of the custom provider in lockstat.ko. This means that lockstat probes now have typed arguments and will utilize SDT probe hot-patching support when it arrives.
Reviewed by: gnn Differential Revision: https://reviews.freebsd.org/D2993
|
#
285664 |
|
18-Jul-2015 |
markj |
Pass the lock object to lockstat_nsecs() and return immediately if LO_NOPROFILE is set. Some timecounter handlers acquire a spin mutex, and we don't want to recurse if lockstat probes are enabled.
PR: 201642 Reviewed by: avg MFC after: 3 days
|
#
284297 |
|
12-Jun-2015 |
avg |
several lockstat improvements
0. For spin events report time spent spinning, not a loop count. While loop count is much easier and cheaper to obtain, it is hard to reason about the reported numbers, especially for adaptive locks where both spinning and sleeping can happen. So, it's better to compare apples to apples.
1. Teach lockstat about FreeBSD rw locks. This is done in part by changing the corresponding probes and in part by changing what probes lockstat should expect.
2. Teach lockstat that rw locks are adaptive and can spin on FreeBSD.
3. Report lock acquisition events for successful rw try-lock operations.
4. Teach lockstat about FreeBSD sx locks. Reporting of events for those locks completely mirrors rw locks.
5. Report spin and block events before acquisition event. This is behavior documented for the upstream, so it makes sense to stick to it. Note that because of FreeBSD adaptive lock implementations both the spin and block events may be reported for the same acquisition while the upstream reports only one of them.
Differential Revision: https://reviews.freebsd.org/D2727 Reviewed by: markj MFC after: 17 days Relnotes: yes Sponsored by: ClusterHQ
|
#
275751 |
|
13-Dec-2014 |
dchagin |
Add _NEW flag to mtx(9), sx(9), rmlock(9) and rwlock(9). A _NEW flag is passed to _init_flags() to avoid the check for double-init.
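Example usage for the mutex variant (the sx, rw, and rm flags are spelled SX_NEW, RW_NEW, and RM_NEW):

    struct mtx m;
    /*
     * The backing memory may contain stale data; MTX_NEW tells
     * mtx_init() to skip the double-initialization check.
     */
    mtx_init(&m, "example lock", NULL, MTX_DEF | MTX_NEW);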
Differential Revision: https://reviews.freebsd.org/D1208 Reviewed by: jhb, wblock MFC after: 1 Month
|
#
274092 |
|
04-Nov-2014 |
jhb |
Add a new thread state "spinning" to schedgraph and add tracepoints at the start and stop of spinning waits in lock primitives.
|
#
258541 |
|
25-Nov-2013 |
attilio |
- For kernels compiled only with KDTRACE_HOOKS and without any lock debugging option, unbreak the lock tracing release semantics by embedding calls to LOCKSTAT_PROFILE_RELEASE_LOCK() directly in the inlined version of the releasing functions for mutex, rwlock and sxlock. Failing to do so skips the lockstat_probe_func invocation for unlocking.
- As part of the LOCKSTAT support inlined in mutex operations, for kernels compiled without lock debugging options, potentially every consumer must be compiled including opt_kdtrace.h. Fix this by moving KDTRACE_HOOKS into opt_global.h and removing the dependency on opt_kdtrace.h for all files, as now only KDTRACE_FRAMES is linked there and it is only used as a compile-time stub [0].
[0] immediately shows a new bug, as DTrace-derived support for debug in sfxge is broken and was never really tested. As it was not correctly including opt_kdtrace.h before, it was never enabled, so it was kept broken for a while. Fix this by using a protection stub, leaving sfxge driver authors the responsibility for fixing it appropriately [1].
Sponsored by: EMC / Isilon storage division Discussed with: rstone [0] Reported by: rstone [1] Discussed with: philip
|
#
255788 |
|
22-Sep-2013 |
davide |
Consistently use the same value to indicate exclusively-held and shared-held locks for all the primitives in lc_lock/lc_unlock routines. This fixes the problems introduced in r255747, which indeed introduced an inversion in the logic.
Reported by: many Tested by: bdrewery, pho, lme, Adam McDougall, O. Hartmann Approved by: re (glebius)
|
#
255745 |
|
20-Sep-2013 |
davide |
Fix lc_lock/lc_unlock() support for rmlocks held in shared mode. With the current lock classes KPI it was really difficult because there was no way to pass an rmtracker object to the lock/unlock routines. In order to accomplish the task, modify the aforementioned functions so that they can return (or pass as argument) a uintptr_t, which in the rm case is used to hold a pointer to struct rm_priotracker for the current thread. As an added bonus, this fixes rm_sleep() in the rm shared case, which can now communicate the priotracker structure between lc_unlock()/lc_lock().
Suggested by: jhb Reviewed by: jhb Approved by: re (delphij)
|
#
252212 |
|
25-Jun-2013 |
jhb |
A few mostly cosmetic nits to aid in debugging: - Call lock_init() first before setting any lock_object fields in lock init routines. This way if the machine panics due to a duplicate init the lock's original state is preserved. - Somewhat similarly, don't decrement td_locks and td_slocks until after an unlock operation has completed successfully.
|
#
244582 |
|
22-Dec-2012 |
attilio |
Fixup r240424: On entering KDB backends, the hijacked thread to run interrupt context can still be idlethread. At that point, without the panic condition, it can still happen that idlethread then will try to acquire some locks to carry on some operations.
Skip the idlethread check on block/sleep lock operations when KDB is active.
Reported by: jh Tested by: jh MFC after: 1 week
|
#
240475 |
|
13-Sep-2012 |
attilio |
Remove all the checks on curthread != NULL with the exception of some MD trap checks (eg. printtrap()).
Generally this check is not needed anymore, as there is no legitimate case where curthread == NULL after the pcpu 0 area has been properly initialized.
Reviewed by: bde, jhb MFC after: 1 week
|
#
240424 |
|
12-Sep-2012 |
attilio |
Improve check coverage about idle threads.
Idle threads are not allowed to acquire any lock but spinlocks. Deny any attempt to do so by panicking at the locking operation when INVARIANTS is on. Then, remove the check on blocking on a turnstile. The check in sleepqueues is left because they are not allowed to use tsleep() either, which could still happen.
Reviewed by: bde, jhb, kib MFC after: 1 week
|
#
233628 |
|
28-Mar-2012 |
fabient |
Add software PMC support.
New kernel events can be added at various locations for sampling or counting. This will for example allow easy system profiling, whatever the processor is, with known tools like pmcstat(8).
Simultaneous usage of software PMC and hardware PMC is possible, for example looking at lock acquire failures or page faults while sampling on instructions.
Sponsored by: NETASQ MFC after: 1 month
|
#
228433 |
|
12-Dec-2011 |
avg |
put sys/systm.h at its proper place or add it if missing
Reported by: lstewart, tinderbox Pointyhat to: avg, attilio MFC after: 1 week MFC with: r228430
|
#
228424 |
|
11-Dec-2011 |
avg |
panic: add a switch and infrastructure for stopping other CPUs in SMP case
Historical behavior of letting other CPUs merrily go on is the default for the time being. The new behavior can be switched on via the kern.stop_scheduler_on_panic tunable and sysctl.
Stopping of the CPUs has (at least) the following benefits:
- more of the system state at panic time is preserved intact
- threads and interrupts do not interfere with dumping of the system state
Only one thread runs uninterrupted after panic if stop_scheduler_on_panic is set. That thread might call code that is also used in normal context and that code might use locks to prevent concurrent execution of certain parts. Those locks might be held by the stopped threads and would never be released. To work around this issue, it was decided that instead of explicit checks for panic context, we would rather put those checks inside the locking primitives.
This change has substantial portions written and re-written by attilio and kib at various times. Other changes are heavily based on the ideas and patches submitted by jhb and mdf. bde has provided many insights into the details and history of the current code.
The new behavior may cause problems for systems that use a USB keyboard for interfacing with system console. This is because of some unusual locking patterns in the ukbd code which have to be used because on one hand ukbd is below syscons, but on the other hand it has to interface with other usb code that uses regular mutexes/Giant for its concurrency protection. Dumping to USB-connected disks may also be affected.
PR: amd64/139614 (at least) In cooperation with: attilio, jhb, kib, mdf Discussed with: arch@, bde Tested by: Eugene Grosbein <eugen@grosbein.net>, gnn, Steven Hartland <killing@multiplay.co.uk>, glebius, Andrew Boyer <aboyer@averesystems.com> (various versions of the patch) MFC after: 3 months (or never)
|
#
227788 |
|
21-Nov-2011 |
attilio |
Introduce the same mutex-wise fix in r227758 for sx locks.
The functions that offer file and line specifications are:
- sx_assert_
- sx_downgrade_
- sx_slock_
- sx_slock_sig_
- sx_sunlock_
- sx_try_slock_
- sx_try_xlock_
- sx_try_upgrade_
- sx_unlock_
- sx_xlock_
- sx_xlock_sig_
- sx_xunlock_
Now vm_map locking is fully converted and can avoid knowing specifics about locking procedures. Reviewed by: kib MFC after: 1 month
|
#
227588 |
|
16-Nov-2011 |
pjd |
Constify arguments for locking KPIs where possible.
This enables locking consumers to pass their own structures around as const and be able to assert locks embedded into those structures.
Reviewed by: ed, kib, jhb
|
#
227309 |
|
07-Nov-2011 |
ed |
Mark all SYSCTL_NODEs static that have no corresponding SYSCTL_DECLs.
The SYSCTL_NODE macro defines a list that stores all child-elements of that node. If there's no SYSCTL_DECL macro anywhere else, there's no reason why it shouldn't be static.
|
#
219819 |
|
21-Mar-2011 |
jeff |
- Merge changes to the base system to support OFED. These include a wider arg2 for sysctl, updates to vlan code, IFT_INFINIBAND, and other miscellaneous small features.
|
#
217326 |
|
12-Jan-2011 |
mdf |
sysctl(9) cleanup checkpoint: amd64 GENERIC builds cleanly.
Commit the kernel changes.
|
#
217265 |
|
11-Jan-2011 |
jhb |
Remove unneeded includes of <sys/linker_set.h>. Other headers that use it internally contain nested includes.
Reviewed by: bde
|
#
208912 |
|
08-Jun-2010 |
jhb |
Fix a sign bug that caused adaptive spinning in sx_xlock() to not work properly. Among other things it did not drop Giant while spinning leading to livelocks.
Reviewed by: rookie, kib, jmallett MFC after: 3 days
|
#
200447 |
|
12-Dec-2009 |
attilio |
In the current code, threads performing an interruptible sleep (on both sxlock, via the sx_{s, x}lock_sig() interface, or plain lockmgr) will leave the waiters flag on, forcing the owner to do a wakeup even when the waiters queue is empty. That operation may lead to a deadlock in the case of doing a fake wakeup on the "preferred" (based on the wakeup algorithm) queue while the other queue has real waiters on it, because nobody is going to wake up the 2nd queue waiters and they will sleep indefinitely.
A similar bug is present for lockmgr in the case the waiters are sleeping with LK_SLEEPFAIL on. In this case, even if the waiters queue is not empty, the waiters won't progress after being awakened but will just fail, still not taking care of the 2nd queue waiters (as the lock owner doing the wakeup would instead expect).
In order to fix this bug in a cheap way (without adding too much locking and complicating the semantics too much), add a sleepqueue interface which reports the actual number of waiters on a specified queue of a waitchannel (sleepq_sleepcnt()) and use it in order to determine if the exclusive waiters (or shared waiters) are actually present on the lockmgr (or sx) before giving them precedence in the wakeup algorithm. This fix alone, however, doesn't solve the LK_SLEEPFAIL bug. In order to cope with it, add tracking of how many exclusive LK_SLEEPFAIL waiters a lockmgr has, and if all the waiters on the exclusive waiters queue are LK_SLEEPFAIL just wake both queues.
The sleepq_sleepcnt() introduction and ABI breakage require __FreeBSD_version bumping.
Reported by: avg, kib, pho Reviewed by: kib Tested by: pho
|
#
197643 |
|
30-Sep-2009 |
attilio |
When releasing a read/shared lock we need to use a write memory barrier in order to avoid, on architectures which don't have strongly ordered writes, CPU instruction reordering.
Diagnosed by: fabio Reviewed by: jhb Tested by: Giovanni Trematerra <giovanni dot trematerra at gmail dot com>
|
#
196772 |
|
02-Sep-2009 |
attilio |
Fix some bugs related to adaptive spinning:
In the lockmgr support:
- GIANT_RESTORE() is just called when the sleep finishes, so the current code can end up in a Giant unlock problem. Fix it by calling GIANT_RESTORE() appropriately when needed. Note that this is not exactly ideal because for any iteration of the adaptive spinning we drop and restore Giant, but the overhead should not be a factor.
- In the lock held in exclusive mode case, after the adaptive spinning is brought to completion, we should just retry to acquire the lock instead of falling through. Fix that.
- Fix a style nit
In the sx support:
- Call GIANT_SAVE() before looping. This saves some overhead because in the current code GIANT_SAVE() is called several times.
Tested by: Giovanni Trematerra <giovanni dot trematerra at gmail dot com>
|
#
196334 |
|
17-Aug-2009 |
attilio |
* Change the scope of the ASSERT_ATOMIC_LOAD() from a generic check to a pointer-fetching specific operation check. Consequently, rename the operation ASSERT_ATOMIC_LOAD_PTR().
* Fix the implementation of ASSERT_ATOMIC_LOAD_PTR() by checking alignment directly on the word boundary, for all the given specific architectures. That's a bit too strict for some common cases, but it assures safety.
* Add a comment explaining the scope of the macro
* Add a new stub in the lockmgr specific implementation
Tested by: marcel (initial version), marius Reviewed by: rwatson, jhb (comment specific review) Approved by: re (kib)
|
#
196226 |
|
14-Aug-2009 |
bz |
Add a new macro to test that a variable could be loaded atomically. Check that the given variable is at most uintptr_t in size and that it is aligned.
Note: ASSERT_ATOMIC_LOAD() uses ALIGN() to check for adequate alignment -- however, the function of ALIGN() is to guarantee alignment, and therefore may lead to stronger alignment enforcement than necessary for types that are smaller than sizeof(uintptr_t).
Add checks to mtx, rw and sx locks init functions to detect possible breakage. This was used during debugging of the problem fixed with r196118 where a pointer was on an un-aligned address in the dpcpu area.
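A sketch of what such a macro checks (illustrative; the committed macro differs in detail and, per the r196334 entry above, was later renamed ASSERT_ATOMIC_LOAD_PTR()):

    #define ASSERT_ATOMIC_LOAD(var)                                 \
        KASSERT(sizeof(var) <= sizeof(uintptr_t) &&                 \
            ALIGN(&(var)) == (uintptr_t)&(var),                     \
            ("%s: %s not loadable atomically", __func__, #var))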
In collaboration with: rwatson Reviewed by: rwatson Approved by: re (kib)
|
#
193307 |
|
02-Jun-2009 |
attilio |
Handle lock recursion differently by always checking against LO_RECURSABLE instead of the lock's own flag itself.
Tested by: pho
|
#
193025 |
|
29-May-2009 |
attilio |
The patch for r193011 was partially rejected when applied, complete it.
|
#
193011 |
|
29-May-2009 |
attilio |
Reverse the logic for ADAPTIVE_SX option and enable it by default. Introduce for this operation the reverse NO_ADAPTIVE_SX option. The flag SX_ADAPTIVESPIN to be passed to sx_init_flags(9) gets suppressed and the new flag, offering the reversed logic, SX_NOADAPTIVE is added.
Additionally, implement adaptive spinning for sx held in shared mode. The spinning limit can be handled through sysctls in order to be tuned while the code doesn't reach the release, after which time they should probably be dropped.
This change has been made necessary by recent benchmarks where it does improve concurrency of workloads in presence of high contention (ie. ZFS).
KPI breakage is documented by __FreeBSD_version bumping, manpage and UPDATING updates.
Requested by: jeff, kmacy Reviewed by: jeff Tested by: pho
|
#
192853 |
|
26-May-2009 |
sson |
Add the OpenSolaris dtrace lockstat provider. The lockstat provider adds probes for mutexes, reader/writer and shared/exclusive locks to gather contention statistics and other locking information for dtrace scripts, the lockstat(1M) command and other potential consumers.
Reviewed by: attilio jhb jb Approved by: gnn (mentor)
|
#
189846 |
|
15-Mar-2009 |
jeff |
- Wrap lock profiling state variables in #ifdef LOCK_PROFILING blocks.
|
#
182914 |
|
10-Sep-2008 |
jhb |
Teach WITNESS about the interlocks used with lockmgr. This removes a bunch of spurious witness warnings since lockmgr grew witness support. Before this, every time you passed an interlock to a lockmgr lock WITNESS treated it as a LOR.
Reviewed by: attilio
|
#
181334 |
|
05-Aug-2008 |
jhb |
If a thread that is swapped out is made runnable, then the setrunnable() routine wakes up proc0 so that proc0 can swap the thread back in. Historically, this has been done by waking up proc0 directly from setrunnable() itself via a wakeup(). When waking up a sleeping thread that was swapped out (the usual case when waking proc0 since only sleeping threads are eligible to be swapped out), this resulted in a bit of recursion (e.g. wakeup() -> setrunnable() -> wakeup()).
With sleep queues having separate locks in 6.x and later, this caused a spin lock LOR (sleepq lock -> sched_lock/thread lock -> sleepq lock). An attempt was made to fix this in 7.0 by making the proc0 wakeup use the ithread mechanism for doing the wakeup. However, this required grabbing proc0's thread lock to perform the wakeup. If proc0 was asleep elsewhere in the kernel (e.g. waiting for disk I/O), then this degenerated into the same LOR since the thread lock would be some other sleepq lock.
Fix this by deferring the wakeup of the swapper until after the sleepq lock held by the upper layer has been locked. The setrunnable() routine now returns a boolean value to indicate whether or not proc0 needs to be woken up. The end result is that consumers of the sleepq API such as *sleep/wakeup, condition variables, sx locks, and lockmgr, have to wakeup proc0 if they get a non-zero return value from sleepq_abort(), sleepq_broadcast(), or sleepq_signal().
Discussed with: jeff Glanced at by: sam Tested by: Jurgen Weber jurgen - ish com au MFC after: 2 weeks
|
#
179025 |
|
15-May-2008 |
attilio |
- Embed the recursion counter for any locking primitive directly in the lock_object, using a unified field called lo_data.
- Replace lo_type usage with the w_name usage and at init time pass the lock "type" directly to witness_init() from the parent lock init function. Handle delayed initialization before witness_initialize() is called through the witness_pendhelp structure.
- Axe out LO_ENROLLPEND as it is not really needed. The case where a delayed mutex init wants to be destroyed can't happen because witness_destroy() checks for witness_cold and panics in that case.
- In enroll(), if we cannot allocate a new object from the freelist, notify userspace through a printf().
- Modify the depart function to return nothing, as in the current CVS version it always returns true, and adjust callers accordingly.
- Fix the witness_addgraph() argument name prototype.
- Remove useless code from itismychild().
This commit leads to a shrunken struct lock_object and so smaller locks, in particular on amd64 where 2 uintptr_t (16 bytes per-primitive) are gained.
Reviewed by: jhb
|
#
177085 |
|
12-Mar-2008 |
jeff |
- Pass the priority argument from *sleep() into sleepq and down into sched_sleep(). This removes extra thread_lock() acquisition and allows the scheduler to decide what to do with the static boost.
- Change the priority arguments to cv_* to match sleepq/msleep/etc. where 0 means no priority change. Catch -1 in cv_broadcastpri() and convert it to 0 for now.
- Set a flag when sleeping in a way that is compatible with swapping since direct priority comparisons are meaningless now.
- Add a sysctl to ule, kern.sched.static_boost, that defaults to on which controls the boost behavior. Turning it off gives better performance in some workloads but needs more investigation.
- While we're modifying sleepq, change signal and broadcast to both return with the lock held as the lock was held on enter.
Reviewed by: jhb, peter
|
#
174629 |
|
15-Dec-2007 |
jeff |
- Re-implement lock profiling in such a way that it no longer breaks the ABI when enabled. There is no longer an embedded lock_profile_object in each lock. Instead a list of lock_profile_objects is kept per-thread for each lock it may own. The cnt_hold statistic is now always 0 to facilitate this.
- Support shared locking by tracking individual lock instances and statistics in the per-thread per-instance lock_profile_object.
- Make the lock profiling hash table a per-cpu singly linked list with a per-cpu static lock_prof allocator. This removes the need for an array of spinlocks and reduces cache contention between cores.
- Use a separate hash for spinlocks and other locks so that only a critical_enter() is required and not a spinlock_enter() to modify the per-cpu tables.
- Count time spent spinning in the lock statistics.
- Remove the LOCK_PROFILE_SHARED option as it is always supported now.
- Specifically drop and release the scheduler locks in both schedulers since we track owners now.
In collaboration with: Kip Macy Sponsored by: Nokia
|
#
173733 |
|
18-Nov-2007 |
attilio |
Expand lock class with the "virtual" function lc_assert which will offer a unified way for all the lock primitives to express lock assertions. Currently, lockmgrs and rmlocks don't have assertions, so just panic in that case. This will be a base for more callout improvements.
Ok'ed by: jhb, jeff
|
#
173600 |
|
14-Nov-2007 |
julian |
Generally we are interested in what thread did something as opposed to what process. Since threads by default have the name of the process unless overwritten with more useful information, just print the thread name instead.
|
#
172416 |
|
02-Oct-2007 |
pjd |
Fix sx_try_slock(), so it only fails when there is an exclusive owner. Before that fix, it was possible for the function to fail if the number of sharers changed between the 'x = sx->sx_lock' step and the atomic_cmpset_acq_ptr() call.
This fixes ZFS problem when ZFS returns strange EIO errors under load. In ZFS there is a code that depends on the fact that sx_try_slock() can only fail if there is an exclusive owner.
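The fixed loop shape, sketched (flag names follow sx.h, but treat this as illustrative):

    for (;;) {
        uintptr_t x = sx->sx_lock;
        if (!(x & SX_LOCK_SHARED))      /* exclusive owner: genuine failure */
            return (0);
        if (atomic_cmpset_acq_ptr(&sx->sx_lock, x, x + SX_ONE_SHARER))
            return (1);
        /*
         * The sharer count changed between the read and the cmpset;
         * retry instead of reporting failure.
         */
    }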
Discussed with: attilio Reviewed by: jhb Approved by: re (kensmith)
|
#
171277 |
|
06-Jul-2007 |
attilio |
Fix some problems with lock_profiling in sx locks:
- Adjust lock_profiling stubs semantics in the hard functions in order to be more accurate and trustworthy
- Disable shared paths for lock_profiling. Actually, lock_profiling has a subtle race which makes results coming from shared paths not completely trustworthy. A macro stub (LOCK_PROFILING_SHARED) can be used for re-enabling these paths, but it is currently intended for developing use only.
- Use homogeneous names for automatic variables in hard functions regarding lock_profiling
- Style fixes
- Add a CTASSERT for some flags building
Discussed with: kmacy, kris Approved by: jeff (mentor) Approved by: re
|
#
170149 |
|
31-May-2007 |
attilio |
Add functions sx_xlock_sig() and sx_slock_sig(). These functions are intended to do the same actions as sx_xlock() and sx_slock() but with the difference of performing an interruptible sleep, so that the sleep can be interrupted by external events. In order to support these new features, some code restructuring is needed, but the external API won't be affected at all.
Note: use a "void" cast for "int"-returning functions in order to prevent tools like Coverity from whining.
Requested by: rwatson Tested by: rwatson Reviewed by: jhb Approved by: jeff (mentor)
|
#
170115 |
|
29-May-2007 |
attilio |
style(9) fixes for sx locks.
Approved by: jeff (mentor)
|
#
170113 |
|
29-May-2007 |
attilio |
Add a small fix for lock profiling in sx locks. "0" cannot be a correct value: when the function is entered at least one shared holder must be present, and since we want the last one, "1" is the correct value. Note that lock_profiling for sx locks is far from being perfect. Expect further fixes for that.
Approved by: jeff (mentor)
|
#
169780 |
|
19-May-2007 |
jhb |
Rename the macros for assertion flags passed to sx_assert() from SX_* to SA_* to match mutexes and rwlocks. The old flags still exist for backwards compatibility.
Requested by: attilio
|
#
169776 |
|
19-May-2007 |
jhb |
Expose sx_xholder() as a public macro. It returns a pointer to the thread that holds the current exclusive lock, or NULL if no thread holds an exclusive lock.
Requested by: pjd
|
#
169774 |
|
19-May-2007 |
jhb |
Oops, didn't include SX_ADAPTIVESPIN in the list of valid flags for the assert in sx_init_flags().
Submitted by: attilio
|
#
169769 |
|
19-May-2007 |
jhb |
Add a new SX_RECURSE flag to make support for recursive exclusive locks conditional. By default, sx(9) locks are back to not supporting recursive exclusive locks.
Submitted by: attilio
|
#
169676 |
|
18-May-2007 |
jhb |
Fix a comment.
|
#
169675 |
|
18-May-2007 |
jhb |
Move lock_profile_object_{init,destroy}() into lock_{init,destroy}().
|
#
169394 |
|
08-May-2007 |
jhb |
Add destroyed cookie values for sx locks and rwlocks as well as extra KASSERTs so that any lock operations on a destroyed lock will panic or hang.
|
#
168333 |
|
04-Apr-2007 |
kmacy |
fix typo
|
#
168332 |
|
04-Apr-2007 |
kmacy |
Style fixes; also make sure that the lock is treated as released in the sharers == 0 case. Note that this is somewhat racy because a new sharer can come in while we're updating stats.
|
#
168330 |
|
03-Apr-2007 |
kmacy |
Fixes to sx for newsx - fix recursed case and move out of inline
Submitted by: Attilio Rao <attilio@freebsd.org>
|
#
168191 |
|
31-Mar-2007 |
jhb |
Optimize sx locks to use simple atomic operations for the common cases of obtaining and releasing shared and exclusive locks. The algorithms for manipulating the lock cookie are very similar to those of rwlocks. This patch also adds support for exclusive locks using the same algorithm as mutexes.
A new sx_init_flags() function has been added so that optional flags can be specified to alter a given lock's behavior. The flags include SX_DUPOK, SX_NOWITNESS, SX_NOPROFILE, and SX_QUIET, which are all identical in nature to the similar flags for mutexes.
Adaptive spinning on select locks may be enabled by enabling the ADAPTIVE_SX kernel option. Only locks initialized with the SX_ADAPTIVESPIN flag via sx_init_flags() will adaptively spin.
The common cases for sx_slock(), sx_sunlock(), sx_xlock(), and sx_xunlock() are now performed inline in non-debug kernels. As a result, <sys/sx.h> now requires <sys/lock.h> to be included prior to <sys/sx.h>.
The new kernel option SX_NOINLINE can be used to disable the aforementioned inlining in non-debug kernels.
The size of struct sx has changed, so the kernel ABI is probably greatly disturbed.
MFC after: 1 month Submitted by: attilio Tested by: kris, pjd
|
#
167787 |
|
21-Mar-2007 |
jhb |
Rename the 'mtx_object', 'rw_object', and 'sx_object' members of mutexes, rwlocks, and sx locks to 'lock_object'.
|
#
167368 |
|
09-Mar-2007 |
jhb |
Add two new function pointers 'lc_lock' and 'lc_unlock' to lock classes. These functions are intended to be used to drop a lock and then reacquire it when doing a sleep such as msleep(9). Both functions accept a 'struct lock_object *' as their first parameter. The 'lc_unlock' function returns an integer that is then passed as the second parameter to the subsequent 'lc_lock' function. This can be used to communicate state. For example, sx locks and rwlocks use this to indicate if the lock was share/read locked vs exclusive/write locked.
Currently, spin mutexes and lockmgr locks do not provide working lc_lock and lc_unlock functions.
|
#
167365 |
|
09-Mar-2007 |
jhb |
Use C99-style struct member initialization for lock classes.
|
#
167163 |
|
02-Mar-2007 |
kmacy |
lock stats updates need to be protected by the lock
|
#
167136 |
|
01-Mar-2007 |
kmacy |
Evidently I've overestimated gcc's ability to peek inside inline functions and optimize away unused stack values. The 48 bytes that the lock_profile_object adds to the stack evidently have a measurable performance impact on certain workloads.
|
#
167054 |
|
27-Feb-2007 |
kmacy |
Further improvements to LOCK_PROFILING:
- Fix missing initialization in kern_rwlock.c causing bogus times to be collected
- Move updates to the lock hash to after the lock is released for spin mutexes, sleep mutexes, and sx locks
- Add new kernel build option LOCK_PROFILE_FAST - only update lock profiling statistics when an acquisition is contended. This reduces the overhead of LOCK_PROFILING to increasing system time by 20%-25%, which on "make -j8 kernel-toolchain" on a dual woodcrest is unmeasurable in terms of wall-clock time. Contrast this to enabling lock profiling without LOCK_PROFILE_FAST, where I see a 5x-6x slowdown in wall-clock time.
|
#
167012 |
|
26-Feb-2007 |
kmacy |
general LOCK_PROFILING cleanup
- only collect timestamps when a lock is contested - this reduces the overhead of collecting profiles from 20x to 5x
- remove unused function from subr_lock.c
- generalize cnt_hold and cnt_lock statistics to be kept for all locks
- NOTE: rwlock profiling generates invalid statistics (and most likely always has); someone familiar with that should review
|
#
164246 |
|
13-Nov-2006 |
kmacy |
track lock class name in a way that doesn't break WITNESS
|
#
164159 |
|
11-Nov-2006 |
kmacy |
MUTEX_PROFILING has been generalized to LOCK_PROFILING. We now profile wait (time waited to acquire) and hold times for *all* kernel locks. If the architecture has a system synchronized TSC, the profiling code will use that - thereby minimizing profiling overhead. Large chunks of profiling code have been moved out of line, the overhead measured on the T1 for when it is compiled in but not enabled is < 1%.
Approved by: scottl (standing in for mentor rwatson) Reviewed by: des and jhb
|
#
161337 |
|
15-Aug-2006 |
jhb |
Add a new 'show sleepchain' ddb command similar to 'show lockchain' except that it operates on lockmgr and sx locks. This can be useful for tracking down vnode deadlocks in VFS for example. Note that this command is a bit more fragile than 'show lockchain' as we have to poke around at the wait channel of a thread to see if it points to either a struct lock or a condition variable inside of a struct sx. If td_wchan points to something unmapped, then this command will terminate early due to a fault, but no harm will be done.
|
#
160771 |
|
27-Jul-2006 |
jhb |
Adjust td_locks for non-spin mutexes, rwlocks, and sx locks so that it is a count of all non-spin locks, not just lockmgr locks. This can give us a much cheaper way to see if we have any locks held (such as when returning to userland via userret()) without requiring WITNESS.
MFC after: 1 week
|
#
154484 |
|
17-Jan-2006 |
jhb |
Add a new file (kern/subr_lock.c) for holding code related to struct lock_object objects:
- Add new lock_init() and lock_destroy() functions to setup and teardown lock_object objects including KTR logging and registering with WITNESS.
- Move all the handling of LO_INITIALIZED out of witness and the various lock init functions into lock_init() and lock_destroy().
- Remove the constants for static indices into the lock_classes[] array and change the code outside of subr_lock.c to use LOCK_CLASS to compare against a known lock class.
- Move the 'show lock' ddb function and lock_classes[] array out of kern_mutex.c over to subr_lock.c.
|
#
154077 |
|
06-Jan-2006 |
jhb |
Trim another pointer from struct lock_object (and thus from struct mtx and struct sx). Instead of storing a direct pointer to our lock_class struct in lock_object, reserve 4 bits in the lo_flags field to serve as an index into a global lock_classes array that contains pointers to the lock classes. Only debugging code such as WITNESS or INVARIANTS checks and KTR logging needs to access the lock_class member, so this shouldn't add any overhead to production kernels. It might add some slight overhead to kernels using those debug options, however.
As with the previous set of changes to lock_object, this is going to completely obliterate the kernel ABI, so be sure to recompile all your modules.
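An illustrative sketch of the packing; the actual bit positions and macro names in sys/lock.h may differ:

    struct lock_class;

    struct lock_object_sketch {
        unsigned lo_flags;                  /* class index lives in 4 bits */
    };

    #define LO_CLASSSHIFT   28
    #define LO_CLASSMASK    (0xfu << LO_CLASSSHIFT) /* up to 16 classes */

    extern struct lock_class *lock_classes[16];     /* global class table */

    /* Resolve a lock's class via the index stored in lo_flags. */
    #define LOCK_CLASS(lo) \
        (lock_classes[((lo)->lo_flags & LO_CLASSMASK) >> LO_CLASSSHIFT])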
|
#
153395 |
|
13-Dec-2005 |
jhb |
Add a new 'show lock' command to ddb. If the argument has a valid lock class, then it displays various information about the lock and calls a new function pointer in lock_class (lc_ddb_show) to dump class-specific information about the lock as well (such as the owner of a mutex or xlock'ed sx lock). This is easier than staring at hex dumps of locks to figure out who owns the lock, etc. Note that extending lock_class doesn't affect the ABI for any kernel modules as the only code that deals with lock_class structures directly is kern_mutex.c, kern_sx.c, and witness.
MFC after: 1 week
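A sketch of the class-method hook described above; only the members mentioned in the message are shown, and the omitted parts are assumed:

    struct lock_object;

    struct lock_class_sketch {
        const char  *lc_name;
        unsigned     lc_flags;
        /* dump class-specific state, e.g. the owner, from ddb */
        void       (*lc_ddb_show)(struct lock_object *lock);
        /* ... lock/unlock methods omitted ... */
    };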
|
#
139804 |
|
06-Jan-2005 |
imp |
/* -> /*- for copyright notices, minor format tweaks as necessary
|
#
126316 |
|
27-Feb-2004 |
jhb |
Fix _sx_assert() to panic() rather than printf() when an assertion fails, and ignore assertions if we have already panicked.
|
#
126003 |
|
19-Feb-2004 |
pjd |
Simplify the check. We can only check the exclusive lock, and if the second condition is true, the first one is certainly true as well.
Approved by: jhb, scottl (mentor)
|
#
125421 |
|
04-Feb-2004 |
pjd |
Allow assert that the current thread does not hold the sx(9) lock.
Reviewed by: jhb In cooperation with: juli, jhb Approved by: jhb, scottl (mentor)
|
#
125160 |
|
28-Jan-2004 |
jhb |
Rework witness_lock() to make it slightly more useful and flexible.
- witness_lock() is split into two pieces: witness_checkorder() and witness_lock(). witness_checkorder() determines if acquiring a specified lock at the time it is called would result in a lock order violation. It optionally adds a new lock order relationship as well. witness_lock() updates witness's data structures to assume that a lock has been acquired by sticking a new lock instance in the appropriate lock instance list.
- The mutex and sx lock functions now call checkorder() prior to trying to acquire a lock and continue to call witness_lock() after the acquire is completed (see the sketch after this list). This will let witness catch a deadlock before it happens rather than trying to do so after the threads have deadlocked (i.e. never actually report it).
- A new function witness_defineorder() has been added that adds a lock order between two locks at runtime without having to acquire the locks. If the lock order cannot be added it will return an error. This function is available to programmers via the WITNESS_DEFINEORDER() macro which accepts either two mutexes or two sx locks as its arguments.
- A few simple wrapper macros were added to allow developers to call witness_checkorder() anywhere as a way of enforcing locking assertions in code that might acquire a certain lock in some situations. The macros are: witness_check_{mutex,shared_sx,exclusive_sx} and take an appropriate lock as the sole argument.
- The code to remove a lock instance from a lock list in witness_unlock() was unnested by using a goto to vastly improve the readability of this function.
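A hedged sketch of the two-phase flow; the prototypes and the flag are simplified stand-ins, and do_acquire() is a placeholder:

    struct lock_object;

    void witness_checkorder(struct lock_object *lo, int flags,
        const char *file, int line);
    void witness_lock(struct lock_object *lo, int flags,
        const char *file, int line);
    void do_acquire(struct lock_object *lo);    /* placeholder acquire */

    #define LOP_EXCL_SKETCH 0x1                 /* illustrative flag */

    static void
    sketch_lock(struct lock_object *lo, const char *file, int line)
    {
        /* may report a would-be deadlock before we block on it */
        witness_checkorder(lo, LOP_EXCL_SKETCH, file, line);
        do_acquire(lo);
        /* record the new lock instance now that the lock is held */
        witness_lock(lo, LOP_EXCL_SKETCH, file, line);
    }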
|
#
117494 |
|
13-Jul-2003 |
truckman |
Extend the mutex pool implementation to permit the creation and use of multiple mutex pools with different options and sizes. Mutex pools can be created with either the default sleep mutexes or with spin mutexes. A dynamically created mutex pool can now be destroyed if it is no longer needed.
Create two pools by default, one that matches the existing pool that uses the MTX_NOWITNESS option that should be used for building higher level locks, and a new pool with witness checking enabled.
Modify the users of the existing mutex pool to use the appropriate pool in the new implementation.
Reviewed by: jhb
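A hedged usage sketch of the pool API described above; the prototypes are paraphrased from the description, so consult sys/mutex.h for the real ones:

    struct mtx_pool;

    struct mtx_pool *mtx_pool_create(const char *name, int size, int opts);
    void             mtx_pool_destroy(struct mtx_pool **poolp);

    static struct mtx_pool *example_pool;

    static void
    example_init(void)
    {
        /* a private pool of 128 sleep mutexes, witness checking on */
        example_pool = mtx_pool_create("example pool", 128, 0);
    }

    static void
    example_fini(void)
    {
        /* dynamically created pools can now be torn down */
        mtx_pool_destroy(&example_pool);
    }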
|
#
116182 |
|
11-Jun-2003 |
obrien |
Use __FBSDID().
|
#
93812 |
|
04-Apr-2002 |
jhb |
Set the lock type equal to the lock name for now, as none of the current sx locks use very specific lock names.
|
#
93672 |
|
02-Apr-2002 |
arr |
- Add MTX_SYSINIT and SX_SYSINIT as macro glue for allowing sx and mtx locks to be able to set up a SYSINIT call. This helps in places where a lock is needed to protect some data, but the data is not truly associated with a subsystem that can properly initialize its lock. The macros use the mtx_sysinit() and sx_sysinit() functions, respectively, as the handler argument to SYSINIT().
Reviewed by: alfred, jhb, smp@
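Typical usage, as a hedged example (see sys/sx.h for the authoritative macro; the lock name is illustrative):

    static struct sx example_lock;
    SX_SYSINIT(example_lock, &example_lock, "example lock");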
|
#
89496 |
|
18-Jan-2002 |
tanimura |
Invert the test of sx_xholder for SX_LOCKED. We need to warn if a thread other than curthread holds an sx.
While I am here, break a line at the end of the warning.
|
#
87594 |
|
10-Dec-2001 |
obrien |
Update to C99, s/__FUNCTION__/__func__/.
|
#
86333 |
|
13-Nov-2001 |
dillon |
Create a mutex pool API for short term leaf mutexes. Replace the manual mutex pool in kern_lock.c (lockmgr locks) with the new API. Replace the mutexes embedded in sxlocks with the new API.
|
#
85412 |
|
24-Oct-2001 |
jhb |
Fix this to actually compile in the !INVARIANTS case.
Reported by: Maxime Henrion <mux@qualys.com>
|
#
85388 |
|
23-Oct-2001 |
jhb |
Change the sx(9) assertion API to use a sx_assert() function similar to mtx_assert(9) rather than several SX_ASSERT_* macros.
|
#
85205 |
|
20-Oct-2001 |
jhb |
The mtx_init() and sx_init() functions bzero'd locks before handing them off to witness_init(), making the check for double-initializing a lock by testing the LO_INITIALIZED flag moot. Work around this by checking the LO_INITIALIZED flag ourselves before we bzero the lock structure.
|
#
83366 |
|
12-Sep-2001 |
julian |
KSE Milestone 2. Note: ALL MODULES MUST BE RECOMPILED. Make the kernel aware that there are smaller units of scheduling than the process (but only allow one thread per process at this time). This is functionally equivalent to the previous -current except that there is a thread associated with each process.
Sorry john! (your next MFC will be a doozy!)
Reviewed by: peter@freebsd.org, dillon@freebsd.org
X-MFC after: ha ha ha ha
|
#
82246 |
|
23-Aug-2001 |
jhb |
Use witness_upgrade/downgrade for sx_try_upgrade/downgrade.
|
#
82212 |
|
23-Aug-2001 |
jhb |
Clear the sx_xholder pointer when downgrading an exclusive lock.
|
#
81599 |
|
13-Aug-2001 |
jasone |
Add sx_try_upgrade() and sx_downgrade().
Submitted by: Alexander Kabaev <ak03@gte.com>
|
#
78872 |
|
27-Jun-2001 |
jhb |
- Add trylock variants of shared and exclusive locks (see the sketch after this list).
- The sx assertions don't actually need the internal sx mutex lock, so don't bother doing so.
- Add a new assertion SX_ASSERT_LOCKED() that asserts that either a shared or exclusive lock should be held. This assertion should be used instead of SX_ASSERT_SLOCKED() in almost all cases.
- Adjust some KASSERT()'s to include file and line information.
- Use the new witness_assert() function in the WITNESS case for sx slock asserts to verify that the current thread actually owns a slock.
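A hedged sketch of the non-blocking pattern the trylock variants enable, assuming the standard sx(9) entry points; example_lock is illustrative:

    static void
    example(struct sx *example_lock)
    {
        if (sx_try_xlock(example_lock)) {
            /* ... operate with the exclusive lock held ... */
            sx_xunlock(example_lock);
        } else {
            /* lock unavailable; proceed without blocking */
        }
    }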
|
#
76272 |
|
04-May-2001 |
jhb |
- Move state about lock objects out of struct lock_object and into a new struct lock_instance that is stored in the per-process and per-CPU lock lists. Previously, the lock lists just kept a pointer to each lock held. That pointer is now replaced by a lock instance which contains a pointer to the lock object, the file and line of the last acquisition of a lock, and various flags about a lock including its recursion count.
- If we sleep while holding a sleepable lock, then mark that lock instance as having slept and ignore any lock order violations that occur while acquiring Giant when we wake up with slept locks. This is ok because of Giant's special nature.
- Allow witness to differentiate between shared and exclusive locks and unlocks of a lock. Witness will now detect the case when a lock is acquired first in one mode and then in another. Mutexes are always locked and unlocked exclusively. Witness will also now detect the case where a process attempts to unlock a shared lock while holding an exclusive lock and vice versa.
- Fix a bug in the lock list implementation where we used the wrong constant to detect the case where a lock list entry was full.
|
#
74912 |
|
28-Mar-2001 |
jhb |
Rework the witness code to work with sx locks as well as mutexes.
- Introduce lock classes and lock objects. Each lock class specifies a name and set of flags (or properties) shared by all locks of a given type. Currently there are three lock classes: spin mutexes, sleep mutexes, and sx locks. A lock object specifies properties of an additional lock along with a lock name and all of the extra stuff needed to make witness work with a given lock. This abstract lock stuff is defined in sys/lock.h. The lockmgr constants, types, and prototypes have been moved to sys/lockmgr.h. For temporary backwards compatibility, sys/lock.h includes sys/lockmgr.h.
- Replace proc->p_spinlocks with a per-CPU list, PCPU(spinlocks), of spin locks held. By making this per-cpu, we do not have to jump through magic hoops to deal with sched_lock changing ownership during context switches.
- Replace proc->p_heldmtx, formerly a list of held sleep mutexes, with proc->p_sleeplocks, which is a list of held sleep locks including sleep mutexes and sx locks.
- Add helper macros for logging lock events via the KTR_LOCK KTR logging level so that the log messages are consistent.
- Add some new flags that can be passed to mtx_init():
  - MTX_NOWITNESS - specifies that this lock should be ignored by witness. This is used for the mutex that blocks an sx lock for example.
  - MTX_QUIET - this is not new, but you can pass this to mtx_init() now and no events will be logged for this lock, so that one doesn't have to change all the individual mtx_lock/unlock() operations.
- All lock objects maintain an initialized flag. Use this flag to export a mtx_initialized() macro that can be safely called from drivers. Also, we no longer walk the all_mtx list if MUTEX_DEBUG is defined, as witness performs the corresponding checks using the initialized flag.
- The lock order reversal messages have been improved to output slightly more accurate file and line numbers.
|
#
73901 |
|
06-Mar-2001 |
jhb |
In order to avoid recursing on the backing mutex for sx locks in the INVARIANTS case, define the actual KASSERT() in _SX_ASSERT_[SX]LOCKED macros that are used in the sx code itself and convert the SX_ASSERT_[SX]LOCKED macros to simple wrappers that grab the mutex for the duration of the check.
|
#
73863 |
|
06-Mar-2001 |
bmilekic |
- Add sx_descr description member to sx lock structure.
- Add sx_xholder member to sx struct which is used for INVARIANTS-enabled assertions. It indicates the thread that presently owns the xlock.
- Add some assertions to the sx lock code that will detect the fatal API abuse:
      xlock --> xlock
      xlock --> slock
  which now works thanks to sx_xholder. Notice that the remaining two problematic cases:
      slock --> xlock
      slock --> slock (a little less problematic, but still recursion)
  will need to be handled by witness eventually, as they are more involved.
Reviewed by: jhb, jake, jasone
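A sketch of the kind of check sx_xholder makes possible (illustrative, not the literal kernel assertion):

    KASSERT(sx->sx_xholder != curthread,
        ("sx lock %s: recursing on an exclusive lock", sx->sx_descr));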
|
#
73782 |
|
05-Mar-2001 |
jasone |
Implement shared/exclusive locks.
Reviewed by: bmilekic, jake, jhb
|
#
341100 |
|
27-Nov-2018 |
vangyzen |
MFC r340409
Make no assertions about lock state when the scheduler is stopped.
Change the assert paths in rm, rw, and sx locks to match the lock and unlock paths. I did this for mutexes in r306346.
Reported by: Travis Lane <tlane@isilon.com> Sponsored by: Dell EMC Isilon
|
#
334437 |
|
31-May-2018 |
mjg |
MFC r329276,r329451,r330294,r330414,r330415,r330418,r331109,r332394,r332398, r333831:
rwlock: diff-reduction of runlock compared to sx sunlock
==
Undo LOCK_PROFILING pessimisation after r313454 and r313455
With the option used to compile the kernel both sx and rw shared ops would always go to the slow path which added avoidable overhead even when the facility is disabled.
Furthermore the increased time spent doing uncontested shared lock acquire would be bogusly added to total wait time, somewhat skewing the results.
Restore old behaviour of going there only when profiling is enabled.
This change is a no-op for kernels without LOCK_PROFILING (which is the default).
==
sx: fix adaptive spinning broken in r327397
The condition was flipped.
In particular heavy multithreaded kernel builds on zfs started suffering due to nested sx locks.
For instance make -s -j 128 buildkernel:
before: 3326.67s user 1269.62s system 6981% cpu 1:05.84 total
after:  3365.55s user  911.27s system 6871% cpu 1:02.24 total
==
locks: fix a corner case in r327399
If there were exactly rowner_retries/asx_retries (by default: 10) transitions between read and write state and the waiters still did not get the lock, the next owner -> reader transition would result in the code correctly falling back to turnstile/sleepq where it would incorrectly think it was waiting for a writer and decide to leave turnstile/sleepq to loop back. From this point it would take ts/sq trips until the lock gets released.
The bug sometimes manifested itself in stalls during -j 128 package builds.
Refactor the code to fix the bug; while here remove some of the gratuitous differences between rw and sx locks.
==
sx: don't do an atomic op in upgrade if it cannot succeed
The code already pays the cost of reading the lock to obtain the waiters flag. Checking whether there is more than one reader is not a problem and avoids dirtying the line.
This also fixes a small corner case: if waiters were to show up between reading the flag and upgrading the lock, the operation would fail even though it should not. No correctness change here though.
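A userspace-flavored sketch of the idea; the bit layout, names, and C11 atomics are assumptions standing in for the kernel's fcmpset-based code:

    #include <stdatomic.h>
    #include <stdbool.h>

    #define SX_READERS_SHIFT    4
    #define SX_FLAGS_MASK       ((1u << SX_READERS_SHIFT) - 1)
    #define SX_XLOCK_BIT        0x1u

    static bool
    try_upgrade(atomic_uint *lockword)
    {
        unsigned x = atomic_load(lockword);

        /*
         * More than one reader: the upgrade cannot succeed, so don't
         * dirty the cache line with a doomed atomic op.
         */
        if ((x >> SX_READERS_SHIFT) != 1)
            return (false);
        /* preserve the waiter flags read in the same pass */
        return (atomic_compare_exchange_strong(lockword, &x,
            SX_XLOCK_BIT | (x & SX_FLAGS_MASK)));
    }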
==
mtx: tidy up recursion handling in thread lock
Normally after grabbing the lock it has to be verified we got the right one to begin with. However, if we are recursing, it must not change, thus the check can be avoided. In particular this avoids a lock read for the non-recursing case which found out the lock was changed.
While here avoid an irq trip if this happens.
==
locks: slightly depessimize lockstat
The slow path is always taken when lockstat is enabled. This induces rdtsc (or other) calls to get the cycle count even when there was no contention.
Still go to the slow path to not mess with the fast path, but avoid the heavy lifting unless necessary.
This reduces sys and real time during -j 80 buildkernel:
before:   3651.84s user 1105.59s system 5394% cpu 1:28.18 total
after:    3685.99s user  975.74s system 5450% cpu 1:25.53 total
disabled: 3697.96s user  411.13s system 5261% cpu 1:18.10 total
So note this is still a significant hit.
LOCK_PROFILING results are not affected.
==
rw: whack avoidable re-reads in try_upgrade
==
locks: extend speculative spin waiting for readers to drain
Now that 10 years have passed since the original limit of 10000 was committed, bump it a little bit.
Spinning waiting for writers is semi-informed in the sense that we always know if the owner is running and base the decision to spin on that. However, no such information is provided for read-locking. In particular this means that it is possible for a write-spinner to completely waste cpu time waiting for the lock to be released, while the reader holding it was preempted and is now waiting for the spinner to go off cpu.
Nonetheless, in majority of cases it is an improvement to spin instead of instantly giving up and going to sleep.
The current approach is pretty simple: snatch the number of current readers and perform that many pauses before checking again. The total number of pauses to execute is limited to 10k. If the lock is still not free by that time, go to sleep.
Given the previously noted problem of not knowing whether spinning makes any sense to begin with, the new limit has to remain rather conservative. But at the very least it should also be related to the machine. Waiting for writers uses parameters selected based on the number of activated hardware threads. The upper limit of pause instructions to be executed in-between re-reads of the lock is typically 16384 or 32768. It was selected as the limit of total spins. The lower bound is set to the already present 10000 so as not to change it for smaller machines.
Bumping the limit reduces system time by a few percent during benchmarks like buildworld, buildkernel and others. Tested on 2 and 4 socket machines (Broadwell, Skylake).
Figuring out how to make a more informed decision while not pessimizing the fast path is left as an exercise for the reader.
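For reference, a compilable userspace sketch of the policy as described; the bit layout, names, and pause primitive are assumptions, not the kernel code:

    #include <stdatomic.h>

    #define READER_MASK 0x7fffffffu
    #define SPIN_LIMIT  10000       /* the historical cap discussed above */

    static inline void
    cpu_pause(void)
    {
    #if defined(__x86_64__) || defined(__i386__)
        __asm__ __volatile__("pause");
    #endif
    }

    /*
     * Returns 1 if the readers drained within the budget,
     * 0 if the caller should go to sleep instead.
     */
    static int
    spin_for_readers(atomic_uint *lockword)
    {
        unsigned spins = 0;

        while (spins < SPIN_LIMIT) {
            unsigned v = atomic_load_explicit(lockword,
                memory_order_relaxed);
            unsigned nreaders = v & READER_MASK;

            if (nreaders == 0)
                return (1);     /* drained: retry the acquire */
            /* snapshot the reader count, pause once per reader */
            for (unsigned i = 0; i < nreaders; i++)
                cpu_pause();
            spins += nreaders;
        }
        return (0);             /* budget exhausted: block */
    }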
==
fix uninitialized variable warning in reader locks
Approved by: re (marius)
|
#
329380 |
|
16-Feb-2018 |
mjg |
MFC r327875,r327905,r327914:
mtx: use fcmpset to cover setting MTX_CONTESTED
===
rwlock: try regular read unlock even in the hard path
Saves on turnstile trips if the lock got more readers.
===
sx: retry hard shared unlock just like in r327905 for rwlocks
|
#
327478 |
|
02-Jan-2018 |
mjg |
MFC r324335,r327393,r327397,r327401,r327402:
locks: take the number of readers into account when waiting
Previous code would always spin once before checking the lock. But a lock with e.g. 6 readers is not going to become free in the duration of one spin even if they start draining immediately.
Conservatively perform one for each reader.
Note that the total number of allowed spins is still extremely small and is subject to change later.
=============
rwlock: tidy up __rw_runlock_hard similarly to r325921
=============
sx: read the SX_NOADAPTIVE flag and Giant ownership only once
These used to be read multiple times when waiting for the lock to become free, which had the potential to issue completely avoidable traffic.
=============
locks: re-check the reason to go to sleep after locking sleepq/turnstile
In both rw and sx locks we always go to sleep if the lock owner is not running.
We do spin for some time if the lock is read-locked.
However, if we decide to go to sleep due to the lock owner being off cpu and, after sleepq/turnstile gets acquired, the lock is read-locked, we should fall back to the aforementioned wait.
=============
sx: fix up non-smp compilation after r327397
=============
locks: adjust loop limit check when waiting for readers
The check was for the exact value, but since the counter started being incremented by the number of readers, it could have jumped over it.
=============
Return a non-NULL owner only if the lock is exclusively held in owner_sx().
Fix some whitespace bugs while here.
|
#
327413 |
|
31-Dec-2017 |
mjg |
MFC r320561,r323236,r324041,r324314,r324609,r324613,r324778,r324780,r324787, r324803,r324836,r325469,r325706,r325917,r325918,r325919,r325920,r325921, r325922,r325925,r325963,r326106,r326107,r326110,r326111,r326112,r326194, r326195,r326196,r326197,r326198,r326199,r326200,r326237:
rwlock: perform the typically false td_rw_rlocks check later
Check if the lock is available first instead.
=============
Sprinkle __read_frequently on a few obvious places.
Note that some of annotated variables should probably change their types to something smaller, preferably bit-sized.
=============
mtx: drop the tid argument from _mtx_lock_sleep
tid must be equal to curthread and the target routine was already reading it anyway, which is not a problem. Not passing it as a parameter allows for a little bit shorter code in callers.
=============
locks: partially tidy up waiting on readers
Spin first instead of instantly re-reading, and don't re-read after spinning is finished - the state is already known.
Note the code is subject to significant changes later.
=============
locks: take the number of readers into account when waiting
Previous code would always spin once before checking the lock. But a lock with e.g. 6 readers is not going to become free in the duration of one spin even if they start draining immediately.
Conservatively perform one for each reader.
Note that the total number of allowed spins is still extremely small and is subject to change later.
=============
mtx: change MTX_UNOWNED from 4 to 0
The value is spread all over the kernel and zeroing a register is cheaper/shorter than setting it up to an arbitrary value.
Reduces amd64 GENERIC-NODEBUG .text size by 0.4%.
=============
mtx: fix up owner_mtx after r324609
Now that MTX_UNOWNED is 0 the test was always false.
=============
mtx: clean up locking spin mutexes
1) shorten the fast path by pushing the lockstat probe to the slow path
2) test for kernel panic only after it turns out we will have to spin, in particular test only after we know we are not recursing
=============
mtx: stop testing SCHEDULER_STOPPED in kabi funcs for spin mutexes
There is nothing panic-breaking to do in the unlock case and the lock case will fallback to the slow path doing the check already.
=============
rwlock: reduce lockstat branches in the slowpath
=============
mtx: fix up UP build after r324778
=============
mtx: implement thread lock fastpath
=============
rwlock: fix up compilation without KDTRACE_HOOKS after r324787
=============
rwlock: use fcmpset for setting RW_LOCK_WRITE_SPINNER
=============
sx: avoid branches in the slow path if lockstat is disabled
=============
rwlock: avoid branches in the slow path if lockstat is disabled
=============
locks: pull up PMC_SOFT_CALLs out of slow path loops
=============
mtx: unlock before traversing threads to wake up
This shortens the lock hold time while not affecting correctness. All the woken up threads end up competing and can lose the race against a completely unrelated thread getting the lock anyway.
=============
rwlock: unlock before traversing threads to wake up
While here perform a minor cleanup of the unlock path.
=============
sx: perform a minor cleanup of the unlock slowpath
No functional changes.
=============
mtx: add missing parts of the diff in r325920
Fixes build breakage.
=============
locks: fix compilation issues without SMP or KDTRACE_HOOKS
=============
locks: remove the file + line argument from internal primitives when not used
The pair is of use only in debug or LOCKPROF kernels, but was passed (zeroed) for many locks even in production kernels.
While here whack the tid argument from wlock hard and xlock hard.
There is no kbi change of any sort - "external" primitives still accept the pair.
=============
locks: pass the found lock value to unlock slow path
This avoids an explicit read later.
While here whack the cheaply obtainable 'tid' argument.
=============
rwlock: don't check for curthread's read lock count in the fast path
=============
rwlock: unbreak WITNESS builds after r326110
=============
sx: unbreak debug after r326107
An assertion was modified to use the found value, but it was not updated to handle a race where blocked threads appear after the entrance to the func.
Move the assertion down to the area protected with sleepq lock where the lock is read anyway. This does not affect coverage of the assertion and is consistent with what rw locks are doing.
=============
rwlock: stop re-reading the owner when going to sleep
=============
locks: retry turnstile/sleepq loops on failed cmpset
In order to go to sleep threads set waiter flags, but that can spuriously fail e.g. when a new reader arrives. Instead of unlocking everything and looping back, re-evaluate the new state while still holding the lock necessary to go to sleep.
=============
sx: change sunlock to wake waiters up if it locked sleepq
sleepq is only locked if curthread is the last reader. By the time the lock gets acquired new ones could have arrived. The previous code would unlock and loop back. This results in spurious relocking of sleepq.
This is a step towards xadd-based unlock routine.
=============
rwlock: add __rw_try_{r,w}lock_int
=============
rwlock: fix up compilation of the previous change
committed wrong version of the patch
=============
Convert in-kernel thread_lock_flags calls to thread_lock when debug is disabled
The flags argument is not used in this case.
=============
Add the missing lockstat check for thread lock.
=============
rw: fix runlock_hard when new readers show up
When waiters/writer spinner flags are set no new readers can show up unless they already have a different rw lock read locked. The change in r326195 failed to take that into account - in the presence of new readers it would spin until they all drain, which would lead to trouble if e.g. they go off cpu and cannot get scheduled because of this thread.
|
#
327409 |
|
31-Dec-2017 |
mjg |
MFC r323235,r323236,r324789,r324863:
Introduce __read_frequently
While __read_mostly groups variables together, their placement is not specified. In particular 2 frequently used variables can end up in different cache lines.
This annotation is only expected to be used for variables read all the time, e.g. on each syscall entry.
=============
Sprinkle __read_frequently on a few obvious places.
Note that some of annotated variables should probably change their types to something smaller, preferably bit-sized.
=============
Mark kdb_active as __read_frequently and switch to bool to eat less space.
=============
Change kdb_active type to u_char.
Fixes warnings from gcc and keeps the small size. Perhaps nesting should be moved to another variable.
|
#
326533 |
|
04-Dec-2017 |
markj |
MFC r326175, r326176: Lockstat fixes for sx locks.
|
#
320241 |
|
22-Jun-2017 |
markj |
MFC r320124: Fix the !TD_IS_IDLETHREAD(curthread) locking assertions.
Approved by: re (kib)
|
#
315394 |
|
16-Mar-2017 |
mjg |
MFC r313855,r313865,r313875,r313877,r313878,r313901,r313908,r313928,r313944,r314185,r314476,r314187:
locks: let primitives for modules unlock without always going to the slow path
It is only needed if LOCK_PROFILING is enabled. It has to always check if the lock is about to be released, which requires an avoidable read if the option is not specified.
==
sx: fix compilation on UP kernels after r313855
sx primitives use inlines as opposed to macros. Change the tested condition to LOCK_DEBUG which covers the case, but is slightly overzealous.
==
locks: clean up trylock primitives
In particular this reduces accesses of the lock itself.
==
mtx: plug the 'opts' argument when not used
==
sx: fix mips build after r313855
The namespace in this file really needs cleaning up. In the meantime let inline primitives be defined as long as LOCK_DEBUG is not enabled.
Reported by: kib
==
mtx: get rid of file/line args from slow paths if they are unused
This denotes changes which went in by accident in r313877.
On most production kernels both said parameters are zeroed and have nothing reading them in either __mtx_lock_sleep or __mtx_unlock_sleep. Thus this change stops passing them for internal consumers where this is the case.
Kernel modules use _flags variants which are not affected kbi-wise.
==
mtx: restrict r313875 to kernels without LOCK_PROFILING
==
mtx: microoptimize lockstat handling in __mtx_lock_sleep
This saves a function call and multiple branches after the lock is acquired.
|
#
315386 |
|
16-Mar-2017 |
mjg |
MFC r313853,r313859:
locks: remove SCHEDULER_STOPPED checks from primitives for modules
They all fall back to the slow path if necessary and the check is there.
This means a panicked kernel executing code from modules will be able to succeed doing actual lock/unlock, but this was already the case for core code which has said primitives inlined.
==
Introduce SCHEDULER_STOPPED_TD for use when the thread pointer was already read
Sprinkle in few places.
|
#
315382 |
|
16-Mar-2017 |
mjg |
MFC r313467:
locks: tidy up unlock fallback paths
Update comments to note these functions are reachable if lockstat is enabled.
Check if the lock has any bits set before attempting unlock, which saves an unnecessary atomic operation.
|
#
315381 |
|
16-Mar-2017 |
mjg |
MFC r313455:
sx: implement slock/sunlock fast path
See r313454.
|
#
315378 |
|
16-Mar-2017 |
mjg |
MFC r313275,r313280,r313282,r313335:
mtx: move lockstat handling out of inline primitives
Lockstat requires checking if it is enabled and if so, calling a 6 argument function. Further, determining whether to call it on unlock requires pre-reading the lock value.
This is problematic in at least 3 ways:
- more branches in the hot path than necessary
- additional cacheline ping pong under contention
- bigger code
Instead, check first if lockstat handling is necessary and if so, just fall back to regular locking routines. For this purpose a new macro is introduced (LOCKSTAT_PROFILE_ENABLED).
LOCK_PROFILING uninlines all primitives. Fold in the current inline lock variant into the _mtx_lock_flags to retain the support. With this change the inline variants are not used when LOCK_PROFILING is defined and thus can ignore its existence.
This results in:
    text    data     bss      dec      hex filename
22259667 1303208 4994976 28557851 1b3c21b kernel.orig
21797315 1303208 4994976 28095499 1acb40b kernel.patched
i.e. about 3% reduction in text size.
A remaining action is to remove spurious arguments for internal kernel consumers.
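The shape of the gating, as a hedged sketch; every name below is an illustrative stand-in rather than the kernel's macro or variable:

    extern volatile unsigned lockstat_probes_enabled;

    int  fast_acquire(void *lock);      /* single atomic op, placeholder */
    void lock_slow_path(void *lock);    /* fires the lockstat probe */

    static inline void
    lock_sketch(void *lock)
    {
        /*
         * If any probe is armed, go straight to the slow path so it
         * can gather full information; otherwise try the fast path.
         */
        if (__builtin_expect(lockstat_probes_enabled != 0, 0) ||
            !fast_acquire(lock))
            lock_slow_path(lock);
    }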
==
sx: move lockstat handling out of inline primitives
See r313275 for details.
==
rwlock: move lockstat handling out of inline primitives
See r313275 for details.
One difference here is that recursion handling was removed from the fallback routine. As it is, it was never supposed to see a recursed lock in the first place. Future changes will move it out of inline variants, but right now there is no easy way to test if the lock is recursed without reading additional words.
==
locks: fix recursion support after recent changes
When a relevant lockstat probe is enabled the fallback primitive is called with a constant signifying a free lock. This works fine for typical cases but breaks with recursion, since it checks if the passed value is that of the executing thread.
Read the value if necessary.
|
#
315377 |
|
16-Mar-2017 |
mjg |
MFC r313269,r313270,r313271,r313272,r313274,r313278,r313279,r313996,r314474
mtx: switch to fcmpset
The found value is passed to locking routines in order to reduce cacheline accesses.
mtx_unlock grows an explicit check for regular unlock. On ll/sc architectures the routine can fail even if the lock could have been handled by the inline primitive.
==
rwlock: switch to fcmpset
==
sx: switch to fcmpset
==
sx: uninline slock/sunlock
Shared locking routines explicitly read the value and test it. If the change attempt fails, they fall back to a regular function which would retry in a loop.
The problem is that with many concurrent readers the risk of failure is pretty high and even the value returned by fcmpset is very likely going to be stale by the time the loop in the fallback routine is reached.
Uninline said primitives. It gives a throughput increase when doing concurrent slocks/sunlocks with 80 hardware threads from ~50 mln/s to ~56 mln/s.
Interestingly, rwlock primitives are already not inlined.
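The fcmpset pattern in C11 terms, as a sketch; a failed compare-exchange refreshes the expected value, which is exactly the property the fallback loop exploits (the lock-word layout is assumed):

    #include <stdatomic.h>

    static void
    sunlock_sketch(atomic_uint *lockword)
    {
        unsigned x = atomic_load_explicit(lockword, memory_order_relaxed);

        /* assumed layout: a plain reader count, no waiter flags */
        while (!atomic_compare_exchange_weak_explicit(lockword, &x,
            x - 1, memory_order_release, memory_order_relaxed)) {
            /* 'x' was refreshed by the failed exchange; no separate
               re-read of the lock word is needed */
        }
    }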
==
sx: add witness support missed in r313272
==
mtx: fix up _mtx_obtain_lock_fetch usage in thread lock
Since _mtx_obtain_lock_fetch no longer sets the argument to MTX_UNOWNED, callers have to do it on their own.
==
mtx: fixup r313278, the assignment was supposed to go inside the loop
==
mtx: fix spin mutexes interaction with failed fcmpset
While doing so move recursion support down to the fallback routine.
==
locks: ensure proper barriers are used with atomic ops when necessary
Unclear how, but the locking routine for mutexes was using the *release* barrier instead of acquire. This must have been either a copy-pasto or bad completion.
Going through other uses of atomics shows no barriers in:
- upgrade routines (addressed in this patch)
- sections protected with turnstile locks - this should be fine as necessary barriers are in the worst case provided by turnstile unlock
I would like to thank Mark Millard and andreast@ for reporting the problem and testing previous patches before the issue got identified.
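The distinction at issue, in C11 terms (a sketch; TID_UNOWNED and the lock-word layout are assumptions):

    #include <stdatomic.h>
    #include <stdbool.h>

    #define TID_UNOWNED 0u

    static bool
    lock_sketch(atomic_uint *lockword, unsigned tid)
    {
        unsigned expected = TID_UNOWNED;

        /* acquiring must use an *acquire* barrier ... */
        return (atomic_compare_exchange_strong_explicit(lockword,
            &expected, tid, memory_order_acquire, memory_order_relaxed));
    }

    static void
    unlock_sketch(atomic_uint *lockword)
    {
        /* ... and releasing a *release* barrier, not the reverse */
        atomic_store_explicit(lockword, TID_UNOWNED, memory_order_release);
    }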
|
#
315341 |
|
16-Mar-2017 |
mjg |
MFC r311172,r311194,r311226,r312389,r312390:
mtx: reduce lock accesses
Instead of spuriously re-reading the lock value, read it once.
This change also has a side effect of fixing a performance bug: on failed _mtx_obtain_lock, it was possible that the re-read would find the lock unowned, yet in this case the primitive would still make a trip through turnstile code.
This is diff reduction to a variant which uses atomic_fcmpset.
==
Reduce lock accesses in thread lock similarly to r311172
==
mtx: plug open-coded mtx_lock access missed in r311172
==
rwlock: reduce lock accesses similarly to r311172
==
sx: reduce lock accesses similarly to r311172
|
#
315339 |
|
16-Mar-2017 |
mjg |
MFC r312890,r313386,r313390:
Sprinkle __read_mostly on backoff and lock profiling code.
==
locks: change backoff to exponential
The previous implementation used a random factor to spread readers and reduce the chances of starvation. This visibly reduces the effectiveness of the mechanism.
Switch to the more traditional exponential variant. Try to limit starvation by imposing an upper limit of spins after which spinning is half of what other threads get. Note the mechanism is turned off by default.
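A minimal sketch of the exponential variant; the cap, names, and pause primitive are illustrative:

    static inline void
    cpu_pause_stub(void)
    {
    #if defined(__x86_64__) || defined(__i386__)
        __asm__ __volatile__("pause");
    #endif
    }

    /*
     * Spin 'delay' times, then grow the delay exponentially up to a
     * cap; callers feed the returned delay into the next attempt.
     */
    static unsigned
    backoff_sketch(unsigned delay, unsigned cap)
    {
        for (unsigned i = 0; i < delay; i++)
            cpu_pause_stub();
        delay <<= 1;
        return (delay > cap ? cap : delay);
    }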
==
locks: follow up r313386
An unfinished diff was committed by accident. The loop in lock_delay was changed to decrement, but the loop iterator was still incrementing.
|
#
303953 |
|
11-Aug-2016 |
mjg |
MFC r303562,303563,r303584,r303643,r303652,r303655,r303707:
rwlock: s/READER/WRITER/ in wlock lockstat annotation
==
sx: increment spin_cnt before cpu_spinwait in xlock
The change is a no-op only done for consistency with the rest of the file.
==
locks: change sleep_cnt and spin_cnt types to u_int
Both variables are uint64_t, but they only count spins or sleeps. All reasonable values which we can get here comfortably fit in the 32-bit range.
==
Implement trivial backoff for locking primitives.
All current spinning loops retry an atomic op the first chance they get, which leads to performance degradation under load.
One classic solution to the problem consists of delaying the test to an extent. This implementation has a trivial linear increment and a random factor for each attempt.
For simplicity, this first-touch implementation only modifies spinning loops where the lock owner is running. Spin mutexes and thread lock were not modified.
Current parameters are autotuned on boot based on mp_cpus.
Autotune factors are very conservative and are subject to change later.
==
locks: fix up ifdef guards introduced in r303643
Both sx and rwlocks had copy-pasted ADAPTIVE_MUTEXES instead of the correct define.
==
locks: fix compilation for KDTRACE_HOOKS && !ADAPTIVE_* case
==
locks: fix sx compilation on mips after r303643
The kernel.h header is required for the SYSINIT macro, which apparently was present on amd64 by accident.
Approved by: re (gjb)
|