# 3e988edd | 15-Oct-2017 | Mateusz Guzik <mjguzik@gmail.com>
[kernel][x86] cleanup spinlocks

Since xchg always modifies the lock, the cpu number stored in it gets
replaced by random threads attempting to grab the lock. While this does
not affect correctness, it hinders debugging. Replace the xchg use with
lock cmpxchg. Problem spotted by teisenbe@.

lock: Stop doing a forward jump in the fast path; instead, jump only in
case of contention. This saves on branch misprediction. Note that the
contended behaviour remains simplistic: the lock value is repeatedly
checked, with pause in between.

unlock: amd64 has strong enough ordering that a mere write is
sufficient to unlock. Thus replace the heavy-weight xchg with mov.

This significantly speeds up lock/unlock, as reported by k bench
(before vs after):

- 40 vs 21: just lock/unlock
- 58 vs 43: w/irqsave (already disabled)
- 62 vs 44: w/irqsave

Change-Id: Ie5ba95964333f559ae223341cf2ec0c2e0c446a9
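For illustration, here is a minimal C sketch of the scheme the message
describes, written with GCC/Clang atomic builtins rather than the
hand-written x86 assembly the commit actually touches. The type
`spin_lock_t`, the function names, and the `cpu_num + 1` holder tag are
hypothetical stand-ins, not the Zircon definitions:

```c
#include <stdint.h>

/* Hypothetical lock layout: 0 = free, nonzero = holder's cpu number + 1. */
typedef struct {
    volatile unsigned long value;
} spin_lock_t;

static inline void arch_spin_lock(spin_lock_t *lock, unsigned long cpu_num)
{
    unsigned long val = cpu_num + 1; /* nonzero tag identifying the holder */

    for (;;) {
        unsigned long expected = 0;
        /* lock cmpxchg only writes when the lock is free, so the stored
         * holder id is not clobbered by cpus that merely attempt the lock
         * (unlike xchg, which always stores). On success we return
         * immediately: the uncontended path takes no extra jump. */
        if (__atomic_compare_exchange_n(&lock->value, &expected, val,
                                        false, __ATOMIC_ACQUIRE,
                                        __ATOMIC_RELAXED))
            return;
        /* Simplistic contended path, as the message notes: re-read the
         * lock with pause in between until it looks free, then retry. */
        while (__atomic_load_n(&lock->value, __ATOMIC_RELAXED) != 0)
            __builtin_ia32_pause();
    }
}

static inline void arch_spin_unlock(spin_lock_t *lock)
{
    /* amd64's strong (TSO) ordering makes a release store sufficient;
     * it compiles to a plain mov, replacing the heavy-weight xchg. */
    __atomic_store_n(&lock->value, 0, __ATOMIC_RELEASE);
}
```

On x86-64 the release store above compiles to a plain mov, which is the
property the commit exploits; on weaker architectures the same builtin
would emit whatever barrier release ordering requires, so the trick is
amd64-specific.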