Lines Matching refs:lock

82 #include <kern/lock.h>
112 * lock.
248 * A spin lock (accessed by routines
249 * vm_object_cache_{lock,lock_try,unlock}) governs the
254 * must also lock the cache.
257 * from the reference mechanism, so that the lock need
556 * The lock will be initialized for each allocated object in
729 * initialize the vm_object lock world
799 * object (cache lock + exclusive object lock).
997 * if we try to take a regular lock here
999 * holding a lock on this object while
1256 * worthwhile grabbing the lock
1273 * don't need the queue lock to find
1274 * and lock an object on the cached list
1299 * hopefully, the lock will have cleared
1338 * behind the page queue lock...
1440 * put the page queues lock back to the caller's
1460 * Called with, and returns with, cache lock unlocked.
1678 * the object lock was released by vm_object_reap()
1698 * The lock will be released on return and the VM object is no longer valid.
1732 /* Must take page lock for this - using it to protect token queue */
1867 * hogging the page queue lock too long
2426 * so the object can't disappear when we release the lock.
2667 * there is, update the offset and lock the new object. We also turn off
3018 * The caller must hold a reference and a lock
3071 * We don't bother to lock the new object within
4020 * In addition to the lock on the object, the vm_object_hash_lock
4022 * association require use of the hash lock.
4316 * before dropping the lock, to prevent a race.
4493 * the object lock.
4757 * regret it) to unlock the object and then retake the lock
4829 * take a "shared" lock on the shadow objects. If we can collapse,
4948 * We need the exclusive lock on the VM objects.
5304 * if it's not, which object do we lock first?
6336 * from vm_object_reference. This lock is never released.
6440 * Since we need to lock both objects at the same time,
6441 * make sure we always lock them in the same order to