Lines Matching defs:lock

192  * If multiple bases need to be locked, use the base ordering for lock
208 * @lock: Lock protecting the timer_base
209 * @running_timer: When expiring timers, the lock is dropped. To make
220 * prevents a live lock, when the task which tries to
252 raw_spinlock_t lock;
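Context for the fragments above (source lines 208-252): they belong to the struct timer_base kerneldoc. A minimal sketch of the locking-relevant members only, with the clock bookkeeping and wheel buckets elided; the full layout is in kernel/time/timer.c:

    struct timer_base {
            raw_spinlock_t          lock;           /* protects the whole base */
            struct timer_list       *running_timer; /* callback being executed */
    #ifdef CONFIG_PREEMPT_RT
            spinlock_t              expiry_lock;    /* held across callback expiry */
            atomic_t                timer_waiters;  /* tasks waiting for a callback */
    #endif
            /* ... clk/next_expiry bookkeeping and wheel vectors ... */
    } ____cacheline_aligned;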
358 * same lock or cachelines, so we skew each extra cpu with an extra
406 * to lock contention or spurious cache line bouncing.
432 * to lock contention or spurious cache line bouncing.
642 * the base lock:
914 * @key: lockdep class key of the fake lock used for tracking timer
915 * sync lock dependencies
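The @key kerneldoc at lines 914-915 refers to a per-timer lockdep map: a fake lock that is never really taken. A sketch of how the timer init path attaches it under CONFIG_LOCKDEP (mirroring kernel/time/timer.c):

    #ifdef CONFIG_LOCKDEP
            /*
             * lockdep uses the fake lock to model "waiting for the
             * callback to finish" as a lock dependency, so a deadlock
             * against a lock held by the callback is reported.
             */
            lockdep_init_map(&timer->lockdep_map, name, key, 0);
    #endif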
1025 * We are using hashed locking: Holding per_cpu(timer_bases[x]).lock means
1037 __acquires(timer->base->lock)
1052 raw_spin_lock_irqsave(&base->lock, *flags);
1055 raw_spin_unlock_irqrestore(&base->lock, *flags);
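Lines 1025-1055 are lock_timer_base(), which implements the hashed-locking rule quoted at line 1025: timer->flags, and hence the base, is only stable once the base lock is held, so the lookup must be retried. A sketch close to the upstream loop, using the get_timer_base() helper from the source that maps timer->flags to a per-CPU base:

    static struct timer_base *lock_timer_base(struct timer_list *timer,
                                              unsigned long *flags)
            __acquires(timer->base->lock)
    {
            for (;;) {
                    u32 tf = READ_ONCE(timer->flags);
                    struct timer_base *base;

                    /* A migrating timer has no stable base; spin and retry. */
                    if (!(tf & TIMER_MIGRATING)) {
                            base = get_timer_base(tf);
                            raw_spin_lock_irqsave(&base->lock, *flags);
                            /* Recheck: the timer may have migrated meanwhile. */
                            if (timer->flags == tf)
                                    return base;
                            raw_spin_unlock_irqrestore(&base->lock, *flags);
                    }
                    cpu_relax();
            }
    }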
1094 * We lock timer base and calculate the bucket index right
1102 * while holding base lock to prevent a race against the
1136 * while holding base lock to prevent a race against the
1163 raw_spin_unlock(&base->lock);
1165 raw_spin_lock(&base->lock);
1187 raw_spin_unlock_irqrestore(&base->lock, flags);
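Lines 1094-1187 are from __mod_timer(). When the timer has to move to another CPU's base, the old lock is dropped before the new one is taken, and TIMER_MIGRATING makes lock_timer_base() spin in the meantime. A sketch of that hand-over, with TIMER_BASEMASK and base->cpu as in the source:

    if (base != new_base) {
            /*
             * Make lock_timer_base() retry until the flags are
             * rewritten below; the timer has no stable base while
             * the locks are swapped.
             */
            timer->flags |= TIMER_MIGRATING;

            raw_spin_unlock(&base->lock);
            base = new_base;
            raw_spin_lock(&base->lock);
            WRITE_ONCE(timer->flags,
                       (timer->flags & ~TIMER_BASEMASK) | base->cpu);
            forward_timer_base(base);
    }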
1370 * holding base lock to prevent a race against the shutdown code.
1378 raw_spin_unlock(&base->lock);
1380 raw_spin_lock(&base->lock);
1389 raw_spin_unlock_irqrestore(&base->lock, flags);
1400 * timer base lock which prevents further rearming of the timer. In that
1417 * If @shutdown is set then the lock has to be taken whether the
1419 * which might hit between the lockless pending check and the lock
1420 * acquisition. By taking the lock it is ensured that such a newly
1432 raw_spin_unlock_irqrestore(&base->lock, flags);
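Lines 1400-1432 document the deletion path. The pending check is lockless, but with @shutdown set the lock is taken unconditionally, so a rearm racing between the check and the lock acquisition cannot survive. A sketch of __timer_delete() under those comments, assuming the detach_if_pending() helper from the source:

    static int __timer_delete(struct timer_list *timer, bool shutdown)
    {
            struct timer_base *base;
            unsigned long flags;
            int ret = 0;

            /*
             * The lockless timer_pending() check is only a fast path. For
             * shutdown the lock must be taken regardless: with it held, a
             * concurrently enqueued timer is dequeued and timer->function
             * is cleared, so any new start attempt fails.
             */
            if (timer_pending(timer) || shutdown) {
                    base = lock_timer_base(timer, &flags);
                    ret = detach_if_pending(timer, base, true);
                    if (shutdown)
                            timer->function = NULL;
                    raw_spin_unlock_irqrestore(&base->lock, flags);
            }

            return ret;
    }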
1486 * timer base lock which prevents further rearming of the timer. Any
1491 * right after dropping the base lock if @shutdown is false. That
1514 raw_spin_unlock_irqrestore(&base->lock, flags);
1527 * after dropping the base lock. That needs to be prevented by the calling
1562 * the waiter to acquire the lock and make progress.
1567 raw_spin_unlock_irq(&base->lock);
1570 raw_spin_lock_irq(&base->lock);
1580 * got preempted, and it prevents a live lock when the task which tries to
1593 * Mark the base as contended and grab the expiry lock,
1595 * callback. Drop the lock immediately so the softirq can
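Lines 1562-1595 are the PREEMPT_RT expiry-lock handshake: a deleter must not spin-wait on a callback running in a preemptible softirq thread, so it parks on base->expiry_lock instead, and the expiry side briefly drops it between timers. A sketch of both sides, with timer_waiters and expiry_lock as in the struct sketch above:

    #ifdef CONFIG_PREEMPT_RT
    /* Expiry side: let a contending waiter take expiry_lock and make progress. */
    static void timer_sync_wait_running(struct timer_base *base)
    {
            if (atomic_read(&base->timer_waiters)) {
                    raw_spin_unlock_irq(&base->lock);
                    spin_unlock(&base->expiry_lock);
                    spin_lock(&base->expiry_lock);
                    raw_spin_lock_irq(&base->lock);
            }
    }

    /* Waiter side: mark the base contended, grab and immediately drop the
     * expiry lock, which the softirq holds across the callback. */
    static void del_timer_wait_running(struct timer_list *timer)
    {
            u32 tf = READ_ONCE(timer->flags);

            if (!(tf & (TIMER_MIGRATING | TIMER_IRQSAFE))) {
                    struct timer_base *base = get_timer_base(tf);

                    atomic_inc(&base->timer_waiters);
                    spin_lock_bh(&base->expiry_lock);
                    atomic_dec(&base->timer_waiters);
                    spin_unlock_bh(&base->expiry_lock);
            }
    }
    #endif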
1619 * timer base lock which prevents rearming of @timer
1622 * be rearmed concurrently, i.e. after dropping the base lock then the
1626 * base lock which prevents rearming of the timer. Any attempt to rearm
1689 * interrupt context. Even if the lock has nothing to do with the timer in
1709 * lock. If there is the possibility of a concurrent rearm then the return
1777 * account for lockdep too. To avoid bogus "held lock freed"
1786 * Couple the lock chain with the lock chain at
1804 * callback kept a lock held, bad luck, but not worse
1838 raw_spin_unlock(&base->lock);
1840 raw_spin_lock(&base->lock);
1843 raw_spin_unlock_irq(&base->lock);
1845 raw_spin_lock_irq(&base->lock);
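Lines 1777-1845 cover call_timer_fn() and the expiry loop around it. Two locking details matter: the lockdep map is copied because the callback may free the timer (avoiding a bogus "held lock freed" warning), and the base lock is dropped across the callback, with interrupts kept disabled only for TIMER_IRQSAFE timers. A condensed sketch, omitting the tracing and preempt-count checks of the real code:

    static void call_timer_fn(struct timer_list *timer,
                              void (*fn)(struct timer_list *))
    {
    #ifdef CONFIG_LOCKDEP
            /*
             * Work on a local copy: the callback may free the timer,
             * and the stale map must not be referenced afterwards.
             */
            struct lockdep_map lockdep_map;

            lockdep_copy_map(&lockdep_map, &timer->lockdep_map);
    #endif
            /* Couple the fake lock with locks taken inside the callback. */
            lock_map_acquire(&lockdep_map);
            fn(timer);
            lock_map_release(&lockdep_map);
    }

    /* Expiry loop: drop the base lock across the callback. */
    if (timer->flags & TIMER_IRQSAFE) {
            raw_spin_unlock(&base->lock);
            call_timer_fn(timer, fn);
            raw_spin_lock(&base->lock);
    } else {
            raw_spin_unlock_irq(&base->lock);
            call_timer_fn(timer, fn);
            raw_spin_lock_irq(&base->lock);
            timer_sync_wait_running(base);  /* PREEMPT_RT, see above */
    }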
1898 * hold base->lock.
2113 lockdep_assert_held(&base_local->lock);
2114 lockdep_assert_held(&base_global->lock);
2126 __releases(timer_bases[BASE_LOCAL]->lock)
2127 __releases(timer_bases[BASE_GLOBAL]->lock)
2134 raw_spin_unlock(&base_global->lock);
2135 raw_spin_unlock(&base_local->lock);
2139 * timer_lock_remote_bases - lock timer bases of cpu
2145 __acquires(timer_bases[BASE_LOCAL]->lock)
2146 __acquires(timer_bases[BASE_GLOBAL]->lock)
2155 raw_spin_lock(&base_local->lock);
2156 raw_spin_lock_nested(&base_global->lock, SINGLE_DEPTH_NESTING);
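Lines 2113-2156 are the helpers the timer-migration code uses to take both bases of a remote CPU. They follow the rule from line 192: multiple bases are locked in base order, local before global, with a nested annotation for lockdep, and unlocked in reverse. A sketch of the pair:

    void timer_lock_remote_bases(unsigned int cpu)
            __acquires(timer_bases[BASE_LOCAL]->lock)
            __acquires(timer_bases[BASE_GLOBAL]->lock)
    {
            struct timer_base *base_local, *base_global;

            base_local = per_cpu_ptr(&timer_bases[BASE_LOCAL], cpu);
            base_global = per_cpu_ptr(&timer_bases[BASE_GLOBAL], cpu);

            lockdep_assert_irqs_disabled();

            /* Base ordering: local first, then global, nested for lockdep. */
            raw_spin_lock(&base_local->lock);
            raw_spin_lock_nested(&base_global->lock, SINGLE_DEPTH_NESTING);
    }

    void timer_unlock_remote_bases(unsigned int cpu)
            __releases(timer_bases[BASE_LOCAL]->lock)
            __releases(timer_bases[BASE_GLOBAL]->lock)
    {
            struct timer_base *base_local, *base_global;

            base_local = per_cpu_ptr(&timer_bases[BASE_LOCAL], cpu);
            base_global = per_cpu_ptr(&timer_bases[BASE_GLOBAL], cpu);

            /* Unlock in reverse order. */
            raw_spin_unlock(&base_global->lock);
            raw_spin_unlock(&base_local->lock);
    }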
2249 raw_spin_lock(&base_local->lock);
2250 raw_spin_lock_nested(&base_global->lock, SINGLE_DEPTH_NESTING);
2319 raw_spin_unlock(&base_global->lock);
2320 raw_spin_unlock(&base_local->lock);
2369 * enqueue sending a pointless IPI, but taking the lock would just
2371 * for the cost of taking the lock in the exit from idle
2379 /* Activate without holding the timer_base->lock */
2393 lockdep_assert_held(&base->lock);
2428 raw_spin_lock_irq(&base->lock);
2430 raw_spin_unlock_irq(&base->lock);
2674 raw_spin_lock_irq(&new_base->lock);
2675 raw_spin_lock_nested(&old_base->lock, SINGLE_DEPTH_NESTING);
2689 raw_spin_unlock(&old_base->lock);
2690 raw_spin_unlock_irq(&new_base->lock);
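Lines 2674-2690 are the CPU-hotplug path that drains a dead CPU's timer wheels into the current CPU's bases. The same nested-order discipline applies, and the hotplug machinery serializes callers, so no other path ever holds two base locks at once. A condensed sketch, assuming the migrate_timer_list() helper from the source:

    int timers_dead_cpu(unsigned int cpu)
    {
            struct timer_base *old_base, *new_base;
            int b, i;

            for (b = 0; b < NR_BASES; b++) {
                    old_base = per_cpu_ptr(&timer_bases[b], cpu);
                    new_base = get_cpu_ptr(&timer_bases[b]);
                    /*
                     * The caller is globally serialized and nobody else
                     * takes two locks at once, deadlock is not possible.
                     */
                    raw_spin_lock_irq(&new_base->lock);
                    raw_spin_lock_nested(&old_base->lock, SINGLE_DEPTH_NESTING);

                    /* Pull every pending timer over to this CPU's wheel. */
                    for (i = 0; i < WHEEL_SIZE; i++)
                            migrate_timer_list(new_base, old_base->vectors + i);

                    raw_spin_unlock(&old_base->lock);
                    raw_spin_unlock_irq(&new_base->lock);
                    put_cpu_ptr(&timer_bases);
            }
            return 0;
    }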
2706 raw_spin_lock_init(&base->lock);