Lines matching defs:to, only in /netgear-WNDR4500-V1.0.1.40_1.0.68/src/linux/linux-2.6/kernel/

13  *  Thanks to Thomas Gleixner for suggestions, analysis and fixes.
22 * Thanks to Ben LaHaise for yelling "hashed waitqueues" loudly
40 * along with this program; if not, write to the Free Software
66 * list of 'owned' pi_state instances - these have to be
88 * The order of wakeup is always to make the first condition true, then
95 /* Which hash list lock to use: */
159 * For other futexes, it points to &current->mm->mmap_sem and
182 * virtual address, we don't even have to find the underlying vma.
183 * Note: We do have to check 'uaddr' is a valid user address,
211 * it's a read-only handle, it's expected that futexes attach to
213 * VM_MAYSHARE here, not VM_SHARED which is restricted to shared
235 * We could walk the page table to read the non-linear
237 * from swap. But that's a lot of code to duplicate here
252 * Take a reference to the resource addressed by a key.
272 * Drop a reference to the resource addressed by a key.
394 * refcount is at 0 - put it back to 1.
437 * pi_state_list anymore, but we have to be careful
513 * We are the first waiter - try to look up the real owner and attach
514 * the new pi_state to it, but bail out when TID = 0
523 * We need to look at the task state flags to figure out,
576 * plist_del() and also before assigning to q->lock_ptr.
583 * A memory barrier is required here to prevent the following store
584 * to lock_ptr from getting ahead of the wakeup. Clearing the lock
614 * We pass it to the next owner. (The WAITERS bit is always
660 * bit need not be preserved here. We are the owner:
692 * to this virtual address:
735 * to this virtual address:
788 * futex_atomic_op_inuser needs to both read and write
791 * enough, we need to handle the fault ourselves, while
850 * Requeue all waiters hashed on one physical page to another
917 * If key1 and key2 hash to the same bucket, no need to
976 * The priority used to register this element is
1029 * spin_lock(), causing us to take the wrong lock. This
1037 * however, change back to the original value. Therefore
1107 * We own it, so we have to replace the pending owner
1131 * In case we must use restart_block to restart a futex_wait,
1168 * if cond(var) is known to be true at the time of blocking, for
1174 * a wakeup when *uaddr != val on entry to the syscall. This is
1207 * don't want to hold mmap_sem while we sleep.
1216 * queueing ourselves into the futex hash. This code thus has to
1311 * if there are waiters then it will block, it does PI, etc. (Due to
1317 struct hrtimer_sleeper timeout, *to = NULL;
1328 to = &timeout;
1329 hrtimer_init(&to->timer, CLOCK_REALTIME, HRTIMER_MODE_ABS);
1330 hrtimer_init_sleeper(to, current);
1331 to->timer.expires = *time;
1350 * To avoid races, we attempt to take the lock here again
1365 * situation and we return success to user space.
1373 * Surprise - we got the lock. Just return to userspace:
1382 * to wake at next unlock
1411 * We took the lock due to owner died take over.
1428 * exit to complete.
1439 * OWNER_DIED bit is set to figure out whether
1466 * don't want to hold mmap_sem while we sleep.
1476 ret = rt_mutex_timed_lock(&q.pi_state->pi_mutex, to, 1);
1536 * We have to r/w *(int __user *)uaddr, but we can't modify it
1538 * enough, we need to handle the fault ourselves, while
1600 * To avoid races, try to do the TID -> 0 atomic transition
1613 * Rare case: we managed to release the lock atomically,
1614 * no need to wake anyone else up:
1620 * Ok, other tasks may need to be woken up - check waiters
1630 * The atomic access to the futex value
1657 * We have to r/w *(int __user *)uaddr, but we can't modify it
1659 * enough, we need to handle the fault ourselves, while
1719 * Signal allows caller to avoid the race which would occur if they
1781 * key->shared.inode needs to be referenced while holding it.
1788 /* Now we map fd to filp, so userspace can access it */
1809 * field, to allow the kernel to clean up if the thread dies after
1810 * acquiring the lock, but just before it could have added itself to
1816 * @head: pointer to the list-head
1837 * @head_ptr: pointer to a list-head pointer, the kernel fills it in
1838 * @len_ptr: pointer to a length field, the kernel fills in the header size
1892 * set, wake up a waiter (if any). (We have to do a
1894 * to handle the rare but possible case of recursive