Lines Matching refs:entry in /macosx-10.5.8/xnu-1228.15.4/osfmk/i386/

251  *	valid virtual mappings of that page.  An entry is
282 block if it needed an entry and none were available - we'd panic. Some time ago I
304 and one or all terminate. The list hanging off each pv array entry could have thousands of
318 pages in the system are not aliased and hence represented by a single pv entry I've kept
319 the rooted entry size as small as possible because there is one of these dedicated for
321 link and the ppn entry needed for matching while running the hash list to find the entry we
343 The main flow difference is that the code is now aware of the rooted entry and the hashed
344 entries. Code that runs the pv list still starts with the rooted entry and then continues
345 down the qlink onto the hashed entries. Code that is looking up a specific pv entry first
346 checks the rooted entry and then hashes and runs the hash list for the match. The hash list
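
The hits above (source lines 251 through 346) come from a block comment describing the split pv (physical-to-virtual) design: each physical page gets one deliberately small rooted entry, additional aliases go into hashed entries that carry the hash link plus the ppn used for matching, and all entries for a page are chained through the qlink. The following is a minimal, self-contained C sketch of that lookup flow; the struct layouts, names (pv_root, pv_hashed, pv_hash_table), and hash function are illustrative assumptions, not the xnu definitions.

#include <stdint.h>
#include <stddef.h>

typedef uint64_t vaddr_t;        /* virtual address of a mapping */
typedef uint32_t ppnum_t;        /* physical page number         */
struct pmap;                     /* opaque per-address-space map */

struct pv_link { struct pv_link *next, *prev; };   /* per-page pv list link */

/* Rooted entry: one per physical page, kept as small as possible. */
struct pv_root {
    struct pv_link qlink;        /* head of the per-page pv list  */
    vaddr_t        va;
    struct pmap   *pmap;
};

/* Hashed entry: adds the hash chain link and the ppn needed for
 * matching while running the hash list. */
struct pv_hashed {
    struct pv_link qlink;        /* continues the per-page pv list */
    vaddr_t        va;
    struct pmap   *pmap;
    ppnum_t        ppn;
    struct pv_hashed *nexth;     /* hash-bucket chain              */
};

#define PV_HASH_SIZE 4096        /* illustrative; must be a power of two */
static struct pv_hashed *pv_hash_table[PV_HASH_SIZE];

static inline unsigned pv_hash(const struct pmap *pmap, vaddr_t va)
{
    return (unsigned)(((uintptr_t)pmap ^ (va >> 12)) & (PV_HASH_SIZE - 1));
}

/* Is (pmap, va) recorded as a mapping of physical page ppn?
 * Check the rooted entry first, then hash and run the hash list. */
static int
pv_is_mapped(const struct pv_root *root, const struct pmap *pmap,
             vaddr_t va, ppnum_t ppn)
{
    if (root->pmap == pmap && root->va == va)
        return 1;                                  /* rooted entry matches */

    for (const struct pv_hashed *e = pv_hash_table[pv_hash(pmap, va)];
         e != NULL; e = e->nexth)
        if (e->ppn == ppn && e->pmap == pmap && e->va == va)
            return 1;                              /* hashed entry matches */

    return 0;
}

Keeping the rooted entry small matters because, as the hits at source lines 318-319 note, one rooted entry is dedicated to every physical page in the system whether or not the page is aliased.
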
452 * Each entry in the pv_head_table is locked by a bit in the
506 * page-directory entry.
520 * by locking the pv_lock_table entry that corresponds to the pv_head
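
Source lines 452 and 520 describe the locking convention: each pv_head_table entry is guarded by a bit in a parallel pv_lock_table. Below is a generic C11 sketch of such a bit-per-entry lock; the table size, helper names, and spin-then-yield policy are assumptions for illustration, not the kernel's actual lock primitives.

#include <stdatomic.h>
#include <sched.h>

#define NPAGES 4096   /* illustrative number of physical pages */
static _Atomic unsigned long pv_lock_table[NPAGES / (8 * sizeof(unsigned long))];

/* Acquire the lock bit for pv_head_table entry pai. */
static void lock_pvh(unsigned pai)
{
    _Atomic unsigned long *word = &pv_lock_table[pai / (8 * sizeof(unsigned long))];
    unsigned long bit = 1UL << (pai % (8 * sizeof(unsigned long)));

    /* spin until we are the one who flipped the bit from 0 to 1 */
    while (atomic_fetch_or(word, bit) & bit)
        sched_yield();
}

/* Release the lock bit for pv_head_table entry pai. */
static void unlock_pvh(unsigned pai)
{
    _Atomic unsigned long *word = &pv_lock_table[pai / (8 * sizeof(unsigned long))];
    unsigned long bit = 1UL << (pai % (8 * sizeof(unsigned long)));

    atomic_fetch_and(word, ~bit);
}
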
679 * for legacy, returns the address of the pde entry.
680 * for 64 bit, causes the pdpt page containing the pde entry to be mapped,
681 * then returns the mapped address of the pde entry in that page
702 * this returns the address of the requested pml4 entry in the top level page.
712 * maps in the pml4 page, if any, containing the pdpt entry requested
713 * and returns the address of the pdpt entry in that mapped page
757 * maps in the pdpt page, if any, containing the pde entry requested
758 * and returns the address of the pde entry in that mapped page
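
The comments at source lines 679-758 describe helpers that map in the page-table page for a given level and then return the address of the requested pml4, pdpt, or pde entry within it. Here is a sketch of the underlying index arithmetic and walk for 4-level x86-64 paging; map_table_page() is a hypothetical stand-in for however the kernel brings a table page into view.

#include <stdint.h>
#include <stddef.h>

/* x86-64 4-level paging: 512 entries per table, 9 index bits per level. */
#define NPTE_PER_TABLE  512ULL
#define PTE_PRESENT     0x1ULL
#define PTE_PFN_MASK    0x000FFFFFFFFFF000ULL   /* physical address, bits 51:12 */

static inline uint64_t pml4_index(uint64_t va) { return (va >> 39) & (NPTE_PER_TABLE - 1); }
static inline uint64_t pdpt_index(uint64_t va) { return (va >> 30) & (NPTE_PER_TABLE - 1); }
static inline uint64_t pde_index (uint64_t va) { return (va >> 21) & (NPTE_PER_TABLE - 1); }

/* Hypothetical: map a physical table page and return a pointer to it. */
extern uint64_t *map_table_page(uint64_t table_phys);

/* Walk pml4 -> pdpt -> pd and return the address of the pde entry
 * covering va, or NULL if an intermediate entry is not present. */
static uint64_t *
pde_entry_for(uint64_t *pml4_page, uint64_t va)
{
    uint64_t pml4e = pml4_page[pml4_index(va)];
    if (!(pml4e & PTE_PRESENT))
        return NULL;
    uint64_t *pdpt_page = map_table_page(pml4e & PTE_PFN_MASK);

    uint64_t pdpte = pdpt_page[pdpt_index(va)];
    if (!(pdpte & PTE_PRESENT))
        return NULL;
    uint64_t *pd_page = map_table_page(pdpte & PTE_PFN_MASK);

    return &pd_page[pde_index(va)];
}
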
1182 /* make sure G bit is on for high shared pde entry */
1300 * 0xFFFFFF80:00000000. This is the highest entry in the 4th-level.
1619 * entry covers 1GB of addr space */
1682 pmap_expand_pdpt(p, (uint64_t)HIGH_MEM_BASE); /* need room for another pde entry */
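
For scale, with standard 4 KiB pages each pdpt entry at source line 1619 does cover 512 pdes × 512 ptes × 4 KiB = 1 GiB of virtual address space, which is consistent with the need for "room for another pde entry" at line 1682 being met by expanding at the pdpt level.
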
1731 * on kernel entry and exit.
2013 * nuke the entry in the page table
2038 * entry after this one we remove that
2040 * and copy it to the rooted entry. Then free it instead.
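
Source lines 2038 and 2040 describe removal when the victim is the rooted entry: if another pv entry follows it on the list, its contents are copied into the rooted slot and that follower is freed instead, so the rooted entry itself never has to be deallocated. A self-contained sketch of that move over a simplified singly linked list; the pve type and its fields are invented for illustration, and the real code would also unlink the follower from its hash chain.

#include <stdlib.h>
#include <stddef.h>

/* Simplified pv entry for this sketch only. */
struct pve {
    struct pve   *qnext;    /* next entry on the per-page pv list */
    unsigned long va;
    void         *pmap;
};

/* Remove the rooted mapping for a page: if a follower exists, copy it
 * into the rooted slot and free the follower; otherwise mark the page
 * as having no mappings left. */
static void
remove_rooted(struct pve *rooted)
{
    struct pve *follower = rooted->qnext;

    if (follower != NULL) {
        rooted->va    = follower->va;
        rooted->pmap  = follower->pmap;
        rooted->qnext = follower->qnext;    /* unlink the follower          */
        free(follower);                     /* free it instead of the root  */
    } else {
        rooted->pmap = NULL;                /* page no longer mapped        */
    }
}
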
2367 * Delete this entry.
2608 * Must allocate a new pvlist entry while we're unlocked;
2610 * If we determine we need a pvlist entry, we will unlock
2612 * the allocated entry later (if we no longer need it).
2641 * if we have a previous managed page, lock the pv entry now. after
2708 * and remove old pvlist entry.
2709 * 2) Add pvlist entry for new mapping
2907 * Remember that we used the pvlist entry.
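
The hits from source lines 2608 through 2907 trace the pv bookkeeping on the enter path: allocate a pv entry while unlocked (since allocation may block), decide under the lock whether it is actually needed, and free it afterwards if it went unused. Here is a generic sketch of that allocate-outside-the-lock pattern using pthreads; the list layout and the need_new test are placeholders rather than the pmap_enter logic.

#include <stdlib.h>
#include <stddef.h>
#include <pthread.h>

struct pv_entry {
    struct pv_entry *next;
    unsigned long    va;            /* mapping being recorded */
};

struct pv_list {
    pthread_mutex_t  lock;
    struct pv_entry *head;
};

/* Record a mapping, allocating any needed pv entry before taking the
 * lock so we never block for memory while holding it. */
static void
pv_record(struct pv_list *pl, unsigned long va)
{
    /* 1) Allocate while unlocked; this step is allowed to block. */
    struct pv_entry *spare = malloc(sizeof(*spare));

    pthread_mutex_lock(&pl->lock);

    /* 2) Under the lock, decide whether the entry is really needed.
     *    (Placeholder test: only record each va once.)              */
    int need_new = 1;
    for (struct pv_entry *e = pl->head; e != NULL; e = e->next)
        if (e->va == va) { need_new = 0; break; }

    if (need_new && spare != NULL) {    /* a kernel would retry if malloc failed */
        spare->va   = va;
        spare->next = pl->head;
        pl->head    = spare;
        spare = NULL;                   /* remember that we used the entry */
    }
    pthread_mutex_unlock(&pl->lock);

    /* 3) Free the pre-allocated entry if it was not needed. */
    free(spare);
}
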
3135 * Set the page directory entry for this page table.
3167 pmap_expand_pml4(map, vaddr); /* need room for another pdpt entry */
3225 * Set the page directory entry for this page table.
3278 pmap_expand_pdpt(map, vaddr); /* need room for another pde entry */
3344 * Set the page directory entry for this page table.
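
The repeated "Set the page directory entry for this page table" hits (source lines 3135, 3225, 3344) mark the point in each expansion path where a freshly allocated table page is installed in its parent entry. A one-function sketch of that store using the architectural x86 bits; the constant and function names are local to this example.

#include <stdint.h>

/* Architectural x86 page-table entry bits. */
#define EX_PTE_VALID   0x001ULL   /* present              */
#define EX_PTE_WRITE   0x002ULL   /* writable             */
#define EX_PTE_USER    0x004ULL   /* user-mode accessible */
#define EX_PTE_PFN     0x000FFFFFFFFFF000ULL

/* Point the parent directory entry at a newly allocated table page. */
static void
set_directory_entry(volatile uint64_t *parent_entry, uint64_t table_phys)
{
    *parent_entry = (table_phys & EX_PTE_PFN)
                  | EX_PTE_VALID | EX_PTE_WRITE | EX_PTE_USER;
}
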
3511 * a fault; therefore, its page table entry
3592 * invalidate this TLB entry. The invalidation *must* follow
4178 vm_map_entry_t entry;
4199 &mapaddr, PMAP_NWINDOWS*PAGE_SIZE, (vm_map_offset_t)0, 0, &entry);