Lines Matching refs:and

5  * Common Development and Distribution License (the "License").
11 * and limitations under the License.
14 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
22 * Copyright (c) 1987, 2010, Oracle and/or its affiliates. All rights reserved.
29 * specific hat data structures and the sfmmu-specific hat procedures.
183 * NOTE: Don't alter this structure without changing defines above and
184 * the tsb_miss and protection handlers.
219 * routines (tsb miss/protection handlers and vatopfn) while not
276 * the impact on ism_map_t, TSB miss area, hblk tag and region id type in
325 * Returns 1 if map1 and map2 are equal.
449 * The value of SFMMU_L1_HMERLINKS and SFMMU_L2_HMERLINKS will be increased
468 * This macro grabs hat lock and allocates level 2 hat chain
470 * is called with alloc = 0, and lock = 0.
653 * find the shared hme entry during trap handling and therefore there is no
656 * worst case and add the number of ttes required to map the entire region
658 * has a 4M pagesize, and memory is low, the allocation of 4M pages may fail;
659 * 8K pages will then be allocated instead, and the first TSB, which stores 8K
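A worked instance of the sizing arithmetic described above, as a minimal C sketch (the function name and the 256M example region are illustrative assumptions, not from the source):

    #include <stdint.h>

    /*
     * Worst case: one tte per page of the region at the given page
     * size.  If 4M allocations fail and 8K pages are used instead,
     * the 8K TSB must cover the same span with many more ttes.
     */
    static uint64_t
    worst_case_ttes(uint64_t region_size, uint64_t pgsz)
    {
            return (region_size / pgsz);
    }

For a 256M region this is 64 ttes at 4M but 32768 ttes at 8K, which is the kind of worst case the text says the TSB sizing must accommodate.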
696 lock_t sfmmu_ctx_lock; /* sync ctx alloc and invalidation */
822 * in a fast path, and then recheck the flag after acquiring the lock in
875 * context), and context 1 (reserved for stolen context). So this constant
902 and tsbe, TSB_SOFTSZ_MASK, tmp2; /* tmp2=szc */ \
908 and tmp2, tmp1, tmp1; /* tsbent = virtpage & mask */ \
917 * The 3rd TSB corresponds to the shared context, and is used
928 and tsbe, TSB_SOFTSZ_MASK, tmp2; /* tmp2=szc */ \
934 and tmp2, tmp1, tmp1; /* tsbent = virtpage & mask */ \
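Restated in C, the pointer arithmetic these `and` instructions implement looks roughly like the sketch below (the constant values are assumptions about the usual sun4u layout: 16-byte tsbe entries and a 512-entry minimum TSB):

    #include <stdint.h>

    #define TSB_START_SIZE  9    /* smallest TSB: 2^9 = 512 entries (assumed) */
    #define TSB_ENTRY_SHIFT 4    /* each tsbe is 16 bytes: tag + data */
    #define MMU_PAGESHIFT   13   /* 8K base page */

    /*
     * The size code (szc) kept in the low bits of the TSB base
     * register selects how many virtual-page bits index the TSB;
     * the virtual page number is masked down to that many bits.
     */
    static uintptr_t
    tsb_entry_ptr(uintptr_t tsb_base, uintptr_t vaddr, unsigned int szc)
    {
            uintptr_t vpg = vaddr >> MMU_PAGESHIFT;
            uintptr_t mask = (1UL << (TSB_START_SIZE + szc)) - 1;

            return (tsb_base + ((vpg & mask) << TSB_ENTRY_SHIFT));
    }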
975 * use_shctx = 1 if shme is in scd and 0 otherwise
982 and hmentoff, HTAG_RID_MASK, hmentoff /* mask off rid */ ;\
983 and hmentoff, BT_ULMASK, use_shctx /* mask bit index */ ;\
989 and use_shctx, 0x1, use_shctx \
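The same computation in C: test whether the hmeblk's region id bit is set in the SCD's region bitmap (the map layout is an assumption; BT_ULSHIFT/BT_ULMASK mirror the bit-index constants used above):

    #include <stdint.h>

    #define BT_ULSHIFT 6         /* log2(bits in a 64-bit word) */
    #define BT_ULMASK  0x3f      /* bit index within one word */

    /*
     * use_shctx = 1 if rid's bit is set in rgn_map, 0 otherwise.
     * rgn_map is an array of 64-bit words indexed by rid.
     */
    static int
    region_in_scd(const uint64_t *rgn_map, uint32_t rid)
    {
            uint64_t word = rgn_map[rid >> BT_ULSHIFT];

            return ((word >> (rid & BT_ULMASK)) & 0x1);
    }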
1005 and tmp1, TSB_SOFTSZ_MASK, tmp1; \
1011 * This register contains utsb_pabase in bits 63:13, and TSB size
1041 * This register contains utsb_pabase in bits 63:13, and TSB size
1129 * hme_blk, and the rehash count. The rehash count is actually only 2 bits
1130 * and has the following meaning:
1136 * Note: The ordering and size of the hmeblk_tag members are implicitly known
1244 * the counts can be high and there are not enough bits in the tte. When
1249 * and sf_hment are at the same offsets in both structures. Whenever
1343 * limit on how much nucleus memory is required and to avoid overflowing the
1344 * tsbmiss uhashsz and khashsz data areas. The number below corresponds to
1377 * is only grabbed by the tsb miss handlers, vatopfn, and while
1402 * The bspage and re-hash part is 64 bits, with the sfmmup being another 64
1461 * address space and the other hash is for the kernel address space.
1462 * The number of buckets is calculated at boot time and stored in the global
1463 * variables "uhmehash_num" and "khmehash_num". By making the hash table size
1469 * An hme hash bucket contains a pointer to an hme_blk and the mutex that
1471 * Spitfire supports 4 page sizes. 8K and 64K pages only need one hash.
1472 * 512K pages need 2 hashes and 4M pages need 3 hashes.
1475 * and it varies depending on the page size as follows:
1481 * changes should be reflected in both versions. This function and the TSB
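A sketch of the bucket selection just outlined (one probe per rehash level; the hash expression and shift handling are assumptions, not the exact HME_HASH_FUNCTION):

    #include <stdint.h>

    struct hmehash_bucket {
            void *hmeh_lock;     /* per-bucket mutex (sketch) */
            void *hmeh_list;     /* hme_blk list head (sketch) */
    };

    static struct hmehash_bucket uhme_hash[1024], khme_hash[1024];
    static unsigned int uhmehash_num = 1024, khmehash_num = 1024;

    /*
     * Hash on (hatid, vaddr); the vaddr shift grows with the page
     * size being probed, so a 4M lookup may probe up to three
     * buckets (rehashes) before declaring a miss.  With a
     * power-of-two bucket count the % can be a mask.
     */
    static struct hmehash_bucket *
    hme_hash_bucket(void *hatid, uintptr_t vaddr, unsigned int shift, int user)
    {
            uintptr_t hash = (uintptr_t)hatid ^ (vaddr >> shift);

            if (user)
                    return (&uhme_hash[hash % uhmehash_num]);
            return (&khme_hash[hash % khmehash_num]);
    }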
1491 * list, while adding/removing a hme_blk to the list, and while
1498 * ctx and vaddr. It assumes the SFMMU_HASH_LOCK is held.
1539 * that owns the specified vaddr and hatid. If it doesn't find one, hmeblkp
1575 * that owns the specified vaddr and hatid. If it doesn't find one, hmeblkp
1650 * and initializes using DEMAP_RANGE_INIT(). It then passes a pointer to this
1662 * out or exiting) we allow these macros to take a NULL dmr input and do
1721 * The TSB is made up of tte entries. Both the tag and data are present
1726 * The cpu that holds the lock can then modify the data side, and the tag side.
1728 * clear the lock and allow the tsb entry to be read. It is assumed that all
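The update protocol these lines describe, sketched in C (the lock-bit position and field names are assumptions, and the memory barriers the real assembly uses between the stores are omitted):

    #include <stdint.h>

    struct tsbe {
            volatile uint64_t tte_tag;    /* tag word; carries the lock bit */
            volatile uint64_t tte_data;   /* tte data word */
    };

    #define TSBTAG_LOCKED   (1ULL << 63)  /* assumed lock-bit position */

    /*
     * Set the lock bit so readers treat the entry as a miss, update
     * the data side, then write the final tag, which clears the lock
     * and republishes the entry from a reader's point of view.
     */
    static void
    tsbe_update(struct tsbe *t, uint64_t tag, uint64_t data)
    {
            t->tte_tag = tag | TSBTAG_LOCKED;  /* invalidate for readers */
            t->tte_data = data;                /* modify data side */
            t->tte_tag = tag;                  /* clear lock, publish */
    }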
1775 * in the tsb miss handler and is 128 bytes (2 e$ lines).
1778 * and should be aligned on an ecache line boundary.
1816 * minimize cache misses in the kpm tsb miss handler and occupies
1818 * nucleus memory and it should be aligned on an ecache line
1820 * not much to share and the tsbmiss paths are different, so
1857 * For kernel TSBs we may go beyond the hardware supported sizes and support
1915 * | |_ VA hole (Spitfire), zeros (Cheetah and beyond)
1935 * | |_ VA hole (Spitfire) / ones (Cheetah and beyond)
1939 * Note that since we store 21..13 of each TSB's VA, TSBs and their slabs
1955 * Each register contains TSB's physical base and size code information
2021 * The jmp opcode [24:19] = 11 1000 and source register is bits [18:14].
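Decoding the fields named here from a 32-bit SPARC instruction word:

    #include <stdint.h>

    /*
     * SPARC format-3 instructions (op == 2) keep op3 in bits
     * [24:19] and rs1 in bits [18:14]; jmp/jmpl has op3 == 0x38
     * (11 1000 binary).
     */
    static int
    is_jmp(uint32_t instr, uint32_t *rs1)
    {
            uint32_t op = (instr >> 30) & 0x3;
            uint32_t op3 = (instr >> 19) & 0x3f;

            *rs1 = (instr >> 14) & 0x1f;
            return (op == 2 && op3 == 0x38);
    }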
2213 * example, a page with a 64K and a 4M mapping has a p_index value of 0x0A.
2216 * for 8K mappings, it is NOT USED by the code and SHOULD NOT be set.
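The encoding implied by the 0x0A example, in C (PAGESZ_TO_INDEX here is a sketch of the mapping from size code to bit):

    /*
     * One bit per large page size mapping the page: bit 1 = 64K,
     * bit 2 = 512K, bit 3 = 4M.  Bit 0 would be 8K, but 8K
     * mappings are not tracked here.  A 64K plus a 4M mapping
     * gives (1 << 1) | (1 << 3) == 0x0A.
     */
    #define PAGESZ_TO_INDEX(szc)    (1 << (szc))

    static int
    page_mapped_at_szc(unsigned int p_index, unsigned int szc)
    {
            return ((p_index & PAGESZ_TO_INDEX(szc)) != 0);
    }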
2222 * Defines for psm page struct fields and large page support
2414 * For kpm_smallpages, the state about how a kpm page is mapped and whether
2464 * ctx, hmeblk, mlistlock and other stats for sfmmu