Searched refs:map (Results 1 - 25 of 174) sorted by path

/barrelfish-2018-10-04/doc/000-overview/
Overview.tex
328 as a simple example, to map a frame into its vspace an application
337 in the page table, the process is secure: the user cannot map a frame
/barrelfish-2018-10-04/doc/004-virtual_memory/
VirtualMemory.tex
128 The map invocation can create multi-page mappings in one system call, as long
130 In the case of mappings that cross page table boundaries, we need a map
133 Additionally, the map invocation can be used to create superpages (e.g. 2MB
/barrelfish-2018-10-04/doc/006-routing/
Routing.tex
184 We assign virtual circuit identifiers at random. At each node, we use a hash table to map virtual circuit identifiers to a pointer to the channel state. The use of a hash table allows efficient message forwarding. When a message arrives, it can be determined where to forward this message by means of a simple look-up in the hash table. The complexity of this lookup is linear in the number of virtual circuit identifiers that map to the same hash bucket (the number of buckets in the hash table is a compile time constant).
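The Routing.tex paragraph above specifies the lookup structure completely enough to sketch. The following is an illustration of the described technique, not Barrelfish code; the names (vci_entry, vci_lookup) and the bucket count are invented:

    #include <stdint.h>
    #include <stddef.h>

    #define VCI_BUCKETS 256            /* number of buckets: compile-time constant */

    struct chan_state;                 /* per-channel forwarding state (opaque) */

    struct vci_entry {
        uint32_t vci;                  /* virtual circuit identifier */
        struct chan_state *chan;       /* pointer to the channel state */
        struct vci_entry *next;        /* entries that share a bucket */
    };

    static struct vci_entry *vci_table[VCI_BUCKETS];

    /* Forwarding lookup: one hash, then a walk that is linear in the
     * number of VCIs mapping to the same bucket, as the text says. */
    static struct chan_state *vci_lookup(uint32_t vci)
    {
        struct vci_entry *e;
        for (e = vci_table[vci % VCI_BUCKETS]; e != NULL; e = e->next)
            if (e->vci == vci)
                return e->chan;
        return NULL;
    }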
/barrelfish-2018-10-04/doc/012-services/
Services.tex
358 provides functionality to map and unmap pages, manage mapping of
/barrelfish-2018-10-04/doc/013-capability-mgmt/
type_system.tex
493 \arg CSpace address of the root (L1) CNode of the capability to map
494 \arg CSpace address of the capability to map
495 \arg Level of the capability to map
497 \arg Offset in bytes into the source capability of the region to map
498 \arg Size of the region to map in VNode entries
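Taken together, lines 493-498 enumerate the parameters of a VNode map invocation. The prototype below is a hedged reconstruction for illustration only; the name vnode_map_sketch and the concrete types are assumptions, not the actual Barrelfish API. With pte_count > 1, one such invocation yields the multi-page mappings mentioned in VirtualMemory.tex above:

    #include <stdint.h>

    typedef uint32_t capaddr_t;   /* CSpace address (assumed width) */
    typedef long     errval_t;    /* Barrelfish-style error value (assumed) */

    /* Illustrative prototype only, derived from the argument list above. */
    errval_t vnode_map_sketch(capaddr_t src_root,  /* root (L1) CNode of the cap to map */
                              capaddr_t src,       /* the capability to map */
                              int       src_level, /* level of that capability */
                              uint64_t  offset,    /* byte offset into the source cap */
                              uint64_t  pte_count);/* region size in VNode entries */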
/barrelfish-2018-10-04/doc/014-bulk-transfer/
bulk-transfer.tex
61 between two domains. These domains then map this physical memory into their
333 classification will map to deciding which process should get the
545 read-only access to this new shared-pool and then map it into
818 map it as read-only in the virtual address-space of consumer.
/barrelfish-2018-10-04/doc/015-disk-driver-arch/
running.tex
70 On x86 architectures, the BIOS memory map can be retrieved to determine the
71 layout of memory. Some BIOSs report a memory map that is not sorted by
75 map is preprocessed to eliminate conflicts and ensure ascending addresses.
77 As the preprocessed memory map might be larger due to the case where one memory
79 first need to copy the map into a larger buffer. The memory map is then sorted
82 At the end, regions are page-aligned as Barrelfish can only map whole pages.
94 \caption{Memory map transformation}
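Lines 70-82 describe the transformation precisely enough to sketch two of its steps. A minimal illustration, assuming a hypothetical two-field region record (real BIOS/E820 entries also carry a type field); eliminating conflicts between overlapping entries is omitted for brevity:

    #include <stdint.h>
    #include <stdlib.h>

    #define PAGE_SIZE 4096UL

    struct mmap_region {               /* hypothetical, simplified record */
        uint64_t base;
        uint64_t length;
    };

    static int cmp_base(const void *a, const void *b)
    {
        const struct mmap_region *ra = a, *rb = b;
        return (ra->base > rb->base) - (ra->base < rb->base);
    }

    /* Sort by ascending base address, then shrink every region to whole
     * pages, since only whole pages can be mapped. */
    static void mmap_preprocess(struct mmap_region *r, size_t n)
    {
        qsort(r, n, sizeof *r, cmp_base);
        for (size_t i = 0; i < n; i++) {
            uint64_t start = (r[i].base + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
            uint64_t end = (r[i].base + r[i].length) & ~(PAGE_SIZE - 1);
            r[i].base = start;
            r[i].length = end > start ? end - start : 0;
        }
    }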
/barrelfish-2018-10-04/doc/017-arm/
ARM.tex
258 to map many kernel devices since most drivers run in user space on
321 We map the whole available physical memory into the kernel's virtual
331 map the area of RAM containing the CPU driver's exception vectors.
336 map low memory virtual-to-physical as well, as a way to access
368 is initialized to map 1GB of RAM at 0x80000000, and the exception
370 is set to map the lower 2GB of the physical address space 1-1 to
715 \item Create a physical memory map for the available memory
745 \item Reset mapping, only map in the physical memory aliased at high
/barrelfish-2018-10-04/doc/019-device-drivers/
DeviceDriver.tex
257 \fnname{map\_device\_register} is a function provided by the driverkit
260 size of the register (\varname{4096}) and will map this at a random virtual
420 device and also permissions (capabilities) to map these address registers in
422 \pathname{include/pci/mem.h} to map these BARs into the address space of the
450 receives a list of capabilities for a particular device which it can map in
474 to the driver in a way that driverkit can map them. What capabilities we give
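Lines 257-260 give enough of map_device_register's behavior to show intended use. The signature below is an assumption made for the sketch (driverkit's real prototype may differ), as are the error-handling conventions:

    #include <stddef.h>
    #include <stdint.h>

    typedef long errval_t;                 /* assumed error type */
    typedef uintptr_t lpaddr_t, lvaddr_t;  /* assumed address types */

    /* Assumed prototype of the driverkit helper described above. */
    errval_t map_device_register(lpaddr_t base, size_t size, lvaddr_t *va);

    /* Map one 4096-byte register region; per the text, the mapping is
     * placed at a random virtual address, returned through 'regs'. */
    static errval_t attach_regs(lpaddr_t bar_base, volatile uint32_t **out)
    {
        lvaddr_t regs;
        errval_t err = map_device_register(bar_base, 4096, &regs);
        if (err != 0)
            return err;
        *out = (volatile uint32_t *)regs;
        return 0;
    }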
/barrelfish-2018-10-04/doc/021-cpudriver/
cpudriver.tex
143 function called \fnname{start\_aps\_x86\_64\_start} will afterwards map in the
169 \item \textbf{Number of base-page-sized pages to map:} If non-zero, this
172 of the region (starting from offset zero) to map.
/barrelfish-2018-10-04/doc/022-armv8/
report.tex
105 assumptions (available hardware, memory map, etc.), that programmers can rely
417 physical window. The ARMv8 CPU driver \textbf{shall not} dynamically map
652 \item Hagfish queries UEFI for the system memory map, then allocates and
664 \item The EFI memory map (including Hagfish's custom-tagged regions).
697 \item The final EFI memory map, with all areas allocated by Hagfish to
/barrelfish-2018-10-04/include/barrelfish/
pmap.h
26 errval_t (*map)(struct pmap* pmap, genvaddr_t vaddr, struct capref frame, member in struct:pmap_funcs
/barrelfish-2018-10-04/include/openssl/
dtls1.h
111 unsigned long map; /* track 32 packets on 32-bit systems member in struct:dtls1_bitmap_st
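The 'map' field above is the bit window of DTLS replay detection. The sketch below shows the generic sliding-window technique (RFC 6347 style) that such a bitmap supports; it is not OpenSSL's dtls1 code, and the struct and function names are invented:

    #include <stdint.h>

    struct replay_window {
        uint64_t max_seq;      /* highest sequence number seen so far */
        unsigned long map;     /* bit i set => (max_seq - i) already seen */
    };

    /* Returns 1 if 'seq' is fresh (and records it), 0 if it is a replay
     * or has fallen out of the window. */
    static int replay_check_and_update(struct replay_window *w, uint64_t seq)
    {
        const unsigned width = 8 * sizeof(w->map);
        if (seq > w->max_seq) {                 /* new high: slide the window */
            uint64_t shift = seq - w->max_seq;
            w->map = shift >= width ? 0 : w->map << shift;
            w->map |= 1UL;                      /* mark 'seq' itself as seen */
            w->max_seq = seq;
            return 1;
        }
        uint64_t off = w->max_seq - seq;
        if (off >= width)
            return 0;                            /* too old: reject */
        if (w->map & (1UL << off))
            return 0;                            /* duplicate: reject */
        w->map |= (1UL << off);
        return 1;
    }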
/barrelfish-2018-10-04/include/sys/
file.h
120 typedef int fo_mmap_t(struct file *fp, vm_map_t map, vm_offset_t *addr,
401 fo_mmap(struct file *fp, vm_map_t map, vm_offset_t *addr, vm_size_t size, argument
408 return ((*fp->f_ops->fo_mmap)(fp, map, addr, size, prot, cap_maxprot,
/barrelfish-2018-10-04/include/vm/
memguard.h
49 #define memguard_init(map) do { } while (0)
swap_pager.c
1794 * anything just return. If we run out of space in the map we wait
2397 * The map must be locked.
2400 * VM objects backing the VM map. To make up for fractional losses,
2401 * if the VM object has any swap use at all the associated map entries
2407 vm_map_t map; local
2412 map = &vmspace->vm_map;
2415 for (cur = map->header.next; cur != &map->header; cur = cur->next) {
vm_extern.h
82 int vm_fault_hold(vm_map_t map, vm_offset_t vaddr, vm_prot_t fault_type,
84 int vm_fault_quick_hold_pages(vm_map_t map, vm_offset_t addr, vm_size_t len,
vm_fault.c
123 vm_map_t map; member in struct:faultstate
150 vm_map_lookup_done(fs->map, fs->entry);
200 * FALSE, one for the map entry with MAP_ENTRY_NOSYNC
246 * requiring the given permissions, in the map specified.
248 * associated physical map.
256 * The map in question must be referenced, and remains so.
260 vm_fault(vm_map_t map, vm_offset_t vaddr, vm_prot_t fault_type, argument
270 if (map != kernel_map && KTRPOINT(td, KTR_FAULT))
273 result = vm_fault_hold(map, trunc_page(vaddr), fault_type, fault_flags,
276 if (map !
283 vm_fault_hold(vm_map_t map, vm_offset_t vaddr, vm_prot_t fault_type, int fault_flags, vm_page_t *m_hold) argument
1217 vm_fault_quick_hold_pages(vm_map_t map, vm_offset_t addr, vm_size_t len, vm_prot_t prot, vm_page_t *ma, int max_count) argument
[all...]
vm_glue.c
161 vm_map_t map; local
166 map = &curproc->p_vmspace->vm_map;
167 if ((vm_offset_t)addr + len > vm_map_max(map) ||
171 vm_map_lock_read(map);
172 rv = vm_map_check_protection(map, trunc_page((vm_offset_t)addr),
174 vm_map_unlock_read(map);
294 vm_sync_icache(vm_map_t map, vm_offset_t va, vm_offset_t sz) argument
297 pmap_sync_icache(map->pmap, va, sz);
900 * Lock the map until swapout
903 * the map
[all...]
vm_kern.c
153 * Allocates a region from the kernel address map and physical pages
207 * Allocates a region from the kernel address map and physically
267 * Allocates a map to manage a subrange
273 * min, max Returned endpoints of map
424 * Allocates pageable memory from a sub-map of the kernel. If the submap
430 kmap_alloc_wait(map, size)
431 vm_map_t map;
442 * To make this work for more than one map, use the map's lock
445 vm_map_lock(map);
[all...]
vm_map.c
103 * memory from one map to another.
112 * which may not align with existing map entries, all
120 * by copying VM object references from one map to
131 static void _vm_map_init(vm_map_t map, pmap_t pmap, vm_offset_t min,
134 static void vm_map_entry_dispose(vm_map_t map, vm_map_entry_t entry);
135 static void vm_map_entry_unwire(vm_map_t map, vm_map_entry_t entry);
136 static void vm_map_pmap_enter(vm_map_t map, vm_offset_t addr, vm_prot_t prot,
142 static int vm_map_stack_locked(vm_map_t map, vm_offset_t addrbos,
145 static void vm_map_wire_entry_failure(vm_map_t map, vm_map_entry_t entry,
163 * addresses fall within the valid range of the map
233 vm_map_t map; local
255 vm_map_t map; local
500 _vm_map_lock(vm_map_t map, const char *file, int line) argument
540 _vm_map_unlock(vm_map_t map, const char *file, int line) argument
552 _vm_map_lock_read(vm_map_t map, const char *file, int line) argument
562 _vm_map_unlock_read(vm_map_t map, const char *file, int line) argument
574 _vm_map_trylock(vm_map_t map, const char *file, int line) argument
587 _vm_map_trylock_read(vm_map_t map, const char *file, int line) argument
608 _vm_map_lock_upgrade(vm_map_t map, const char *file, int line) argument
635 _vm_map_lock_downgrade(vm_map_t map, const char *file, int line) argument
651 vm_map_locked(vm_map_t map) argument
662 _vm_map_assert_locked(vm_map_t map, const char *file, int line) argument
692 _vm_map_unlock_and_wait(vm_map_t map, int timo, const char *file, int line) argument
711 vm_map_wakeup(vm_map_t map) argument
725 vm_map_busy(vm_map_t map) argument
733 vm_map_unbusy(vm_map_t map) argument
745 vm_map_wait_busy(vm_map_t map) argument
788 _vm_map_init(vm_map_t map, pmap_t pmap, vm_offset_t min, vm_offset_t max) argument
804 vm_map_init(vm_map_t map, pmap_t pmap, vm_offset_t min, vm_offset_t max) argument
818 vm_map_entry_dispose(vm_map_t map, vm_map_entry_t entry) argument
830 vm_map_entry_create(vm_map_t map) argument
990 vm_map_entry_link(vm_map_t map, vm_map_entry_t after_where, vm_map_entry_t entry) argument
1033 vm_map_entry_unlink(vm_map_t map, vm_map_entry_t entry) argument
1072 vm_map_entry_resize_free(vm_map_t map, vm_map_entry_t entry) argument
1099 vm_map_lookup_entry( vm_map_t map, vm_offset_t address, vm_map_entry_t *entry) argument
1179 vm_map_insert(vm_map_t map, vm_object_t object, vm_ooffset_t offset, vm_offset_t start, vm_offset_t end, vm_prot_t prot, vm_prot_t max, int cow) argument
1380 vm_map_findspace(vm_map_t map, vm_offset_t start, vm_size_t length, vm_offset_t *addr) argument
1447 vm_map_fixed(vm_map_t map, vm_object_t object, vm_ooffset_t offset, vm_offset_t start, vm_size_t length, vm_prot_t prot, vm_prot_t max, int cow) argument
1483 vm_map_find(vm_map_t map, vm_object_t object, vm_ooffset_t offset, vm_offset_t *addr, vm_size_t length, vm_offset_t max_addr, int find_space, vm_prot_t prot, vm_prot_t max, int cow) argument
1561 vm_map_simplify_entry(vm_map_t map, vm_map_entry_t entry) argument
1657 _vm_map_clip_start(vm_map_t map, vm_map_entry_t entry, vm_offset_t start) argument
1741 _vm_map_clip_end(vm_map_t map, vm_map_entry_t entry, vm_offset_t end) argument
1814 vm_map_submap( vm_map_t map, vm_offset_t start, vm_offset_t end, vm_map_t submap) argument
1866 vm_map_pmap_enter(vm_map_t map, vm_offset_t addr, vm_prot_t prot, vm_object_t object, vm_pindex_t pindex, vm_size_t size, int flags) argument
1957 vm_map_protect(vm_map_t map, vm_offset_t start, vm_offset_t end, vm_prot_t new_prot, boolean_t set_max) argument
2104 vm_map_madvise( vm_map_t map, vm_offset_t start, vm_offset_t end, int behav) argument
2284 vm_map_inherit(vm_map_t map, vm_offset_t start, vm_offset_t end, vm_inherit_t new_inheritance) argument
2323 vm_map_unwire(vm_map_t map, vm_offset_t start, vm_offset_t end, int flags) argument
2490 vm_map_wire_entry_failure(vm_map_t map, vm_map_entry_t entry, vm_offset_t failed_addr) argument
2524 vm_map_wire(vm_map_t map, vm_offset_t start, vm_offset_t end, int flags) argument
2785 vm_map_sync( vm_map_t map, vm_offset_t start, vm_offset_t end, boolean_t syncio, boolean_t invalidate) argument
2880 vm_map_entry_unwire(vm_map_t map, vm_map_entry_t entry) argument
2907 vm_map_entry_delete(vm_map_t map, vm_map_entry_t entry) argument
2978 vm_map_delete(vm_map_t map, vm_offset_t start, vm_offset_t end) argument
3071 vm_map_remove(vm_map_t map, vm_offset_t start, vm_offset_t end) argument
3097 vm_map_check_protection(vm_map_t map, vm_offset_t start, vm_offset_t end, vm_prot_t protection) argument
3462 vm_map_stack(vm_map_t map, vm_offset_t addrbos, vm_size_t max_ssize, vm_prot_t prot, vm_prot_t max, int cow) argument
3493 vm_map_stack_locked(vm_map_t map, vm_offset_t addrbos, vm_size_t max_ssize, vm_size_t growsize, vm_prot_t prot, vm_prot_t max, int cow) argument
3588 vm_map_t map = &vm->vm_map; local
3976 vm_map_t map = *var_map; local
4149 vm_map_t map = *var_map; local
4220 vm_map_lookup_done(vm_map_t map, vm_map_entry_t entry) argument
4235 vm_map_print(vm_map_t map) argument
[all...]
vm_map.h
64 * Virtual memory map module definitions.
76 * vm_map_t the high-level address map data structure.
77 * vm_map_entry_t an entry in an address map.
85 * another map (called a "sharing map") which denotes read-write
90 struct vm_map *sub_map; /* belongs to another map */
94 * Address map entries consist of start and end addresses,
95 * a VM object (or sharing map) and offset into that object,
112 vm_eflags_t eflags; /* map entry flags */
169 * A map i
204 vm_map_max(const struct vm_map *map) argument
210 vm_map_min(const struct vm_map *map) argument
216 vm_map_pmap(vm_map_t map) argument
222 vm_map_modflags(vm_map_t map, vm_flags_t set, vm_flags_t clear) argument
[all...]
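The comments at lines 76-95 describe the two central types well enough to condense. A simplified sketch of the entry layout they describe; the field subset and names are chosen for exposition, and the real vm_map_entry in vm_map.h carries many more fields and a different list/tree linkage:

    #include <stdint.h>

    typedef uint64_t vm_offset_t;      /* widths assumed for the sketch */
    typedef int64_t  vm_ooffset_t;

    struct vm_object;                  /* opaque here */
    struct vm_map;

    /* Condensed from the description above: an entry covers [start, end)
     * and is backed by a VM object, or by another map (a "sharing map" or
     * submap), plus an offset into that backing object. */
    struct map_entry_sketch {
        vm_offset_t start, end;
        union {
            struct vm_object *vm_object;   /* backing VM object ... */
            struct vm_map *sub_map;        /* ... or another map */
        } object;
        vm_ooffset_t offset;
        struct map_entry_sketch *next;     /* real code links prev/next + a tree */
    };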
vm_mmap.c
223 * ld.so sometimes issues anonymous map requests with non-zero
474 vm_map_t map; local
491 map = &td->td_proc->p_vmspace->vm_map;
496 rv = vm_map_sync(map, addr, addr + size, (flags & MS_ASYNC) == 0,
532 vm_map_t map; local
549 map = &td->td_proc->p_vmspace->vm_map;
550 if (addr < vm_map_min(map) || addr + size > vm_map_max(map))
552 vm_map_lock(map);
559 if (vm_map_lookup_entry(map, add
686 vm_map_t map; local
746 vm_map_t map; local
1011 vm_map_t map; local
1074 vm_map_t map; local
1151 vm_map_t map; local
1195 vm_map_t map; local
1406 vm_mmap(vm_map_t map, vm_offset_t *addr, vm_size_t size, vm_prot_t prot, vm_prot_t maxprot, int flags, objtype_t handle_type, void *handle, vm_ooffset_t foff) argument
1477 vm_mmap_object(vm_map_t map, vm_offset_t *addr, vm_size_t size, vm_prot_t prot, vm_prot_t maxprot, int flags, vm_object_t object, vm_ooffset_t foff, boolean_t writecounted, struct thread *td) argument
[all...]
vm_object.c
1288 * Split the pages in a map entry into a new object. This affords
2124 * cause allocation of the separate object for the map
2415 _vm_object_in_map(vm_map_t map, vm_object_t object, vm_map_entry_t entry) argument
2422 if (map == 0)
2426 tmpe = map->header.next;
2427 entcount = map->nentries;
2428 while (entcount-- && (tmpe != &map->header)) {
2429 if (_vm_object_in_map(map, object, tmpe)) {
2478 * make sure that internal objs are in a map somewhere
2490 "vmochk: internal obj is not in a map
[all...]
vm_pageout.c
575 * The object and map must be locked.
655 * deactivate some number of pages in a map, try to do it fairly, but
659 vm_pageout_map_deactivate_pages(map, desired)
660 vm_map_t map;
667 if (!vm_map_trylock(map))
677 tmpe = map->header.next;
678 while (tmpe != &map->header) {
698 vm_pageout_object_deactivate_pages(map->pmap, bigobj, desired);
705 tmpe = map->header.next;
706 while (tmpe != &map
1365 vm_map_t map; local
[all...]

Completed in 146 milliseconds
