# d1f280c8 | 01-Apr-2012 | Hamish Morrison <hamishm53@gmail.com>
Add support for pthread_attr_get/setguardsize():
* Added the aforementioned functions.
* create_area_etc() now takes a guard size parameter.
* The thread_info::stack_base/end range now refers to the usable range only.
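
Usage of these calls follows plain POSIX; a minimal sketch (the 64 KB guard value is just an example, not a Haiku default):

```cpp
#include <pthread.h>
#include <stdio.h>

static void* worker(void*)
{
    return NULL;
}

int main()
{
    pthread_attr_t attr;
    pthread_attr_init(&attr);

    // Request a larger guard region below the stack to catch overflows.
    pthread_attr_setguardsize(&attr, 64 * 1024);

    size_t guardSize;
    pthread_attr_getguardsize(&attr, &guardSize);
    printf("guard size: %zu bytes\n", guardSize);

    pthread_t thread;
    pthread_create(&thread, &attr, worker, NULL);
    pthread_join(thread, NULL);

    pthread_attr_destroy(&attr);
    return 0;
}
```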

# 6e2f6d1a | 29-Jul-2012 | Alex Smith <alex@alex-smith.me.uk>
Changed cookie type for get_next_area_info() to ssize_t. The cookie is used to store the base address of the area that was just visited. On 64-bit systems, int32 is not sufficient. Therefore, changed to ssize_t which retains compatibility on x86 while expanding to a sufficient size on x86_64.
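
For reference, iterating areas with the widened cookie looks like this (standard Area API usage; passing team 0 means the calling team):

```cpp
#include <OS.h>
#include <stdio.h>

int main()
{
    area_info info;
    ssize_t cookie = 0;  // was int32; now wide enough for 64-bit base addresses

    while (get_next_area_info(0, &cookie, &info) == B_OK) {
        printf("%-32s %p  %ld KB\n", info.name, info.address,
            (long)(info.size / 1024));
    }
    return 0;
}
```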

# 7418dbd9 | 03-Dec-2011 | Michael Lotz <mmlr@mlotz.ch>
Introduce debug page-wise kernel area protection functions. This adds a pair of functions, vm_prepare_kernel_area_debug_protection() and vm_set_kernel_area_debug_protection(), to set a kernel area up for page-wise protection and to actually protect individual pages, respectively. It was already possible to read- and write-protect full areas via area protection flags while not mapping any actual pages. For areas that do have mapped pages this doesn't work, however, as no fault, at which the permissions could be checked, is generated on access. The new functions use the debug helpers of the translation map to mark individual pages as non-present without unmapping them. This allows pages to be "protected", i.e. to cause a fault on read and write access; as they aren't actually unmapped, they can later be marked present again. Note that these are debug helpers with quite a few restrictions, as described in the comment above the functions, and they are only useful for some very specific and constrained use cases.
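
A usage sketch; the commit message doesn't spell out the signatures, so the parameter lists below are assumptions:

```cpp
// Hypothetical usage -- parameter lists inferred from the commit
// message, not verified against the actual headers.
void* cookie;
if (vm_prepare_kernel_area_debug_protection(areaID, &cookie) == B_OK) {
    // Mark one page non-present so that any access faults ...
    vm_set_kernel_area_debug_protection(cookie, pageAddress,
        B_PAGE_SIZE, 0);
    // ... and later make it accessible again without remapping.
    vm_set_kernel_area_debug_protection(cookie, pageAddress,
        B_PAGE_SIZE, B_KERNEL_READ_AREA | B_KERNEL_WRITE_AREA);
}
```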

# e8d73efc | 12-Jun-2011 | Rene Gollent <anevilyak@gmail.com>
Should have been part of previous commit. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@42130 a95241bf-73f2-0310-859d-f6bbb57e9c96

# 4535495d | 10-Jan-2011 | Ingo Weinhold <ingo_weinhold@gmx.de>
Merged the signals branch into trunk, with these changes:
* The team and thread kernel structures have been renamed to Team and Thread respectively and moved into the new BKernel namespace.
* Several (kernel add-on) sources have been converted from C to C++, since private kernel headers are included that are no longer C compatible.
Changes after merging:
* Fixed the gcc 2 build (warnings mainly in the scary firewire bus manager).
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@40196 a95241bf-73f2-0310-859d-f6bbb57e9c96

# c955359c | 18-Jun-2010 | Ingo Weinhold <ingo_weinhold@gmx.de>
Added vm_available_not_needed_memory_debug(), a vm_available_not_needed_memory() version that can be called from within the kernel debugger. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37167 a95241bf-73f2-0310-859d-f6bbb57e9c96

# 377ecfe7 | 14-Jun-2010 | Ingo Weinhold <ingo_weinhold@gmx.de>
* Renamed cache_type_to_string() to vm_cache_type_to_string() and made it kernel private.
* Moved the dumping code from dump_cache() to the new VMCache::Dump().
* Overrode VMCache::Dump() in VMVnodeCache to also print the vnode (see the sketch after this entry).
* Removed the no longer needed VMCache::GetLock().
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37138 a95241bf-73f2-0310-859d-f6bbb57e9c96
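
The override pattern in question, sketched; the Dump() parameter and the member name are assumptions, not taken from the actual sources:

```cpp
// Sketch only: print the shared cache state, then the vnode specifics.
void
VMVnodeCache::Dump(bool showPages) const
{
    VMCache::Dump(showPages);          // common VMCache state first
    kprintf("  vnode: %p\n", fVnode);  // then the backing vnode
}
```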

# a8ad734f | 14-Jun-2010 | Ingo Weinhold <ingo_weinhold@gmx.de>
* Introduced the structures {virtual,physical}_address_restrictions, which specify restrictions for virtual/physical addresses (see the sketch after this entry).
* vm_page_allocate_page_run():
  - Fixed the conversion of base/limit to array indexes: sPhysicalPageOffset was not taken into account.
  - Now takes a physical_address_restrictions instead of base/limit and also supports alignment and boundary restrictions.
* map_backing_store(), VM[User,Kernel]AddressSpace::InsertArea()/ReserveAddressRange() now take a virtual_address_restrictions parameter. They also support an alignment independent from the range size.
* create_area_etc(), vm_create_anonymous_area(): Now take {virtual,physical}_address_restrictions parameters.
* Removed the no longer needed B_PHYSICAL_BASE_ADDRESS.
* DMAResources:
  - Fixed potential uint32 overflows when initializing from device node attributes.
  - Fixed the bounce buffer creation TODOs: By using create_area_etc() with the new restrictions parameters, we can directly support physical high address, boundary, and alignment.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37131 a95241bf-73f2-0310-859d-f6bbb57e9c96
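
The two structures plausibly look like this; field names follow later Haiku headers and should be read as assumptions here:

```cpp
typedef struct virtual_address_restrictions {
    void*   address;                // base address or hint; meaning
    uint32  address_specification;  //   depends on this specification
    size_t  alignment;              // 0 = no alignment restriction
} virtual_address_restrictions;

typedef struct physical_address_restrictions {
    phys_addr_t low_address;   // lowest acceptable physical address
    phys_addr_t high_address;  // highest acceptable address; 0 = none
    phys_size_t alignment;     // required physical alignment
    phys_size_t boundary;      // a run may not cross such a boundary
} physical_address_restrictions;
```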

# 641b3c82 | 09-Jun-2010 | Ingo Weinhold <ingo_weinhold@gmx.de>
Renamed allocate_early_physical_page() to vm_allocate_early_physical_page() and made it public. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37072 a95241bf-73f2-0310-859d-f6bbb57e9c96

# 2ea7b17c | 09-Jun-2010 | Ingo Weinhold <ingo_weinhold@gmx.de>
* vm_allocate_early(): Replaced the "bool blockAlign" parameter with a more flexible "addr_t alignment" (see the sketch after this entry).
* X86PagingMethod32Bit::PhysicalPageSlotPool::InitInitial(), generic_vm_physical_page_mapper_init(): Use vm_allocate_early()'s alignment feature instead of aligning by hand.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37070 a95241bf-73f2-0310-859d-f6bbb57e9c96
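
After this change a caller passes the alignment directly instead of over-allocating and rounding by hand; a sketch with an assumed parameter order:

```cpp
// Assumed shape of the call after this commit; the alignment value is
// an arbitrary example.
addr_t base = vm_allocate_early(args, poolSize, poolSize,
    B_KERNEL_READ_AREA | B_KERNEL_WRITE_AREA,
    32 * B_PAGE_SIZE /* alignment; previously rounded up manually */);
```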

# 435c43f5 | 02-Jun-2010 | Ingo Weinhold <ingo_weinhold@gmx.de>
* Introduced the type generic_io_vec, which is similar to iovec but uses types that are wide enough for both virtual and physical addresses (see the sketch after this entry).
* DMABuffer, IORequest, IOScheduler, ... and code using them: Use generic_io_vec and generic_{addr,size}_t where necessary.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36997 a95241bf-73f2-0310-859d-f6bbb57e9c96
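
The new vector type mirrors struct iovec with wider fields; the field names follow later Haiku sources and are assumed here:

```cpp
typedef struct generic_io_vec {
    generic_addr_t base;    // a virtual or a physical address
    generic_size_t length;  // length of the run in bytes
} generic_io_vec;
```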

# 147133b7 | 25-May-2010 | Ingo Weinhold <ingo_weinhold@gmx.de>
* First run through the kernel's private parts to use phys_{addr,size}_t where appropriate.
* Typedef'ed page_num_t to phys_addr_t and used it in more places in vm_page.{h,cpp}.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36937 a95241bf-73f2-0310-859d-f6bbb57e9c96

# 13fa4c84 | 06-May-2010 | Ingo Weinhold <ingo_weinhold@gmx.de>
* Introduced the new area creation flag CREATE_AREA_DONT_COMMIT_MEMORY. map_backing_store() doesn't commit memory when this flag is given.
* Used the new flag in vm_copy_area(): We no longer commit memory for read-only areas. This prevents read-only mapped files from suddenly requiring memory after fork(), and might improve the situation a bit on machines with very little RAM. We should probably mark writable copies over-committing, since the usual case is fork() + exec(), where the child normally doesn't need more than a few pages before calling exec(). That would significantly reduce the memory requirement for jamming the Haiku tree.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36651 a95241bf-73f2-0310-859d-f6bbb57e9c96

# 90788614 | 01-May-2010 | Ingo Weinhold <ingo_weinhold@gmx.de>
* Changed some parameters of VM syscalls from int to uint32, mostly for the sake of consistency.
* Moved the B_OVERCOMMITTING_AREA flag from B_KERNEL_AREA_FLAGS to B_USER_AREA_FLAGS, since we really allow it to be passed from userland.
* Most VM syscalls now check the provided protection against B_USER_AREA_FLAGS instead of B_USER_PROTECTION. This way they allow for B_OVERCOMMITTING_AREA as well.
* _user_map_file(), _user_set_memory_protection(): Check the protection like the other syscalls do, and use fix_protection() instead of doing that manually (see the sketch after this entry).
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36572 a95241bf-73f2-0310-859d-f6bbb57e9c96
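
A plausible shape of the unified check, assumed rather than quoted from the actual syscall code:

```cpp
// Sketch: reject anything userland may not request, then normalize.
if ((protection & ~(uint32)B_USER_AREA_FLAGS) != 0)
    return B_BAD_VALUE;
fix_protection(&protection);  // e.g. derive matching kernel access bits
```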

# c3676b54 | 13-Apr-2010 | Ingo Weinhold <ingo_weinhold@gmx.de>
* Added vm_debug_copy_page_memory(), which copies memory from a potentially unmapped page.
* debug_{mem,strl}cpy():
  - Added a "team" parameter specifying the address space in which the addresses are to be interpreted.
  - When the standard memcpy() (with fault handler) fails, fall back to vm_debug_copy_page_memory().
* Added debug_is_debugged_team(): Predicate returning true if the supplied team_id refers to the same team that debug_get_debugged_thread() belongs to.
* Added the DebuggedThreadSetter class for scope-based debug_set_debugged_thread(). Made use of it in several debugger functions.
* print_demangled_call() (x86): Fixed unsafe memory access. This allows KDL stack traces to work correctly again, even if the page daemon has already unmapped the pages concerned.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36230 a95241bf-73f2-0310-859d-f6bbb57e9c96
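
The extended copy helper presumably gained a signature along these lines (assumed), used from KDL commands like so:

```cpp
// Assumed post-change declaration:
status_t debug_memcpy(team_id teamID, void* to, const void* from,
    size_t size);

// Typical debugger-command use: read from another team's address
// space, tolerating pages the page daemon has unmapped.
char buffer[64];
if (debug_memcpy(teamID, buffer, (const void*)userAddress,
        sizeof(buffer)) != B_OK)
    kprintf("address not readable\n");
```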

# 349039ff | 11-Apr-2010 | Ingo Weinhold <ingo_weinhold@gmx.de>
Added vm_[un]wire_page(), which are essentially versions of [un]lock_memory_etc() optimized for a single page. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36156 a95241bf-73f2-0310-859d-f6bbb57e9c96
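
A hedged sketch of the single-page variant; VMPageWiringInfo and the parameter list follow later Haiku sources and are assumptions here:

```cpp
VMPageWiringInfo info;
if (vm_wire_page(team, address, true /* writable */, &info) == B_OK) {
    // The page is pinned; it can be accessed without risking a fault.
    // ...
    vm_unwire_page(&info);
}
```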

# 40bb9481 | 03-Feb-2010 | Ingo Weinhold <ingo_weinhold@gmx.de>
* Removed the useless return parameter from vm_remove_all_page_mappings().
* Added vm_clear_page_mapping_accessed_flags() and vm_remove_all_page_mappings_if_unaccessed(), which combine the functionality of vm_test_map_activation(), vm_clear_map_flags(), and vm_remove_all_page_mappings(), thus saving lots of calls to translation map methods. The backend is the new method VMTranslationMap::ClearAccessedAndModified().
* Started to make use of the cached page queue and changed the meaning of the other non-free queues slightly:
  - Active queue: Contains mapped pages that have been used recently.
  - Inactive queue: Contains mapped pages that have not been used recently. Also contains unmapped temporary pages.
  - Modified queue: Contains unmapped modified pages.
  - Cached queue: Contains unmapped unmodified pages (LRU sorted).
  Unless we're actually low on memory and actively paging, the modified and cached queues only contain non-temporary pages. Cached pages are considered quasi free: they still belong to a cache, but since they are unmodified and unmapped, they can be freed immediately, and this is what vm_page_[try_]reserve_pages() now do when there are no actually free pages left. Essentially this means that pages storing cached file data, unless mmap()ped, are no longer considered used and don't contribute to page pressure. Paging will not happen as long as there are enough free + cached pages available.
* Reimplemented the page daemon. It no longer scans all pages, but instead works the page queues. As long as the free pages situation is harmless, it only iterates through the active queue and deactivates pages that have not been used recently. When paging occurs, it additionally scans the inactive queue and frees pages that have not been used recently.
* Changed the page reservation/allocation interface: vm_page_[try_]reserve_pages(), vm_page_unreserve_pages(), and vm_page_allocate_page() now take a vm_page_reservation structure pointer (see the sketch after this entry). The reservation functions initialize the structure, which currently consists only of a count member for the number of still-reserved pages. vm_page_allocate_page() decrements the count, and vm_page_unreserve_pages() unreserves the remaining pages (if any). The advantages are that reservation/unreservation mismatches cannot occur anymore, that vm_page_allocate_page() can verify that the caller indeed has a reserved page left, and that there's no unnecessary pressure on the free page pool anymore. The only disadvantage is that the vm_page_reservation object needs to be passed around a bit.
* Reworked the page reservation implementation:
  - Got rid of sSystemReservedPages and sPageDeficit. Instead, sUnreservedFreePages now actually contains the number of free pages that have not yet been reserved (it cannot become negative anymore), and the new sUnsatisfiedPageReservations contains the number of pages still needed for reservation.
  - Threads waiting for reservations now add themselves to a waiter queue, which is ordered by descending priority (VM priority and thread priority). High-priority waiters are served first when pages become available. Fixes #5328.
* cache_prefetch_vnode(): Would reserve one page less than later allocated if the size wasn't page aligned.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35393 a95241bf-73f2-0310-859d-f6bbb57e9c96
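
The reworked interface in use; the page-state constant and priority value are assumptions:

```cpp
vm_page_reservation reservation;
vm_page_reserve_pages(&reservation, 4, VM_PRIORITY_USER);
    // blocks until 4 pages are reserved for this caller

for (int i = 0; i < 3; i++) {
    vm_page* page = vm_page_allocate_page(&reservation,
        PAGE_STATE_WIRED);  // consumes one reserved page; cannot fail
    // ... map and use the page ...
}

vm_page_unreserve_pages(&reservation);  // releases the unused remainder
```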

# cff6e9e4 | 26-Jan-2010 | Ingo Weinhold <ingo_weinhold@gmx.de>
* The system now holds back a small reserve of committable memory and pages. The memory and page reservation functions have a new "priority" parameter that indicates how deep the function may tap into that reserve. The currently existing priority levels are "user", "system", and "VIP" (see the sketch after this entry). The idea is that user programs should never be able to cause a state that gets the kernel into trouble due to heavy battling for memory. The "VIP" level (not really used yet) is intended for allocations that are required to eventually free memory (in the page writer). More levels are conceivable in the future, like "user real time" or "user system server".
* Added "priority" parameters to several VMCache methods.
* Replaced the map_backing_store() "unmapAddressRange" parameter with a "flags" parameter.
* Added the area creation flag CREATE_AREA_PRIORITY_VIP and the slab allocator flag CACHE_PRIORITY_VIP, indicating the importance of the request.
* Changed most code to pass the right priorities/flags.
These changes already significantly improve the behavior in low memory situations. I've tested a bit with 64 MB (virtual) RAM and, while not particularly fast and responsive, the system remains at least usable under high memory pressure. As a side effect, the slab allocator can now be used as a general memory allocator. Not done by default yet, though.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35295 a95241bf-73f2-0310-859d-f6bbb57e9c96
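
The three levels, as they are commonly encoded; the names come from the commit, the numeric values are an assumption:

```cpp
enum {
    VM_PRIORITY_USER   = 0,  // ordinary userland requests
    VM_PRIORITY_SYSTEM = 1,  // kernel needs; may tap the reserve
    VM_PRIORITY_VIP    = 2   // must succeed in order to free memory
};
```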

# b4e5e498 | 25-Jan-2010 | Ingo Weinhold <ingo_weinhold@gmx.de>
MemoryManager:
* Added support for larger raw allocations (up to one large chunk, i.e. 128 pages) in the slab areas. For an even larger allocation an area is created (haven't seen that happen yet, though).
* Added kernel tracing (SLAB_MEMORY_MANAGER_TRACING).
* _FreeArea(): Copy-and-paste bug: The meta chunks of the to-be-freed area would be added to the free lists instead of being removed from them. This would corrupt the lists and also lead to all kinds of misuse of meta chunks.

Object caches:
* Implemented CACHE_ALIGN_ON_SIZE. It is no longer set for all small object caches; instead, the block allocator sets it on all power-of-two-sized caches (see the sketch after this entry).
* object_cache_reserve_internal(): Detect recursion and don't wait in such a case. The function could deadlock itself, since HashedObjectCache::CreateSlab() does allocate memory, thus potentially reentering.
* object_cache_low_memory():
  - I missed some returns when reworking that one in r35254, so the function might stop early and also leave the cache in maintenance mode, which would cause it to be ignored by the object cache resizer and the low memory handler from that point on.
  - Since ReturnSlab() potentially unlocks, the conditions weren't quite correct and too many slabs could be freed.
  - Simplified things a bit.
* object_cache_alloc(): Since object_cache_reserve_internal() does potentially unlock the cache, the situation might have changed and there might not be an empty slab available, but a partial one. The function would crash.
* Renamed the object cache tracing variable to SLAB_OBJECT_CACHE_TRACING.
* Renamed the debugger command "cache_info" to "slab_cache" to avoid confusion with the VMCache commands.
* ObjectCache::usage was no longer maintained since I introduced the MemoryManager. object_cache_get_usage() would thus always return 0, and the block cache would not be considered cached memory. This was only of informational relevance, though.

Slab allocator misc.:
* Disabled the object depots of block allocator caches for object sizes > 2 KB. Allocations of those sizes aren't so common that the object depots yield any benefit.
* The slab allocator is now fully self-sufficient. It allocates its bootstrap memory from the MemoryManager, and the hash tables for HashedObjectCaches now use the block allocator instead of the heap.
* Added an option to use the slab allocator for malloc() and friends (USE_SLAB_ALLOCATOR_FOR_MALLOC). Currently disabled. Works in principle and has virtually no lock contention. Handling for low memory situations is still missing, though.
* Improved the output of some debugger commands.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35283 a95241bf-73f2-0310-859d-f6bbb57e9c96
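
The power-of-two test behind the CACHE_ALIGN_ON_SIZE decision, as an illustrative sketch (not the actual block allocator code):

```cpp
static inline bool
is_power_of_two(size_t size)
{
    return size != 0 && (size & (size - 1)) == 0;
}

// When creating a block allocator size cache:
uint32 flags = 0;
if (is_power_of_two(objectSize))
    flags |= CACHE_ALIGN_ON_SIZE;  // objects then align to their size
```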

# 86c794e5 | 21-Jan-2010 | Ingo Weinhold <ingo_weinhold@gmx.de>
Slab allocator:
* Implemented a more elaborate raw memory allocation backend (MemoryManager). We allocate 8 MB areas whose pages we allocate and map when needed. An area is divided into equally-sized chunks, which form the basic units of allocation. We have areas with three possible chunk sizes (small, medium, large), which is basically what the ObjectCache implementations were using anyway.
* Added a "uint32 flags" parameter to several of the slab allocator's object cache and object depot functions. E.g. object_depot_store() potentially wants to allocate memory for a magazine. But also in pure freeing functions it might eventually become useful to have those flags, since they could end up deleting an area, which might not be allowable in all situations. We should introduce specific flags to indicate that.
* Reworked the block allocator. Since the MemoryManager allocates block-aligned areas, maintains a hash table for lookup, and maps chunks to object caches, we can quickly find out which object cache a to-be-freed allocation belongs to and thus don't need the boundary tags anymore (see the sketch after this entry).
* Reworked the slab bootstrap process. We allocate from the initial area only when really necessary, i.e. when the object cache for the respective allocation size has not been created yet. A single page is thus sufficient.

Other:
* vm_allocate_early(): Added a boolean "blockAlign" parameter. If true, the semantics are the same as for B_ANY_KERNEL_BLOCK_ADDRESS.
* Use an object cache for page mappings. This significantly reduces the contention on the heap bin locks.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35232 a95241bf-73f2-0310-859d-f6bbb57e9c96
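
Why block-aligned areas make boundary tags unnecessary, sketched; the constant and the lookup are purely illustrative:

```cpp
const addr_t kSlabAreaSize = 8 * 1024 * 1024;  // 8 MB areas, as above

static addr_t
area_base_for(void* allocation)
{
    // Block alignment means masking recovers the owning area's base.
    return (addr_t)allocation & ~(kSlabAreaSize - 1);
}

// The base then keys the MemoryManager's hash table, which maps the
// chunk containing the allocation to its object cache, so no
// per-allocation boundary tags are needed.
```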

# f082f7f0 | 15-Jan-2010 | Ingo Weinhold <ingo_weinhold@gmx.de>
* Added the vm_page::accessed flag. Works analogously to vm_page::modified.
* Reorganized the code for [un]mapping pages:
  - Added the new VMTranslationMap::Unmap{Area,Page[s]}(), which essentially do what vm_unmap_page[s]() did before, just in the architecture-specific code, which allows for specific optimizations. UnmapArea() is for the special case that the complete area is unmapped; particularly when the address space is deleted, some work can be saved. Several TODOs could be slain.
  - Since they are only used within vm.cpp, vm_map_page() and vm_unmap_page[s]() are now static and have lost their prefix (and the "preserveModified" parameter).
* Added VMTranslationMap::Protect{Page,Area}(). They are just inline wrappers for Protect() (see the sketch after this entry).
* X86VMTranslationMap::Protect(): Make sure not to accidentally clear the accessed/dirty flags.
* X86VMTranslationMap::Unmap()/Protect(): Make page table skipping actually work. It was only skipping to the next page.
* Adjusted the PPC code to at least compile. No measurable effect on the -j8 Haiku image build time, though the kernel time drops minimally.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35089 a95241bf-73f2-0310-859d-f6bbb57e9c96
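
What such an inline wrapper plausibly looks like; the actual Protect() parameter list is assumed:

```cpp
// Sketch only: protect exactly one page by delegating to Protect().
inline status_t
VMTranslationMap::ProtectPage(addr_t address, uint32 attributes)
{
    return Protect(address, address + B_PAGE_SIZE - 1, attributes);
}
```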

# 94632505 | 13-Jan-2010 | Ingo Weinhold <ingo_weinhold@gmx.de>
* Added a boolean "alreadyWired" parameter to vm_map_physical_memory().
* ioapic_init(): map_physical_memory() was called for already mapped addresses. This worked fine, but only because the x86 page mapping code didn't mind.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35059 a95241bf-73f2-0310-859d-f6bbb57e9c96

# e50cf876 | 02-Dec-2009 | Ingo Weinhold <ingo_weinhold@gmx.de>
* Moved the VM headers into the subdirectory vm/.
* Renamed vm_cache.h/vm_address_space.h to VMCache.h/VMAddressSpace.h.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34449 a95241bf-73f2-0310-859d-f6bbb57e9c96