#
c650846d |
|
14-Mar-2023 |
Augustin Cavalier <waddlesplash@gmail.com> |
vm: Replace the VMAreas OpenHashTable with an AVLTree. Since we used a hash table with a fixed size (1024), collisions were obviously inevitable, meaning that while insertions would always be fast, lookups and deletions would take linear time to search the linked list for the area in question. For recently-created areas, this would be fast; for less-recently-created areas, it would get slower and slower and slower.

A particularly pathological case was the "mmap/24-1" test from the Open POSIX Testsuite, which creates millions of areas until it hits ENOMEM; it then simply exits, at which point it would run for minutes and minutes in the kernel's team-deletion routines; how long I don't know, as I rebooted before it finished.

This change fixes that problem, among others, at the cost of increased area creation time, by using an AVL tree instead of a hash. For comparison, mmap'ing 2 million areas with the "24-1" test before this change took around 0m2.706s of real time, while afterwards it takes about 0m3.118s, or around a 15% increase (1.152x). On the other hand, the total test runtime for 2 million areas went from around 2m11.050s to 0m4.035s, or around a 97% decrease (0.031x); in other words, with this new code, it is *32 times faster.*

Area insertion will no longer be O(1), however, so the time increase may grow with the number of areas present on the system; but if it's only around 3 seconds to create 2 million areas, or about 1.56 us per area vs. 1.35 us before, I don't think that's worth worrying about. My nonscientific "compile HaikuDepot with 2 cores in VM" benchmark seems to be within the realm of "noise" anyway, with most results both before and after this change coming in around 47s real time.

Change-Id: I230e17de4f80304d082152af83db8bd5abe7b831
|
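The complexity argument above can be checked with quick arithmetic: with 2 million areas in a 1024-bucket table, the average bucket chain holds about 1953 areas that a lookup or deletion must walk, while a balanced tree needs only about log2(2,000,000) ≈ 21 comparisons. A standalone sketch (illustrative only, not Haiku's actual VMAreas code) that prints both figures:

```cpp
#include <cmath>
#include <cstdio>

int main()
{
	const double areas = 2e6;     // areas created by the mmap/24-1 test
	const double buckets = 1024;  // fixed OpenHashTable size

	// Average chain length a lookup or deletion had to walk before:
	printf("hash chain:  ~%.0f nodes\n", areas / buckets);        // ~1953

	// Comparisons for a balanced (AVL) tree lookup afterwards:
	printf("tree lookup: ~%.0f comparisons\n", std::log2(areas)); // ~21
	return 0;
}
```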
#
8f68daed |
|
25-Apr-2022 |
Augustin Cavalier <waddlesplash@gmail.com> |
kernel: Make use of the new ConditionVariable lock-switching APIs in a few places.
|
#
e12a5fe2 |
|
08-Feb-2022 |
Augustin Cavalier <waddlesplash@gmail.com> |
kernel/slab: Change checks in FreeRawOrReturnCache to panic instead. The function is largely useless if we cannot lock kernel space, and the consumers may not expect it to silently "fail." Hence, we now put the invocation of deferred_free in free_etc directly.
|
#
2f001d82 |
|
09-Sep-2021 |
Augustin Cavalier <waddlesplash@gmail.com> |
kernel/slab: Add missing support for CACHE_DONT_LOCK_KERNEL_SPACE. This codepath is only hit for especially large malloc'd areas that do not fit into an object cache, which is a rather rare case to have in tandem with DONT_LOCK_KERNEL_SPACE, but we need to be prepared for it in this code anyway. Probably this codepath would previously have caused a hang due to an attempted double lock of the kernel address space. Unfortunately it appears our rw_locks do not actually detect that at present, so likely I should investigate that next...

Change-Id: I3c9059d3e3939271beeeff221ebc921bc07ddf00
Reviewed-on: https://review.haiku-os.org/c/haiku/+/4438
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
|
#
86e43bdf |
|
09-Jul-2019 |
Augustin Cavalier <waddlesplash@gmail.com> |
kernel: Add missing error checks to MemoryManager::_AllocateChunks.
|
#
d1f280c8 |
|
01-Apr-2012 |
Hamish Morrison <hamishm53@gmail.com> |
Add support for pthread_attr_get/setguardsize()

* Added the aforementioned functions.
* create_area_etc() now takes a guard size parameter.
* The thread_info::stack_base/end range now refers to the usable range only.
|
#
1bd07482 |
|
11-Sep-2012 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Fix crash in MemoryManager::PerformMaintenance(). sFreeAreaCount wasn't decremented after removing an area from sFreeAreas, thus causing the loop to continue until encountering and crashing on a NULL pointer after removing the last area. Introduce helper methods _PushFreeArea() and _PopFreeArea() to ensure this cannot easily happen again. Fixes ticket #8972.
|
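A minimal sketch of the paired helpers this commit introduces, assuming a singly linked free list; the field names here are illustrative, not copied from MemoryManager.cpp:

```cpp
#include <cstddef>

struct Area {
	Area* next;
};

static Area* sFreeAreas;
static int sFreeAreaCount;

static void
_PushFreeArea(Area* area)
{
	area->next = sFreeAreas;
	sFreeAreas = area;
	sFreeAreaCount++;
}

static Area*
_PopFreeArea()
{
	Area* area = sFreeAreas;
	if (area == NULL)
		return NULL;
	sFreeAreas = area->next;
	sFreeAreaCount--;  // the decrement the crashing loop was missing
	return area;
}
```

Keeping both list and counter updates inside one pair of helpers means the two can no longer drift apart.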
#
c3f0fd28 |
|
12-Jul-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Fixed formatting of output in some debugger commands. Currently all debugger commands assume 32-bit pointers when formatting their output. This means that on x86_64 the output is incorrectly formatted. Fixed this by adding a B_PRINTF_POINTER_WIDTH definition (16 on 64-bit, 8 on 32-bit), and using this to correctly format the output. Not all commands have been fixed yet, but all VM, slab, VFS, team, thread and image commands should be correct.
|
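A sketch of how such a width definition works; the exact Haiku definition may differ, but the idea is to pad pointers to the native width so debugger output columns line up on both 32- and 64-bit builds:

```cpp
#include <cinttypes>
#include <cstdint>
#include <cstdio>

#if UINTPTR_MAX == 0xffffffffffffffffu
#	define B_PRINTF_POINTER_WIDTH 16  // 64-bit: 16 hex digits
#else
#	define B_PRINTF_POINTER_WIDTH 8   // 32-bit: 8 hex digits
#endif

int main()
{
	uintptr_t address = 0xdeadbeef;
	// '*' consumes the width argument, zero-padding to the pointer width:
	printf("%0*" PRIxPTR "\n", B_PRINTF_POINTER_WIDTH, address);
	return 0;
}
```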
#
4be4fc6b |
|
15-Jun-2012 |
Alex Smith <alex@alex-smith.me.uk> |
More 64-bit compilation/safety fixes.
|
#
81fea743 |
|
02-Nov-2011 |
Michael Lotz <mmlr@mlotz.ch> |
bonefish+mmlr: Add MemoryManager::DebugObjectCacheForAddress() to retrieve the ObjectCache for a certain address from KDL. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@43117 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
f606e8fd |
|
01-Nov-2011 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
mmlr + bonefish:

* AllocationTrackingCallback::ProcessTrackingInfo(): Also pass the allocation pointer.
* "allocations_per_caller" KDL command: Add option "-d". When given, each allocation for the specified caller is printed, including the respective stack trace.

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@43087 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
50175c99 |
|
01-Nov-2011 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
mmlr + bonefish: Refactor the "allocations_per_caller" KDL command related functions. They expect an instance of a class implementing the new AllocationTrackingCallback interface, now. The only implementation ATM is AllocationCollectorCallback, which does the work the now removed slab_debug_add_allocation_for_caller() did before. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@43082 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
f908ff9b |
|
01-Nov-2011 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
mmlr + bonefish:

* Fix build broken in r43078. The slab_debug_add_allocation_for_caller() wasn't guarded correctly.
* slab_debug_add_allocation_for_caller(): Add bool resetAllocationInfos parameter, which makes the function clear the allocation tracking infos after processing the data.
* "allocations_per_caller" KDL command: Add option "-r" to reset the allocation tracking infos. The next invocation of the command will only show the allocations made after the reset.

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@43079 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
0422a0f3 |
|
01-Nov-2011 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
mmlr + bonefish:

* dump_allocations_per_caller(): Compute the total allocation count and size from the caller infos instead of using return arguments in the helper functions called.
* Move caller info update code from analyze_allocation_callers() to new function slab_debug_add_allocation_for_caller(), so it can be reused.
* Add MemoryManager::AnalyzeAllocationCallers() to collect the allocation information for the memory manager.

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@43078 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
e1c6140e |
|
01-Nov-2011 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
mmlr + bonefish:

* Add optional stack trace capturing for slab memory manager tracing.
* Add allocation tracking for the slab allocator (enabled via SLAB_ALLOCATION_TRACKING). The allocation tracking requires tracing with stack traces to be enabled for object caches and/or the memory manager.
  - Add class AllocationTrackingInfo that associates an allocation with its respective tracing entry. The structure is added to the end of an allocation done by the memory manager. For the object caches there's a separate array for each slab.
  - Add code range markers to the slab code, so that the first caller into the slab code can be retrieved from the stack traces.
  - Add KDL command "allocations_per_caller" that lists all allocations summarized by caller.
* Move debug definitions from slab_private.h to slab_debug.h.

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@43072 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
72156a40 |
|
31-Oct-2011 |
Michael Lotz <mmlr@mlotz.ch> |
bonefish+mmlr:

* Introduce "paranoid" malloc/free into the slab allocator (initializing allocated memory to 0xcc and setting freed memory to 0xdeadbeef).
* Allow for optional stack traces for slab object cache tracing.

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@43046 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
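A sketch of the "paranoid" fill patterns described above; the helper names are hypothetical, only the fill values come from the commit:

```cpp
#include <cstdint>
#include <cstring>

// Freshly allocated memory: 0xcc makes reads of uninitialized data obvious.
static inline void
fill_allocated_block(void* memory, size_t size)
{
	memset(memory, 0xcc, size);
}

// Freed memory: repeating 0xdeadbeef makes use-after-free stand out in KDL.
static inline void
fill_freed_block(void* memory, size_t size)
{
	uint32_t* words = (uint32_t*)memory;
	for (size_t i = 0; i < size / sizeof(uint32_t); i++)
		words[i] = 0xdeadbeef;
}
```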
#
1e393751 |
|
06-Jul-2011 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
MemoryManager::_GetChunks(): Fixed incorrect check. The area structure offset introduced in r37701 was not taken into account for computing the short meta chunk size. Fixes the reopened #6237. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@42385 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
e653b865 |
|
22-Jul-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Free(), FreeRawOrReturnCache(), GetAllocationInfo(), CacheForAddress(): Assert that the meta chunk the given address lies in is actually in use. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37703 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
93a56baf |
|
22-Jul-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Moved the Area structure from the beginning of the area to one page inside the area. The first page is not mapped, so someone writing over the bounds of the previous area will be axed immediately. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37701 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
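The resulting area layout, sketched with illustrative offsets (assuming 4 KB pages):

```cpp
// areaBase + 0x0000 : guard page, left unmapped -- a buffer overrun from
//                     the previous area faults here instead of silently
//                     overwriting the management header
// areaBase + 0x1000 : struct Area (the management header)
// areaBase + ...    : meta chunks / chunks handed out to object caches
```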
#
b9447668 |
|
10-Jul-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Moved the vm_page initialization from vm_page.cpp:vm_page_init() to the new vm_page::Init().
* Made vm_page::wired_count private and added accessor methods.
* Added VMCache::fWiredPagesCount (the number of wired pages the cache contains) and accessor methods.
* Made more use of vm_page::IsMapped().
* vm_copy_on_write_area(): Added a vm_page_reservation* parameter that can be used to request special handling for wired pages. If given, the wired pages are replaced by copies and the original pages are moved to the upper cache.
* vm_copy_area():
  - We don't need to do any wired ranges handling if the source area is a B_SHARED_AREA, since we don't touch the area's mappings in this case.
  - We no longer wait for wired ranges of the concerned areas to disappear. Instead we use the new vm_copy_on_write_area() feature and just let it copy the wired pages. This fixes #6288, an issue introduced with the use of user mutexes in libroot: when executing multiple concurrent fork()s, all but the first one would wait on the fork mutex, which (being a user mutex) would wire a page that the vm_copy_area() of the first fork() would wait for.

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37460 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
05951b9b |
|
02-Jul-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Added optional paranoid checking of the slab memory manager meta chunks after each chunk allocation/deallocation.
* The commands that dump chunks also verify whether chunks that look free are in the free list.

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37360 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
a8ad734f |
|
14-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Introduced structures {virtual,physical}_address_restrictions, which specify restrictions for virtual/physical addresses.
* vm_page_allocate_page_run():
  - Fixed conversion of base/limit to array indexes; sPhysicalPageOffset was not taken into account.
  - Takes a physical_address_restrictions instead of base/limit and also supports alignment and boundary restrictions now.
* map_backing_store(), VM[User,Kernel]AddressSpace::InsertArea()/ReserveAddressRange() take a virtual_address_restrictions parameter now. They also support an alignment independent from the range size.
* create_area_etc(), vm_create_anonymous_area(): Take {virtual,physical}_address_restrictions parameters now.
* Removed no longer needed B_PHYSICAL_BASE_ADDRESS.
* DMAResources:
  - Fixed potential overflows of uint32 when initializing from device node attributes.
  - Fixed bounce buffer creation TODOs: by using create_area_etc() with the new restrictions parameters we can directly support high physical address, boundary, and alignment restrictions.

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37131 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
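A plausible shape for the two restriction structures; the field names approximate the commit's description rather than quoting the actual headers:

```cpp
#include <cstddef>
#include <cstdint>

struct virtual_address_restrictions {
	void*		address;				// base address hint
	uint32_t	address_specification;	// e.g. B_ANY_KERNEL_ADDRESS
	size_t		alignment;				// independent of the range size
};

struct physical_address_restrictions {
	uint64_t	low_address;	// allocate at or above this address
	uint64_t	high_address;	// ...and below this one (0 = no limit)
	uint64_t	alignment;		// physical alignment of the page run
	uint64_t	boundary;		// the run may not cross a multiple of this
};
```

Bundling the constraints into structures lets one set of functions serve callers that previously needed dedicated flags like B_PHYSICAL_BASE_ADDRESS.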
#
2ea7b17c |
|
09-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* vm_allocate_early(): Replace the "bool blockAlign" parameter by a more flexible "addr_t alignment".
* X86PagingMethod32Bit::PhysicalPageSlotPool::InitInitial(), generic_vm_physical_page_mapper_init(): Use vm_allocate_early()'s alignment feature instead of aligning by hand.

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37070 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
c1be1e07 |
|
01-May-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* VMTranslationMap::Map()/Protect(): Added "memoryType" parameter. Not implemented for any architecture yet.
* vm_set_area_memory_type(): Call VMTranslationMap::ProtectArea() to change the memory type for the already mapped pages.

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36574 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
a93fa3c9 |
|
02-Mar-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
More debug output. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35725 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
40bb9481 |
|
03-Feb-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Removed the useless return parameter from vm_remove_all_page_mappings().
* Added vm_clear_page_mapping_accessed_flags() and vm_remove_all_page_mappings_if_unaccessed(), which combine the functionality of vm_test_map_activation(), vm_clear_map_flags(), and vm_remove_all_page_mappings(), thus saving lots of calls to translation map methods. The backend is the new method VMTranslationMap::ClearAccessedAndModified().
* Started to make use of the cached page queue and changed the meaning of the other non-free queues slightly:
  - Active queue: Contains mapped pages that have been used recently.
  - Inactive queue: Contains mapped pages that have not been used recently. Also contains unmapped temporary pages.
  - Modified queue: Contains unmapped modified pages.
  - Cached queue: Contains unmapped unmodified pages (LRU sorted).
  Unless we're actually low on memory and actively do paging, the modified and cached queues only contain non-temporary pages. Cached pages are considered quasi free: they still belong to a cache, but since they are unmodified and unmapped, they can be freed immediately, and this is what vm_page_[try_]reserve_pages() do now when there are no more actually free pages at hand. Essentially this means that pages storing cached file data, unless mmap()ped, are no longer considered used and don't contribute to page pressure. Paging will not happen as long as there are enough free + cached pages available.
* Reimplemented the page daemon. It no longer scans all pages, but instead works the page queues. As long as the free pages situation is harmless, it only iterates through the active queue and deactivates pages that have not been used recently. When paging occurs it additionally scans the inactive queue and frees pages that have not been used recently.
* Changed the page reservation/allocation interface: vm_page_[try_]reserve_pages(), vm_page_unreserve_pages(), and vm_page_allocate_page() now take a vm_page_reservation structure pointer. The reservation functions initialize the structure -- currently consisting only of a count member for the number of still reserved pages. vm_page_allocate_page() decrements the count and vm_page_unreserve_pages() unreserves the remaining pages (if any). The advantages are that reservation/unreservation mismatches cannot occur anymore, that vm_page_allocate_page() can verify that the caller indeed has a reserved page left, and that there's no unnecessary pressure on the free page pool anymore. The only disadvantage is that the vm_page_reservation object needs to be passed around a bit.
* Reworked the page reservation implementation:
  - Got rid of sSystemReservedPages and sPageDeficit. Instead, sUnreservedFreePages now actually contains the number of free pages that have not yet been reserved (it cannot become negative anymore) and the new sUnsatisfiedPageReservations contains the number of pages that are still needed for reservation.
  - Threads waiting for reservations now add themselves to a waiter queue, which is ordered by descending priority (VM priority and thread priority). High priority waiters are served first when pages become available. Fixes #5328.
* cache_prefetch_vnode(): Would reserve one page less than was allocated later if the size wasn't page aligned.

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35393 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
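A minimal sketch of the reworked reservation flow; the signatures are approximated from the description above, not copied from the tree:

```cpp
// Hypothetical caller; vm_page_reservation carries the reserved-page count.
void
allocate_pages_example(vm_page** pages, uint32 count, int priority)
{
	vm_page_reservation reservation;
	// Blocks until `count` pages are reserved; initializes the structure.
	vm_page_reserve_pages(&reservation, count, priority);

	for (uint32 i = 0; i < count; i++) {
		// Each allocation decrements the reservation's count, so a caller
		// without a reserved page left can now be detected.
		pages[i] = vm_page_allocate_page(&reservation, PAGE_STATE_ACTIVE);
	}

	// Returns any still-reserved pages (here: none) to the free pool, so
	// reserve/unreserve mismatches can no longer occur.
	vm_page_unreserve_pages(&reservation);
}
```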
#
0e81d474 |
|
30-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Don't access the area after deleting it. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35340 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
e65c4002 |
|
29-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Replaced the vm_page_allocate_page*() "pageState" parameter by a more general "flags" parameter. It encodes the target state of the page -- so that the page isn't unnecessarily put in the wrong page queue first -- a flag whether the page should be cleared, and one to indicate whether the page should be marked busy.
* Added page state PAGE_STATE_CACHED. Not used yet.

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35333 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
72382fa6 |
|
29-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Removed the page state PAGE_STATE_BUSY and instead introduced a vm_page::busy flag. The obvious advantage is that one can still see what state a page is in and even move it between states while it is marked busy.
* Removed the vm_page::is_dummy flag. Instead we mark marker pages busy, which in all cases has the same effect. Introduced a vm_page_is_dummy() that can still check whether a given page is a dummy page.
* vm_page_unreserve_pages(): Before adding to the system reserve, make sure sUnreservedFreePages is non-negative. Otherwise we'd make nonexistent pages available for allocation. steal_pages() still has the same problem and it can't be solved that easily.
* map_page(): No longer changes the page state or marks the page unbusy. That's the caller's responsibility.

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35331 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
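The difference between the two representations, sketched; the bitfield layout is illustrative, not the real vm_page definition:

```cpp
#include <cstdint>

struct vm_page_sketch {
	// Before: busy was one of the mutually exclusive page states, hiding
	// the real state while set. After: an orthogonal flag, so a busy page
	// keeps its state and can even be moved between queues.
	uint8_t	state : 3;	// PAGE_STATE_ACTIVE, _INACTIVE, _MODIFIED, ...
	bool	busy : 1;	// someone is currently working on the page
};
```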
#
1196f305 |
|
29-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Unconditionally log when the memory manager creates/deletes areas. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35330 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
cff6e9e4 |
|
26-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* The system now holds back a small reserve of committable memory and pages. The memory and page reservation functions have a new "priority" parameter that indicates how deep the function may tap into that reserve. The currently existing priority levels are "user", "system", and "VIP". The idea is that user programs should never be able to cause a state that gets the kernel into trouble due to heavy battling for memory. The "VIP" level (not really used yet) is intended for allocations that are required to free memory eventually (in the page writer). More levels are thinkable in the future, like "user real time" or "user system server".
* Added "priority" parameters to several VMCache methods.
* Replaced the map_backing_store() "unmapAddressRange" parameter by a "flags" parameter.
* Added area creation flag CREATE_AREA_PRIORITY_VIP and slab allocator flag CACHE_PRIORITY_VIP indicating the importance of the request.
* Changed most code to pass the right priorities/flags.

These changes already significantly improve the behavior in low memory situations. I've tested a bit with 64 MB (virtual) RAM and, while not particularly fast and responsive, the system remains at least usable under high memory pressure.

As a side effect the slab allocator can now be used as a general memory allocator. Not done by default yet, though.

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35295 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
30f2c6c8 |
|
25-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
When we create areas for large raw allocations there's no need to allocate clear pages. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35286 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
b4e5e498 |
|
25-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
MemoryManager:

* Added support for larger raw allocations (up to one large chunk (128 pages)) in the slab areas. For an even larger allocation an area is created (haven't seen that happen yet, though).
* Added kernel tracing (SLAB_MEMORY_MANAGER_TRACING).
* _FreeArea(): Copy-and-paste bug: the meta chunks of the to-be-freed area would be added to the free lists instead of being removed from them. This would corrupt the lists and also lead to all kinds of misuse of meta chunks.

object caches:

* Implemented CACHE_ALIGN_ON_SIZE. It is no longer set for all small object caches, but the block allocator sets it on all power-of-two size caches.
* object_cache_reserve_internal(): Detect recursion and don't wait in such a case. The function could deadlock itself, since HashedObjectCache::CreateSlab() does allocate memory, thus potentially reentering.
* object_cache_low_memory():
  - I missed some returns when reworking that one in r35254, so the function might stop early and also leave the cache in maintenance mode, which would cause it to be ignored by the object cache resizer and low memory handler from that point on.
  - Since ReturnSlab() potentially unlocks, the conditions weren't quite correct and too many slabs could be freed.
  - Simplified things a bit.
* object_cache_alloc(): Since object_cache_reserve_internal() does potentially unlock the cache, the situation might have changed and there might not be an empty slab available, but a partial one. The function would crash.
* Renamed the object cache tracing variable to SLAB_OBJECT_CACHE_TRACING.
* Renamed debugger command "cache_info" to "slab_cache" to avoid confusion with the VMCache commands.
* ObjectCache::usage was not maintained anymore since I introduced the MemoryManager. object_cache_get_usage() would thus always return 0 and the block cache would not be considered cached memory. This was only of informational relevance, though.

slab allocator misc.:

* Disable the object depots of block allocator caches for object sizes > 2 KB. Allocations of those sizes aren't so common that the object depots yield any benefit.
* The slab allocator is now fully self-sufficient. It allocates its bootstrap memory from the MemoryManager, and the hash tables for HashedObjectCaches use the block allocator instead of the heap now.
* Added option to use the slab allocator for malloc() and friends (USE_SLAB_ALLOCATOR_FOR_MALLOC). Currently disabled. Works in principle and has virtually no lock contention. Handling for low memory situations is yet missing, though.
* Improved the output of some debugger commands.

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35283 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
c3f84a1d |
|
23-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
_FreeChunk(): When freeing a chunk from a formerly full meta chunk, we have to add it back to its partial list or it would be leaked. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35266 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
5726fbda |
|
23-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
MemoryManager:

* Now keeps one or two empty areas around, so that even in the case of CACHE_DONT_LOCK_KERNEL_SPACE, memory can be provided as long as pages are available. The object cache maintainer thread is used to asynchronously allocate/delete the free areas.
* Added new debugger commands "slab_meta_chunk[s]" and improved the existing ones.
* Moved Area::chunks to MetaChunk.
* Removed the unused _AllocationArea() "chunkSize" parameter.
* Fixed a serious bug in _FreeChunk(): empty meta chunks were not removed from the partial chunk lists and could thus be used twice.

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35264 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
10189b50 |
|
23-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Changed the memory management algorithm: Instead of designating the whole area for a certain chunk size, the areas are split into meta chunks (which are as large as a large chunk), each of which can be used independently for chunks of a certain size. This reduces the vulnerability to fragmentation, so that we need fewer areas overall. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35250 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
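Putting together the numbers from this log (8 MB areas per the MemoryManager commit at the bottom, 128-page large chunks per the b4e5e498 entry above, 4 KB pages), each area carries 16 independently-typed meta chunks; a rough sketch:

```cpp
#include <cstddef>

const size_t kPageSize = 4096;
const size_t kAreaSize = 8 * 1024 * 1024;        // one slab area
const size_t kLargeChunkSize = 128 * kPageSize;  // 512 KB
const size_t kMetaChunkSize = kLargeChunkSize;   // "as large as a large chunk"
const size_t kMetaChunksPerArea = kAreaSize / kMetaChunkSize;  // 16
```

With per-meta-chunk chunk sizes, a single area can serve several size classes at once instead of being committed to one, which is where the reduced fragmentation comes from.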
#
8d1316fd |
|
22-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Replaced CACHE_DONT_SLEEP by two new flags, CACHE_DONT_WAIT_FOR_MEMORY and CACHE_DONT_LOCK_KERNEL_SPACE. If the former is given, the slab memory manager does not wait when reserving memory or pages. The latter prevents area operations. The new flags add a bit of flexibility: e.g. when allocating page mapping objects for userland areas, CACHE_DONT_WAIT_FOR_MEMORY is sufficient, i.e. the allocation will succeed as long as pages are available. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35246 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
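A sketch of how a caller picks between the two new flags; the flag values are placeholders, not the real constants:

```cpp
enum {
	CACHE_DONT_WAIT_FOR_MEMORY   = 1 << 8,  // don't wait reserving memory/pages
	CACHE_DONT_LOCK_KERNEL_SPACE = 1 << 9,  // no area operations
};

// Hypothetical call site: allocating a page mapping object for a userland
// area. Area operations (and thus kernel space locking) remain permitted,
// so the allocation succeeds as long as pages are available -- it just
// must not block waiting for memory to be freed up.
void*
allocate_page_mapping(object_cache* cache)
{
	return object_cache_alloc(cache, CACHE_DONT_WAIT_FOR_MEMORY);
}
```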
#
86c794e5 |
|
21-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
slab allocator:

* Implemented a more elaborate raw memory allocation backend (MemoryManager). We allocate 8 MB areas whose pages we allocate and map when needed. An area is divided into equally-sized chunks which form the basic units of allocation. We have areas with three possible chunk sizes (small, medium, large), which is basically what the ObjectCache implementations were using anyway.
* Added a "uint32 flags" parameter to several of the slab allocator's object cache and object depot functions. E.g. object_depot_store() potentially wants to allocate memory for a magazine. But also in pure freeing functions it might eventually become useful to have those flags, since they could end up deleting an area, which might not be allowable in all situations. We should introduce specific flags to indicate that.
* Reworked the block allocator. Since the MemoryManager allocates block-aligned areas, maintains a hash table for lookup, and maps chunks to object caches, we can quickly find out which object cache a to-be-freed allocation belongs to and thus don't need the boundary tags anymore.
* Reworked the slab bootstrap process. We allocate from the initial area only when really necessary, i.e. when the object cache for the respective allocation size has not been created yet. A single page is thus sufficient.

other:

* vm_allocate_early(): Added a boolean "blockAlign" parameter. If true, the semantics are the same as for B_ANY_KERNEL_BLOCK_ADDRESS.
* Use an object cache for page mappings. This significantly reduces the contention on the heap bin locks.

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35232 a95241bf-73f2-0310-859d-f6bbb57e9c96
|