#
c650846d |
|
14-Mar-2023 |
Augustin Cavalier <waddlesplash@gmail.com> |
vm: Replace the VMAreas OpenHashTable with an AVLTree.

Since we used a hash table with a fixed size (1024), collisions were obviously inevitable, meaning that while insertions would always be fast, lookups and deletions would take linear time to search the linked-list for the area in question. For recently-created areas, this would be fast; for less-recently-created areas, it would get slower and slower and slower. A particularly pathological case was the "mmap/24-1" test from the Open POSIX Testsuite, which creates millions of areas until it hits ENOMEM; it then simply exits, at which point it would run for minutes and minutes in the kernel team deletion routines; how long I don't know, as I rebooted before it finished.

This change fixes that problem, among others, at the cost of increased area creation time, by using an AVL tree instead of a hash. For comparison, mmap'ing 2 million areas with the "24-1" test before this change took around 0m2.706s of real time, while afterwards it takes about 0m3.118s, or around a 15% increase (1.152x). On the other hand, the total test runtime for 2 million areas went from around 2m11.050s to 0m4.035s, or around a 97% decrease (0.031x); in other words, with this new code, it is *32 times faster.*

Area insertion will no longer be O(1), however, so the time increase may go up with the number of areas present on the system; but if it's only around 3 seconds to create 2 million areas, or about 1.56 us per area, vs. 1.35 us before, I don't think that's worth worrying about. My nonscientific "compile HaikuDepot with 2 cores in VM" benchmark seems to be within the realm of "noise", anyway, with most results both before and after this change coming in around 47s real time.

Change-Id: I230e17de4f80304d082152af83db8bd5abe7b831
|
#
45872f7f |
|
17-Mar-2021 |
Jérôme Duval <jerome.duval@gmail.com> |
kernel/vm: restrict permission changes on shared file-mapped areas.

A protection_max attribute is added to VMArea. A file opened read-only already can't be mapped shared read-write at the moment, but such a mapping can later be changed to read-write with mprotect() or set_area_protection(). When creating the VMArea, the actual maximum protection is stored in the area, so that it can be checked when needed. This fixes a VM TODO.

Change-Id: I33b144c192034eeb059f1dede5dbef5af947280d
Reviewed-on: https://review.haiku-os.org/c/haiku/+/3804
Reviewed-by: Adrien Destugues <pulkomandy@gmail.com>
|
#
39665db1 |
|
12-Jul-2019 |
Augustin Cavalier <waddlesplash@gmail.com> |
kernel/vm: Inline the VMArea::name string.

B_OS_NAME_LENGTH is 32, char* is 8 (on x64), and this structure has quite a lot of pointers in it, so it is not as if we really needed to save those 24 bytes. Hitting malloc() here is not so great, especially because we usually have B_DONT_LOCK_KERNEL_SPACE turned on, so just inline the string and avoid the allocation.

Change-Id: I5c94955324cfda08972895826b61748c3b69096a
|
#
30c9d3c0 |
|
01-Dec-2017 |
Augustin Cavalier <waddlesplash@gmail.com> |
kernel: Correct class/struct mixups. Almost certainly harmless. Spotted by Clang.
|
#
078a965f |
|
28-Oct-2014 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
vm_soft_fault(): Avoid deadlock waiting for wired ranges.

* VMArea::AddWaiterIfWired(): Replace the ignoreRange argument with a flags argument and introduce the (currently only) flag IGNORE_WRITE_WIRED_RANGES. If specified, ranges wired for writing are ignored. Ignoring just a single specified range doesn't cut it in vm_soft_fault(), and there aren't any other users of that feature.
* vm_soft_fault(): When having to unmap a page of a lower cache, this page cannot be wired for writing. So we can safely ignore all write-wired ranges, instead of just our own. We even have to do that in case there's another thread that concurrently tries to write-wire the same page, since otherwise we'd deadlock waiting for each other.
|
#
147133b7 |
|
25-May-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* First run through the kernel's private parts to use phys_{addr,size}_t where appropriate.
* Typedef'ed page_num_t to phys_addr_t and used it in more places in vm_page.{h,cpp}.

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36937 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
3b0c1b52 |
|
01-May-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* VMArea: Made memory_type private and added setter and getter methods.
* Don't set the VMArea's memory type in arch_vm_set_memory_type(), but let the callers do that.
* vm_set_area_memory_type(): Does nothing if the memory type doesn't change.

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36573 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
349039ff |
|
11-Apr-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Added vm_[un]wire_page(), which are essentially versions of [un]lock_memory_etc() optimized for a single page. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36156 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
4f774c50 |
|
04-Apr-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* VMArea::Unwire(addr_t, size_t, bool): Don't delete the removed range, but return it.
* lock_memory_etc(): On error the VMAreaWiredRange object could be leaked.
* [un]lock_memory_etc(): Call VMArea::Unwire() with the cache locked and explicitly delete the range object after unlocking the cache to avoid potential deadlocks.

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36035 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
369111e7 |
|
05-Apr-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Removed the VMArea::Wire() version that has to allocate a VMAreaWiredRange. Since the requirement is that the area's top cache is locked, allocating memory isn't allowed.
* lock_memory_etc(): Create the VMAreaWiredRange object explicitly before locking the area's top cache. Fixes #5680 (deadlocks when using the slab as malloc() backend).

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36033 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
550376ff |
|
03-Apr-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* vm_delete_areas(): Changed return type to void (was status_t and not used).
* _user_map_file(), _user_unmap_memory(): Verify that the address (if given) is page aligned.
* Reworked memory locking (wiring):
  - VMArea now has a list of wired memory ranges and supports waiting for a range to be removed.
  - vm_soft_fault():
    - Added "wirePage" parameter that, if given, makes the function wire the page and return it.
    - Added "wiredRange" parameter (for calls from lock_memory_etc()) and made sure we never unmap wired pages. This could e.g. happen when a page from a lower cache was read-mapped and a write fault occurred. Now in such a situation the function waits for the page to be unwired and restarts.
  - All functions that manipulate areas in a way that could affect wired ranges now either require the caller to make sure there are no wired ranges in the way or do that themselves. Added a few wait_if_*_is_wired() helper functions for that purpose.
  - lock_memory_etc():
    - Now also works correctly when the range spans more than one area.
    - Adds VMAreaWiredRanges to the affected VMAreas and retains an address space reference (so that the address space won't be deleted as long as a wired range exists).
    - Resolved TODO: The area's caches are now locked when increment_page_wired_count() is called.
    - Resolved TODO: The race condition due to missing locking after looking up the page mapping is now prevented. We hold the cache locks (in case the page is already mapped) and the new vm_soft_fault() parameter allows us to get the page wired.
  - unlock_memory_etc(): Changes symmetrical to those in lock_memory_etc(); resolved all TODOs.

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36030 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
deee8524 |
|
26-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Introduced {malloc,memalign,free}_etc(), which take an additional "flags" argument. They replace the previous special-purpose allocation functions (malloc_nogrow(), vip_io_request_malloc()).
* Moved the I/O VIP heap to heap.cpp accordingly.
* Added quite a bit of passing around of allocation flags in the VM, particularly in the VM*AddressSpace classes.
* Fixed IOBuffer::GetNextVirtualVec(): It was ignoring the VIP flag and always allocated on the normal heap.

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35316 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
2c1886ae |
|
04-Dec-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Added VMArea subclasses VM{Kernel,User}Area and moved the address space list link to them.
* VM{Kernel,User}AddressSpace manage the respective VMArea subclass now, and VMAddressSpace has grown factory methods {Create,Delete}Area.

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34493 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
e2518ddb |
|
04-Dec-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Made VMAddressSpace an abstract base class and moved the area management into new derived classes VM{Kernel,User}AddressSpace. Currently those are identical, but that will change. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34492 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
f69032f2 |
|
03-Dec-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Added VMAddressSpace::ResizeArea{Head,Tail}() to adjust an area's base and size.
* Made VMArea::Set{Base,Size}() private and made VMAddressSpace a friend. In vm.cpp the new VMAddressSpace::ResizeArea{Head,Tail}() are used instead.

Finally all address space changes happen in VMAddressSpace only. *phew* Now it's ready to be thoroughly butchered. :-)

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34467 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
bbd97b4b |
|
03-Dec-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Made the VMArea fields base and size private and added accessors instead. This makes it more explicit where the fields are modified. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34464 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
35d94001 |
|
02-Dec-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Changed the address space area list to doubly linked. The reason is to simplify migration of the area management, but as a side effect it also makes area deletion O(1) (instead of O(n), n == number of areas in the address space).
* Moved more area management functionality from vm.cpp to VMAddressSpace, and VMArea structure creation to VMArea. Made the list and list link members themselves private.
* VMAddressSpace now tracks its amount of free space. This replaces the previous mechanism, which did so only for the kernel address space. It was broken anyway, since delete_area() subtracted the area size instead of adding it.
* vm_free_unused_boot_loader_range():
  - lastEnd could be set to a value < start, which could cause memory outside of the given range to be unmapped. Haven't checked whether this could happen in practice -- if so, it would be seriously unhealthy.
  - The range between the end of the last area in the range and the end of the range would never be freed.
  - Fixed potential integer overflows when computing addresses.

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34459 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
f34a1dd5 |
|
02-Dec-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Created VMArea.{h,cpp} and moved VMArea and the global area hash table (new class VMAreaHash) there. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34450 a95241bf-73f2-0310-859d-f6bbb57e9c96
|