History log of /haiku/src/system/kernel/vm/VMArea.cpp
Revision Date Author Comments
# c650846d 14-Mar-2023 Augustin Cavalier <waddlesplash@gmail.com>

vm: Replace the VMAreas OpenHashTable with an AVLTree.

Since we used a hash table with a fixed size (1024), collisions were
obviously inevitable, meaning that while insertions would always be
fast, lookups and deletions would take linear time to search the
linked-list for the area in question. For recently-created areas,
this would be fast; for less-recently-created areas, it would get
slower and slower and slower.

A particularly pathological case was the "mmap/24-1" test from the
Open POSIX Testsuite, which creates millions of areas until it hits
ENOMEM; it then simply exits, at which point it would run for minutes
and minutes in the kernel team deletion routines; how long I don't know,
as I rebooted before it finished.

This change fixes that problem, among others, at the cost of increased
area creation time, by using an AVL tree instead of a hash. For comparison,
mmap'ing 2 million areas with the "24-1" test before this change took
around 0m2.706s of real time, while afterwards it takes about 0m3.118s,
or around a 15% increase (1.152x).

On the other hand, the total test runtime for 2 million areas went from
around 2m11.050s to 0m4.035s, or around a 97% decrease (0.031x); in other
words, with this new code, it is *32 times faster.*

Area insertion will no longer be O(1), however, so the time increase
may go up with the number of areas present on the system; but if it's
only around 3 seconds to create 2 million areas, or about 1.56 us per area,
vs. 1.35 us before, I don't think that's worth worrying about.

My nonscientific "compile HaikuDepot with 2 cores in VM" benchmark
seems to be within the realm of "noise", anyway, with most results
both before and after this change coming in around 47s real time.
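
For a feel of the structural difference, here is a minimal, self-contained sketch; std::map stands in for Haiku's AVLTree utility class, and all names are simplified:

```cpp
// Stand-in sketch: a balanced tree keyed by area ID replaces the fixed
// 1024-bucket hash table, turning long bucket-chain walks into O(log n).
#include <cstddef>
#include <map>

typedef int area_id;
struct VMArea { area_id id; };

static std::map<area_id, VMArea*> sAreas;    // balanced tree (stand-in)

void InsertArea(VMArea* area)
{
    sAreas[area->id] = area;                 // O(log n); was O(1)
}

VMArea* LookupArea(area_id id)
{
    std::map<area_id, VMArea*>::const_iterator it = sAreas.find(id);
    return it == sAreas.end() ? NULL : it->second;   // O(log n); was O(n/1024)
}
```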

Change-Id: I230e17de4f80304d082152af83db8bd5abe7b831


# 60fee365 09-Sep-2021 Augustin Cavalier <waddlesplash@gmail.com>

kernel/vm: Allow locking kernel space if allocating page_protections for userspace.

Some applications may request per-page protections for an especially
large area (e.g. multiple GB), which leads to allocating a rather
large page protections area (e.g. 512KB), which we cannot allocate
without locking the kernel space to get new slabs.

We only need to avoid locking the kernel space if the area in question
is in the kernel space, as in that case, kernel space will already be
read-locked by the current thread.

Fixes #16898.
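
To make the sizes concrete: assuming the 4-bits-per-page protection encoding (which reproduces the 512KB figure above for a 4GB area), the arithmetic looks like this hedged sketch:

```cpp
// Sketch of the size arithmetic behind the numbers in the message.
#include <cstdint>

#define B_PAGE_SIZE 4096

int main()
{
    uint64_t areaSize = 4ULL * 1024 * 1024 * 1024;   // a 4 GB area
    uint64_t pages = areaSize / B_PAGE_SIZE;         // 1M pages
    uint64_t protectionBytes = pages / 2;            // 4 bits per page
    // protectionBytes == 512 KB: far more than existing slabs can satisfy,
    // so the heap must lock kernel space to map new slabs -- which is only
    // safe when the area being protected is not itself in kernel space.
    return protectionBytes == 512 * 1024 ? 0 : 1;
}
```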

Change-Id: If52413a594da66edfc2821811d959085a2c3c78e
Reviewed-on: https://review.haiku-os.org/c/haiku/+/4436
Reviewed-by: waddlesplash <waddlesplash@gmail.com>


# 45872f7f 17-Mar-2021 Jérôme Duval <jerome.duval@gmail.com>

kernel/vm: restrict permission changes on shared file-mapped areas

A protection_max attribute is added to VMArea.
A file opened read-only already can't be mapped shared read-write at the moment,
but the mapping can later be changed to read-write with mprotect() or
set_area_protection(). When creating the VMArea, the actual maximum protection
is now stored in the area, so that it can be checked when needed.
This fixes a VM TODO.
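
A hedged sketch of the mechanism (B_READ_AREA and B_WRITE_AREA are Haiku's protection constants; the surrounding logic is simplified):

```cpp
#include <cstdint>

#define B_READ_AREA  0x01
#define B_WRITE_AREA 0x02

struct Area {
    uint32_t protection;
    uint32_t protection_max;
};

// At mapping time: a shared mapping of a file opened read-only may never
// gain write access, so record that ceiling in the area.
void InitSharedFileArea(Area& area, bool openedReadOnly)
{
    area.protection_max = openedReadOnly
        ? B_READ_AREA : (B_READ_AREA | B_WRITE_AREA);
    area.protection = B_READ_AREA;
}

// Later changes via mprotect()/set_area_protection() must respect it.
bool SetAreaProtection(Area& area, uint32_t newProtection)
{
    if ((newProtection & ~area.protection_max) != 0)
        return false;    // would exceed what the open mode allows
    area.protection = newProtection;
    return true;
}
```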

Change-Id: I33b144c192034eeb059f1dede5dbef5af947280d
Reviewed-on: https://review.haiku-os.org/c/haiku/+/3804
Reviewed-by: Adrien Destugues <pulkomandy@gmail.com>


# 39665db1 12-Jul-2019 Augustin Cavalier <waddlesplash@gmail.com>

kernel/vm: Inline the VMArea::name string.

B_OS_NAME_LENGTH is 32, char* is 8 (on x64), and this structure
has quite a lot of pointers in it so it is not like we really
needed to save those 24 bytes. Hitting malloc() in here is not
so great, especially because we usually have B_DONT_LOCK_KERNEL_SPACE
turned on, so just inline and avoid it.
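
The shape of the change, as a sketch (B_OS_NAME_LENGTH is the real constant; the structs are reduced to the one relevant field):

```cpp
#define B_OS_NAME_LENGTH 32

struct VMAreaBefore {
    char* name;                    // 8 bytes on x64 -- but needs a malloc(),
                                   // awkward under B_DONT_LOCK_KERNEL_SPACE
};

struct VMAreaAfter {
    char name[B_OS_NAME_LENGTH];   // 32 bytes inline, no allocation at all
};
```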

Change-Id: I5c94955324cfda08972895826b61748c3b69096a


# 078a965f 28-Oct-2014 Ingo Weinhold <ingo_weinhold@gmx.de>

vm_soft_fault(): Avoid deadlock waiting for wired ranges

* VMArea::AddWaiterIfWired(): Replace the ignoreRange argument by a
flags argument and introduce (currently only) flag
IGNORE_WRITE_WIRED_RANGES. If specified, ranges wired for writing
are ignored. Ignoring just a single specified range doesn't cut it
in vm_soft_fault(), and there aren't any other users of that feature.
* vm_soft_fault(): When having to unmap a page of a lower cache, this
page cannot be wired for writing. So we can safely ignore all
write-wired ranges, instead of just our own. We even have to do that
in case there's another thread that concurrently tries to write-wire
the same page, since otherwise we'd deadlock waiting for each other.
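
A hedged sketch of the new filtering rule from the first bullet (the flag name is from the commit; the types and signature details are assumptions):

```cpp
#include <cstdint>

#define IGNORE_WRITE_WIRED_RANGES 0x01

struct WiredRange { bool writable; };

// With the flag set, every range wired for writing is skipped -- something
// the old single ignoreRange parameter could not express.
static bool ShouldIgnore(const WiredRange& range, uint32_t flags)
{
    return (flags & IGNORE_WRITE_WIRED_RANGES) != 0 && range.writable;
}
```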


# 9d3e7188 08-Apr-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

We must not resize the area hash table while holding an address space lock,
since it could be the kernel address space.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36091 a95241bf-73f2-0310-859d-f6bbb57e9c96


# 4f774c50 04-Apr-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

* VMArea::Unwire(addr_t, size_t, bool): Don't delete the removed range, but
return it.
* lock_memory_etc(): On error the VMAreaWiredRange object could be leaked.
* [un]lock_memory_etc(): Call VMArea::Unwire() with the cache locked and
explicitly delete the range object after unlocking the cache to avoid
potential deadlocks.
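
The new ordering, as a minimal sketch with stand-in types (the real code uses VMCache and the kernel's own locking primitives):

```cpp
#include <mutex>

struct VMAreaWiredRange { /* ... */ };
struct Cache { std::mutex lock; };

struct Area {
    // Unwire() now returns the detached range instead of deleting it.
    VMAreaWiredRange* Unwire(/* addr_t base, size_t size, bool writable */)
    {
        return new VMAreaWiredRange;
    }
};

void UnlockMemorySketch(Cache& cache, Area& area)
{
    cache.lock.lock();
    VMAreaWiredRange* range = area.Unwire();  // cache must stay locked here
    cache.lock.unlock();

    delete range;   // freed only after unlocking: deleting under the cache
                    // lock could take allocator locks and deadlock
}
```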


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36035 a95241bf-73f2-0310-859d-f6bbb57e9c96


# 369111e7 05-Apr-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

* Removed the VMArea::Wire() version that has to allocate a VMAreaWiredRange.
Since the requirement is that the area's top cache is locked, allocating
memory isn't allowed.
* lock_memory_etc(): Create the VMAreaWiredRange object explicitly before
locking the area's top cache.

Fixes #5680 (deadlocks when using the slab as malloc() backend).
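
The allocate-before-lock pattern this describes, as a hedged sketch (the Lock()/Wire() calls are left as comments since those types are kernel-internal):

```cpp
#include <cstddef>
#include <new>

struct VMAreaWiredRange { /* ... */ };

void LockMemorySketch()
{
    // With the area's top cache locked, allocating is forbidden, so the
    // range object is created up front and only linked in under the lock.
    VMAreaWiredRange* range = new(std::nothrow) VMAreaWiredRange;
    if (range == NULL)
        return;            // fail before any lock is taken

    // cache->Lock();      // no allocations allowed from here on
    // area->Wire(range);  // the non-allocating Wire() variant
    // cache->Unlock();
}
```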


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36033 a95241bf-73f2-0310-859d-f6bbb57e9c96


# 550376ff 03-Apr-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

* vm_delete_areas(): Changed return type to void (was status_t and not used).
* _user_map_file(), _user_unmap_memory(): Verify that the address (if given) is
page aligned.
* Reworked memory locking (wiring):
- VMArea now has a list of wired memory ranges and supports waiting for
a range to be removed.
- vm_soft_fault():
- Added "wirePage" parameter that, if given, makes the function wire the
page and return it.
- Added "wiredRange" parameter (for calls from lock_memory_etc()) and made
sure we never unmap wired pages. This could e.g. happen when a page from a
lower cache was read-mapped and a write fault occurred. Now in such a
situation the function waits for the page to be unwired and restarts.
- All functions that manipulate areas in a way that could affect wired ranges
now either require the caller to make sure there are no wired ranges in
the way or do that themselves. Added a few wait_if_*_is_wired() helper
functions for that purpose (sketched after this entry).
- lock_memory_etc():
- Now also works correctly when the range spans more than one area.
- Adds VMAreaWiredRanges to the affected VMAreas and retains an address
space reference (so that the address space won't be deleted as long as a
wired range exists).
- Resolved TODO: The area's caches are now locked when
increment_page_wired_count() is called.
- Resolved TODO: The race condition due to missing locking after looking up
the page mapping is now prevented. We hold the cache locks (in case the
page is already mapped) and the new vm_soft_fault() parameter allows us
to get the page wired.
- unlock_memory_etc(): Changes symmetrical to those in lock_memory_etc() and
resolved all TODOs.
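
A rough, self-contained sketch of the wait_if_*_is_wired() idea, using standard-library stand-ins for the kernel's locks and condition variables:

```cpp
#include <condition_variable>
#include <list>
#include <mutex>

typedef unsigned long addr_t;

struct WiredRange { addr_t base, size; };

struct Area {
    std::list<WiredRange> wiredRanges;
    std::mutex lock;
    std::condition_variable rangeRemoved;   // notified on range removal

    bool IntersectsWired(addr_t base, addr_t size)
    {
        for (std::list<WiredRange>::iterator it = wiredRanges.begin();
                it != wiredRanges.end(); ++it) {
            if (base < it->base + it->size && it->base < base + size)
                return true;
        }
        return false;
    }
};

// Blocks until no wired range intersects [base, base + size).
void wait_if_area_range_is_wired(Area& area, addr_t base, addr_t size)
{
    std::unique_lock<std::mutex> locker(area.lock);
    while (area.IntersectsWired(base, size))
        area.rangeRemoved.wait(locker);
}
```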


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36030 a95241bf-73f2-0310-859d-f6bbb57e9c96


# deee8524 26-Jan-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

* Introduced {malloc,memalign,free}_etc() which take an additional "flags"
argument. They replace the previous special-purpose allocation functions
(malloc_nogrow(), vip_io_request_malloc()).
* Moved the I/O VIP heap to heap.cpp accordingly.
* Added quite a bit of passing around of allocation flags in the VM,
particularly in the VM*AddressSpace classes.
* Fixed IOBuffer::GetNextVirtualVec(): It was ignoring the VIP flag and always
allocated on the normal heap.
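
The unified interface, sketched (the three functions are named in the commit; the flag names and values shown here are assumptions based on Haiku's heap code):

```cpp
#include <cstddef>
#include <cstdint>

#define HEAP_DONT_WAIT_FOR_MEMORY   0x01  // fail rather than wait for pages
#define HEAP_DONT_LOCK_KERNEL_SPACE 0x02  // don't grow the heap via new mappings
#define HEAP_PRIORITY_VIP           0x04  // allocate from the reserved VIP heap

void* malloc_etc(size_t size, uint32_t flags);
void* memalign_etc(size_t alignment, size_t size, uint32_t flags);
void free_etc(void* address, uint32_t flags);

// The replaced special-purpose calls map onto flags, roughly:
//   malloc_nogrow(size)          -> malloc_etc(size, HEAP_DONT_WAIT_FOR_MEMORY
//                                       | HEAP_DONT_LOCK_KERNEL_SPACE)
//   vip_io_request_malloc(size)  -> malloc_etc(size, HEAP_PRIORITY_VIP)
```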


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35316 a95241bf-73f2-0310-859d-f6bbb57e9c96


# 40cd019e 06-Dec-2009 Ingo Weinhold <ingo_weinhold@gmx.de>

* Renamed VMAddressSpace::ResizeArea{Head,Tail}() to ShrinkArea{Head,Tail}()
to clarify that they never enlarge the area.
* Reimplemented VMKernelAddressSpace. It is somewhat inspired by Bonwick's
vmem resource allocator (though we have different requirements):
- We consider the complete address space to be divided into contiguous
ranges of type free, reserved, or area, each range being represented by
a VMKernelAddressRange object.
- The range objects are managed in an AVL tree and a doubly linked list
(the latter only for faster iteration) sorted by address. This provides
O(log(n)) lookup, insertion and removal.
- For each power of two size we maintain a list of free ranges of at least
that size. Thus for the most common case of B_ANY*_ADDRESS area
allocation, we find a free range in constant time (the rest of the
processing being O(log(n))) with a rather good fit; see the sketch after
this entry. This should also help avoid address space fragmentation.
While the new implementation should be faster, particularly with an
increasing number of areas, I couldn't measure any difference in the -j2
haiku build. From a cursory test the -j8 build hasn't tangibly benefitted
either.
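
The power-of-two free-list lookup referenced above, as a hedged sketch:

```cpp
#include <cstddef>

// freeLists[i] would hold free ranges of size >= 2^i (list type elided).
// Index of the smallest power of two >= size: any range on that list or
// a higher one is guaranteed to fit, so a B_ANY*_ADDRESS request can just
// take the head of the first non-empty list -- constant time.
static int FreeListIndexFor(size_t size)
{
    int index = 0;
    while (((size_t)1 << index) < size)
        index++;
    return index;
}
```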


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34528 a95241bf-73f2-0310-859d-f6bbb57e9c96


# 2c1886ae 04-Dec-2009 Ingo Weinhold <ingo_weinhold@gmx.de>

* Added VMArea subclasses VM{Kernel,User}Area and moved the address space list
link to them.
* VM{Kernel,User}AddressSpace manage the respective VMArea subclass now, and
VMAddressSpace has grown factory methods {Create,Delete}Area.
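
The factory-method arrangement, reduced to a sketch (the real signatures take a name, wiring, protection, and more):

```cpp
struct VMArea { virtual ~VMArea() {} /* common area state */ };
struct VMKernelArea : VMArea { /* + kernel address-space list link */ };
struct VMUserArea   : VMArea { /* + user address-space list link */ };

struct VMAddressSpace {
    virtual VMArea* CreateArea() = 0;          // parameters elided
    virtual void DeleteArea(VMArea* area) = 0;
    virtual ~VMAddressSpace() {}
};

struct VMKernelAddressSpace : VMAddressSpace {
    VMArea* CreateArea() { return new VMKernelArea; }
    void DeleteArea(VMArea* area) { delete area; }
};
```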


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34493 a95241bf-73f2-0310-859d-f6bbb57e9c96


# bbd97b4b 03-Dec-2009 Ingo Weinhold <ingo_weinhold@gmx.de>

Made the VMArea fields base and size private and added accessors instead.
This makes it more explicit where the fields are modified.
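
The pattern, sketched (accessor names follow Haiku's conventions; the setters' exact visibility is an assumption):

```cpp
#include <cstddef>

typedef unsigned long addr_t;

class VMArea {
public:
    addr_t Base() const { return fBase; }
    size_t Size() const { return fSize; }

protected:
    // Every modification now goes through one auditable point.
    void SetBase(addr_t base) { fBase = base; }
    void SetSize(size_t size) { fSize = size; }

private:
    addr_t fBase;
    size_t fSize;
};
```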


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34464 a95241bf-73f2-0310-859d-f6bbb57e9c96


# 35d94001 02-Dec-2009 Ingo Weinhold <ingo_weinhold@gmx.de>

* Changed the address space area list to doubly linked. The reason is to
simplify migration of the area management, but as a side effect, it also
makes area deletion O(1) (instead of O(n), n == number of areas in the
address space); see the sketch after this entry.
* Moved more area management functionality from vm.cpp to VMAddressSpace and
VMArea structure creation to VMArea. Made the list and list link members
themselves private.
* VMAddressSpace now tracks its amount of free space. This also replaces
the previous mechanism to do that only for the kernel address space. It
was broken anyway, since delete_area() subtracted the area size instead of
adding it.
* vm_free_unused_boot_loader_range():
- lastEnd could be set to a value < start, which could cause memory
outside of the given range to be unmapped. Haven't checked whether this
could happen in practice -- if so, it would be seriously unhealthy.
- The range between the end of the last area in the range and the end of
the range would never be freed.
- Fixed potential integer overflows when computing addresses.
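
Why the doubly linked list makes deletion O(1), as a generic sketch:

```cpp
#include <cstddef>

struct AreaLink {
    AreaLink* prev;
    AreaLink* next;
};

// With a prev pointer there is no need to walk the list to find the
// predecessor -- the node unlinks itself in constant time.
void Remove(AreaLink* link)
{
    if (link->prev != NULL)
        link->prev->next = link->next;
    if (link->next != NULL)
        link->next->prev = link->prev;
}
```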


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34459 a95241bf-73f2-0310-859d-f6bbb57e9c96


# f34a1dd5 02-Dec-2009 Ingo Weinhold <ingo_weinhold@gmx.de>

Created VMArea.{h,cpp} and moved VMArea and the global area hash table (new
class VMAreaHash) there.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34450 a95241bf-73f2-0310-859d-f6bbb57e9c96

