History log of /haiku/src/system/kernel/vm/VMAddressSpaceLocking.cpp
Revision Date Author Comments
# c650846d 14-Mar-2023 Augustin Cavalier <waddlesplash@gmail.com>

vm: Replace the VMAreas OpenHashTable with an AVLTree.

Since we used a hash table with a fixed size (1024 buckets), collisions
were inevitable as the area count grew, meaning that while insertions
would always be fast, lookups and deletions would take linear time to
search the bucket's linked list for the area in question. For
recently-created areas this would be fast; for less-recently-created
areas it would get progressively slower.

A particularly pathological case was the "mmap/24-1" test from the
Open POSIX Testsuite, which creates millions of areas until it hits
ENOMEM and then simply exits; at that point the kernel's team-deletion
routines would run for many minutes. Exactly how long, I don't know,
as I rebooted before it finished.

This change fixes that problem, among others, by using an AVL tree
instead of a hash table, at the cost of increased area creation time.
For comparison, mmap'ing 2 million areas with the "24-1" test before
this change took around 0m2.706s of real time, while afterwards it
takes about 0m3.118s, around a 15% increase (1.152x).

On the other hand, the total test runtime for 2 million areas went from
around 2m11.050s to 0m4.035s, or around a 97% decrease (0.031x); in other
words, with this new code, it is *32 times faster.*

Area insertion is no longer O(1), however, so insertion time may grow
with the number of areas present on the system; but at around 3 seconds
to create 2 million areas (about 1.56 us per area, vs. 1.35 us before),
I don't think that's worth worrying about.

My nonscientific "compile HaikuDepot with 2 cores in VM" benchmark
seems to be within the realm of "noise", anyway, with most results
both before and after this change coming in around 47s real time.

Change-Id: I230e17de4f80304d082152af83db8bd5abe7b831
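
To make the tradeoff above concrete, here is a minimal stand-alone sketch
(not the actual Haiku code; ToyArea, FixedBucketHash, and TreeIndex are
made-up names, and std::map stands in for the kernel's AVLTree) contrasting
the old fixed-bucket chained hash with a balanced tree keyed by area ID:

    #include <cstdint>
    #include <list>
    #include <map>

    struct ToyArea { int32_t id; };

    // Old scheme: 1024 buckets with chained collisions. Insertion at the
    // chain head is O(1), but with millions of areas each chain holds
    // thousands of entries, so Lookup() degenerates to a linear walk
    // (roughly n / 1024 steps); recently-inserted areas sit near the head.
    struct FixedBucketHash {
        static const size_t kBuckets = 1024;
        std::list<ToyArea*> buckets[kBuckets];

        void Insert(ToyArea* area)
            { buckets[area->id % kBuckets].push_front(area); }

        ToyArea* Lookup(int32_t id)
        {
            for (ToyArea* area : buckets[id % kBuckets])
                if (area->id == id)
                    return area;
            return nullptr;
        }
    };

    // New scheme: a balanced search tree keyed by ID. Insertion slows to
    // O(log n), but lookup and removal are O(log n) too instead of linear,
    // which is what collapses the teardown time measured above.
    struct TreeIndex {
        std::map<int32_t, ToyArea*> tree;

        void Insert(ToyArea* area) { tree[area->id] = area; }

        ToyArea* Lookup(int32_t id)
        {
            auto it = tree.find(id);
            return it == tree.end() ? nullptr : it->second;
        }
    };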


# 02b151d3 11-Jul-2013 Ingo Weinhold <ingo_weinhold@gmx.de>

MultiAddressSpaceLocker::AddAreaCacheAndLock(): fix a race condition

* Add a VMArea* version of AddArea().
* AddAreaCacheAndLock(): Use the new AddArea() version. This not only
  saves the ID hash table lookup, but also fixes a race condition with
  delete_area(): delete_area() removes the area from the hash before
  removing it from its cache, so iterating through the cache's areas
  can turn up an area that is no longer in the hash. In that case we
  would fail immediately, whereas the new AddArea() won't fail in this
  situation.

Fixes #9686: vm_copy_area() could fail for the "commpage" area. That's
an area all teams share, so any team terminating while another one was
fork()ing could trigger it.
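
The interleaving is easier to see in code. The following is a hedged,
single-threaded sketch of the race window (Area, Cache, gAreaHash, and the
function names are hypothetical stand-ins, not Haiku's real structures):
delete_area() unpublishes from the hash first, so an ID re-lookup during a
cache iteration can spuriously fail, while the pointer-based AddArea()
cannot:

    #include <cassert>
    #include <cstdint>
    #include <list>
    #include <map>

    struct Area { int32_t id; };
    struct Cache { std::list<Area*> areas; };

    std::map<int32_t, Area*> gAreaHash;  // stand-in for the area ID hash

    // delete_area() first unpublishes the area from the hash...
    void UnpublishFromHash(Area* area) { gAreaHash.erase(area->id); }
    // ...and only later detaches it from its cache.
    void DetachFromCache(Cache* cache, Area* area)
        { cache->areas.remove(area); }

    // Old-style AddArea(area_id): re-looks the area up by ID, so it fails
    // in the window between the two delete_area() steps above.
    Area* AddAreaById(int32_t id)
    {
        auto it = gAreaHash.find(id);
        return it == gAreaHash.end() ? nullptr : it->second;
    }

    // New-style AddArea(VMArea*): the caller already holds the pointer
    // from the cache iteration, so there is no lookup left to fail.
    Area* AddAreaByPointer(Area* area) { return area; }

    int main()
    {
        Area a = { 1 };
        Cache cache;
        cache.areas.push_back(&a);
        gAreaHash[a.id] = &a;

        UnpublishFromHash(&a);  // another thread is mid-delete_area()

        for (Area* area : cache.areas) {
            assert(AddAreaById(area->id) == nullptr);  // old: spurious fail
            assert(AddAreaByPointer(area) == &a);      // new: still succeeds
        }

        DetachFromCache(&cache, &a);  // delete_area() finishes
        return 0;
    }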


# 21ff565f 03-Apr-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

AddressSpaceWriteLocker: Added VMAddressSpace* constructor and SetTo()
versions.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36029 a95241bf-73f2-0310-859d-f6bbb57e9c96


# 94a877f0 27-Jan-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

Lock the kernel address space last.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35315 a95241bf-73f2-0310-859d-f6bbb57e9c96
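
The one-line summary leaves the rationale implicit; the usual reason to pin
a lock at a fixed position in the acquisition order is deadlock avoidance.
A generic illustration of that idea, assuming this is the motivation here
(plain std::mutex, not Haiku's locker classes):

    #include <algorithm>
    #include <mutex>
    #include <vector>

    struct AddressSpace {
        std::mutex lock;
        bool isKernel;
    };

    // If every path that must hold several address-space locks acquires
    // them in one agreed total order, with the kernel address space always
    // last, two threads can never each hold one lock while waiting for the
    // other's, so this ordering cannot deadlock against itself.
    void LockAll(std::vector<AddressSpace*>& spaces)
    {
        std::sort(spaces.begin(), spaces.end(),
            [](AddressSpace* a, AddressSpace* b) {
                if (a->isKernel != b->isKernel)
                    return b->isKernel;  // kernel space sorts to the end
                return a < b;  // any consistent total order for the rest
            });
        for (AddressSpace* space : spaces)
            space->lock.lock();
    }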


# 2ea2527f 31-Dec-2009 Ingo Weinhold <ingo_weinhold@gmx.de>

R/W lock implementation:
* Changed the rw_lock_{read,write}_unlock() return values to void. They
  returned a value != B_OK only in case of user error, and no one checked
  them anyway.
* Optimized rw_lock_read_[un]lock(). They are inline now and as long as
there's no contending write locker, they will only perform an atomic_add().
* Changed the semantics of nested locking after acquiring a write lock: Read
  and write locks are counted separately, so read locks no longer implicitly
  become write locks. This makes, e.g., degrading a write lock to a read
  lock by way of read_lock + write_unlock (as used in the VM) actually work.

These changes speed up the -j8 Haiku image build on my machine by a few
percent, but more interestingly they reduce the total kernel time by 25%.
Apparently we now get more contention on other locks.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34830 a95241bf-73f2-0310-859d-f6bbb57e9c96
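
The degrade-to-read pattern the last point enables is worth spelling out.
A kernel-context sketch (the rw_lock type and rw_lock_* calls are the real
kernel lock API this message refers to; the function itself is
hypothetical):

    #include <lock.h>

    static rw_lock sLock;  // set up elsewhere via rw_lock_init(&sLock, ...)

    static void
    MutateThenRead()
    {
        rw_lock_write_lock(&sLock);
        // ... modify the shared structure exclusively ...

        // With read and write depths counted separately, this nested read
        // lock stays a read lock instead of implicitly becoming another
        // write lock...
        rw_lock_read_lock(&sLock);
        // ...so dropping the write lock leaves only the read lock held:
        // other readers may now enter, but no writer can slip in between.
        rw_lock_write_unlock(&sLock);

        // ... read the structure ...
        rw_lock_read_unlock(&sLock);
    }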


# def9898c 02-Dec-2009 Ingo Weinhold <ingo_weinhold@gmx.de>

Moved the three address space locker classes into a separate pair of
header/source files.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34451 a95241bf-73f2-0310-859d-f6bbb57e9c96

