History log of /haiku/headers/private/kernel/vm/VMAddressSpace.h
# a6926d42 29-May-2020 Michael Lotz <mmlr@mlotz.ch>

kernel/vm: Introduce and use VMAddressSpace::AreaRangeIterator.

It iterates over all areas intersecting a given address range and
removes the need for manually skipping uninteresting initial areas. It
uses VMAddressSpace::FindClosestArea() to efficiently find the starting
area.

This speeds up the two iterations in unmap_address_range and one in
wait_if_address_range_is_wired, and resolves a TODO in the latter that
hinted at such a solution.
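
A minimal sketch of the resulting iteration pattern, assuming the accessor
introduced here is named GetAreaRangeIterator() and that the caller holds
the address space lock:

    VMAddressSpace* addressSpace = VMAddressSpace::GetCurrent();

    // Visit only the areas intersecting [address, address + size); the
    // iterator uses FindClosestArea() to start at the first candidate.
    VMAddressSpace::AreaRangeIterator it
        = addressSpace->GetAreaRangeIterator(address, size);
    while (VMArea* area = it.Next()) {
        // ... unmap the range, check for wired pages, etc. ...
    }

    addressSpace->Put();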

Change-Id: Iba1d39942db4e4b27e17706be194496f9d4279ed
Reviewed-on: https://review.haiku-os.org/c/haiku/+/2841
Reviewed-by: waddlesplash <waddlesplash@gmail.com>


# a626bdab 29-May-2020 Michael Lotz <mmlr@mlotz.ch>

kernel/vm: Remove linear search from _get_next_area_info.

This introduces VMAddressSpace::FindClosestArea() that can be used to
find the closest area to a given address in either direction. This is
now trivial and efficient since both kernel and user address spaces use
a binary search tree.

Using FindClosestArea(), getting multiple area infos is sped up
dramatically, as it removes the need for a linear search from the first
area to the one given in the cookie on each successive invocation.
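
For illustration, a cookie-based lookup might now look like this (a sketch;
the signature is assumed to be FindClosestArea(addr_t address, bool less),
with `less` choosing the search direction):

    // Previously: walk the area list from the start until reaching the
    // address stored in the cookie -- O(n) per call.
    // Now: a single binary search tree lookup -- O(log n).
    addr_t nextBase = (addr_t)cookie;  // address the last call stopped at
    VMArea* area = addressSpace->FindClosestArea(nextBase, false);
        // false: the closest area at or above nextBase (assumed semantics
        // of the "less" flag)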

Change-Id: I227da87d915f6f3d3ef88bfeb6be5d4c97c3baaa
Reviewed-on: https://review.haiku-os.org/c/haiku/+/2840
Reviewed-by: waddlesplash <waddlesplash@gmail.com>


# d6ddb118 05-May-2020 Michael Lotz <mmlr@mlotz.ch>

kernel/vm: Whitespace cleanup only.


# 513403d4 14-Jun-2018 Augustin Cavalier <waddlesplash@gmail.com>

Revert team and thread changes for COMPAT_MODE (hrev52010 & hrev52011).

This reverts commit c558f9c8fe54bc14515aa62bac7826271289f0e4.
This reverts commit 44f24718b1505e8d9c75e00e59f2f471a79b5f56.
This reverts commit a69cb330301c4d697daae57e6019a307f285043e.
This reverts commit 951182620e297d10af7fdcfe32f2b04d56086ae9.

There have been multiple reports that these changes break mounting NTFS
partitions (on all systems, see #14204) and shutting down (on certain
systems, see #12405). Until they can be fixed, they are being backed out.


# 95118262 20-May-2018 Jérôme Duval <jerome.duval@gmail.com>

kernel/x86_64: set up a new team in compatibility mode.

* in load_image_internal(), elf32_load_user_image checks whether the binary
format requires the compatibility mode.
* we then set up the flag THREAD_FLAGS_COMPAT_MODE and the address space size.
* the compatibility-mode runtime_loader is hardcoded to x86/runtime_loader.
* if needed, the 64-bit flat_args structure is converted in-place to its 32-bit
layout.
* a 32-bit flat_args isn't handled yet (a 32-bit team execs a 64-bit binary).

Change-Id: Ia6a066bde8d1774d85de29b48dc500e27ae9668f


# 7bf85edf 30-Nov-2013 Ingo Weinhold <ingo_weinhold@gmx.de>

Allow disabling ASLR via DISABLE_ASLR environment variable

* VMAddressSpace: Add randomizingEnabled property.
* VMUserAddressSpace: Randomize addresses only when randomizingEnabled
property is set.
* create_team_arg(): Check if the team's environment contains
"DISABLE_ASLR=1". Set the team's address space property
randomizingEnabled accordingly in load_image_internal() and
exec_team() (see the sketch below).
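
A rough sketch of the two halves of the change (the setter name
SetRandomizingEnabled() is assumed from the randomizingEnabled property;
the environment scan is illustrative):

    // In create_team_arg(): look for the opt-out in the new environment.
    bool disableASLR = false;
    for (int32 i = 0; i < envCount; i++) {
        if (strcmp(env[i], "DISABLE_ASLR=1") == 0) {
            disableASLR = true;
            break;
        }
    }

    // In load_image_internal() / exec_team(): apply it to the team's
    // address space before mapping anything.
    addressSpace->SetRandomizingEnabled(!disableASLR);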


# a8ad734f 14-Jun-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

* Introduced structures {virtual,physical}_address_restrictions, which specify
restrictions for virtual/physical addresses.
* vm_page_allocate_page_run():
- Fixed conversion of base/limit to array indexes. sPhysicalPageOffset was not
taken into account.
- Takes a physical_address_restrictions instead of base/limit and now also
supports alignment and boundary restrictions.
* map_backing_store(), VM[User,Kernel]AddressSpace::InsertArea()/
ReserveAddressRange() now take a virtual_address_restrictions parameter. They
also support an alignment independent from the range size.
* create_area_etc(), vm_create_anonymous_area(): Now take
{virtual,physical}_address_restrictions parameters.
* Removed no longer needed B_PHYSICAL_BASE_ADDRESS.
* DMAResources:
- Fixed potential overflows of uint32 when initializing from device node
attributes.
- Fixed bounce buffer creation TODOs: By using create_area_etc() with the
new restrictions parameters we can directly support physical high address,
boundary, and alignment.
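
By way of example, a caller needing a low, boundary-constrained physical
range might now fill in the structures like this (field names assumed from
the description above; check the headers for the exact layout):

    virtual_address_restrictions virtualRestrictions = {};
    virtualRestrictions.address_specification = B_ANY_KERNEL_ADDRESS;
    virtualRestrictions.alignment = 0;          // no extra virtual alignment

    physical_address_restrictions physicalRestrictions = {};
    physicalRestrictions.high_address = 16 * 1024 * 1024;  // stay below 16 MB
    physicalRestrictions.boundary = 64 * 1024;  // never cross a 64 KB line

    // Both structures are then passed to create_area_etc() /
    // vm_create_anonymous_area() in place of separate base/limit arguments.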


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37131 a95241bf-73f2-0310-859d-f6bbb57e9c96


# 1ba89e67 05-Jun-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

Removed no-op VMTranslationMap::InitPostSem() and
VMAddressSpace::InitPostSem().


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37025 a95241bf-73f2-0310-859d-f6bbb57e9c96


# 09418c86 13-Apr-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

Added DebugGet() method for use in the kernel debugger.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36227 a95241bf-73f2-0310-859d-f6bbb57e9c96


# 550376ff 03-Apr-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

* vm_delete_areas(): Changed return type to void (was status_t and not used).
* _user_map_file(), _user_unmap_memory(): Verify that the address (if given) is
page aligned.
* Reworked memory locking (wiring):
- VMArea now has a list of wired memory ranges and supports waiting for
a range to be removed.
- vm_soft_fault():
- Added "wirePage" parameter that, if given, makes the function wire the
page and return it.
- Added "wiredRange" parameter (for calls from lock_memory_etc()) and made
sure we never unmap wired pages. This could e.g. happen when a page from a
lower cache was read-mapped and a write fault occurred. Now in such a
situation the function waits for the page to be unwired and restarts.
- All functions that manipulate areas in a way that could affect wired ranges
now either require the caller to make sure there are no wired ranges in
the way or take care of it themselves. Added a few wait_if_*_is_wired()
helper functions for that purpose (see the sketch after this list).
- lock_memory_etc():
- Now also works correctly when the range spans more than one area.
- Adds VMAreaWiredRanges to the affected VMAreas and retains an address
space reference (so that the address space won't be deleted as long as a
wired range exists).
- Resolved TODO: The area's caches are now locked when
increment_page_wired_count() is called.
- Resolved TODO: The race condition due to missing locking after looking up
the page mapping is now prevented. We hold the cache locks (in case the
page is already mapped) and the new vm_soft_fault() parameter allows us
to get the page wired.
- unlock_memory_etc(): Changes symmetrical to those in lock_memory_etc() and
resolved all TODOs.
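
A hedged sketch of the wait_if_*_is_wired() shape (the helper and member
names here are hypothetical; only ConditionVariableEntry is the real kernel
primitive):

    // Retry until no wired range overlaps [base, base + size) anymore.
    for (;;) {
        VMAreaWiredRange* range
            = area->FindIntersectingWiredRange(base, size);  // hypothetical
        if (range == NULL)
            break;  // nothing wired in the way -- safe to proceed

        ConditionVariableEntry entry;
        range->waitCondition.Add(&entry);  // hypothetical member
        locker.Unlock();  // drop the address space lock while blocking
        entry.Wait();     // woken when the range is unwired
        locker.Lock();    // re-lock and re-check from scratch
    }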


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36030 a95241bf-73f2-0310-859d-f6bbb57e9c96


# deee8524 26-Jan-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

* Introduced {malloc,memalign,free}_etc() which take an additional "flags"
argument. They replace the previous special-purpose allocation functions
(malloc_nogrow(), vip_io_request_malloc()).
* Moved the I/O VIP heap to heap.cpp accordingly.
* Added quite a bit of passing around of allocation flags in the VM,
particularly in the VM*AddressSpace classes.
* Fixed IOBuffer::GetNextVirtualVec(): It ignored the VIP flag and always
allocated on the normal heap.
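
For example, code that previously called malloc_nogrow() would now pass a
flag instead (flag name assumed from heap.h of that time):

    // Allocate without waiting for the heap to grow; pair the free with
    // the same flags.
    void* buffer = malloc_etc(size, HEAP_DONT_WAIT_FOR_MEMORY);
    if (buffer == NULL)
        return B_NO_MEMORY;
    // ...
    free_etc(buffer, HEAP_DONT_WAIT_FOR_MEMORY);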


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35316 a95241bf-73f2-0310-859d-f6bbb57e9c96


# f082f7f0 15-Jan-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

* Added vm_page::accessed flag. Works analogously to vm_page::modified.
* Reorganized the code for [un]mapping pages:
- Added new VMTranslationMap::Unmap{Area,Page[s]}() which essentially do what
vm_unmap_page[s]() did before, just in the architecture specific code, which
allows for specific optimizations. UnmapArea() is for the special case that
the complete area is unmapped. Particularly in case the address space is
deleted, some work can be saved. Several TODOs could be slain.
- Since they are only used within vm.cpp, vm_map_page() and vm_unmap_page[s]()
are now static and have lost their prefix (and the "preserveModified"
parameter).
* Added VMTranslationMap::Protect{Page,Area}(). They are just inline wrappers
for Protect().
* X86VMTranslationMap::Protect(): Make sure not to accidentally clear the
accessed/dirty flags.
* X86VMTranslationMap::Unmap()/Protect(): Make page table skipping actually
work. It was only skipping to the next page.
* Adjusted the PPC code to at least compile.

No measurable effect for the -j8 Haiku image build time, though the kernel time
drops minimally.
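
The Protect{Page,Area}() wrappers are trivial forwarding helpers; roughly
(signature assumed):

    // ProtectPage() simply forwards a single-page range to Protect().
    inline status_t
    VMTranslationMap::ProtectPage(VMArea* area, addr_t address,
        uint32 attributes)
    {
        return Protect(address, address + B_PAGE_SIZE - 1, attributes);
    }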


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35089 a95241bf-73f2-0310-859d-f6bbb57e9c96


# bcc2c157 13-Jan-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

Refactored vm_translation_map:
* Pulled the physical page mapping functions out of vm_translation_map into
a new interface VMPhysicalPageMapper.
* Renamed vm_translation_map to VMTranslationMap and made it a proper C++
class. The functions in the operations vector have become methods.
* Added class GenericVMPhysicalPageMapper implementing VMPhysicalPageMapper
as far as possible (without actually writing new code).
* Adjusted the x86 and the PPC specifics accordingly (untested for the
latter). For the other architectures the build is, I'm afraid, seriously
broken.

The next steps will modify and extend the VMTranslationMap interface, so that
it will be possible to fix the bugs in vm_unmap_page[s]() and employ
architecture specific optimizations.
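
Schematically, the refactoring replaces the C operations vector with virtual
methods (a before/after sketch, not the literal interface):

    // Before: a C struct dispatching through a function pointer table.
    struct vm_translation_map {
        struct vm_translation_map_ops* ops;  // ops->map(), ops->unmap(), ...
    };

    // After: a C++ class; every former ops entry becomes a virtual method.
    struct VMTranslationMap {
        virtual status_t Map(addr_t virtualAddress, addr_t physicalAddress,
            uint32 attributes) = 0;
        virtual status_t Unmap(addr_t start, addr_t end) = 0;
        // ...
    };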


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35066 a95241bf-73f2-0310-859d-f6bbb57e9c96


# 509f1174 09-Jan-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

* Added Size() method.
* Added Debug{First,Next}() methods to allow easy iteration through the
address spaces in kernel debugger commands.
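
This enables the usual KDL-command loop, along these lines:

    // Iterate over all address spaces from a kernel debugger command.
    for (VMAddressSpace* space = VMAddressSpace::DebugFirst(); space != NULL;
            space = VMAddressSpace::DebugNext(space)) {
        kprintf("address space %p, team %" B_PRId32 "\n", space, space->ID());
    }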


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34978 a95241bf-73f2-0310-859d-f6bbb57e9c96


# 8ab820f0 07-Jan-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

VMAddressSpace::Put() is too hot to always write-lock the address space
table. It is now inline and uses double-checked locking. This reduces
contention on the lock to insignificant levels. The total -j8 Haiku image
build speedup is marginal, but the total kernel time drops by 12%.
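
A hedged sketch of the pattern described (the slow-path helper is
hypothetical):

    inline void
    VMAddressSpace::Put()
    {
        // Fast path, no table lock: if this was not the last reference,
        // an atomic decrement is all that is needed.
        if (atomic_add(&fRefCount, -1) != 1)
            return;

        // Slow path: the count hit zero. Take the write lock and check
        // again before removing the space from the table, since a
        // concurrent Get() may have revived it -- the "double check".
        _RemoveIfUnreferenced();  // hypothetical helper
    }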


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34934 a95241bf-73f2-0310-859d-f6bbb57e9c96


# e55886c3 06-Dec-2009 Ingo Weinhold <ingo_weinhold@gmx.de>

Make iteration safe. VMKernelAddressSpace::Next() doesn't like NULL pointers.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34531 a95241bf-73f2-0310-859d-f6bbb57e9c96


# 40cd019e 06-Dec-2009 Ingo Weinhold <ingo_weinhold@gmx.de>

* Renamed VMAddressSpace::ResizeArea{Head,Tail}() to ShrinkArea{Head,Tail}()
to clarify that they never enlarge the area.
* Reimplemented VMKernelAddressSpace. It is somewhat inspired by Bonwick's
vmem resource allocator (though we have different requirements):
- We consider the complete address space to be divided into contiguous
ranges of type free, reserved, or area, each range being represented by
a VMKernelAddressRange object.
- The range objects are managed in an AVL tree and a doubly linked list
(the latter only for faster iteration) sorted by address. This provides
O(log(n)) lookup, insertion and removal.
- For each power of two size we maintain a list of free ranges of at least
that size. Thus for the most common case of B_ANY*_ADDRESS area
allocation, we find a free range in constant time (the rest of the
processing being O(log(n))) with a rather good fit. This should also
help avoid address space fragmentation.
While the new implementation should be faster, particularly with an
increasing number of areas, I couldn't measure any difference in the -j2
haiku build. From a cursory test the -j8 build hasn't tangibly benefitted
either.
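
The size-class trick, illustratively (not the actual code): every range on
the list for class i holds at least 1 << i bytes, so the head of the first
non-empty list with 1 << i >= size is guaranteed to fit.

    // Map an allocation size to its power-of-two size class; taking the
    // head of a non-empty list for this class (or any higher one) then
    // yields a fitting free range in O(1).
    static inline int
    size_class(size_t size)
    {
        int index = 0;
        while (((size_t)1 << index) < size)
            index++;
        return index;
    }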


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34528 a95241bf-73f2-0310-859d-f6bbb57e9c96


# 2c1886ae 04-Dec-2009 Ingo Weinhold <ingo_weinhold@gmx.de>

* Added VMArea subclasses VM{Kernel,User}Area and moved the address space list
link to them.
* VM{Kernel,User}AddressSpace manage the respective VMArea subclass now, and
VMAddressSpace has grown factory methods {Create,Delete}Area.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34493 a95241bf-73f2-0310-859d-f6bbb57e9c96


# e2518ddb 04-Dec-2009 Ingo Weinhold <ingo_weinhold@gmx.de>

Made VMAddressSpace an abstract base class and moved the area management into
new derived classes VM{Kernel,User}AddressSpace. Currently those are
identical, but that will change.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34492 a95241bf-73f2-0310-859d-f6bbb57e9c96


# 38a97b2c 04-Dec-2009 Ingo Weinhold <ingo_weinhold@gmx.de>

Moved all knowledge of reserved areas from vm.cpp to VMAddressSpace. It's a
pure address space feature, so it should be handled there.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34491 a95241bf-73f2-0310-859d-f6bbb57e9c96


# f69032f2 03-Dec-2009 Ingo Weinhold <ingo_weinhold@gmx.de>

* Added VMAddressSpace::ResizeArea{Head,Tail}() to adjust an area's base
and size.
* Made VMArea::Set{Base,Size}() private and made VMAddressSpace a friend.
In vm.cpp the new VMAddressSpace::ResizeArea{Head,Tail}() are used
instead.
Finally all address space changes happen in VMAddressSpace only. *phew*
Now it's ready to be thoroughly butchered. :-)


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34467 a95241bf-73f2-0310-859d-f6bbb57e9c96


# 35d94001 02-Dec-2009 Ingo Weinhold <ingo_weinhold@gmx.de>

* Changed the address space area list to doubly linked. The reason is to
simplify migration of the area management, but as a side effect, it also
makes area deletion O(1) (instead of O(n), n == number of areas in the
address space).
* Moved more area management functionality from vm.cpp to VMAddressSpace and
VMArea structure creation to VMArea. Made the list and list link members
themselves private.
* VMAddressSpace now tracks its amount of free space. This also replaces
the previous mechanism to do that only for the kernel address space. It
was broken anyway, since delete_area() subtracted the area size instead of
adding it.
* vm_free_unused_boot_loader_range():
- lastEnd could be set to a value < start, which could cause memory
outside of the given range to be unmapped. Haven't checked whether this
could happen in practice -- if so, it would be seriously unhealthy.
- The range between the end of the last area in the range and the end of
the range would never be freed.
- Fixed potential integer overflows when computing addresses.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34459 a95241bf-73f2-0310-859d-f6bbb57e9c96


# e50cf876 02-Dec-2009 Ingo Weinhold <ingo_weinhold@gmx.de>

* Moved the VM headers into subdirectory vm/.
* Renamed vm_cache.h/vm_address_space.h to VMCache.h/VMAddressSpace.h.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34449 a95241bf-73f2-0310-859d-f6bbb57e9c96

