History log of /haiku/src/system/kernel/arch/generic/generic_vm_physical_page_mapper.cpp
Revision Date Author Comments
# bec80c1c 10-Feb-2018 Jérôme Duval <jerome.duval@gmail.com>

white space cleanup


# aa3083e0 29-Jan-2017 Freeman Lou <freemanlou2430@yahoo.com>

Style fixes to various parts of the system.

Signed-off-by: Adrien Destugues <pulkomandy@pulkomandy.tk>

This patch was never applied after GSoC 2012. Rebase the parts that
still apply so we can close the ticket.

Fixes #9490.


# 2ea7b17c 09-Jun-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

* vm_allocate_early(): Replace "bool blockAlign" parameter by a more flexible
"addr_t aligmnent".
* X86PagingMethod32Bit::PhysicalPageSlotPool::InitInitial(),
generic_vm_physical_page_mapper_init(): Use vm_allocate_early()'s alignment
feature instead of aligning by hand.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37070 a95241bf-73f2-0310-859d-f6bbb57e9c96
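
As a hedged illustration of the alignment parameter described above: the
sketch below assumes the post-change signature vm_allocate_early(args,
virtualSize, physicalSize, attributes, alignment); the parameter order and
the helper name are assumptions based on this commit message, not verified
against the tree.

    // Sketch: ask vm_allocate_early() for four pages aligned to a 4 MB
    // boundary, instead of over-allocating and rounding the address up by
    // hand as callers did before this change.
    static addr_t
    allocate_aligned_early(kernel_args* args)
    {
        return vm_allocate_early(args, 4 * B_PAGE_SIZE, 4 * B_PAGE_SIZE,
            B_KERNEL_READ_AREA | B_KERNEL_WRITE_AREA, 4 * 1024 * 1024);
    }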


# 147133b7 25-May-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

* First run through the kernel's private parts to use phys_{addr,size}_t
where appropriate.
* Typedef'ed page_num_t to phys_addr_t and used it in more places in
vm_page.{h,cpp}.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36937 a95241bf-73f2-0310-859d-f6bbb57e9c96


# cff6e9e4 26-Jan-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

* The system now holds back a small reserve of committable memory and pages. The
memory and page reservation functions have a new "priority" parameter that
indicates how deep the function may tap into that reserve. The currently
existing priority levels are "user", "system", and "VIP". The idea is that
user programs should never be able to cause a state that gets the kernel into
trouble due to heavy battling for memory. The "VIP" level (not really used
yet) is intended for allocations that are required to free memory eventually
(in the page writer). More levels are conceivable in the future, like "user real
time" or "user system server".
* Added "priority" parameters to several VMCache methods.
* Replaced the map_backing_store() "unmapAddressRange" parameter by a "flags"
parameter.
* Added area creation flag CREATE_AREA_PRIORITY_VIP and slab allocator flag
CACHE_PRIORITY_VIP indicating the importance of the request.
* Changed most code to pass the right priorities/flags.

These changes already significantly improve the behavior in low memory
situations. I've tested a bit with 64 MB (virtual) RAM and, while not
particularly fast and responsive, the system remains at least usable under high
memory pressure.
As a side effect, the slab allocator can now be used as a general memory allocator.
Not done by default yet, though.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35295 a95241bf-73f2-0310-859d-f6bbb57e9c96
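
To make the priority scheme above concrete, here is a hedged sketch. The
constant names VM_PRIORITY_USER/VM_PRIORITY_SYSTEM/VM_PRIORITY_VIP and the
vm_try_reserve_memory()/vm_unreserve_memory() signatures reflect later Haiku
headers and are assumptions relative to this revision.

    // Sketch: a kernel path reserves memory at "system" priority, which may
    // dip into the reserve that user requests cannot touch; only code that
    // itself frees memory (the page writer) would pass VM_PRIORITY_VIP.
    static status_t
    reserve_for_kernel_allocation(size_t bytes)
    {
        status_t status = vm_try_reserve_memory(bytes, VM_PRIORITY_SYSTEM,
            1000000 /* wait up to 1 s */);
        if (status != B_OK)
            return B_NO_MEMORY;

        // ... perform the allocation, then drop the reservation ...
        vm_unreserve_memory(bytes);
        return B_OK;
    }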


# 86c794e5 21-Jan-2010 Ingo Weinhold <ingo_weinhold@gmx.de>

slab allocator:
* Implemented a more elaborate raw memory allocation backend (MemoryManager).
We allocate 8 MB areas whose pages we allocate and map when needed. An area is
divided into equally-sized chunks which form the basic units of allocation. We
have areas with three possible chunk sizes (small, medium, large), which is
basically what the ObjectCache implementations were using anyway.
* Added "uint32 flags" parameter to several of the slab allocator's object
cache and object depot functions. E.g. object_depot_store() potentially wants
to allocate memory for a magazine. But the flags might eventually also become
useful in pure freeing functions, since those could end up deleting an area,
which might not be allowable in all situations. We should introduce specific
flags to indicate that.
* Reworked the block allocator. Since the MemoryManager allocates block-aligned
areas, maintains a hash table for lookup, and maps chunks to object caches,
we can quickly find out which object cache an allocation to be freed belongs
to, and thus don't need the boundary tags anymore.
* Reworked the slab bootstrap process. We allocate from the initial area only
when really necessary, i.e. when the object cache for the respective
allocation size has not been created yet. A single page is thus sufficient.

other:
* vm_allocate_early(): Added boolean "blockAlign" parameter. If true, the
semantics are the same as for B_ANY_KERNEL_BLOCK_ADDRESS.
* Use an object cache for page mappings. This significantly reduces the
contention on the heap bin locks.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35232 a95241bf-73f2-0310-859d-f6bbb57e9c96
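
The "uint32 flags" parameter mentioned above is what lets callers tell the
slab backend whether it may block. A hedged sketch of the object cache API;
the header path, the flag name CACHE_DONT_WAIT_FOR_MEMORY, and the example
struct are assumptions (the struct is invented purely for illustration).

    #include <slab/Slab.h>

    struct mapping_link {           // hypothetical fixed-size object
        void* page;
        void* area;
    };

    static object_cache* sMappingCache;

    static status_t
    init_mapping_cache()
    {
        // name, object size, alignment, cookie, constructor, destructor
        sMappingCache = create_object_cache("page mappings",
            sizeof(mapping_link), 0, NULL, NULL, NULL);
        return sMappingCache != NULL ? B_OK : B_NO_MEMORY;
    }

    static mapping_link*
    allocate_mapping_no_wait()
    {
        // The flags are passed down to the raw memory backend, so an
        // allocation from a context that must not wait cannot block there.
        return (mapping_link*)object_cache_alloc(sMappingCache,
            CACHE_DONT_WAIT_FOR_MEMORY);
    }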


# e50cf876 02-Dec-2009 Ingo Weinhold <ingo_weinhold@gmx.de>

* Moved the VM headers into subdirectory vm/.
* Renamed vm_cache.h/vm_address_space.h to VMCache.h/VMAddressSpace.h.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34449 a95241bf-73f2-0310-859d-f6bbb57e9c96


# 90d870c1 02-Dec-2009 Ingo Weinhold <ingo_weinhold@gmx.de>

* Moved VMAddressSpace definition to vm_address_space.h.
* "Classified" VMAddressSpace, i.e. turned the vm_address_space_*() functions
into methods, made all attributes (but "areas") private, and added
accessors.
* Also turned the vm.cpp functions vm_area_lookup() and
remove_area_from_address_space() into VMAddressSpace methods. The rest of
the area management functionality will follow soon.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34447 a95241bf-73f2-0310-859d-f6bbb57e9c96


# b7b7ca0f 17-Oct-2008 Ingo Weinhold <ingo_weinhold@gmx.de>

* Removed debug check.
* "iospace p" should only print the entries for actually existing
memory.
* Fixed output of "iospace v".


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@28222 a95241bf-73f2-0310-859d-f6bbb57e9c96


# 1b6eff28 11-Oct-2008 Ingo Weinhold <ingo_weinhold@gmx.de>

* Replaced the vm_get_physical_page() "flags"
PHYSICAL_PAGE_{NO,CAN}_WAIT into an actual flag
PHYSICAL_PAGE_DONT_WAIT.
* Pass the flags through to the chunk mapper callback.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@27979 a95241bf-73f2-0310-859d-f6bbb57e9c96
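
A hypothetical sketch of the resulting calling convention; the
vm_get_physical_page()/vm_put_physical_page() signatures shown here are
assumed from the commit message and may not match this revision exactly.

    // Sketch: map a physical page temporarily. In contexts that must not
    // block (e.g. interrupt handlers) pass PHYSICAL_PAGE_DONT_WAIT so the
    // mapper fails instead of waiting for a free IO space slot.
    static status_t
    peek_physical_page(addr_t physicalAddress)
    {
        addr_t virtualAddress;
        status_t status = vm_get_physical_page(physicalAddress,
            &virtualAddress, PHYSICAL_PAGE_DONT_WAIT);
        if (status != B_OK)
            return status;

        // ... access the page through virtualAddress ...

        vm_put_physical_page(virtualAddress);
        return B_OK;
    }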


# f5b3a6a7 05-Jun-2008 Michael Lotz <mmlr@mlotz.ch>

* Initialize all static mutexes in the kernel through a MUTEX_INITIALIZER()
and remove the then unneeded mutex_init() for them.
* Remove the workaround for allowing uninitialized mutexes on kernel startup.
As they are all initialized statically through the MUTEX_INITIALIZER() now
this is not needed anymore.
* An uninitialized mutex will now cause a panic when used, so that any
remaining cases can be found.
* Remove now unnecessary driver_settings_init_post_sem() function.

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@25812 a95241bf-73f2-0310-859d-f6bbb57e9c96
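
A minimal sketch of the static initialization pattern this change switches
to; the <lock.h> include is the private kernel header and is assumed here.

    #include <lock.h>

    // Statically initialized: usable from the very start of kernel startup,
    // so no mutex_init() call in an early init hook is needed, and using the
    // lock uninitialized is no longer possible.
    static mutex sIOSpaceLock = MUTEX_INITIALIZER("io space lock");

    static void
    do_locked_work()
    {
        mutex_lock(&sIOSpaceLock);
        // ... critical section ...
        mutex_unlock(&sIOSpaceLock);
    }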


# 0c615a01 01-May-2008 Ingo Weinhold <ingo_weinhold@gmx.de>

* Removed old mutex implementation and renamed cutex to mutex.
* Trivial adjustments of code using mutexes. Mostly removing the
mutex_init() return value check.
* Added mutex_lock_threads_locked(), which is called with the threads
spinlock being held. The spinlock is released while waiting, of
course. This function is useful in cases where the existence of the
mutex object is ensured by holding the threads spinlock.
* Changed the two instances in the VFS code where an IO context of
another team needs to be locked to use mutex_lock_threads_locked().
Before it required a semaphore-based mutex implementation.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@25283 a95241bf-73f2-0310-859d-f6bbb57e9c96


# 3eca8585 27-Feb-2007 Axel Dörfler <axeld@pinc-software.de>

* Moved the early startup VM allocation functions from vm_page.c to vm.cpp.
* Renamed them and made everything static besides vm_allocate_early() (previously
vm_alloc_from_kernel_args()), which now allows specifying a virtual size that
differs from the physical size, making vm_alloc_virtual_from_kernel_args()
superfluous (it isn't exported anymore and is now called allocate_early_virtual()).
* Enabled printing a stack trace on serial output on team crash - it doesn't hurt
for now, anyway.
* Cleanup.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@20244 a95241bf-73f2-0310-859d-f6bbb57e9c96
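
A hedged sketch of the virtual-versus-physical split this commit introduces;
the signature vm_allocate_early(args, virtualSize, physicalSize, attributes)
is assumed for this revision (the alignment parameter only came later).

    // Sketch: reserve an eight-page virtual window but commit only one
    // physical page up front; the remaining pages are mapped on demand.
    static addr_t
    reserve_window(kernel_args* args)
    {
        return vm_allocate_early(args, 8 * B_PAGE_SIZE, B_PAGE_SIZE,
            B_KERNEL_READ_AREA | B_KERNEL_WRITE_AREA);
    }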


# 02c92f82 18-Dec-2006 Travis Geiselbrecht <geist@foobox.com>

change the 'iospace' debugger command to use a more reasonable number of columns


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@19563 a95241bf-73f2-0310-859d-f6bbb57e9c96


# fb39ecb5 26-Feb-2006 Stephan Aßmus <superstippi@gmx.de>

* Added an "iospace" debugger command.
* Fixed broken "full sem" mechanism.
* Fixed recycling of previously mapped IO space chunks. Should avoid the
panic() call earlier introduced by Marcus.

(tracked down by axeld and bonefish over the course of three days
(in between skiing) and finally nailed on the bus back home)



git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@16511 a95241bf-73f2-0310-859d-f6bbb57e9c96


# 3b36b30f 07-Jan-2006 Ingo Weinhold <ingo_weinhold@gmx.de>

* Made vm_alloc_virtual_from_kernel_args() available to other kernel
parts, too. Fixed a potential overflow.
* The generic physical page mapper reserves the virtual address range
for the IO space now, so that no one can interfere until an area has
been created. The location of the IO space is no longer fixed; it
didn't look to me like it was necessary for x86, and we definitely
need to be flexible for PPC.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@15855 a95241bf-73f2-0310-859d-f6bbb57e9c96


# a71974c1 06-Jan-2006 Ingo Weinhold <ingo_weinhold@gmx.de>

Pulled the algorithm for dynamically mapping physical pages into an
"IO space" out of the x86 specific source into arch/generic. We'll use
it for PPC as well.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@15853 a95241bf-73f2-0310-859d-f6bbb57e9c96
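
Conceptually, the shared code keeps a reserved virtual "IO space" divided
into chunks and maps physical pages into a suitable chunk on demand, leaving
the actual page table manipulation to an architecture-specific hook. The
sketch below is a simplified, hypothetical illustration of that idea, not
the actual interface of this file; all names are invented, and the types
come from the usual kernel headers.

    // Hypothetical illustration of chunk-based IO space mapping.
    struct io_space_chunk {
        phys_addr_t physicalBase;   // physical range currently mapped here
        int32       refCount;       // users currently accessing the chunk
    };

    static const size_t kChunkSize = 256 * B_PAGE_SIZE;
    static const size_t kChunkCount = 64;
    static addr_t sIOSpaceBase;                 // reserved virtual window
    static io_space_chunk sChunks[kChunkCount];

    static addr_t
    map_physical_page(phys_addr_t physicalAddress)
    {
        phys_addr_t chunkBase
            = physicalAddress & ~((phys_addr_t)kChunkSize - 1);

        // Reuse a chunk that already maps this physical range; otherwise an
        // unused chunk would be evicted and remapped by the arch hook.
        for (size_t i = 0; i < kChunkCount; i++) {
            if (sChunks[i].physicalBase == chunkBase) {
                sChunks[i].refCount++;
                return sIOSpaceBase + i * kChunkSize
                    + (size_t)(physicalAddress - chunkBase);
            }
        }

        // ... evict the least recently used chunk, remap it, return ...
        return 0;
    }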

