#
58353b38 |
|
02-Oct-2020 |
Jérôme Duval <jerome.duval@gmail.com> |
kernel/x86_64: LA57 aka 5-level paging. This enables the kernel to correctly take over when the bootloader prepares the paging in 4-level or 5-level mode. Change-Id: I0444486d8e17aade574e2afe255a3c2cfc49f21f Reviewed-on: https://review.haiku-os.org/c/haiku/+/3551 Reviewed-by: Adrien Destugues <pulkomandy@gmail.com> Reviewed-by: Axel Dörfler <axeld@pinc-software.de>
|
#
647b768f |
|
10-Dec-2013 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
x86: Disable PAE, if 4 GB memory limit safemode is set
|
#
bcb74636 |
|
17-Sep-2013 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
arch_vm_translation_map_early_map(): Fix debug output
|
#
966f2076 |
|
06-Mar-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86: enable data execution prevention. Set the execute-disable bit for any page that belongs to an area with neither B_EXECUTE_AREA nor B_KERNEL_EXECUTE_AREA set. In order to take advantage of the NX bit in 32-bit protected mode, PAE must be enabled. Thus, from now on it is also enabled when the CPU supports the NX bit. vm_page_fault() takes an additional argument which indicates whether the page fault was caused by an illegal instruction fetch.
|
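The execute-disable logic described in the commit above can be sketched as follows. This is a minimal illustration, not Haiku's actual code; the constant names and bit values are assumptions chosen to mirror the commit's description (the XD/NX bit is bit 63 of a 64-bit PAE page table entry).

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical area protection flags, modeled after the commit's
// B_EXECUTE_AREA / B_KERNEL_EXECUTE_AREA (values assumed for illustration).
constexpr uint32_t B_EXECUTE_AREA        = 1 << 2;
constexpr uint32_t B_KERNEL_EXECUTE_AREA = 1 << 5;

// In PAE mode the execute-disable (XD/NX) bit is bit 63 of the 64-bit PTE.
constexpr uint64_t X86_PAE_PTE_NOT_EXECUTABLE = 1ULL << 63;

// Set the XD bit on a PTE when the owning area carries neither execute flag.
uint64_t apply_execute_disable(uint64_t pte, uint32_t areaProtection)
{
    if ((areaProtection & (B_EXECUTE_AREA | B_KERNEL_EXECUTE_AREA)) == 0)
        pte |= X86_PAE_PTE_NOT_EXECUTABLE;
    return pte;
}
```

A non-executable mapping gets the XD bit; any area with one of the execute flags keeps it clear.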
#
950b24e3 |
|
04-Jul-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Began work on a VMTranslationMap implementation for x86_64.
* Added empty source files for all the 64-bit paging method code, and a stub implementation of X86PagingMethod64Bit.
* arch_vm_translation_map.cpp has been modified to use X86PagingMethod64Bit on x86_64.
|
#
428b9e75 |
|
04-Jul-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Check whether gX86PagingMethod is NULL in arch_vm_translation_map_is_kernel_page_accessible. This means that the kernel debugger won't cause a recursive panic if a panic occurs before vm_init().
|
#
17a33898 |
|
21-Jun-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Remove phys_addr_range, just use addr_range for both virtual and physical address ranges (as requested by Ingo).
|
#
192af9e0 |
|
20-Jun-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Changed addr_range to use uint64. I've tested this change on x86 and it causes no issues. I've checked over the code for all other platforms and made the necessary changes; to the best of my knowledge they should also still work, but I haven't actually built and tested them. Once I've completed the kernel_args changes, the other platforms will need testing.
|
#
45bd7bb3 |
|
25-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Removed unnecessary inclusions of <boot/kernel_args.h> in private kernel headers and respectively added includes in source files. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37259 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
61d2b06c |
|
12-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Use PAE only when there's memory beyond the 4G limit. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37115 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
a410098f |
|
08-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Only use PAE, if supported by the CPU. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37068 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
5b4d62a2 |
|
08-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Skeleton classes for PAE support. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37066 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
2434bdc4 |
|
08-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Introduced global variable gX86PagingMethod, so the paging method can be accessed from anywhere. Added static X86PagingMethod32Bit::Method() returning it as the subtype pointer -- to be used in the code related to that method only, of course.
* Made a bunch of static variables non-static members of X86PagingMethod32Bit and added accessors for them. This makes them accessible in other source files (allowing for more refactoring) and saves memory when we actually have another paging method implementation.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37062 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
1b3e83ad |
|
08-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Moved paging related files to new subdirectories paging and paging/32bit. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37060 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
5aa0503c |
|
07-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Removed i386_translation_map_get_pgdir() and adjusted the one place where it was used.
* Renamed X86VMTranslationMap to X86VMTranslationMap32Bit and pulled the paging method agnostic part into the new base class X86VMTranslationMap.
* Moved X86PagingStructures into its own header/source pair.
* Moved pgdir_virt from X86PagingStructures to X86PagingStructures32Bit, where it is actually used.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37055 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
c6caf520 |
|
07-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Added a level of indirection for the arch_vm_translation_map functions. Introduced the interface X86PagingMethod which is used by those. ATM there's one implementing class, X86PagingMethod32Bit.
* Made X86PagingStructures a base class, with one derived class, X86PagingStructures32Bit.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37050 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
1ba89e67 |
|
05-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Removed no-op VMTranslationMap::InitPostSem() and VMAddressSpace::InitPostSem(). git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37025 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
84217140 |
|
05-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
x86:
* Renamed vm_translation_map_arch_info to X86PagingStructures, and all members and local variables of that type accordingly.
* arch_thread_context_switch(): Added TODO: The still active paging structures can indeed be deleted before we stop using them.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37022 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
521aff92 |
|
04-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Moved the page mapper and the page invalidation cache from vm_translation_map_arch_info to X86VMTranslationMap where they actually belong. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37014 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
78dde7ab |
|
04-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Consequently use uint32 for the physical page directory address. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37011 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
23aa437d |
|
02-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Fixed nasty cast that breaks with sizeof(phys_addr_t) == 64. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37001 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
147133b7 |
|
25-May-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* First run through the kernel's private parts to use phys_{addr,size}_t where appropriate.
* Typedef'ed page_num_t to phys_addr_t and used it in more places in vm_page.{h,cpp}.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36937 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
001a0e09 |
|
02-May-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
memory_type_to_pte_flags(): Also set the write-through flag for uncacheable memory. This avoids implementation-defined behavior on the Pentium Pro/II when intersecting with a write-combining or write-protected MTRR range. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36589 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
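The memory-type-to-PTE-flags mapping from the commit above can be sketched as follows. This is a simplified illustration under assumed enum names; only the PWT/PCD bit positions (bits 3 and 4 of an x86 PTE) are architectural facts.

```cpp
#include <cassert>
#include <cstdint>

// x86 PTE cacheability bits (architectural positions).
constexpr uint32_t X86_PTE_WRITE_THROUGH    = 1 << 3;  // PWT
constexpr uint32_t X86_PTE_CACHING_DISABLED = 1 << 4;  // PCD

// Hypothetical memory type names for this sketch.
enum memory_type { MT_UNCACHEABLE, MT_WRITE_THROUGH, MT_WRITE_BACK };

uint32_t memory_type_to_pte_flags(memory_type type)
{
    switch (type) {
        case MT_UNCACHEABLE:
            // Uncacheable: set PCD, and per the commit above also PWT, to
            // avoid implementation-defined behavior on the Pentium Pro/II
            // when the mapping intersects a write-combining or
            // write-protected MTRR range.
            return X86_PTE_CACHING_DISABLED | X86_PTE_WRITE_THROUGH;
        case MT_WRITE_THROUGH:
            return X86_PTE_WRITE_THROUGH;
        case MT_WRITE_BACK:
        default:
            return 0;  // write-back is the default caching mode
    }
}
```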
#
f23be5bb |
|
02-May-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Support memory types in page mappings to the degree that is possible without PAT support (i.e. uncacheable, write-through, and write-back). Has pretty much no effect ATM, as the MTRRs restrict the types to what is actually requested. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36583 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
c1be1e07 |
|
01-May-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* VMTranslationMap::Map()/Protect(): Added "memoryType" parameter. Not implemented for any architecture yet.
* vm_set_area_memory_type(): Call VMTranslationMap::ProtectArea() to change the memory type for the already mapped pages.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36574 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
50e4dd93 |
|
12-Apr-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
axeld + bonefish: X86VMTranslationMap::Protect():
* Removed rounding up the end address to page alignment. It's not necessary and could cause an overflow.
* Fixed a possible infinite loop triggered by a rare race condition: When two threads of a team were accessing the same unmapped page at the same time, each would trigger a page fault. One thread would map the page again, the second would wait until the first one was done and update the page protection (unnecessarily but harmlessly). If the first thread accessed the page again at an unfortunate time, it would implicitly change the accessed/dirty flags of the page's PTE, which was a situation the loop in Protect() didn't consider and thus ran forever. Seen the problem twice today in the form of an app server freeze.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36197 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
dcdf2ab9 |
|
02-Mar-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Extended assert output. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35724 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
84328c26 |
|
01-Mar-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Extended assert output. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35697 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
af0572ea |
|
28-Feb-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Fixed debug output. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35685 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
2340fc36 |
|
28-Feb-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Fixed the build with tracing turned on and improved/added debug output. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35658 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
4891bed4 |
|
27-Feb-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* large_memory_physical_page_ops_init(): Don't assign the return value before it is fully initialized.
* arch_vm_translation_map_is_kernel_page_accessible(): Check whether sPhysicalPageMapper has already been initialized. If a panic() occurred during or before the initialization of the physical page mapper, we no longer access a partially initialized object or a NULL pointer. This should fix the triple fault part of #1925.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35644 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
ba3d62b6 |
|
15-Feb-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
X86VMTranslationMap::UnmapArea(): Don't change the page state before it has been unmapped. Otherwise modified pages could end up in the "cached" queue without having been written back. That would be a good explanation for #5374 (partially wrong file contents) -- as soon as such a page was freed, the invalid on-disk contents would become visible. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35477 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
4e4cfe8f |
|
03-Feb-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Missing page access debug markers. Fixes #5359. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35398 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
40bb9481 |
|
03-Feb-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Removed useless return parameter from vm_remove_all_page_mappings().
* Added vm_clear_page_mapping_accessed_flags() and vm_remove_all_page_mappings_if_unaccessed(), which combine the functionality of vm_test_map_activation(), vm_clear_map_flags(), and vm_remove_all_page_mappings(), thus saving lots of calls to translation map methods. The backend is the new method VMTranslationMap::ClearAccessedAndModified().
* Started to make use of the cached page queue and changed the meaning of the other non-free queues slightly:
  - Active queue: Contains mapped pages that have been used recently.
  - Inactive queue: Contains mapped pages that have not been used recently. Also contains unmapped temporary pages.
  - Modified queue: Contains unmapped modified pages.
  - Cached queue: Contains unmapped unmodified pages (LRU sorted).
  Unless we're actually low on memory and actively do paging, the modified and cached queues only contain non-temporary pages. Cached pages are considered quasi free. They still belong to a cache, but since they are unmodified and unmapped, they can be freed immediately. And this is what vm_page_[try_]reserve_pages() do now when there are no more actually free pages at hand. Essentially this means that pages storing cached file data, unless mmap()ped, no longer are considered used and don't contribute to page pressure. Paging will not happen as long as there are enough free + cached pages available.
* Reimplemented the page daemon. It no longer scans all pages, but instead works the page queues. As long as the free pages situation is harmless, it only iterates through the active queue and deactivates pages that have not been used recently. When paging occurs it additionally scans the inactive queue and frees pages that have not been used recently.
* Changed the page reservation/allocation interface: vm_page_[try_]reserve_pages(), vm_page_unreserve_pages(), and vm_page_allocate_page() now take a vm_page_reservation structure pointer. The reservation functions initialize the structure -- currently consisting only of a count member for the number of still reserved pages. vm_page_allocate_page() decrements the count and vm_page_unreserve_pages() unreserves the remaining pages (if any). Advantages are that reservation/unreservation mismatches cannot occur anymore, that vm_page_allocate_page() can verify that the caller has indeed a reserved page left, and that there's no unnecessary pressure on the free page pool anymore. The only disadvantage is that the vm_page_reservation object needs to be passed around a bit.
* Reworked the page reservation implementation:
  - Got rid of sSystemReservedPages and sPageDeficit. Instead sUnreservedFreePages now actually contains the number of free pages that have not yet been reserved (it cannot become negative anymore) and the new sUnsatisfiedPageReservations contains the number of pages that are still needed for reservation.
  - Threads waiting for reservations do now add themselves to a waiter queue, which is ordered by descending priority (VM priority and thread priority). High priority waiters are served first when pages become available. Fixes #5328.
* cache_prefetch_vnode(): Would reserve one less page than allocated later, if the size wasn't page aligned.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35393 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
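The vm_page_reservation bookkeeping described in the commit above can be sketched with a toy single-threaded model. The function and variable names follow the commit; the implementation (a plain counter for the free pool, no waiting or locking) is a simplification for illustration only.

```cpp
#include <cassert>
#include <cstdint>

// The reservation structure: currently just a count of still-reserved pages.
struct vm_page_reservation {
    uint32_t count;
};

static uint32_t sUnreservedFreePages = 1024;  // toy global free-page pool

void vm_page_reserve_pages(vm_page_reservation* reservation, uint32_t count)
{
    // The real code would wait until enough pages become available;
    // here we just assert for the sketch.
    assert(sUnreservedFreePages >= count);
    sUnreservedFreePages -= count;
    reservation->count = count;
}

int vm_page_allocate_page(vm_page_reservation* reservation)
{
    // The caller must actually have a reserved page left -- this is one of
    // the mismatch checks the new interface makes possible.
    assert(reservation->count > 0);
    reservation->count--;
    return 1;  // stands in for a real vm_page*
}

void vm_page_unreserve_pages(vm_page_reservation* reservation)
{
    // Return only the remaining (unallocated) reserved pages to the pool,
    // so reservation/unreservation mismatches cannot occur.
    sUnreservedFreePages += reservation->count;
    reservation->count = 0;
}
```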
#
e65c4002 |
|
29-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Replaced the vm_page_allocate_page*() "pageState" parameter by a more general "flags" parameter. It encodes the target state of the page -- so that the page isn't unnecessarily put in the wrong page queue first -- a flag whether the page should be cleared, and one to indicate whether the page should be marked busy.
* Added page state PAGE_STATE_CACHED. Not used yet.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35333 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
72382fa6 |
|
29-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Removed the page state PAGE_STATE_BUSY and instead introduced a vm_page::busy flag. The obvious advantage is that one can still see what state a page is in and even move it between states while it is marked busy.
* Removed the vm_page::is_dummy flag. Instead we mark marker pages busy, which in all cases has the same effect. Introduced vm_page_is_dummy(), which can still check whether a given page is a dummy page.
* vm_page_unreserve_pages(): Before adding to the system reserve, make sure sUnreservedFreePages is non-negative. Otherwise we'd make nonexisting pages available for allocation. steal_pages() still has the same problem and it can't be solved that easily.
* map_page(): No longer changes the page state/marks the page unbusy. That's the caller's responsibility.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35331 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
8d1316fd |
|
22-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Replaced CACHE_DONT_SLEEP by two new flags CACHE_DONT_WAIT_FOR_MEMORY and CACHE_DONT_LOCK_KERNEL_SPACE. If the former is given, the slab memory manager does not wait when reserving memory or pages. The latter prevents area operations. The new flags add a bit of flexibility. E.g. when allocating page mapping objects for userland areas CACHE_DONT_WAIT_FOR_MEMORY is sufficient, i.e. the allocation will succeed as long as pages are available. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35246 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
86c794e5 |
|
21-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
slab allocator:
* Implemented a more elaborate raw memory allocation backend (MemoryManager). We allocate 8 MB areas whose pages we allocate and map when needed. An area is divided into equally-sized chunks which form the basic units of allocation. We have areas with three possible chunk sizes (small, medium, large), which is basically what the ObjectCache implementations were using anyway.
* Added a "uint32 flags" parameter to several of the slab allocator's object cache and object depot functions. E.g. object_depot_store() potentially wants to allocate memory for a magazine. But also in pure freeing functions it might eventually become useful to have those flags, since they could end up deleting an area, which might not be allowable in all situations. We should introduce specific flags to indicate that.
* Reworked the block allocator. Since the MemoryManager allocates block-aligned areas, maintains a hash table for lookup, and maps chunks to object caches, we can quickly find out which object cache a to-be-freed allocation belongs to and thus don't need the boundary tags anymore.
* Reworked the slab bootstrap process. We allocate from the initial area only when really necessary, i.e. when the object cache for the respective allocation size has not been created yet. A single page is thus sufficient.
other:
* vm_allocate_early(): Added boolean "blockAlign" parameter. If true, the semantics is the same as for B_ANY_KERNEL_BLOCK_ADDRESS.
* Use an object cache for page mappings. This significantly reduces the contention on the heap bin locks.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35232 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
6379e53e |
|
19-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
vm_page no longer points directly to its containing cache, but rather to a VMCacheRef object which points to the cache. This allows to optimize VMCache::MoveAllPages(), since it no longer needs to iterate over all pages to adjust their cache pointer. It can simply swap the cache refs of the two caches instead. Reduces the total -j8 Haiku image build time only marginally. The kernel time drops almost 10%, though. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35155 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
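The indirection trick from the commit above can be sketched in a few lines: pages hold a pointer to a VMCacheRef, the ref holds a pointer to the cache, so moving all pages between two caches becomes an O(1) swap of the two refs. The structure layout below is a minimal illustration, not Haiku's actual class definitions.

```cpp
#include <cassert>
#include <utility>

struct VMCache;

// The indirection object: pages point here instead of at the cache directly.
struct VMCacheRef {
    VMCache* cache;
};

struct VMCache {
    VMCacheRef* ref;
};

// Sketch of MoveAllPages(): swap the refs instead of touching every page.
void move_all_pages(VMCache& from, VMCache& to)
{
    std::swap(from.ref, to.ref);
    // Re-point each ref at its new owner so ref->cache stays consistent.
    from.ref->cache = &from;
    to.ref->cache = &to;
}
```

Any page still holding the old ref of `from` now resolves, through that same ref, to `to`, without the pages themselves ever being visited.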
#
f082f7f0 |
|
15-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Added vm_page::accessed flag. Works analogously to vm_page::modified.
* Reorganized the code for [un]mapping pages:
  - Added new VMTranslationMap::Unmap{Area,Page[s]}() which essentially do what vm_unmap_page[s]() did before, just in the architecture specific code, which allows for specific optimizations. UnmapArea() is for the special case that the complete area is unmapped. Particularly in case the address space is deleted, some work can be saved. Several TODOs could be slain.
  - Since they are only used within vm.cpp, vm_map_page() and vm_unmap_page[s]() are now static and have lost their prefix (and the "preserveModified" parameter).
* Added VMTranslationMap::Protect{Page,Area}(). They are just inline wrappers for Protect().
* X86VMTranslationMap::Protect(): Make sure not to accidentally clear the accessed/dirty flags.
* X86VMTranslationMap::Unmap()/Protect(): Make page table skipping actually work. It was only skipping to the next page.
* Adjusted the PPC code to at least compile.
No measurable effect for the -j8 Haiku image build time, though the kernel time drops minimally.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35089 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
c6aa0135 |
|
14-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Changed VMTranslationMap::Lock()/Unlock() return types to the usual. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35075 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
bcc2c157 |
|
13-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Refactored vm_translation_map:
* Pulled the physical page mapping functions out of vm_translation_map into a new interface VMPhysicalPageMapper.
* Renamed vm_translation_map to VMTranslationMap and made it a proper C++ class. The functions in the operations vector have become methods.
* Added class GenericVMPhysicalPageMapper implementing VMPhysicalPageMapper as far as possible (without actually writing new code).
* Adjusted the x86 and the PPC specifics accordingly (untested for the latter). For the other architectures the build is, I'm afraid, seriously broken.
The next steps will modify and extend the VMTranslationMap interface, so that it will be possible to fix the bugs in vm_unmap_page[s]() and employ architecture specific optimizations.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35066 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
30f42360 |
|
13-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
As per the IA32 specification we can save TLB invalidations in at least two situations:
* When mapping the page, the page table entry should not have been marked "present" before, i.e. it would not have been cached anyway.
* When the page table entry's accessed flag wasn't set, the entry hadn't been cached either.
Speeds up the -j8 Haiku image build only minimally, but the total kernel time drops about 9%.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35062 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
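The two rules above boil down to a single predicate on the old page table entry: invalidation is only needed when the entry was both present and accessed, since an entry that was never present, or whose accessed flag the CPU never set, cannot be cached in the TLB. A minimal sketch (bit positions are the architectural x86 PTE bits; the function name is invented for illustration):

```cpp
#include <cassert>
#include <cstdint>

constexpr uint32_t X86_PTE_PRESENT  = 1 << 0;
constexpr uint32_t X86_PTE_ACCESSED = 1 << 5;

// Per the IA32 rules above: the TLB can only hold an entry that was present,
// and the CPU sets the accessed flag when it caches an entry. So an invlpg
// is only required when both bits were set in the entry being replaced.
bool needs_tlb_invalidation(uint32_t oldEntry)
{
    return (oldEntry & (X86_PTE_PRESENT | X86_PTE_ACCESSED))
        == (X86_PTE_PRESENT | X86_PTE_ACCESSED);
}
```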
#
9435ae93 |
|
13-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
x86 page mapping:
* Removed the page_{table,directory}_entry structures. The bit fields are nice in principle, but modifying individual flags this way is inherently non-atomic and we need atomicity in some situations.
* Use atomic operations in protect_tmap(), clear_flags_tmap(), and others.
* Aligned the query_tmap_interrupt() semantics with that of query_tmap().
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35058 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
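Why the bit fields had to go can be shown in a few lines: writing one field of a bit-field struct compiles to a read-modify-write, during which the CPU may concurrently set the accessed/dirty bits, losing that update. An atomic fetch-and avoids the race. This sketch uses std::atomic in place of the kernel's atomic primitives; the constant and function names are assumptions for illustration.

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

constexpr uint32_t X86_PTE_PRESENT  = 1 << 0;
constexpr uint32_t X86_PTE_ACCESSED = 1 << 5;
constexpr uint32_t X86_PTE_DIRTY    = 1 << 6;

// Clear flags from a page table entry atomically. A bit-field assignment
// would be a non-atomic read-modify-write; fetch_and makes the update a
// single atomic operation and returns the previous entry so the caller can
// inspect the old accessed/dirty state.
uint32_t clear_page_table_entry_flags(std::atomic<uint32_t>& entry,
    uint32_t flagsToClear)
{
    return entry.fetch_and(~flagsToClear);
}
```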
#
3cd20943 |
|
06-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Added new debug feature (DEBUG_PAGE_ACCESS) to detect invalid concurrent access to a vm_page. It is basically an atomically accessed thread ID field in the vm_page structure, which is explicitly set by macros marking the critical sections. As a first positive effect I had to review quite a bit of code and found several issues.
* Added several TODOs and comments. Some harmless ones, but also a few troublesome ones in vm.cpp regarding page unmapping.
* file_cache: PrecacheIO::Prepare()/read_into_cache(): Removed superfluous vm_page_allocate_page() return value checks. It cannot fail anymore.
* Removed the heavily contended "pages" lock. We use different policies now:
  - sModifiedTemporaryPages is accessed atomically.
  - sPageDeficitLock and sFreePageCondition are protected by a new mutex.
  - The page queues have individual locks (mutexes).
  - Renamed set_page_state_nolock() to set_page_state(). Unless the caller says otherwise, it does now lock the affected page queues itself. Also changed the return value to void -- we panic() anyway.
* set_page_state(): Add free/clear pages to the beginning of their respective queues, as this is more cache-friendly.
* Pages with the states PAGE_STATE_WIRED or PAGE_STATE_UNUSED are no longer in any queue. They were in the "active" queue, but there's no good reason to have them there. In case we decide to let the page daemon work the queues (like FreeBSD) they would just be in the way.
* Pulled the common part of vm_page_allocate_page_run[_no_base]() into a helper function. Also fixed a bug I introduced previously: The functions must not vm_page_unreserve_pages() on success, since they remove the pages from the free/clear queue without decrementing sUnreservedFreePages.
* vm_page_set_state(): Changed return type to void. The function cannot really fail and no-one was checking it anyway.
* vm_page_free(), vm_page_set_state(): Added assertion: The page must not be free/clear before. This is implied by the policy that no-one is allowed to access free/clear pages without holding the respective queue's lock, which is not the case at this point. This found the bug fixed in r34912.
* vm_page_requeue(): Added general assertions. panic() when requeuing of free/clear pages is requested. Same reason as above.
* vm_clone_area(), B_FULL_LOCK case: Don't map busy pages. The implementation is still not correct, though.
My usual -j8 Haiku build test runs another 10% faster now. The total kernel time drops about 18%. As hoped, the new locks have only a fraction of the old "pages" lock contention. Other locks lead the "most wanted list" now.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34933 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
1021fd28 |
|
01-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* agp_gart(): Use vm_page_[un]reserve_pages().
* Removed unused vm_page_allocate_pages().
* Removed the now unused (always true) "reserved" parameter from vm_page_allocate_page().
* Removed the unused (always false) "stealActive" parameter from steal_page().
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34836 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
e50cf876 |
|
02-Dec-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Moved the VM headers into subdirectory vm/.
* Renamed vm_cache.h/vm_address_space.h to VMCache.h/VMAddressSpace.h.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34449 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
90d870c1 |
|
02-Dec-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Moved the VMAddressSpace definition to vm_address_space.h.
* "Classified" VMAddressSpace, i.e. turned the vm_address_space_*() functions into methods, made all attributes (but "areas") private, and added accessors.
* Also turned the vm.cpp functions vm_area_lookup() and remove_area_from_address_space() into VMAddressSpace methods. The rest of the area management functionality will follow soon.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34447 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
96c4511a |
|
17-Nov-2009 |
Axel Dörfler <axeld@pinc-software.de> |
* Shuffled functions around, no functional change. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34083 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
1a053eed |
|
08-Sep-2009 |
Michael Lotz <mmlr@mlotz.ch> |
Revert r32994 and add a comment to explain the intention. Thanks Ingo for the clarification. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@33001 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
fbcf5f3f |
|
07-Sep-2009 |
Michael Lotz <mmlr@mlotz.ch> |
Don't know what this was supposed to do, but with the VADDR_TO_PDENT() it would end up as 0 again in any case. It certainly looks correct without it, so removing it so it doesn't confuse the next person reading over it. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@32994 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
60a5ced3 |
|
09-Aug-2009 |
Michael Lotz <mmlr@mlotz.ch> |
Adding a disabled debug helper. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@32217 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
ee280b59 |
|
04-Aug-2009 |
Michael Lotz <mmlr@mlotz.ch> |
Prevent the user TLB invalidation function from being preempted by turning off interrupts when invoking it. The user TLB invalidation function essentially only reads and writes back control register 3 (cr3), which holds the physical address of the current page directory. Still, a preemption between the read and the write can cause problems when the last thread of a team dies and therefore the team is deleted. The context switch on preemption would decrement the refcount of the object that holds the page directory. Then the team address space is deleted, causing the context switch returning to that thread to not re-acquire a reference to the object. At that point the page directory as set in cr3 is the one of the previously run thread (which is fine, as all share the kernel space mappings we need). Now when the preempted thread continues though, it would overwrite cr3 with the physical page directory address from before the context switch, still stored in eax, therefore setting the page directory to the one of the dying thread that now doesn't have the corresponding reference. As the thread progresses further, it would release the last reference, causing the deletion of the object and the freeing of the, now active again, page directory. The memory getting overwritten (by deadbeef) now completely corrupts the page directory, causing basically any memory access to fault, in the end resulting in a triple fault. This should fix bug #3399. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@32118 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
ea2abd11 |
|
02-Aug-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Renamed the ROUNDOWN macro to ROUNDDOWN. Also changed the implementation of ROUNDUP to use '*' and '/' -- the compiler will optimize that for powers of two anyway and this implementation works for other numbers as well.
* The thread::fault_handler use in C[++] code was broken with gcc 4, at least when other functions were invoked. Trying to trick the compiler wasn't a particularly good idea anyway, since the next compiler version could break the trick again. So the general policy is to use the fault handlers only in assembly code, where we have full control. Changed that for x86 (save for the vm86 mode, which has a similar mechanism), but not for the other architectures.
* Introduced fault_handler, fault_handler_stack_pointer, and fault_jump_buffer fields in the cpu_ent structure, which must be used instead of thread::fault_handler in the kernel debugger. Consequently user_memcpy() must not be used in the kernel debugger either. Introduced a debug_memcpy() instead.
* Introduced a debug_call_with_fault_handler() function which calls a function in a setjmp() and fault handler context. The architecture specific backend arch_debug_call_with_fault_handler() has only been implemented for x86 yet.
* Introduced debug_is_kernel_memory_accessible() for use in the kernel debugger. It determines whether a range of memory can be accessed in the way specified. The architecture specific backend arch_vm_translation_map_is_kernel_page_accessible() has only been implemented for x86 yet.
* Added arch_debug_unset_current_thread() (only implemented for x86) to unset the current thread pointer in the kernel debugger. When entering the kernel debugger we do some basic sanity checks of the currently set thread structure and unset it if they fail. This allows certain commands (most importantly the stack trace command) to avoid accessing the thread structure.
* x86: When handling a double fault, we now install a special handler for page faults. This allows us to gracefully catch faulting commands, even if e.g. the thread structure is toast. We are now in much better shape to deal with double faults. Hopefully avoiding the triple faults that some people have been experiencing on their hardware, and ideally even allowing use of the kernel debugger normally.
git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@32073 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
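The ROUNDUP/ROUNDDOWN implementation via '/' and '*' mentioned in the first bullet can be sketched directly. For power-of-two alignments the compiler reduces the division and multiplication to the usual mask operations, but unlike a mask-based version this form also works for arbitrary alignments (the exact macro bodies here are an illustration of the described approach, not necessarily Haiku's literal definitions):

```cpp
#include <cassert>

// Round down to a multiple of b: integer division truncates, then scale back.
#define ROUNDDOWN(a, b) (((a) / (b)) * (b))
// Round up: bump a to just below the next multiple, then round down.
#define ROUNDUP(a, b)   ROUNDDOWN((a) + (b) - 1, (b))
```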
#
6a6974b6 |
|
22-Jun-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
gcc 4 warnings. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@31192 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
9a42ad7a |
|
22-Oct-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
When switching to a kernel thread we no longer set the page directory. This is not necessary, since userland teams' page directories also contain the kernel mappings, and avoids unnecessary TLB flushes. To make that possible the vm_translation_map_arch_info objects are reference counted now. This optimization reduces the kernel time of the Haiku build on my machine with SMP disabled a few percent, but interestingly the total time decreases only marginally. Haven't tested with SMP yet, but for full impact CPU affinity would be needed. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@28287 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
8f06357d |
|
21-Oct-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Style changes: * Renamed static variables. * Enforced 80 columns limit. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@28273 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
47c40a10 |
|
19-Oct-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Prefixed memset_physical() and memcpy_to_physical() with "vm_", added vm_memcpy_from_physical() and vm_memcpy_physical_page(), and added respective functions to the vm_translation_map operations. The architecture-specific implementation can now decide how to implement them most efficiently. Added generic implementations that can be used, though.
* Changed vm_{get,put}_physical_page(). The former no longer accepts flags (the only flag, PHYSICAL_PAGE_DONT_WAIT, wasn't needed anymore). Instead it returns an implementation-specific handle that has to be passed to the latter. Added vm_{get,put}_physical_page_current_cpu() and *_debug() variants, which work only for the current CPU, respectively when in the kernel debugger. Also adjusted the vm_translation_map operations accordingly.
* Made consistent use of the physical memory operations in the source tree.
* Also adjusted the m68k and ppc implementations with respect to the vm_translation_map operation changes, but they are probably broken, nevertheless.
* For x86 the generic physical page mapper isn't used anymore. It is suboptimal in any case: for systems with small memory it is too much overhead, since one can just map the complete physical memory (that's not done yet, though); for systems with large memory it counteracts the VM strategy to reuse the least recently used pages. Since those pages will most likely not be mapped by the page mapper anymore, it will keep remapping chunks. This was also the reason why building Haiku in Haiku was significantly faster with only 256 MB RAM (since that much could be kept mapped all the time). Now we're using a different strategy: we have small pools of virtual page slots per CPU that are used for the physical page operations (memset_physical(), memcpy_*_physical()) with the thread pinned to the CPU. Furthermore we have four slots per translation map, which are used to map page tables. These changes speed up the Haiku image build in Haiku significantly -- on my Core 2 Duo 2.2 GHz, 2 GB machine by about 40%, to 20 min 40 s (KDEBUG disabled, block cache debug disabled). Still more than a factor 3 slower than FreeBSD and Linux, though. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@28244 a95241bf-73f2-0310-859d-f6bbb57e9c96
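The handle-based get/put API this commit describes can be mocked compactly. Everything below is an illustrative sketch: "physical memory" is a byte array, the two "per-CPU slots" are a bool array, and the handle is simply the slot index; none of this is Haiku's actual code.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Mock of the reworked API: vm_get_physical_page() hands out a mapping slot
// plus an opaque handle, and vm_put_physical_page() takes the handle (not
// flags) to release the slot. All names/sizes here are illustrative.
static uint8_t sPhysicalMemory[4096 * 4];	// 4 fake physical pages
static bool sSlotInUse[2];					// 2 per-CPU mapping slots
static uint8_t* sSlotMapping[2];

static int vm_get_physical_page(uint64_t physicalAddress, void** _virtualAddress,
	intptr_t* _handle)
{
	for (int i = 0; i < 2; i++) {
		if (!sSlotInUse[i]) {
			sSlotInUse[i] = true;
			sSlotMapping[i] = sPhysicalMemory + physicalAddress;
			*_virtualAddress = sSlotMapping[i];
			*_handle = i;	// opaque handle: here just the slot index
			return 0;
		}
	}
	return -1;				// no free slot
}

static void vm_put_physical_page(intptr_t handle)
{
	sSlotInUse[handle] = false;
}

// A vm_memcpy_from_physical() built on top of the get/put pair.
static int vm_memcpy_from_physical(void* to, uint64_t from, size_t length)
{
	void* virtualAddress;
	intptr_t handle;
	if (vm_get_physical_page(from, &virtualAddress, &handle) != 0)
		return -1;
	memcpy(to, virtualAddress, length);
	vm_put_physical_page(handle);
	return 0;
}

static uint8_t demo()
{
	sPhysicalMemory[4096] = 42;	// byte at "physical address" 4096
	uint8_t value = 0;
	vm_memcpy_from_physical(&value, 4096, 1);
	return value;
}
```

The point of the handle is that the implementation (not the caller) decides what a mapping is, which is what lets the per-CPU slot pools replace the generic page mapper.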
|
#
1b6eff28 |
|
11-Oct-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Replaced the vm_get_physical_page() "flags" PHYSICAL_PAGE_{NO,CAN}_WAIT into an actual flag PHYSICAL_PAGE_DONT_WAIT. * Pass the flags through to the chunk mapper callback. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@27979 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
5e50de7e |
|
11-Oct-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Don't disable interrupts in flush_tmap() and map_iospace_chunk(), just pin the thread to the current CPU. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@27975 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
74785e79 |
|
07-Oct-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Added "from" address space parameter to vm_swap_address_space()/ arch_vm_aspace_swap(). * The x86 implementation now maintains a bit mask per vm_translation_map_arch_info indicating on which CPUs the address space is active. This allows flush_tmap() to avoid ICIs for user address spaces when the team isn't currently running on any other CPU. In this context an ICI is relatively expensive, particularly since we map most pages via vm_map_page() and therefore invoke flush_tmap() pretty much for every single page. This optimization speeds up a "hello world" compilation by about 20% on my machine (KDEBUG turned off, freshly booted), but interestingly it has virtually no effect on the "-j2" haiku build time. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@27912 a95241bf-73f2-0310-859d-f6bbb57e9c96
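The per-address-space CPU bit mask check can be sketched as a small predicate. The structure and function names below are illustrative stand-ins for the vm_translation_map_arch_info member the commit describes:

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the ICI-avoidance test: an inter-CPU interrupt is only needed
// when the address space is active on some CPU other than the current one.
struct PagingStructures {
	uint32_t activeOnCpus;	// bit i set: address space active on CPU i
};

static bool needs_interprocessor_interrupt(const PagingStructures& structures,
	int currentCpu)
{
	// Mask out our own CPU; any remaining bit means another CPU may still
	// hold stale TLB entries for this address space.
	return (structures.activeOnCpus & ~(uint32_t(1) << currentCpu)) != 0;
}
```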
|
#
dbe295f8 |
|
07-Oct-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Moved vm_translation_map_arch_info definition to the header. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@27902 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
567f7889 |
|
01-Oct-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Fully inline {disable,restore}_interrupts() and friends when including <int.h>. Performance-wise not really significant, but gives nicer profiling results. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@27827 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
2391ca55 |
|
13-Sep-2008 |
Michael Lotz <mmlr@mlotz.ch> |
CID 56: Fix the wrong NULL check. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@27478 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
802d18a9 |
|
02-Aug-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Small semantic change to map_max_pages_need(): if given a 0 start address, it is supposed to consider the worst-case address range of the given size. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@26740 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
1c8de858 |
|
01-Jun-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Added optional spinlock contention measurement feature. Enabled when B_DEBUG_SPINLOCK_CONTENTION is defined to 1. It typedefs spinlock to a structure (thus breaking BeOS binary compatibility), containing a counter which is incremented whenever a thread has to wait for the spinlock. * Added macros for spinlock initialization and access and changed code using spinlocks accordingly. This breaks compilation for BeOS -- the macros should be defined in the respective compatibility wrappers. * Added generic syscall to get the spinlock counters for the thread and the team spinlocks. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@25752 a95241bf-73f2-0310-859d-f6bbb57e9c96
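The counting-spinlock idea can be sketched with C++ atomics. This is a simplified illustration: the member names echo the commit, but the layout, the try-variant, and the "count one wait episode" policy are assumptions, not Haiku's implementation.

```cpp
#include <cassert>
#include <atomic>
#include <cstdint>

// Sketch of B_DEBUG_SPINLOCK_CONTENTION: the spinlock becomes a structure
// carrying a counter that is bumped whenever a thread finds the lock
// already held and has to wait.
struct spinlock {
	std::atomic<int32_t>  lock{0};		// 0 = free, 1 = held
	std::atomic<uint64_t> count_low{0};	// how often someone had to wait
};

static void acquire_spinlock(spinlock* lock)
{
	if (lock->lock.exchange(1) != 0) {
		lock->count_low.fetch_add(1);	// contended: record one wait episode
		while (lock->lock.exchange(1) != 0)
			;							// spin until the holder releases
	}
}

static bool try_acquire_spinlock(spinlock* lock)
{
	if (lock->lock.exchange(1) != 0) {
		lock->count_low.fetch_add(1);	// found it held: count the contention
		return false;
	}
	return true;
}

static void release_spinlock(spinlock* lock)
{
	lock->lock.store(0);
}

static uint64_t demo()
{
	spinlock lock;
	acquire_spinlock(&lock);					// uncontended: nothing recorded
	bool acquired = try_acquire_spinlock(&lock);	// already held: counted
	(void)acquired;
	release_spinlock(&lock);
	return lock.count_low.load();
}
```

Because the counter lives inside the lock itself, a generic syscall can later read per-lock contention without any extra bookkeeping at the call sites.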
|
#
b0f5179a |
|
28-May-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Changed recursive_lock to use a mutex instead of a semaphore. * Adjusted code using recursive locks respectively. The initialization cannot fail anymore, and it is possible to use recursive locks in the early boot process (even uninitialized, if in BSS), which simplifies things a little. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@25687 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
75fe8391 |
|
10-Feb-2008 |
Michael Lotz <mmlr@mlotz.ch> |
Fix the build. Apparently this file wasn't recompiled on my end before. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23942 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
f271831f |
|
10-Oct-2007 |
Axel Dörfler <axeld@pinc-software.de> |
Corrected comment. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@22500 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
0e183340 |
|
06-Oct-2007 |
Axel Dörfler <axeld@pinc-software.de> |
* Mapping a page might actually need memory - since we usually have locks that interfere with the page thief, we always need to have reserved a page for this upfront. I introduced a function to the vm_translation_map layer that estimates how many pages a mapping might need at maximum. All functions that map a page now call this and reserve the needed pages upfront. It might not be a nice solution, but it works. * The page thief could run into a panic when trying to call vm_cache_release_ref() on a non-existing (NULL) cache. * Also, it will now ignore wired active pages. * There is still a race condition between the page writer and the vnode destruction - writing a page back needs a valid vnode, but that might just have been deleted. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@22455 a95241bf-73f2-0310-859d-f6bbb57e9c96
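A worst-case estimate of this kind can be sketched for 32-bit x86 without PAE. This is a hypothetical simplification that only counts page-table pages (the real vm_translation_map function may account for more); the 0-start convention matches the later semantics where 0 means "placement unknown, assume the worst alignment".

```cpp
#include <cassert>
#include <cstdint>

// Illustrative map_max_pages_need()-style estimate: on 32-bit x86 each page
// table covers 4 MB of virtual address space, so mapping [start, start+size)
// may require allocating one page per page table touched.
static uint32_t map_max_pages_need(uint64_t start, uint64_t size)
{
	const uint64_t kPageTableRange = 4ull * 1024 * 1024;

	if (start == 0) {
		// Placement unknown: worst case is a range starting one page before
		// a page-table boundary, maximizing the number of tables touched.
		start = kPageTableRange - 4096;
	}

	uint64_t end = start + size - 1;
	return uint32_t(end / kPageTableRange - start / kPageTableRange + 1);
}
```

Reserving this many pages before taking any mapping locks is what keeps the page thief and the mapping path from deadlocking on each other.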
|
#
393fceb5 |
|
27-Sep-2007 |
Axel Dörfler <axeld@pinc-software.de> |
* Cleaned up vm_types.h a bit, and made vm_page, vm_cache, and vm_area opaque types for C. * As a result, I've renamed some more source files to .cpp, and fixed all warnings caused by that. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@22326 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
192af9e0afd2f3d0cbaf5c935480343a70c8ff53 |
|
20-Jun-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Changed addr_range to use uint64. I've tested this change on x86, causing no issues. I've checked over the code for all other platforms and made the necessary changes; to the best of my knowledge they should also still work, but I haven't actually built and tested them. Once I've completed the kernel_args changes the other platforms will need testing.
|
#
45bd7bb3db9d9e4dcb02b89a3e7c2bf382c0a88c |
|
25-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Removed unnecessary inclusions of <boot/kernel_args.h> in private kernel headers and respectively added includes in source files. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37259 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
61d2b06c401da76573c7fdace35d5a87b728ed8f |
|
12-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Use PAE only when there's memory beyond the 4G limit. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37115 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
a410098f288d090b2b971136636395e92a568e2f |
|
08-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Only use PAE, if supported by the CPU. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37068 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
5b4d62a2618dd2ae37b975e4ca283b410f39f9c7 |
|
08-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Skeleton classes for PAE support. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37066 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
2434bdc4d9e08c5adab979d84f07cc3734e655a8 |
|
08-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Introduced global variable gX86PagingMethod, so the paging method can be accessed from anywhere. Added static X86PagingMethod32Bit::Method() returning it as the subtype pointer -- to be used in the code related to that method only, of course. * Made a bunch of static variables non-static members of X86PagingMethod32Bit and added accessors for them. This makes them accessible in other source files (allowing for more refactoring) and saves memory, when we actually have another paging method implementation. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37062 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
1b3e83addefd97925b84cebaf4003d14c9062781 |
|
08-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Moved paging related files to new subdirectories paging and paging/32bit. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37060 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
5aa0503c7c1ce7ea4c0595d9a402e612bb290ec8 |
|
07-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Removed i386_translation_map_get_pgdir() and adjusted the one place where it was used. * Renamed X86VMTranslationMap to X86VMTranslationMap32Bit and pulled the paging method agnostic part into new base class X86VMTranslationMap. * Moved X86PagingStructures into its own header/source pair. * Moved pgdir_virt from X86PagingStructures to X86PagingStructures32Bit where it is actually used. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37055 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
c6caf520ca20ba155cbcf24ab28209c9cf028961 |
|
07-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Added a level of indirection for the arch_vm_translation_map functions. Introduced the interface X86PagingMethod which is used by those. ATM there's one implementing class, X86PagingMethod32Bit. * Made X86PagingStructures a base class, with one derived class, X86PagingStructures32Bit. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37050 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
1ba89e67eda21e7ad9e1ec57a53ae0a3436b8721 |
|
05-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Removed no-op VMTranslationMap::InitPostSem() and VMAddressSpace::InitPostSem(). git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37025 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
8421714089091fc545726be0654e13d29de1f1ae |
|
05-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
x86: * Renamed vm_translation_map_arch_info to X86PagingStructures, and all members and local variables of that type accordingly. * arch_thread_context_switch(): Added TODO: The still active paging structures can indeed be deleted before we stop using them. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37022 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
521aff921f165d2d6814a3d06137c20d5ab8f1f4 |
|
04-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Moved the page mapper and the page invalidation cache from vm_translation_map_arch_info to X86VMTranslationMap where they actually belong. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37014 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
78dde7abd76bad760d8d8e94908a19a0e583eb6a |
|
04-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Consistently use uint32 for the physical page directory address. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37011 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
23aa437d66e97e1f65c747ea64e4060ce278b46d |
|
02-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Fixed nasty cast that breaks with sizeof(phys_addr_t) == 64. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37001 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
147133b76cbb1603bdbff295505f5b830cb4e688 |
|
25-May-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* First run through the kernel's private parts to use phys_{addr,size}_t where appropriate. * Typedef'ed page_num_t to phys_addr_t and used it in more places in vm_page.{h,cpp}. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36937 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
001a0e09875037dd4f2eb2091a90c3fd8c1c2450 |
|
02-May-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
memory_type_to_pte_flags(): Also set the write-through flag for uncacheable memory. This avoids implementation-defined behavior on Pentium Pro/II when intersecting with a write-combining or write-protected MTRR range. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36589 a95241bf-73f2-0310-859d-f6bbb57e9c96
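The rule can be sketched directly. The PWT/PCD bit positions are the real x86 PTE bits; the enum values and the function body are an illustrative reconstruction, not Haiku's actual source.

```cpp
#include <cassert>
#include <cstdint>

// Sketch of memory_type_to_pte_flags(): uncacheable memory gets both the
// cache-disable and write-through bits, so a Pentium Pro/II whose mapping
// intersects a write-combining/write-protected MTRR range stays within
// defined behavior.
enum MemoryType { WRITE_BACK_MEMORY, WRITE_THROUGH_MEMORY, UNCACHED_MEMORY };

static const uint32_t kPageWriteThrough = 1 << 3;	// PWT bit in a PTE
static const uint32_t kPageCacheDisable = 1 << 4;	// PCD bit in a PTE

static uint32_t memory_type_to_pte_flags(MemoryType type)
{
	switch (type) {
		case UNCACHED_MEMORY:
			// PCD alone would suffice on later CPUs; adding PWT is the
			// Pentium Pro/II workaround this commit describes.
			return kPageCacheDisable | kPageWriteThrough;
		case WRITE_THROUGH_MEMORY:
			return kPageWriteThrough;
		default:
			return 0;	// write-back: fully cached
	}
}
```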
|
#
f23be5bbed64c77afa96eea555344142dd068cb4 |
|
02-May-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Support memory types in page mappings to the degree that is possible without PAT support (i.e. uncacheable, write-through, and write-back). Has pretty much no effect ATM, as the MTRRs restrict the types to what is actually requested. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36583 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
c1be1e0761d0904d99dabd3d1638d94802b4b600 |
|
01-May-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* VMTranslationMap::Map()/Protect(): Added "memoryType" parameter. Not implemented for any architecture yet. * vm_set_area_memory_type(): Call VMTranslationMap::ProtectArea() to change the memory type for the already mapped pages. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36574 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
50e4dd932864f2ea5007d00a787a349859d05fea |
|
12-Apr-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
axeld + bonefish: X86VMTranslationMap::Protect(): * Removed rounding up the end address to page alignment. It's not necessary and could cause an overflow. * Fixed possible infinite loop triggered by a rare race condition: When two threads of a team were accessing the same unmapped page at the same time, each would trigger a page fault. One thread would map the page again, the second would wait until the first one was done and update the page protection (unnecessarily but harmlessly). If the first thread accessed the page again at an unfortunate time, it would implicitly change the accessed/dirty flags of the page's PTE, which was a situation the loop in Protect() didn't consider and thus ran forever. Seen the problem twice today in the form of an app server freeze. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36197 a95241bf-73f2-0310-859d-f6bbb57e9c96
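The essence of the fix is that a protection change on a live PTE must be a compare-and-swap loop that tolerates the CPU setting accessed/dirty bits concurrently. The bit positions below are the real x86 ones; the functions are an illustrative sketch, not Haiku's code.

```cpp
#include <cassert>
#include <atomic>
#include <cstdint>

static const uint32_t kPtePresent  = 1 << 0;
static const uint32_t kPteWritable = 1 << 1;
static const uint32_t kPteAccessed = 1 << 5;
static const uint32_t kPteDirty    = 1 << 6;

// Clears the writable bit without losing accessed/dirty updates that the
// hardware may set between reading and writing the entry.
static void pte_make_read_only(std::atomic<uint32_t>& pte)
{
	uint32_t oldEntry = pte.load();
	uint32_t newEntry;
	do {
		newEntry = oldEntry & ~kPteWritable;
		// On failure, compare_exchange_weak() reloads oldEntry, so any
		// concurrently set accessed/dirty bits survive into the next try.
	} while (!pte.compare_exchange_weak(oldEntry, newEntry));
}

static uint32_t demo_protect(uint32_t initialEntry)
{
	std::atomic<uint32_t> pte(initialEntry);
	pte_make_read_only(pte);
	return pte.load();
}
```

A loop that instead retried until the entry matched an expected value exactly would never terminate once the hardware flipped a flag underneath it, which is the freeze the commit describes.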
|
#
dcdf2ab981f1ea6ef10be59ec96ff3b88c2aca5a |
|
02-Mar-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Extended assert output. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35724 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
84328c264b9432e3b29cfd084f9e67825fcdc0ba |
|
01-Mar-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Extended assert output. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35697 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
af0572ead267dc6167a8bdc7666101352cbc2c7e |
|
28-Feb-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Fixed debug output. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35685 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
2340fc36f1b3478ce0e71cacd21db3bd9196ae37 |
|
28-Feb-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Fixed build with tracing turned on and improve/added debug output. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35658 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
4891bed4d2d8cfb590b688e6f39c3427f8a1c52b |
|
27-Feb-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* large_memory_physical_page_ops_init(): Don't assign the return value before it is fully initialized. * arch_vm_translation_map_is_kernel_page_accessible(): Check whether sPhysicalPageMapper has already been initialized. If a panic() during or before the initialization of the physical page mapper occurred, we no longer access a partially initialized object or a NULL pointer. This should fix the triple fault part of #1925. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35644 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
ba3d62b66f8e33304ed71ee6a2d403cc75d95e87 |
|
15-Feb-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
X86VMTranslationMap::UnmapArea(): Don't change the page state before it has been unmapped. Otherwise modified pages could end up in the "cached" queue without having been written back. That would be a good explanation for #5374 (partially wrong file contents) -- as soon as such a page was freed, the invalid on-disk contents would become visible. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35477 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
4e4cfe8f0ab29f9bd527e09d2792d181854bbfb6 |
|
03-Feb-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Missing page access debug markers. Fixes #5359. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35398 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
40bb94819e6c39d72ab29edc1a0dcd80b15b8b42 |
|
03-Feb-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Removed useless return parameter from vm_remove_all_page_mappings().
* Added vm_clear_page_mapping_accessed_flags() and vm_remove_all_page_mappings_if_unaccessed(), which combine the functionality of vm_test_map_activation(), vm_clear_map_flags(), and vm_remove_all_page_mappings(), thus saving lots of calls to translation map methods. The backend is the new method VMTranslationMap::ClearAccessedAndModified().
* Started to make use of the cached page queue and changed the meaning of the other non-free queues slightly:
  - Active queue: Contains mapped pages that have been used recently.
  - Inactive queue: Contains mapped pages that have not been used recently. Also contains unmapped temporary pages.
  - Modified queue: Contains unmapped modified pages.
  - Cached queue: Contains unmapped unmodified pages (LRU sorted).
  Unless we're actually low on memory and actively do paging, modified and cached queues only contain non-temporary pages. Cached pages are considered quasi free. They still belong to a cache, but since they are unmodified and unmapped, they can be freed immediately. And this is what vm_page_[try_]reserve_pages() do now when there are no more actually free pages at hand. Essentially this means that pages storing cached file data, unless mmap()ped, no longer are considered used and don't contribute to page pressure. Paging will not happen as long as there are enough free + cached pages available.
* Reimplemented the page daemon. It no longer scans all pages, but instead works the page queues. As long as the free pages situation is harmless, it only iterates through the active queue and deactivates pages that have not been used recently. When paging occurs it additionally scans the inactive queue and frees pages that have not been used recently.
* Changed the page reservation/allocation interface: vm_page_[try_]reserve_pages(), vm_page_unreserve_pages(), and vm_page_allocate_page() now take a vm_page_reservation structure pointer. The reservation functions initialize the structure -- currently consisting only of a count member for the number of still reserved pages. vm_page_allocate_page() decrements the count and vm_page_unreserve_pages() unreserves the remaining pages (if any). Advantages are that reservation/unreservation mismatches cannot occur anymore, that vm_page_allocate_page() can verify that the caller has indeed a reserved page left, and that there's no unnecessary pressure on the free page pool anymore. The only disadvantage is that the vm_page_reservation object needs to be passed around a bit.
* Reworked the page reservation implementation:
  - Got rid of sSystemReservedPages and sPageDeficit. Instead sUnreservedFreePages now actually contains the number of free pages that have not yet been reserved (it cannot become negative anymore) and the new sUnsatisfiedPageReservations contains the number of pages that are still needed for reservation.
  - Threads waiting for reservations do now add themselves to a waiter queue, which is ordered by descending priority (VM priority and thread priority). High priority waiters are served first when pages become available. Fixes #5328.
* cache_prefetch_vnode(): Would reserve one less page than allocated later, if the size wasn't page aligned. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35393 a95241bf-73f2-0310-859d-f6bbb57e9c96
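The vm_page_reservation bookkeeping can be mocked in a few lines. The names follow the commit message, but the bodies (a plain counter for the free pool, no waiting) are a deliberate simplification of the real kernel logic.

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the reservation interface: reserving fills in a count, each
// allocation consumes one reserved page, and unreserving returns whatever
// is left to the free pool.
struct vm_page_reservation {
	uint32_t count;
};

static uint32_t sUnreservedFreePages = 100;	// illustrative free-page pool

static void vm_page_reserve_pages(vm_page_reservation* reservation, uint32_t count)
{
	// The real kernel would wait here until enough pages become available.
	assert(sUnreservedFreePages >= count);
	sUnreservedFreePages -= count;
	reservation->count = count;
}

static void vm_page_allocate_page(vm_page_reservation* reservation)
{
	// Verifies the caller really has a reserved page left -- the mismatch
	// class of bugs the commit mentions now trips an assertion.
	assert(reservation->count > 0);
	reservation->count--;
}

static void vm_page_unreserve_pages(vm_page_reservation* reservation)
{
	sUnreservedFreePages += reservation->count;
	reservation->count = 0;
}

// Typical usage: reserve up front, allocate some, unreserve the rest.
static uint32_t demo()
{
	vm_page_reservation reservation;
	vm_page_reserve_pages(&reservation, 4);
	vm_page_allocate_page(&reservation);
	vm_page_allocate_page(&reservation);
	vm_page_unreserve_pages(&reservation);	// returns the 2 unused pages
	return sUnreservedFreePages;
}
```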
|
#
e65c400299386f99a251395ff2e59572705d7e49 |
|
29-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Replaced the vm_page_allocate_page*() "pageState" parameter by a more general "flags" parameter. It encodes the target state of the page -- so that the page isn't unnecessarily put in the wrong page queue first -- a flag indicating whether the page should be cleared, and one to indicate whether the page should be marked busy. * Added page state PAGE_STATE_CACHED. Not used yet. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35333 a95241bf-73f2-0310-859d-f6bbb57e9c96
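The combined "flags" encoding can be illustrated with bit masks. The constant values below are hypothetical, chosen only to show the scheme, not Haiku's actual constants.

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the flags parameter: the low bits carry the target page state,
// and separate bits request clearing the page or marking it busy, so one
// parameter replaces the old "pageState" argument.
static const uint32_t VM_PAGE_ALLOC_STATE = 0x0f;	// mask for target state
static const uint32_t VM_PAGE_ALLOC_CLEAR = 0x10;	// hand out a zeroed page
static const uint32_t VM_PAGE_ALLOC_BUSY  = 0x20;	// mark the page busy

static const uint32_t PAGE_STATE_ACTIVE = 1;
static const uint32_t PAGE_STATE_CACHED = 3;

static uint32_t target_state(uint32_t flags)
{
	return flags & VM_PAGE_ALLOC_STATE;
}

static bool should_clear(uint32_t flags)
{
	return (flags & VM_PAGE_ALLOC_CLEAR) != 0;
}
```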
|
#
72382fa6291e810be2949a70abd8f274f92dbd2c |
|
29-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Removed the page state PAGE_STATE_BUSY and instead introduced a vm_page::busy flag. The obvious advantage is that one can still see what state a page is in and even move it between states while being marked busy. * Removed the vm_page::is_dummy flag. Instead we mark marker pages busy, which in all cases has the same effect. Introduced a vm_page_is_dummy() that can still check whether a given page is a dummy page. * vm_page_unreserve_pages(): Before adding to the system reserve make sure sUnreservedFreePages is non-negative. Otherwise we'd make nonexisting pages available for allocation. steal_pages() still has the same problem and it can't be solved that easily. * map_page(): No longer changes the page state/mark the page unbusy. That's the caller's responsibility. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35331 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
8d1316fd23616f6dac131a0eba5dab08acc6e76d |
|
22-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Replaced CACHE_DONT_SLEEP by two new flags CACHE_DONT_WAIT_FOR_MEMORY and CACHE_DONT_LOCK_KERNEL_SPACE. If the former is given, the slab memory manager does not wait when reserving memory or pages. The latter prevents area operations. The new flags add a bit of flexibility. E.g. when allocating page mapping objects for userland areas CACHE_DONT_WAIT_FOR_MEMORY is sufficient, i.e. the allocation will succeed as long as pages are available. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35246 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
86c794e5c10f1b2d99d672d424a8637639c703dd |
|
21-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
slab allocator:
* Implemented a more elaborate raw memory allocation backend (MemoryManager). We allocate 8 MB areas whose pages we allocate and map when needed. An area is divided into equally-sized chunks which form the basic units of allocation. We have areas with three possible chunk sizes (small, medium, large), which is basically what the ObjectCache implementations were using anyway.
* Added "uint32 flags" parameter to several of the slab allocator's object cache and object depot functions. E.g. object_depot_store() potentially wants to allocate memory for a magazine. But also in pure freeing functions it might eventually become useful to have those flags, since they could end up deleting an area, which might not be allowable in all situations. We should introduce specific flags to indicate that.
* Reworked the block allocator. Since the MemoryManager allocates block-aligned areas, maintains a hash table for lookup, and maps chunks to object caches, we can quickly find out which object cache a to-be-freed allocation belongs to and thus don't need the boundary tags anymore.
* Reworked the slab bootstrap process. We allocate from the initial area only when really necessary, i.e. when the object cache for the respective allocation size has not been created yet. A single page is thus sufficient.
other:
* vm_allocate_early(): Added boolean "blockAlign" parameter. If true, the semantics is the same as for B_ANY_KERNEL_BLOCK_ADDRESS.
* Use an object cache for page mappings. This significantly reduces the contention on the heap bin locks. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35232 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
6379e53e2dd7021ba0e35d41c276dfe94c079596 |
|
19-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
vm_page no longer points directly to its containing cache, but rather to a VMCacheRef object which points to the cache. This makes it possible to optimize VMCache::MoveAllPages(), since it no longer needs to iterate over all pages to adjust their cache pointer. It can simply swap the cache refs of the two caches instead. Reduces the total -j8 Haiku image build time only marginally. The kernel time drops almost 10%, though. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35155 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
f082f7f019941732f1d2b99f627fbeeeec3746af |
|
15-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Added vm_page::accessed flag. Works analogously to vm_page::modified. * Reorganized the code for [un]mapping pages: - Added new VMTranslationMap::Unmap{Area,Page[s]}() which essentially do what vm_unmap_page[s]() did before, just in the architecture specific code, which allows for specific optimizations. UnmapArea() is for the special case that the complete area is unmapped. Particularly in case the address space is deleted, some work can be saved. Several TODOs could be slain. - Since they are only used within vm.cpp vm_map_page() and vm_unmap_page[s]() are now static and have lost their prefix (and the "preserveModified" parameter). * Added VMTranslationMap::Protect{Page,Area}(). They are just inline wrappers for Protect(). * X86VMTranslationMap::Protect(): Make sure not to accidentally clear the accessed/dirty flags. * X86VMTranslationMap::Unmap()/Protect(): Make page table skipping actually work. It was only skipping to the next page. * Adjusted the PPC code to at least compile. No measurable effect for the -j8 Haiku image build time, though the kernel time drops minimally. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35089 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
c6aa013564abfcef737001bd166d1130804bd3d3 |
|
14-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Changed VMTranslationMap::Lock()/Unlock() return types to the usual. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35075 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
bcc2c157a1c54f5169de1e7a3e32c49e92bbe0aa |
|
13-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Refactored vm_translation_map: * Pulled the physical page mapping functions out of vm_translation_map into a new interface VMPhysicalPageMapper. * Renamed vm_translation_map to VMTranslationMap and made it a proper C++ class. The functions in the operations vector have become methods. * Added class GenericVMPhysicalPageMapper implementing VMPhysicalPageMapper as far as possible (without actually writing new code). * Adjusted the x86 and the PPC specifics accordingly (untested for the latter). For the other architectures the build is, I'm afraid, seriously broken. The next steps will modify and extend the VMTranslationMap interface, so that it will be possible to fix the bugs in vm_unmap_page[s]() and employ architecture specific optimizations. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35066 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
30f423606d1d782a8281c50556ecc0f0118c0832 |
|
13-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
As per the IA32 specification we can save TLB invalidations in at least two situations: * When mapping the page the page table entry should not have been marked "present" before, i.e. it would not have been cached anyway. * When the page table entry's accessed flag wasn't set, the entry hadn't been cached either. Speeds up the -j8 Haiku image build only minimally, but the total kernel time drops about 9%. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35062 a95241bf-73f2-0310-859d-f6bbb57e9c96
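The two rules can be condensed into a predicate on the old page table entry. The bit positions are the real x86 PTE bits; the function itself is an illustrative sketch of the decision, not Haiku's code.

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the TLB-invalidation shortcut: a TLB entry can only exist for a
// PTE that was present, and the CPU sets the accessed bit when it caches a
// translation. So invalidation is needed only when both bits were set in
// the entry being replaced.
static const uint32_t kPtePresent  = 1 << 0;
static const uint32_t kPteAccessed = 1 << 5;

static bool tlb_invalidation_needed(uint32_t oldEntry)
{
	return (oldEntry & kPtePresent) != 0 && (oldEntry & kPteAccessed) != 0;
}
```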
|
#
9435ae9395914c315dbc932e2f15e8895f3f8c21 |
|
13-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
x86 page mapping: * Removed the page_{table,directory}_entry structures. The bit fields are nice in principle, but modifying individual flags this way is inherently non-atomic and we need atomicity in some situations. * Use atomic operations in protect_tmap(), clear_flags_tmap(), and others. * Aligned the query_tmap_interrupt() semantics with that of query_tmap(). git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35058 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
3cd2094396dde9ca42263c535041a95d5f0d5fff |
|
06-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Added new debug feature (DEBUG_PAGE_ACCESS) to detect invalid concurrent access to a vm_page. It is basically an atomically accessed thread ID field in the vm_page structure, which is explicitly set by macros marking the critical sections. As a first positive effect I had to review quite a bit of code and found several issues.
* Added several TODOs and comments. Some harmless ones, but also a few troublesome ones in vm.cpp regarding page unmapping.
* file_cache: PrecacheIO::Prepare()/read_into_cache: Removed superfluous vm_page_allocate_page() return value checks. It cannot fail anymore.
* Removed the heavily contended "pages" lock. We use different policies now:
  - sModifiedTemporaryPages is accessed atomically.
  - sPageDeficitLock and sFreePageCondition are protected by a new mutex.
  - The page queues have individual locks (mutexes).
  - Renamed set_page_state_nolock() to set_page_state(). Unless the caller says otherwise, it does now lock the affected page queues itself. Also changed the return value to void -- we panic() anyway.
* set_page_state(): Add free/clear pages to the beginning of their respective queues, as this is more cache-friendly.
* Pages with the states PAGE_STATE_WIRED or PAGE_STATE_UNUSED are no longer in any queue. They were in the "active" queue, but there's no good reason to have them there. In case we decide to let the page daemon work the queues (like FreeBSD) they would just be in the way.
* Pulled the common part of vm_page_allocate_page_run[_no_base]() into a helper function. Also fixed a bug I introduced previously: the functions must not vm_page_unreserve_pages() on success, since they remove the pages from the free/clear queue without decrementing sUnreservedFreePages.
* vm_page_set_state(): Changed return type to void. The function cannot really fail and no-one was checking it anyway.
* vm_page_free(), vm_page_set_state(): Added assertion: the page must not be free/clear before. This is implied by the policy that no-one is allowed to access free/clear pages without holding the respective queue's lock, which is not the case at this point. This found the bug fixed in r34912.
* vm_page_requeue(): Added general assertions. panic() when requeuing of free/clear pages is requested. Same reason as above.
* vm_clone_area(), B_FULL_LOCK case: Don't map busy pages. The implementation is still not correct, though.
My usual -j8 Haiku build test runs another 10% faster now. The total kernel time drops about 18%. As hoped, the new locks have only a fraction of the old "pages" lock contention. Other locks lead the "most wanted list" now. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34933 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
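The DEBUG_PAGE_ACCESS idea in the commit above can be sketched roughly as follows. This is a simplified, hypothetical user-space illustration, not Haiku's actual macros: the names `page_access_start`/`page_access_end` and the panic-via-`abort()` are assumptions. Each vm_page carries an atomically accessed thread ID; the access markers claim it at the start of a critical section and release it at the end, so a second concurrent accessor fails loudly instead of silently corrupting the page.

```c
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef int32_t thread_id;
#define ACCESSING_NONE ((thread_id)-1)

struct vm_page {
    /* ID of the thread currently inside a critical section on this page */
    _Atomic thread_id accessing_thread;
};

/* Mark the start of a critical section; "panic" (here: abort) if another
   thread already claims the page. */
static void
page_access_start(struct vm_page *page, thread_id current)
{
    thread_id expected = ACCESSING_NONE;
    if (!atomic_compare_exchange_strong(&page->accessing_thread, &expected,
            current)) {
        fprintf(stderr, "panic: page already accessed by thread %d\n",
            (int)expected);
        abort();
    }
}

/* Mark the end of the critical section; the caller must be the claimant. */
static void
page_access_end(struct vm_page *page, thread_id current)
{
    thread_id expected = current;
    if (!atomic_compare_exchange_strong(&page->accessing_thread, &expected,
            ACCESSING_NONE)) {
        fprintf(stderr, "panic: page not owned by thread %d\n", (int)current);
        abort();
    }
}
```

In the kernel the current thread ID would come from the scheduler and the failure path would call panic(); the compare-and-swap is what turns an invalid concurrent access into an immediate, diagnosable fault.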
#
1021fd28262697dbbbe1d54a868f0672900c78f3 |
|
01-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* agp_gart(): Use vm_page_[un]reserve_pages(). * Removed unused vm_page_allocate_pages(). * Removed now unused (always true) "reserved" parameter from vm_page_allocate_page(). * Removed unused (always false) "stealActive" parameter from steal_page(). git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34836 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
e50cf8765be50a7454c9488db38b638cf90805af |
|
02-Dec-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Moved the VM headers into subdirectory vm/. * Renamed vm_cache.h/vm_address_space.h to VMCache.h/VMAddressSpace. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34449 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
90d870c1556bdc415c7f41de5474ebebb0ceebdd |
|
02-Dec-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Moved VMAddressSpace definition to vm_address_space.h. * "Classified" VMAddressSpace, i.e. turned the vm_address_space_*() functions into methods, made all attributes (but "areas") private, and added accessors. * Also turned the vm.cpp functions vm_area_lookup() and remove_area_from_address_space() into VMAddressSpace methods. The rest of the area management functionality will follow soon. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34447 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
96c4511a25706b9ce176ec0cfbbddcfe7d07d190 |
|
17-Nov-2009 |
Axel Dörfler <axeld@pinc-software.de> |
* Shuffled functions around, no functional change. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34083 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
1a053eedc0801705aff5e53c01ddb571f035c672 |
|
08-Sep-2009 |
Michael Lotz <mmlr@mlotz.ch> |
Revert r32994 and add a comment to explain the intention. Thanks Ingo for the clarification. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@33001 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
fbcf5f3f925a678c0209b93c0b2a7b2ea9227ee7 |
|
07-Sep-2009 |
Michael Lotz <mmlr@mlotz.ch> |
Don't know what this was supposed to do, but with the VADDR_TO_PDENT() it would end up as 0 again in any case. It certainly looks correct without it; removing it so it doesn't confuse the next person reading over it. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@32994 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
60a5ced394f4c38395f54fa92d322adc2ccc7d6a |
|
09-Aug-2009 |
Michael Lotz <mmlr@mlotz.ch> |
Adding a disabled debug helper. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@32217 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
ee280b59e95cdd6ebec4519aa9b616e58de79f76 |
|
04-Aug-2009 |
Michael Lotz <mmlr@mlotz.ch> |
Prevent the user TLB invalidation function from being preempted by turning off interrupts when invoking it. The user TLB invalidation function essentially only reads and writes back control register 3 (cr3), which holds the physical address of the current page directory. Still, a preemption between the read and the write can cause problems when the last thread of a team dies and the team is therefore deleted.

The context switch on preemption would decrement the refcount of the object that holds the page directory. Then the team address space is deleted, causing the context switch returning to that thread to not re-acquire a reference to the object. At that point the page directory as set in cr3 is the one of the previously run thread (which is fine, as all share the kernel space mappings we need). When the preempted thread continues, though, it would overwrite cr3 with the physical page directory address from before the context switch, still stored in eax, thereby setting the page directory to the one of the dying thread that now doesn't have the corresponding reference. As the thread progresses further, it would release the last reference, causing the deletion of the object and the freeing of the, now active again, page directory. The memory getting overwritten (by deadbeef) then completely corrupts the page directory, causing basically any memory access to fault, in the end resulting in a triple fault.

This should fix bug #3399. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@32118 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
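The fix above boils down to making the cr3 read-modify-write atomic with respect to preemption by disabling interrupts around it. A minimal user-space model of that pattern (cr3 and the interrupt state are simulated variables; `fake_cr3`, `arch_disable_interrupts()`, and the other names are hypothetical stand-ins, since the real code is privileged x86 assembly):

```c
#include <stdint.h>

/* Stand-ins for the real hardware state. */
static uintptr_t fake_cr3;
static int interrupts_enabled = 1;

static int  arch_disable_interrupts(void)
{
    int state = interrupts_enabled;
    interrupts_enabled = 0;
    return state;
}

static void arch_restore_interrupts(int state)
{
    interrupts_enabled = state;
}

/* Reloading cr3 with its current value flushes all non-global TLB entries.
   A preemption between the read and the write is exactly the window the
   commit describes: the value written back may belong to a page directory
   that has been freed in the meantime. */
static void
invalidate_user_tlb(void)
{
    uintptr_t pageDirectory = fake_cr3;   /* read */
    fake_cr3 = pageDirectory;             /* write back */
}

/* The fix: no preemption can occur between the read and the write. */
static void
safe_invalidate_user_tlb(void)
{
    int state = arch_disable_interrupts();
    invalidate_user_tlb();
    arch_restore_interrupts(state);
}
```

The design point is that the critical section is only two instructions long, so disabling interrupts is far cheaper than taking a lock, and it removes the window entirely.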
#
ea2abd110bd6a4518a954477562e2dd94a5fef9d |
|
02-Aug-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Renamed the ROUNDOWN macro to ROUNDDOWN. Also changed the implementation of ROUNDUP to use '*' and '/' -- the compiler will optimize that for powers of two anyway, and this implementation works for other numbers as well.
* The thread::fault_handler use in C[++] code was broken with gcc 4, at least when other functions were invoked. Trying to trick the compiler wasn't a particularly good idea anyway, since the next compiler version could break the trick again. So the general policy is to use the fault handlers only in assembly code, where we have full control. Changed that for x86 (save for the vm86 mode, which has a similar mechanism), but not for the other architectures.
* Introduced fault_handler, fault_handler_stack_pointer, and fault_jump_buffer fields in the cpu_ent structure, which must be used instead of thread::fault_handler in the kernel debugger. Consequently user_memcpy() must not be used in the kernel debugger either. Introduced a debug_memcpy() instead.
* Introduced the debug_call_with_fault_handler() function, which calls a function in a setjmp() and fault handler context. The architecture-specific back end arch_debug_call_with_fault_handler() has only been implemented for x86 yet.
* Introduced debug_is_kernel_memory_accessible() for use in the kernel debugger. It determines whether a range of memory can be accessed in the way specified. The architecture-specific back end arch_vm_translation_map_is_kernel_page_accessible() has only been implemented for x86 yet.
* Added arch_debug_unset_current_thread() (only implemented for x86) to unset the current thread pointer in the kernel debugger. When entering the kernel debugger we do some basic sanity checks of the currently set thread structure and unset it if they fail. This allows certain commands (most importantly the stack trace command) to avoid accessing the thread structure.
* x86: When handling a double fault, we now install a special handler for page faults. This allows us to gracefully catch faulting commands, even if e.g. the thread structure is toast. We are now in much better shape to deal with double faults, hopefully avoiding the triple faults that some people have been experiencing on their hardware, and ideally even allowing normal use of the kernel debugger.

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@32073 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
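The ROUNDDOWN/ROUNDUP change described above can be written as below. For a power-of-two `b` the compiler turns the division and multiplication into mask operations, while non-power-of-two step sizes still round correctly (a sketch of the idea, assuming non-negative integer arguments):

```c
/* Round 'a' down/up to the nearest multiple of 'b'. Works for any positive
   'b', not just powers of two; for powers of two the compiler emits the
   usual bit-masking anyway. */
#define ROUNDDOWN(a, b) (((a) / (b)) * (b))
#define ROUNDUP(a, b)   ROUNDDOWN((a) + (b) - 1, (b))
```

For example, ROUNDUP(4097, 4096) yields 8192 just like the mask-based version, but ROUNDUP(10, 3) also correctly yields 12, which a mask-based implementation cannot do.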
#
6a6974b63eb062d0f301c57d44703389f823654c |
|
22-Jun-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
gcc 4 warnings. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@31192 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
9a42ad7a77f11cf1b857e84ec70d21b1afaa71cd |
|
22-Oct-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
When switching to a kernel thread we no longer set the page directory. This is not necessary, since userland teams' page directories also contain the kernel mappings, and avoids unnecessary TLB flushes. To make that possible the vm_translation_map_arch_info objects are reference counted now. This optimization reduces the kernel time of the Haiku build on my machine with SMP disabled a few percent, but interestingly the total time decreases only marginally. Haven't tested with SMP yet, but for full impact CPU affinity would be needed. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@28287 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
8f06357d6693ceefed0da116028bc006e102d2b4 |
|
21-Oct-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Style changes: * Renamed static variables. * Enforced 80 columns limit. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@28273 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
47c40a10a10dc615e078754503f2c19b9f98c38d |
|
19-Oct-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Prefixed memset_physical() and memcpy_to_physical() with "vm_", added vm_memcpy_from_physical() and vm_memcpy_physical_page(), and added respective functions to the vm_translation_map operations. The architecture-specific implementation can now decide how to implement them most efficiently. Added generic implementations that can be used, though.
* Changed vm_{get,put}_physical_page(). The former no longer accepts flags (the only flag, PHYSICAL_PAGE_DONT_WAIT, wasn't needed anymore). Instead it returns an implementation-specific handle that has to be passed to the latter. Added vm_{get,put}_physical_page_current_cpu() and *_debug() variants, which work only for the current CPU, respectively when in the kernel debugger. Also adjusted the vm_translation_map operations accordingly.
* Made consequent use of the physical memory operations in the source tree.
* Also adjusted the m68k and ppc implementations with respect to the vm_translation_map operation changes, but they are probably broken nevertheless.
* For x86 the generic physical page mapper isn't used anymore. It is suboptimal in any case: for systems with small memory it is too much overhead, since one can just map the complete physical memory (that's not done yet, though); for systems with large memory it counteracts the VM strategy to reuse the least recently used pages. Since those pages will most likely not be mapped by the page mapper anymore, it will keep remapping chunks. This was also the reason why building Haiku in Haiku was significantly faster with only 256 MB RAM (since that much could be kept mapped all the time). Now we're using a different strategy: we have small pools of virtual page slots per CPU that are used for the physical page operations (memset_physical(), memcpy_*_physical()) with the thread pinned to its CPU. Furthermore we have four slots per translation map, which are used to map page tables.

These changes speed up the Haiku image build in Haiku significantly: on my Core2 Duo 2.2 GHz 2 GB machine by about 40%, to 20 min 40 s (KDEBUG disabled, block cache debug disabled). Still more than a factor of 3 slower than FreeBSD and Linux, though.

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@28244 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
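The per-CPU slot pool strategy described above might look roughly like this sketch. All names (`slot_pool`, `pool_get_slot`, the 4-slot pool size) are hypothetical illustrations of the description, not Haiku's actual implementation; the actual TLB mapping of the physical page and the CPU pinning are not modeled. Each CPU owns a small window of virtual page slots; a pinned thread grabs a free slot, maps the physical page there, performs the operation, and releases the slot.

```c
#include <stdint.h>

#define SLOT_PAGE_SIZE 4096
#define SLOTS_PER_CPU  4

struct slot_pool {
    uintptr_t virtual_base;   /* base of this CPU's slot window */
    uint32_t  free_mask;      /* bit i set => slot i is free */
};

/* Claim a free slot and return its index and virtual address,
   or -1 if all slots are busy. */
static int
pool_get_slot(struct slot_pool *pool, uintptr_t *address)
{
    for (int i = 0; i < SLOTS_PER_CPU; i++) {
        if (pool->free_mask & (1u << i)) {
            pool->free_mask &= ~(1u << i);
            *address = pool->virtual_base + (uintptr_t)i * SLOT_PAGE_SIZE;
            return i;
        }
    }
    return -1;
}

/* Release a slot; the caller would also invalidate its TLB entry. */
static void
pool_put_slot(struct slot_pool *pool, int slot)
{
    pool->free_mask |= 1u << slot;
}
```

Because the pool is per-CPU and the thread is pinned while holding a slot, no lock is needed to manage the mask, and the mapped window stays small regardless of how much physical memory the machine has.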
#
1b6eff280f4afcf4d7c9dc9ccdc3a65f4e6ca0fd |
|
11-Oct-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Replaced the vm_get_physical_page() "flags" PHYSICAL_PAGE_{NO,CAN}_WAIT into an actual flag PHYSICAL_PAGE_DONT_WAIT. * Pass the flags through to the chunk mapper callback. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@27979 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
5e50de7e2e9dfb16594352263a435b418d0d2556 |
|
11-Oct-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Don't disable interrupts in flush_tmap() and map_iospace_chunk(), just pin the thread to the current CPU. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@27975 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
74785e79db32355e0a8ee6b488672ac09ad57b1b |
|
07-Oct-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Added "from" address space parameter to vm_swap_address_space()/ arch_vm_aspace_swap(). * The x86 implementation does now maintain a bit mask per vm_translation_map_arch_info indicating on which CPUs the address space is active. This allows flush_tmap() to avoid ICI for user address spaces when the team isn't currently running on any other CPU. In this context ICI is relatively expensive, particularly since we map most pages via vm_map_page() and therefore invoke flush_tmap() pretty much for every single page. This optimization speeds up a "hello world" compilation about 20% on my machine (KDEBUG turned off, freshly booted), but interestingly it has virtually no effect on the "-j2" haiku build time. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@27912 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
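The optimization in the entry above can be modeled simply: keep one bit per CPU in the translation map's arch info, and skip the inter-CPU interrupt (ICI) when no other CPU is currently running the team. A hypothetical sketch (`flush_user_map` and the `ici_sent` counter standing in for `smp_send_broadcast_ici()` are illustrations, not the real API):

```c
#include <stdint.h>

static int ici_sent;  /* counts broadcasts that would interrupt other CPUs */

/* active_on_cpus: bit i set => the address space is active on CPU i. */
static void
flush_user_map(uint32_t active_on_cpus, int current_cpu)
{
    uint32_t others = active_on_cpus & ~(1u << current_cpu);
    if (others != 0) {
        /* Some other CPU has this address space loaded: it must be told
           to invalidate its TLB entries. */
        ici_sent++;
    }
    /* The local TLB flush happens unconditionally (not modeled here). */
}
```

Since most mappings go through vm_map_page() and hence flush_tmap() once per page, avoiding the broadcast in the common single-CPU case is where the ~20% compile speedup quoted above comes from.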
#
dbe295f827020d6ee1e1a8f1c6ab4071a661fbe8 |
|
07-Oct-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Moved vm_translation_map_arch_info definition to the header. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@27902 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
567f78895b7878437f43d68fa3091b7bae47fa36 |
|
01-Oct-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Fully inline {disable,restore}_interrupts() and friends when including <int.h>. Performance-wise not really significant, but gives nicer profiling results. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@27827 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
2391ca55684910a7a8c9b75137fb0dd3f5237433 |
|
13-Sep-2008 |
Michael Lotz <mmlr@mlotz.ch> |
CID 56: Fix the wrong NULL check. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@27478 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
802d18a97098e5d1fd14924d04514944b5a0e21b |
|
02-Aug-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Small semantical change of map_max_pages_need(): If given a 0 start address, it is supposed to consider the worst case address range of the given size. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@26740 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
1c8de8581b66c14ea94bccd7ddcea99291955796 |
|
01-Jun-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Added optional spinlock contention measurement feature. Enabled when B_DEBUG_SPINLOCK_CONTENTION is defined to 1. It typedefs spinlock to a structure (thus breaking BeOS binary compatibility), containing a counter which is incremented whenever a thread has to wait for the spinlock.
* Added macros for spinlock initialization and access and changed code using spinlocks accordingly. This breaks compilation for BeOS -- the macros should be defined in the respective compatibility wrappers.
* Added generic syscall to get the spinlock counters for the thread and the team spinlocks.

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@25752 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
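A contention-counting spinlock of the kind described above could look like the following sketch using C11 atomics. The field and function names (`contention_count`, `try_acquire_spinlock`) are hypothetical, not Haiku's actual ones; the point is that the lock becomes a structure so a counter can ride along with the flag.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    atomic_flag locked;
    /* Incremented whenever a thread finds the lock already held. */
    _Atomic uint64_t contention_count;
} spinlock;

#define B_SPINLOCK_INITIALIZER { ATOMIC_FLAG_INIT, 0 }

/* Try once; on failure record the contention. */
static bool
try_acquire_spinlock(spinlock *lock)
{
    if (atomic_flag_test_and_set(&lock->locked)) {
        lock->contention_count++;
        return false;
    }
    return true;
}

/* Spin until acquired (this variant counts every failed attempt, not just
   one increment per waiting episode). */
static void
acquire_spinlock(spinlock *lock)
{
    while (!try_acquire_spinlock(lock))
        ;
}

static void
release_spinlock(spinlock *lock)
{
    atomic_flag_clear(&lock->locked);
}
```

Wrapping initialization and access in macros, as the commit describes, lets the counter field be compiled out entirely when the debug feature is off, at the cost of the struct layout (and thus binary compatibility) changing when it is on.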
#
b0f5179aa51eb680cdeea656a8b11fdbc6b56d63 |
|
28-May-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Changed recursive_lock to use a mutex instead of a semaphore. * Adjusted code using recursive locks respectively. The initialization cannot fail anymore, and it is possible to use recursive locks in the early boot process (even uninitialized, if in BSS), which simplifies things a little. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@25687 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
75fe8391f9101198d2be910ef9706b18675380e0 |
|
10-Feb-2008 |
Michael Lotz <mmlr@mlotz.ch> |
Fix the build. Apparently this file wasn't recompiled on my end before. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@23942 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
f271831fc8312897dc95f03ed9450f5dc76df059 |
|
10-Oct-2007 |
Axel Dörfler <axeld@pinc-software.de> |
Corrected comment. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@22500 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
0e183340579eeeb7703088d847cfd1284a511129 |
|
06-Oct-2007 |
Axel Dörfler <axeld@pinc-software.de> |
* Mapping a page might actually need memory - since we usually have locks that interfere with the page thief, we always need to have reserved a page for this upfront. I introduced a function to the vm_translation_map layer that estimates how many pages a mapping might need at maximum. All functions that map a page now call this and reserve the needed pages upfront. It might not be a nice solution, but it works.
* The page thief could run into a panic when trying to call vm_cache_release_ref() on a non-existing (NULL) cache.
* Also, it will now ignore wired active pages.
* There is still a race condition between the page writer and the vnode destruction - writing a page back needs a valid vnode, but that might just have been deleted.

git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@22455 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
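The "how many pages a mapping might need at maximum" estimate mentioned above can be sketched for 32-bit non-PAE x86, where one page table covers 4 MB of address space. This is a simplified, hypothetical version of the idea behind map_max_pages_need(), not the actual Haiku function: with a start address of 0 the caller doesn't know the placement yet, so the worst-case alignment is assumed.

```c
#include <stdint.h>

#define B_PAGE_SIZE      4096
#define PAGES_PER_TABLE  1024   /* one page table maps 4 MB */
#define TABLE_RANGE      ((uint64_t)PAGES_PER_TABLE * B_PAGE_SIZE)

/* Worst-case number of page-table pages a mapping of 'size' bytes starting
   at 'start' may require (and hence how many pages to reserve upfront). */
static uint32_t
map_max_pages_need(uint64_t start, uint64_t size)
{
    if (start == 0) {
        /* Placement unknown: assume the range straddles as many page
           tables as possible, i.e. it starts on the last page of one. */
        start = TABLE_RANGE - B_PAGE_SIZE;
    }

    uint64_t firstTable = start / TABLE_RANGE;
    uint64_t lastTable = (start + size - 1) / TABLE_RANGE;
    return (uint32_t)(lastTable - firstTable + 1);
}
```

Reserving this worst case before taking any mapping locks is what guarantees the mapping path never has to wait for (or steal from) the very page machinery it is currently holding locks on.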
#
393fceb5a0d8bd8b73059481ca0f20d285ac7a89 |
|
27-Sep-2007 |
Axel Dörfler <axeld@pinc-software.de> |
* Cleaned up vm_types.h a bit, and made vm_page, vm_cache, and vm_area opaque types for C. * As a result, I've renamed some more source files to .cpp, and fixed all warnings caused by that. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@22326 a95241bf-73f2-0310-859d-f6bbb57e9c96
|