Searched +hist:8 +hist:e0f884c (Results 1 - 3 of 3) sorted by relevance

/haiku/src/system/kernel/cache/
file_cache.cpp
diff 8be7c1aa Sat Nov 05 04:16:40 MDT 2022 X512 <danger_mail@list.ru> kernel/file_cache: fix VMCache object leak

Fixes #18039.

Change-Id: Ia3cda69f91e56efb36931a97028378ec3ceb2100
Reviewed-on: https://review.haiku-os.org/c/haiku/+/5801
Reviewed-by: Fredrik Holmqvist <fredrik.holmqvist@gmail.com>
Reviewed-by: Jérôme Duval <jerome.duval@gmail.com>
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
diff 8ba0b5eb Sun May 17 11:22:29 MDT 2020 Augustin Cavalier <waddlesplash@gmail.com> kernel/file_cache: Properly size I/O request vectors when writing zeros.

Should fix #16039.

Change-Id: Ifc5c79354979aaa7b27b09acc6d6450e21146e76
Reviewed-on: https://review.haiku-os.org/c/haiku/+/2727
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
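
For illustration, a minimal sketch of the vector-sizing idea the fix above addresses: rounding the vec count up so a run of zeros is fully covered. The names and the shared zero buffer are assumptions for the sketch, not the actual file cache code.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // A single shared buffer of zeros that every vec points at.
    static const size_t kZeroChunk = 64 * 1024;
    static char sZeroBuffer[kZeroChunk];  // static storage: zero-initialized

    struct IOVec {
        const void* base;
        size_t length;
    };

    // Build enough vecs to cover 'bytes' of zeros; the bug class fixed by
    // the commit above is an undersized vector for such a range.
    static std::vector<IOVec> BuildZeroVecs(size_t bytes)
    {
        size_t count = (bytes + kZeroChunk - 1) / kZeroChunk;  // round up
        std::vector<IOVec> vecs(count);
        for (size_t i = 0; i < count; i++) {
            vecs[i].base = sZeroBuffer;
            vecs[i].length = std::min(kZeroChunk, bytes - i * kZeroChunk);
        }
        return vecs;
    }
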
diff 8a26f35a Mon Apr 20 02:02:51 MDT 2009 Axel Dörfler <axeld@pinc-software.de> * Sequential write accesses were never detected due to an incorrect check.
* The previous code would have scheduled only a single page to be written out
(if it had ever been triggered); now we schedule the complete previous write
access. This greatly speeds up a "dd if=/dev/zero of=test ..." beyond the
size of available memory.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@30276 a95241bf-73f2-0310-859d-f6bbb57e9c96
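
The detection this commit fixes boils down to comparing a new access against the end of the previous one. A self-contained sketch with invented names (not the cache_io() internals):

    #include <cstdint>

    struct AccessTracker {
        int64_t lastOffset = -1;
        int64_t lastLength = 0;

        // A write is sequential when it starts exactly where the previous
        // access ended; a wrong comparison here means sequential runs are
        // never recognized.
        bool IsSequential(int64_t offset) const
        {
            return lastOffset >= 0 && offset == lastOffset + lastLength;
        }

        void Record(int64_t offset, int64_t length)
        {
            lastOffset = offset;
            lastLength = length;
        }
    };

On a hit, the whole range [lastOffset, lastOffset + lastLength) would be scheduled for write-back, rather than a single page.
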
diff 8e0f884c Wed Sep 26 11:42:25 MDT 2007 Axel Dörfler <axeld@pinc-software.de> * Since the page scanner and thief can work more effectively when no vm_caches
are locked, there is now a vm_page_reserve_pages() call to ensure upfront that
a page will be available when it is needed, even though some caches may be
locked at that point.
* The vm_soft_fault() routine now makes use of that feature.
* vm_page_allocate_page() now resets the vm_page::usage_count, so that the file
cache does not need to do this in read_chunk_into_cache() and
write_chunk_to_cache().
* In cache_io(), however, it needs to update the usage_count - and it does that
now. Since non-mapped caches don't have mappings, the page scanner will punish
the cache pages more strongly than other pages, which is accidentally just what
we want.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@22319 a95241bf-73f2-0310-859d-f6bbb57e9c96
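
A minimal, self-contained analogue of the reserve-then-allocate pattern described above (std::mutex and all names here are stand-ins, not Haiku's kernel primitives): reserve from the pool before taking any cache lock, so the later allocation can never have to wait.

    #include <cassert>
    #include <mutex>

    static std::mutex sPoolLock;
    static int sFreePages = 1024;
    static int sReservedPages = 0;

    // Called with no cache locks held; may fail (a kernel would wait/scan).
    static bool reserve_pages(int count)
    {
        std::lock_guard<std::mutex> locker(sPoolLock);
        if (sFreePages - sReservedPages < count)
            return false;
        sReservedPages += count;
        return true;
    }

    // Safe to call with caches locked: a prior reservation guarantees that
    // a page is set aside, so this never blocks on memory.
    static void allocate_reserved_page()
    {
        std::lock_guard<std::mutex> locker(sPoolLock);
        assert(sReservedPages > 0);
        sReservedPages--;
        sFreePages--;
    }
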
/haiku/src/system/kernel/vm/
vm_page.cpp
diff 8ddec5b5 Tue Nov 06 11:40:47 MST 2012 Jerome Duval <jerome.duval@gmail.com> vm_page_allocate_page_run: fix previous commit

* remove superfluous code
* when aligning, sPhysicalPageOffset would be subtracted twice
+alpha4

Signed-off-by: Ingo Weinhold <ingo_weinhold at gmx dot de>
diff 8a8043dc Sun Jun 13 12:59:11 MDT 2010 Ingo Weinhold <ingo_weinhold@gmx.de> vm_page_get_stats(): Forgot to add sIgnoredPages to system_info::max_pages
in r37117 as well, which was really the main point of introducing it.
Improves #6124 (reported memory lower than installed).


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37127 a95241bf-73f2-0310-859d-f6bbb57e9c96
diff 8ccc58ff Mon May 10 11:26:50 MDT 2010 Ingo Weinhold <ingo_weinhold@gmx.de> Missing line breaks in dprintf().


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36785 a95241bf-73f2-0310-859d-f6bbb57e9c96
diff 86c794e5 Thu Jan 21 16:10:52 MST 2010 Ingo Weinhold <ingo_weinhold@gmx.de> slab allocator:
* Implemented a more elaborate raw memory allocation backend (MemoryManager).
We allocate 8 MB areas whose pages we allocate and map when needed. An area is
divided into equally-sized chunks which form the basic units of allocation. We
have areas with three possible chunk sizes (small, medium, large), which is
basically what the ObjectCache implementations were using anyway.
* Added "uint32 flags" parameter to several of the slab allocator's object
cache and object depot functions. E.g. object_depot_store() potentially wants
to allocate memory for a magazine. But also in pure freeing functions it
might eventually become useful to have those flags, since they could end up
deleting an area, which might not be allowable in all situations. We should
introduce specific flags to indicate that.
* Reworked the block allocator. Since the MemoryManager allocates block-aligned
areas, maintains a hash table for lookup, and maps chunks to object caches,
we can quickly find out which object cache a to-be-freed allocation belongs
to and thus don't need the boundary tags anymore.
* Reworked the slab bootstrap process. We allocate from the initial area only
when really necessary, i.e. when the object cache for the respective
allocation size has not been created yet. A single page is thus sufficient.

other:
* vm_allocate_early(): Added boolean "blockAlign" parameter. If true, the
semantics are the same as for B_ANY_KERNEL_BLOCK_ADDRESS.
* Use an object cache for page mappings. This significantly reduces the
contention on the heap bin locks.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35232 a95241bf-73f2-0310-859d-f6bbb57e9c96
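
The "no boundary tags" point follows from the area alignment: when each area is aligned to its own size, masking a freed pointer recovers the area base directly. A sketch under assumed types (not the real MemoryManager):

    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    struct ObjectCache;  // opaque for this sketch

    static const uintptr_t kAreaSize = 8 * 1024 * 1024;  // block-aligned areas

    struct Area {
        size_t chunkSize;                      // small, medium or large
        std::vector<ObjectCache*> chunkOwner;  // one owning cache per chunk
    };

    static std::unordered_map<uintptr_t, Area> sAreaTable;  // keyed by base

    static ObjectCache* CacheForAllocation(void* address)
    {
        uintptr_t addr = (uintptr_t)address;
        uintptr_t base = addr & ~(kAreaSize - 1);  // valid thanks to alignment
        auto it = sAreaTable.find(base);
        if (it == sAreaTable.end())
            return nullptr;  // not a slab-managed address
        const Area& area = it->second;
        return area.chunkOwner[(addr - base) / area.chunkSize];
    }
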
diff 7372a88a Sun Jan 10 19:52:53 MST 2010 Ingo Weinhold <ingo_weinhold@gmx.de> Replaced the page queue mutexes by spinlocks. The critical sections are very
short and quite hot, so mutexes just cause more overhead due to frequent
rescheduling than waiting for the spinlocks does. The free and clear queues
are additionally protected by an R/W lock, which is mostly read-locked, save
for rare cases like allocating page runs.

The total -j8 Haiku image build speedup is marginal. The kernel time drops
about 8%, though.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35004 a95241bf-73f2-0310-859d-f6bbb57e9c96
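
The scheme described here layers a mostly-read-locked R/W lock over short per-queue spinlocks; roughly as follows, with userland std:: primitives standing in for the kernel's:

    #include <atomic>
    #include <deque>
    #include <mutex>
    #include <shared_mutex>

    struct Spinlock {
        std::atomic_flag flag = ATOMIC_FLAG_INIT;
        void lock() { while (flag.test_and_set(std::memory_order_acquire)) {} }
        void unlock() { flag.clear(std::memory_order_release); }
    };

    static std::shared_mutex sFreeQueuesLock;  // R/W lock over free/clear queues
    static Spinlock sFreeQueueLock;            // short, hot critical sections
    static std::deque<int> sFreeQueue;         // page numbers, for illustration

    // Hot path: shared (read) lock plus a brief spinlock.
    static int PopFreePage()
    {
        std::shared_lock<std::shared_mutex> read(sFreeQueuesLock);
        sFreeQueueLock.lock();
        int page = -1;
        if (!sFreeQueue.empty()) {
            page = sFreeQueue.front();
            sFreeQueue.pop_front();
        }
        sFreeQueueLock.unlock();
        return page;
    }

    // Rare path (e.g. allocating a contiguous page run): the exclusive lock
    // keeps every queue user out while the run is assembled.
    static void AllocatePageRun()
    {
        std::unique_lock<std::shared_mutex> write(sFreeQueuesLock);
        // scan the queue and remove a contiguous run here
    }
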
diff 8c91d29b Tue Apr 07 08:06:07 MDT 2009 Ingo Weinhold <ingo_weinhold@gmx.de> * PageWriterRun::Go(), vm_page_write_modified_page_range(): When writing the
page failed because the cache has been shrunk, we not only need to remove the
page from the cache, we also need to remove all of its area mappings and
free it. Not removing the area mappings might have been the cause of #3110;
not freeing it would cause it to be leaked for good.
* vm_page_write_modified_page_range(): When writing failed for another reason
and the page wasn't in the modified queue before, we would lose the information
about which queue it was in, and setting the page state to modified would
assume the active queue. This could potentially screw up our page queue
structures.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@29992 a95241bf-73f2-0310-859d-f6bbb57e9c96
diff 8b6f1d5e Wed Oct 08 21:53:00 MDT 2008 Rene Gollent <anevilyak@gmail.com> Fixed warnings with tracing enabled and made some very noisy dprintfs trace-only
as they were flooding the syslog.



git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@27935 a95241bf-73f2-0310-859d-f6bbb57e9c96
diff 8d12bd13 Tue Aug 05 18:28:28 MDT 2008 Ingo Weinhold <ingo_weinhold@gmx.de> * Moved the incrementing/decrementing of vm_page::wired_count into
dedicated functions.
* Introduced gMappedPagesCount variable which counts the total number of
physical pages that are mapped.
* Added vm_page_get_stats() which fills in the memory-related part of
the system_info structure. Used and cached pages are computed
differently now. The "available" (== not committed) memory is no
longer used for the computation, as it doesn't say anything about the
actually used/free pages (with swap support enabled it is even
less meaningful, since we commit swap space first when possible).
We now also count the memory used by the block cache as cached
pages. All in all, these changes should fix the memory statistics
reported by get_system_info(), IOW bug #2574.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@26837 a95241bf-73f2-0310-859d-f6bbb57e9c96
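
In back-of-the-envelope form, the new accounting counts actual pages rather than commitments (field names invented for the sketch):

    #include <cstdint>

    struct PageStats {
        uint64_t totalPages;
        uint64_t freePages;
        uint64_t fileCachePages;   // pages held by file caches
        uint64_t blockCachePages;  // now counted as cached, too
    };

    static void ComputeUsage(const PageStats& stats,
        uint64_t& usedPages, uint64_t& cachedPages)
    {
        cachedPages = stats.fileCachePages + stats.blockCachePages;
        usedPages = stats.totalPages - stats.freePages - cachedPages;
    }
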
diff 8b76a59a Mon Mar 17 20:39:04 MDT 2008 Ingo Weinhold <ingo_weinhold@gmx.de> Fixed race condition in the page writer: The state of the page we have
picked might have changed while we were locking its cache. Might fix
#1931.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@24430 a95241bf-73f2-0310-859d-f6bbb57e9c96
diff 8e0f884c Wed Sep 26 11:42:25 MDT 2007 Axel Dörfler <axeld@pinc-software.de> * Since the page scanner and thief can work more effectively when no vm_caches
are locked, there is now a vm_page_reserve_pages() call to ensure upfront that
a page will be available when it is needed, even though some caches may be
locked at that point.
* The vm_soft_fault() routine now makes use of that feature.
* vm_page_allocate_page() now resets the vm_page::usage_count, so that the file
cache does not need to do this in read_chunk_into_cache() and
write_chunk_to_cache().
* In cache_io(), however, it needs to update the usage_count - and it does that
now. Since non-mapped caches don't have mappings, the page scanner will punish
the cache pages more strongly than other pages, which is accidentally just what
we want.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@22319 a95241bf-73f2-0310-859d-f6bbb57e9c96
vm.cpp
diff 8ca0f03d Mon Nov 08 23:39:16 MST 2021 X512 <danger_mail@list.ru> riscv64/smp: Implement multi-processor support

* Working under qemu smp 1,2+
* Working on SiFive Unmatched
* x86_64 efi not broken by smp_boot_other_cpus change

Change-Id: I32ebc17913e46ed082be9ade8f56448bbf12f16e
Reviewed-on: https://review.haiku-os.org/c/haiku/+/4705
Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
Reviewed-by: Alex von Gluck IV <kallisti5@unixzen.com>
diff 8e74e307 Fri May 29 07:23:49 MDT 2020 Michael Lotz <mmlr@mlotz.ch> kernel/vm: Add discard_address_range that discards pages.

Pages in the given range are unmapped and freed without getting written
back anywhere. It can be used whenever a caller does not care about the
data in the given range anymore and wants to reduce page pressure.

Change-Id: I8bcce68fab278efef710d3714677e1d463504a56
Reviewed-on: https://review.haiku-os.org/c/haiku/+/2843
Reviewed-by: waddlesplash <waddlesplash@gmail.com>
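
discard_address_range() itself is kernel-internal; the closest userland analogue of the semantics described above is POSIX madvise(), shown here only to illustrate the contract (drop the data, skip the write-back):

    #include <sys/mman.h>
    #include <cstddef>
    #include <cstdio>

    int main()
    {
        const size_t size = 16 * 4096;
        void* block = mmap(nullptr, size, PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (block == MAP_FAILED)
            return 1;

        // ... fill and use the buffer, then decide the data is garbage ...

        // The pages may now be reclaimed without being written anywhere.
        if (madvise(block, size, MADV_DONTNEED) != 0)
            perror("madvise");

        return munmap(block, size);
    }
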
diff 8a0c9d52 Sat Aug 10 13:51:41 MDT 2019 Augustin Cavalier <waddlesplash@gmail.com> OS: Rename B_USER_CLONEABLE_AREA to B_CLONEABLE_AREA.

It now lives in OS.h. The idea is that this will now be
accessible to userland applications, so userland memory
is protected from access by other processes, just as
kernel memory is.

No functional change (the constants are still the same,
though I've changed some to use shifts to make clear
which bits are allocated and which are unused.)
diff 8a1709b3 Tue Dec 11 11:37:39 MST 2018 Augustin Cavalier <waddlesplash@gmail.com> vm: Allow W|X before kernel startup ends.

The altcodepatch mechanism needs to overwrite parts of the kernel
image. This can't be done by setting it to RW-only and not RWX,
as we are already running within the kernel when this occurs,
and so instruction fetches can and will occur between the points
of +W and -W.

As gKernelStartup is turned off before the scheduler is started,
this is not much of a lifted restriction, as no modules are loaded,
no secondary threads started, etc.

Fixes #14751.
diff 8ef857d8 Tue Oct 28 19:34:56 MDT 2014 Ingo Weinhold <ingo_weinhold@gmx.de> vm_soft_fault(): Avoid inconsistent state when seeing wired page

When we encounter a wired page that we'd have to unmap to map our newly
allocated one, we need to get rid of the latter before unlocking
everything and waiting for the wired page. Otherwise we'd leave things
in an inconsistent state (a page from an upper cache shadowing a mapped
page from a lower cache).
diff c3f0fd28 Thu Jul 12 04:29:33 MDT 2012 Alex Smith <alex@alex-smith.me.uk> Fixed formatting of output in some debugger commands.

Currently all debugger commands assume 32-bit pointers when formatting their
output. This means that on x86_64 the output is incorrectly formatted. Fixed
this by adding a B_PRINTF_POINTER_WIDTH definition (16 on 64-bit, 8 on
32-bit), and using this to correctly format the output. Not all commands have
been fixed yet, but all VM, slab, VFS, team, thread and image commands should
be correct.
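
The definition this commit describes can be reproduced in a few lines; the preprocessor test below is an assumption for the sketch (Haiku presumably keys off its own architecture macros):

    #include <cstdio>

    #if defined(__LP64__) || defined(_WIN64)
    #   define B_PRINTF_POINTER_WIDTH 16
    #else
    #   define B_PRINTF_POINTER_WIDTH 8
    #endif

    static void PrintEntry(const char* name, const void* address)
    {
        // '*' consumes the width argument; +2 leaves room for the "0x" prefix.
        printf("%-16s %#0*lx\n", name, B_PRINTF_POINTER_WIDTH + 2,
            (unsigned long)address);
    }
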
diff dac21d8b Thu Feb 18 06:52:43 MST 2010 Ingo Weinhold <ingo_weinhold@gmx.de> * map_physical_memory() now always sets a memory type. If none is given (it
needs to be or'ed to the address specification), "uncached" is assumed.
* Set the memory type for the "BIOS" and "DMA" areas to write-back. Not sure if
that's correct, but that's what was effectively used on my machines before.
* Changed x86_set_mtrrs() and the CPU module hook to also set the default memory
type.
* Rewrote the MTRR computation once more:
- Now we know all used memory ranges, so we are free to extend used ranges
into unused ones in order to simplify them for MTRR setup.
- Leverage the subtractive properties of uncached and write-through ranges to
simplify ranges of any other type (respectively, of the write-back type).
- Set the default memory type to write-back, so we don't need MTRRs for the
RAM ranges.
- If a new range intersects with an existing one, we no longer just fail.
Instead we use the strictest requirements implied by the ranges. This fixes
#5383.

Overall the new algorithm should be sufficient with far less MTRRs than before
(on my desktop machine 4 are used at maximum, while 8 didn't quite suffice
before). A drawback of the current implementation is that it doesn't deal with
the case of running out of MTRRs at all, which might result in some ranges
having weaker caching/memory ordering properties than requested.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35515 a95241bf-73f2-0310-859d-f6bbb57e9c96
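
The "strictest requirements" rule in the last point reduces to a simple combine when two ranges intersect. A toy version (the ordering below is an assumption; the real strictness relation is more involved):

    #include <cstdint>

    // Smaller value = stricter caching requirement (assumed ordering).
    enum MemoryType : int {
        UNCACHED = 0,
        WRITE_THROUGH = 1,
        WRITE_BACK = 2,
    };

    struct MemoryRange {
        uint64_t base;
        uint64_t size;
        MemoryType type;
    };

    // Instead of failing on intersection, adopt the stricter of the two types.
    static MemoryType CombineTypes(MemoryType a, MemoryType b)
    {
        return a < b ? a : b;
    }
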
diff 8d1316fd Fri Jan 22 14:19:23 MST 2010 Ingo Weinhold <ingo_weinhold@gmx.de> Replaced CACHE_DONT_SLEEP by two new flags CACHE_DONT_WAIT_FOR_MEMORY and
CACHE_DONT_LOCK_KERNEL_SPACE. If the former is given, the slab memory manager
does not wait when reserving memory or pages. The latter prevents area
operations. The new flags add a bit of flexibility. E.g. when allocating page
mapping objects for userland areas, CACHE_DONT_WAIT_FOR_MEMORY is sufficient,
i.e. the allocation will succeed as long as pages are available.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35246 a95241bf-73f2-0310-859d-f6bbb57e9c96
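
A toy illustration of the split (bit values and helpers are invented; only the two flag names come from the commit above):

    #include <cstddef>
    #include <cstdint>
    #include <new>

    enum : uint32_t {
        CACHE_DONT_WAIT_FOR_MEMORY   = 1u << 0,  // fail rather than wait
        CACHE_DONT_LOCK_KERNEL_SPACE = 1u << 1,  // no area operations allowed
    };

    static void* TryAllocateFromExistingSlabs(size_t) { return nullptr; }

    static void* GrowSlabAndAllocate(size_t size, bool /*canWait*/)
    {
        return ::operator new(size, std::nothrow);
    }

    static void* SlabAllocate(size_t size, uint32_t flags)
    {
        if (void* object = TryAllocateFromExistingSlabs(size))
            return object;

        // Growing the cache means creating an area: forbidden by this flag.
        if ((flags & CACHE_DONT_LOCK_KERNEL_SPACE) != 0)
            return nullptr;

        bool canWait = (flags & CACHE_DONT_WAIT_FOR_MEMORY) == 0;
        return GrowSlabAndAllocate(size, canWait);
    }
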
diff 8a65066a Fri Jan 22 12:58:04 MST 2010 Ingo Weinhold <ingo_weinhold@gmx.de> vm_soft_fault(): map_page() can fail for B_NO_LOCK areas, since it needs to
allocate a page mapping. In that case we do at least have to mark the page
not busy again. Furthermore we enforce the minimum page mappings object cache
reserve, so we'll have more luck on the next fault.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35241 a95241bf-73f2-0310-859d-f6bbb57e9c96
diff 86c794e5 Thu Jan 21 16:10:52 MST 2010 Ingo Weinhold <ingo_weinhold@gmx.de> slab allocator:
* Implemented a more elaborate raw memory allocation backend (MemoryManager).
We allocate 8 MB areas whose pages we allocate and map when needed. An area is
divided into equally-sized chunks which form the basic units of allocation. We
have areas with three possible chunk sizes (small, medium, large), which is
basically what the ObjectCache implementations were using anyway.
* Added "uint32 flags" parameter to several of the slab allocator's object
cache and object depot functions. E.g. object_depot_store() potentially wants
to allocate memory for a magazine. But also in pure freeing functions it
might eventually become useful to have those flags, since they could end up
deleting an area, which might not be allowable in all situations. We should
introduce specific flags to indicate that.
* Reworked the block allocator. Since the MemoryManager allocates block-aligned
areas, maintains a hash table for lookup, and maps chunks to object caches,
we can quickly find out which object cache a to-be-freed allocation belongs
to and thus don't need the boundary tags anymore.
* Reworked the slab bootstrap process. We allocate from the initial area only
when really necessary, i.e. when the object cache for the respective
allocation size has not been created yet. A single page is thus sufficient.

other:
* vm_allocate_early(): Added boolean "blockAlign" parameter. If true, the
semantics are the same as for B_ANY_KERNEL_BLOCK_ADDRESS.
* Use an object cache for page mappings. This significantly reduces the
contention on the heap bin locks.


git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35232 a95241bf-73f2-0310-859d-f6bbb57e9c96

Completed in 464 milliseconds