#
310442 |
|
22-Dec-2016 |
jhibbits |
MFC r304052:
Add missing pmap_kremove() method for book-e.
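A minimal sketch of the kenter/kremove pairing this method completes; the wrapper function and the scratch VA are hypothetical, and the pmap_kenter() signature is assumed to take a vm_paddr_t:

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/pmap.h>

    /* Map a physical page at a kernel VA, use it, then tear it down. */
    static void
    scratch_access(vm_offset_t va, vm_paddr_t pa)
    {
        pmap_kenter(va, pa);    /* wired, unmanaged kernel mapping */
        /* ... touch the page through va ... */
        pmap_kremove(va);       /* the method Book-E was missing */
    }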
|
#
302408 |
|
07-Jul-2016 |
gjb |
Copy head@r302406 to stable/11 as part of the 11.0-RELEASE cycle. Prune svn:mergeinfo from the new branch, as nothing has been merged here.
Additional commits post-branch will follow.
Approved by: re (implicit) Sponsored by: The FreeBSD Foundation |
#
296142 |
|
27-Feb-2016 |
jhibbits |
Implement pmap_change_attr() for PowerPC (Book-E only for now)
Summary: Some drivers have special memory mapping requirements. x86 solves this with the pmap_change_attr() API, which DRM uses to change the mapping of the GART and other memory regions. Implement the same function for PowerPC. AIM currently does not need it, but will in the future for DRM, so a default is added for that case to keep business as usual. Book-E has drivers coming down that do require non-default memory coherency. In this case, the Datapath Acceleration Architecture (DPAA) based Ethernet controller has two regions for the buffer portals: cache-inhibited and cache-enabled. By default, device memory is cache-inhibited; if the cache-enabled memory regions are mapped cache-inhibited, an alignment exception is thrown on access.
Test Plan: Tested with a new driver to be added after this (DPAA dTSEC ethernet driver). No alignment exceptions thrown, driver works as expected with this.
Reviewed By: nwhitehorn Sponsored by: Alex Perez/Inertial Computing Differential Revision: https://reviews.freebsd.org/D5471
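A hedged sketch of the intended use, loosely following the DPAA portal case above; the driver fragment and the exact attribute constant are illustrative assumptions:

    #include <vm/vm.h>
    #include <vm/pmap.h>

    /*
     * Hypothetical attach fragment: the cache-enabled portal region
     * must not keep the cache-inhibited device-memory default, or
     * accesses fault with an alignment exception.
     */
    static int
    portal_set_cacheable(vm_offset_t va, vm_size_t size)
    {
        return (pmap_change_attr(va, size, VM_MEMATTR_CACHEABLE));
    }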
|
#
286296 |
|
04-Aug-2015 |
jah |
Add two new pmap functions:
vm_offset_t pmap_quick_enter_page(vm_page_t m)
void pmap_quick_remove_page(vm_offset_t kva)
These will create and destroy a temporary, CPU-local KVA mapping of a specified page.
Guarantees:
--Will not sleep and will not fail.
--Safe to call under a non-sleepable lock or from an ithread
Restrictions:
--Not guaranteed to be safe to call from an interrupt filter or under a spin mutex on all platforms
--Current implementation does not guarantee more than one page of mapping space across all platforms. MI code should not make nested calls to pmap_quick_enter_page.
--MI code should not perform locking while holding onto a mapping created by pmap_quick_enter_page
The idea is to use this in busdma, for bounce buffer copies as well as virtually-indexed cache maintenance on mips and arm.
NOTE: the non-i386, non-amd64 implementations of these functions still need review and testing.
Reviewed by: kib Approved by: kib (mentor) Differential Revision: http://reviews.freebsd.org/D3013
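A minimal sketch of the bounce-copy pattern these are intended for; the helper function is hypothetical:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <vm/vm.h>
    #include <vm/vm_page.h>
    #include <vm/pmap.h>

    /* Copy out of an arbitrary page with only a transient mapping. */
    static void
    bounce_copyout(vm_page_t m, void *dst, size_t len)
    {
        vm_offset_t kva;

        kva = pmap_quick_enter_page(m);  /* will not sleep or fail */
        bcopy((void *)kva, dst, len);    /* len <= PAGE_SIZE; no locking here */
        pmap_quick_remove_page(kva);     /* never nest these calls */
    }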
|
#
285148 |
|
04-Jul-2015 |
jhibbits |
Use the correct type for physical addresses.
On Book-E, physical addresses are actually 36 bits, not 32 bits. This is currently worked around by ignoring the top bits. However, in some cases the boot loader configures the CCSR to something above the 32-bit mark. This is stage 1 in updating the pmap to handle 36-bit physical addresses.
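An illustrative fragment of the truncation being addressed; it assumes vm_paddr_t has been widened to 64 bits, which is what this series works toward, and the values are hypothetical:

    #include <sys/types.h>
    #include <vm/vm.h>

    /* Storing a 36-bit Book-E physical address in a 32-bit type. */
    static uint32_t
    truncate_demo(void)
    {
        vm_paddr_t ccsr_pa = 0xfe0000000ULL; /* CCSR above the 32-bit mark */

        return ((uint32_t)ccsr_pa);          /* 0xe0000000: top bits lost */
    }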
|
#
276772 |
|
06-Jan-2015 |
markj |
Factor out duplicated code from dumpsys() on each architecture into generic code in sys/kern/kern_dump.c. Most dumpsys() implementations are nearly identical and simply redefine a number of constants and helper subroutines; a generic implementation will make it easier to implement features around kernel core dumps. This change does not alter any minidump code and should have no functional impact.
PR: 193873 Differential Revision: https://reviews.freebsd.org/D904 Submitted by: Conrad Meyer <conrad.meyer@isilon.com> Reviewed by: jhibbits (earlier version) Sponsored by: EMC / Isilon Storage Division
|
#
269728 |
|
08-Aug-2014 |
kib |
Change the pmap_enter(9) interface to take a flags parameter and a superpage mapping size (currently unused). The flags include the fault access bits, the wired flag as PMAP_ENTER_WIRED, and a new flag, PMAP_ENTER_NOSLEEP, to indicate that the pmap should not sleep.
For powerpc AIM, both 32- and 64-bit, fix the implementation to ensure that the requested mapping is created when PMAP_ENTER_NOSLEEP is not specified; in particular, wait for the memory required to proceed to become available.
In collaboration with: alc Tested by: nwhitehorn (ppc aim32 and booke) Sponsored by: The FreeBSD Foundation and EMC / Isilon Storage Division MFC after: 2 weeks
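A hedged sketch of a caller under the new interface; the retry structure and helper name are illustrative:

    #include <vm/vm.h>
    #include <vm/vm_param.h>
    #include <vm/pmap.h>
    #include <vm/vm_page.h>

    /* Try without sleeping first; retry with sleeping allowed. */
    static int
    enter_wired(pmap_t pmap, vm_offset_t va, vm_page_t m, vm_prot_t prot)
    {
        int rv;

        rv = pmap_enter(pmap, va, m, prot,
            prot | PMAP_ENTER_WIRED | PMAP_ENTER_NOSLEEP, 0);
        if (rv != KERN_SUCCESS)
            rv = pmap_enter(pmap, va, m, prot,
                prot | PMAP_ENTER_WIRED, 0);
        return (rv);
    }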
|
#
269485 |
|
03-Aug-2014 |
alc |
Retire pmap_change_wiring(). We have never used it to wire virtual pages. We continue to use pmap_enter() for that. For unwiring virtual pages, we now use pmap_unwire(), which unwires a range of virtual addresses instead of a single virtual page.
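The shape of the replacement, as a hedged fragment with hypothetical bounds:

    #include <vm/vm.h>
    #include <vm/pmap.h>

    static void
    unwire_entry(pmap_t pmap, vm_offset_t start, vm_offset_t end)
    {
        /* One ranged call replaces a pmap_change_wiring() per page. */
        pmap_unwire(pmap, start, end);
    }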
Sponsored by: EMC / Isilon Storage Division
|
#
268591 |
|
13-Jul-2014 |
alc |
Implement pmap_unwire(). See r268327 for the motivation behind this change.
|
#
255887 |
|
26-Sep-2013 |
alc |
Eliminate the declaration for a method that is no longer used. (This change should have been a part of r255724.)
Reminded by: nathan Approved by: re (gjb)
|
#
255028 |
|
29-Aug-2013 |
alc |
Significantly reduce the cost, i.e., run time, of calls to madvise(..., MADV_DONTNEED) and madvise(..., MADV_FREE). Specifically, introduce a new pmap function, pmap_advise(), that operates on a range of virtual addresses within the specified pmap, allowing for a more efficient implementation of MADV_DONTNEED and MADV_FREE. Previously, the implementation of MADV_DONTNEED and MADV_FREE relied on per-page pmap operations, such as pmap_clear_reference(). Intuitively, the problem with this implementation is that the pmap-level locks are acquired and released and the page table traversed repeatedly, once for each resident page in the range that was specified to madvise(2). A more subtle flaw with the previous implementation is that pmap_clear_reference() would clear the reference bit on all mappings to the specified page, not just the mapping in the range specified to madvise(2).
Since our malloc(3) makes heavy use of madvise(2), this change can have a measurable impact. For example, the system time for completing a parallel "buildworld" on a 6-core amd64 machine was reduced by about 1.5% to 2.0%.
Note: This change only contains pmap_advise() implementations for a subset of our supported architectures. I will commit implementations for the remaining architectures after further testing. For now, a stub function is sufficient because of the advisory nature of pmap_advise().
Discussed with: jeff, jhb, kib Tested by: pho (i386), marcel (ia64) Sponsored by: EMC / Isilon Storage Division
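A hedged sketch of the new call as the madvise(2) path might use it; the helper is hypothetical:

    #include <sys/mman.h>
    #include <vm/vm.h>
    #include <vm/pmap.h>

    /*
     * One ranged call: the pmap lock is taken once and the page
     * table walked once, instead of once per resident page.
     */
    static void
    advise_dontneed(pmap_t pmap, vm_offset_t sva, vm_offset_t eva)
    {
        pmap_advise(pmap, sva, eva, MADV_DONTNEED);
    }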
|
#
248280 |
|
14-Mar-2013 |
kib |
Add a pmap function, pmap_copy_pages(), which copies content between pages, taking arrays of vm_page_t for both source and destination. Starting offsets and the total transfer size are specified.
The function implements an optimal copying algorithm using platform-specific optimizations. For instance, on architectures where the direct map is available, no transient mappings are created; for i386, the per-CPU ephemeral page frame is used. The code was typically borrowed from pmap_copy_page() for the same architecture.
Only the i386/amd64, powerpc AIM, and arm/arm-v6 implementations were tested at the time of commit. High-level code, not yet committed to the tree, ensures that use of the function is only allowed after explicit enablement.
For sparc64, the existing code has known issues, so a stub is added instead to allow the kernel to link.
Sponsored by: The FreeBSD Foundation Tested by: pho (i386, amd64), scottl (amd64), ian (arm and arm-v6) MFC after: 2 weeks
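The resulting interface, in a hedged caller sketch (the wrapper and array layout are hypothetical):

    #include <vm/vm.h>
    #include <vm/vm_page.h>
    #include <vm/pmap.h>

    /*
     * Copy xfersize bytes from one run of pages to another, starting
     * at the given byte offsets; the pmap picks the mechanism
     * (direct map, per-CPU ephemeral frame, ...).
     */
    static void
    copy_span(vm_page_t src[], vm_offset_t soff,
        vm_page_t dst[], vm_offset_t doff, int xfersize)
    {
        pmap_copy_pages(src, soff, dst, doff, xfersize);
    }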
|
#
238357 |
|
10-Jul-2012 |
alc |
Avoid recursion on the pvh global lock in the aim oea pmap.
Correct the return type of the pmap_ts_referenced() implementations.
Reported by: jhibbits [1] Tested by: andreast
|
#
236000 |
|
25-May-2012 |
raj |
Missing vm_paddr_t bits which should have been part of r235936.
|
#
225418 |
|
06-Sep-2011 |
kib |
Split the vm_page flags PG_WRITEABLE and PG_REFERENCED out into an atomic flags field. Updates to the atomic flags are performed using atomic ops on the containing word; they do not require any VM lock to be held and are non-blocking. The vm_page_aflag_set(9) and vm_page_aflag_clear(9) functions are provided to modify the aflags.
Document that updates to the flags field now require only the page lock.
Introduce vm_page_reference(9) function to provide a stable KPI and KBI for filesystems like tmpfs and zfs which need to mark a page as referenced.
Reviewed by: alc, attilio Tested by: marius, flo (sparc64); andreast (powerpc, powerpc64) Approved by: re (bz)
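A hedged sketch of the new calls; the wrapper function is hypothetical:

    #include <vm/vm.h>
    #include <vm/vm_page.h>

    /*
     * Atomic, lock-free flag update; vm_page_reference(9) is the
     * stable wrapper intended for filesystems such as tmpfs and zfs.
     */
    static void
    note_reference(vm_page_t m)
    {
        vm_page_reference(m);                 /* stable KPI/KBI */
        /* Equivalent low-level form, no VM lock needed: */
        vm_page_aflag_set(m, PGA_REFERENCED);
    }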
|
#
213307 |
|
30-Sep-2010 |
nwhitehorn |
Add support for memory attributes (pmap_mapdev_attr() and friends) on PowerPC/AIM. This is currently stubbed out on Book-E, since I have no idea how to implement it there.
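A hedged AIM-side sketch; the device range and the attribute choice are illustrative assumptions (attribute constants vary by platform):

    #include <vm/vm.h>
    #include <vm/pmap.h>

    /*
     * Map a device region with an explicit memory attribute instead
     * of the cache-inhibited default of plain pmap_mapdev().
     */
    static void *
    map_framebuffer(vm_paddr_t pa, vm_size_t size)
    {
        return (pmap_mapdev_attr(pa, size, VM_MEMATTR_WRITE_COMBINING));
    }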
|
#
208504 |
|
24-May-2010 |
alc |
Roughly half of a typical pmap_mincore() implementation is machine-independent code. Move this code into mincore(), and eliminate the page queues lock from pmap_mincore().
Push down the page queues lock into pmap_clear_modify(), pmap_clear_reference(), and pmap_is_modified(). Assert that these functions are never passed an unmanaged page.
Eliminate an inaccurate comment from powerpc/powerpc/mmu_if.m: Contrary to what the comment says, pmap_mincore() is not simply an optimization. Without a complete pmap_mincore() implementation, mincore() cannot return either MINCORE_MODIFIED or MINCORE_REFERENCED because only the pmap can provide this information.
Eliminate the page queues lock from vfs_setdirty_locked_object(), vm_pageout_clean(), vm_object_page_collect_flush(), and vm_object_page_clean(). Generally speaking, these are all accesses to the page's dirty field, which are synchronized by the containing vm object's lock.
Reduce the scope of the page queues lock in vm_object_madvise() and vm_page_dontneed().
Reviewed by: kib (an earlier version)
|
#
207155 |
|
24-Apr-2010 |
alc |
Resurrect pmap_is_referenced() and use it in mincore(). Essentially, pmap_ts_referenced() is not always appropriate for checking whether or not pages have been referenced because it clears any reference bits that it encounters. For example, in mincore(), clearing the reference bits has two negative consequences. First, it throws off the activity count calculations performed by the page daemon. Specifically, a page on which mincore() has called pmap_ts_referenced() looks less active to the page daemon than it should. Consequently, the page could be deactivated prematurely by the page daemon. Arguably, this problem could be fixed by having mincore() duplicate the activity count calculation on the page. However, there is a second problem for which that is not a solution. In order to clear a reference on a 4KB page, it may be necessary to demote a 2/4MB page mapping. Thus, a mincore() by one process can have the side effect of demoting a superpage mapping within another process!
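A hedged mincore()-style fragment showing the non-destructive check; the helper is hypothetical:

    #include <sys/mman.h>
    #include <vm/vm.h>
    #include <vm/pmap.h>
    #include <vm/vm_page.h>

    /*
     * Read the referenced state without clearing it;
     * pmap_ts_referenced() would clear the bits and could even
     * demote a superpage mapping in another process.
     */
    static int
    check_referenced(vm_page_t m)
    {
        return (pmap_is_referenced(m) ? MINCORE_REFERENCED_OTHER : 0);
    }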
|
#
198341 |
|
21-Oct-2009 |
marcel |
o Introduce vm_sync_icache() for making the I-cache coherent with the memory or D-cache, depending on the semantics of the platform. vm_sync_icache() is basically a wrapper around pmap_sync_icache() that translates the vm_map_t argument to pmap_t.
o Introduce pmap_sync_icache() to all PMAP implementations. For powerpc it replaces the pmap_page_executable() function, added to solve the I-cache problem in uiomove_fromphys().
o In proc_rwmem(), call vm_sync_icache() when writing to a page that has execute permissions. This assures that when breakpoints are written, the I-cache will be coherent and the process will actually hit the breakpoint.
o This also fixes the Book-E PMAP implementation, which was missing necessary locking while trying to deal with the I-cache coherency in pmap_enter() (read: mmu_booke_enter_locked).
The key property of this change is that the I-cache is made coherent *after* writes have been done. Doing it in the PMAP layer when adding or changing a mapping means that the I-cache is made coherent *before* any writes happen. The difference is key when the I-cache prefetches.
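A hedged proc_rwmem()-style fragment; the helper and variable names are illustrative:

    #include <vm/vm.h>
    #include <vm/vm_extern.h>
    #include <vm/vm_map.h>

    /*
     * Make the I-cache coherent only *after* the write has landed,
     * so prefetched stale instructions cannot win the race.
     */
    static void
    after_debugger_write(vm_map_t map, vm_offset_t va, vm_size_t len,
        vm_prot_t prot)
    {
        if ((prot & VM_PROT_EXECUTE) != 0)
            vm_sync_icache(map, va, len);
    }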
|
#
190684 |
|
04-Apr-2009 |
marcel |
PowerPC, meet kernel core dumps. The support is based on a generic dumper that creates an ELF core file and uses PMAP functions to scan and iterate over memory chunks, as well as to handle memory mappings used during dumping. The PMAP layer can choose to return physical memory chunks or virtual memory chunks. For minidumps, the chunks should be virtual.
The default MMU I/F implementation for the scan_md() method returns NULL. Thus, when a PMAP implementation does not implement the required methods, an empty core file is created. Here, empty means having an ELF header only.
Obtained from: Juniper Networks
|
#
190681 |
|
03-Apr-2009 |
nwhitehorn |
Add support for 64-bit PowerPC CPUs operating in the 64-bit bridge mode provided, for example, on the PowerPC 970 (G5), as well as on related CPUs like the POWER3 and POWER4.
This also adds support for various built-in hardware found on Apple G5 hardware (e.g. the IBM CPC925 northbridge).
Reviewed by: grehan
|
#
179081 |
|
18-May-2008 |
alc |
Retire pmap_addr_hint(). It is no longer used.
|
#
173708 |
|
17-Nov-2007 |
alc |
Prevent the leakage of wired pages in the following circumstances: First, a file is mmap(2)ed and then mlock(2)ed. Later, it is truncated. Under "normal" circumstances, i.e., when the file is not mlock(2)ed, the pages beyond the EOF are unmapped and freed. However, when the file is mlock(2)ed, the pages beyond the EOF are unmapped but not freed because they have a non-zero wire count. This can be a mistake. Specifically, it is a mistake if the sole reason the pages are wired is their wired, managed mappings. Previously, unmapping the pages destroyed these wired, managed mappings but did not reduce the pages' wire count. Consequently, when the file was unmapped, the pages were not unwired because the wired mappings had been destroyed. Moreover, when the vm object was finally destroyed, the pages were leaked because they were still wired. The fix is to reduce the pages' wire count by the number of wired, managed mappings destroyed. To do this, I introduce a new pmap function, pmap_page_wired_mappings(), that returns the number of managed mappings to the given physical page that are wired, and I use this function in vm_object_page_remove().
Reviewed by: tegge MFC after: 6 weeks
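A hedged fragment of the accounting described above; the helper is hypothetical and the surrounding wire-count bookkeeping is elided:

    #include <vm/vm.h>
    #include <vm/pmap.h>
    #include <vm/vm_page.h>

    /*
     * Count the wired, managed mappings before destroying them, so
     * the page's wire count can be reduced by the same amount and
     * the page is not leaked at object teardown.
     */
    static int
    remove_mappings_accounted(vm_page_t m)
    {
        int wirings;

        wirings = pmap_page_wired_mappings(m);
        pmap_remove_all(m);   /* destroys the wired, managed mappings */
        return (wirings);     /* caller subtracts from the wire count */
    }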
|
#
164895 |
|
05-Dec-2006 |
grehan |
Fix a gdb issue where the i-cache was not being updated when a breakpoint was written into a user's address space. The fix is to modify uiomove_fromphys() to sync the i-cache when an executable user-space page is written to.
Alan Cox suggested that there should probably be a higher-level interface to this in the ptrace code, but agreed that this is an OK short-term solution.
Files changed:
pmap.h
- declaration of pmap_page_executable()
pmap_dispatch.c
- pass through the page_executable call to the mmu object
mmu_oea.c
- implement the page_executable method by examining the PTE_EXEC field in the vm_page_t
uio_machdep.c
- in uiomove_fromphys(), if the op was a UIO_WRITE to user-space, and if the page is executable, sync the icache since this is at the least a breakpoint-write from gdb.
Reported by: marcel Tested by: marcel, grehan on g3+g4 Discussed with: alc MFC after: 2 weeks
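A hedged sketch of the uiomove_fromphys() check (this interface was later replaced by pmap_sync_icache() in r198341; the helper, includes, and __syncicache() declaration site are assumptions):

    #include <sys/param.h>
    #include <sys/uio.h>
    #include <vm/vm.h>
    #include <vm/vm_page.h>
    #include <vm/pmap.h>
    #include <machine/cpu.h>

    /*
     * After a UIO_WRITE lands in an executable user page, sync the
     * i-cache so a freshly written breakpoint is seen by ifetch.
     */
    static void
    sync_after_write(struct uio *uio, vm_page_t m, vm_offset_t va, int cnt)
    {
        if (uio->uio_rw == UIO_WRITE && pmap_page_executable(m))
            __syncicache((void *)va, cnt);
    }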
|
#
160889 |
|
01-Aug-2006 |
alc |
Complete the transition from pmap_page_protect() to pmap_remove_write(). Originally, I had adopted sparc64's name, pmap_clear_write(), for the function that is now pmap_remove_write(). However, this function is more like pmap_remove_all() than like pmap_clear_modify() or pmap_clear_reference(), hence, the name change.
The higher-level rationale behind this change is described in src/sys/amd64/amd64/pmap.c revision 1.567. The short version is that I'm trying to clean up and fix our support for execute access.
Reviewed by: marcel@ (ia64)
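The call in a hedged idiomatic form; the wrapper is hypothetical:

    #include <vm/vm.h>
    #include <vm/pmap.h>
    #include <vm/vm_page.h>

    /*
     * Like pmap_remove_all(), this acts on every mapping of the page:
     * write access is revoked everywhere, after which the page's
     * dirty state can be sampled safely.
     */
    static void
    stop_writes(vm_page_t m)
    {
        pmap_remove_write(m); /* was pmap_page_protect(m, VM_PROT_READ) */
    }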
|
#
159627 |
|
14-Jun-2006 |
ups |
Remove the mpte optimization from pmap_enter_quick(). There is a race with the current locking scheme, and removing the optimization should have no measurable performance impact. This fixes page faults leading to panics in pmap_enter_quick_locked() on amd64/i386.
Reviewed by: alc,jhb,peter,ps
|
#
159303 |
|
05-Jun-2006 |
alc |
Introduce the function pmap_enter_object(). It maps a sequence of resident pages from the same object. Use it in vm_map_pmap_enter() to reduce the locking overhead of premapping objects.
Reviewed by: tegge@
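A hedged vm_map_pmap_enter()-style fragment; the wrapper is hypothetical:

    #include <vm/vm.h>
    #include <vm/pmap.h>
    #include <vm/vm_page.h>

    /*
     * Premap a run of resident pages from one object in a single
     * pmap call, amortizing the pmap locking over the whole range.
     */
    static void
    premap_run(pmap_t pmap, vm_offset_t start, vm_offset_t end,
        vm_page_t m_start, vm_prot_t prot)
    {
        pmap_enter_object(pmap, start, end, m_start, prot);
    }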
|
#
157443 |
|
03-Apr-2006 |
peter |
Remove the unused sva and eva arguments from pmap_remove_pages().
|
#
152630 |
|
20-Nov-2005 |
alc |
Eliminate pmap_init2(). It's no longer used.
|
#
152179 |
|
08-Nov-2005 |
grehan |
Insert a layer of indirection to the pmap code, using a kobj for the interface. This allows run-time selection of MMU code, based on CPU-type detection, or tunable-overrides when testing new code.
Pre-requisite for G5 support.
conf/files.powerpc
- remove pmap.c
- add mmu_if.h, mmu_oea.c, pmap_dispatch.c
powerpc/include/mmuvar.h
- definitions for MMU implementations
powerpc/include/pmap.h
- remove pmap_pte_spill declaration
- add pmap_mmu_install declaration
- size the phys_avail array
- pmap_bootstrapped is now global-scope
powerpc/powerpc/machdep.c
- call kobj_machdep_init early in the boot sequence to allow kobj usage prior to SI_SUB_LOCK
- install the OEA pmap code. This will be moved to CPU-specific init code in the future.
powerpc/powerpc/mmu_if.m
- Kobj MMU interface definitions
powerpc/powerpc/pmap_dispatch.c
- central dispatch for pmap calls
- contains the global mmu kobj and the routine to locate the mmu implementation and init the kobj
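A condensed sketch of the dispatch shape (growkernel chosen as a one-argument example; the details are illustrative, following mmu_if.m conventions):

    /* mmu_if.m: declare the method on the mmu kobj interface. */
    METHOD void growkernel {
        mmu_t           _mmu;
        vm_offset_t     _addr;
    };

    /* pmap_dispatch.c: the MI pmap entry point forwards to the kobj. */
    static mmu_t mmu_obj;

    void
    pmap_growkernel(vm_offset_t addr)
    {
        MMU_GROWKERNEL(mmu_obj, addr);
    }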
|