#
267654 |
|
19-Jun-2014 |
gjb |
Copy stable/9 to releng/9.3 as part of the 9.3-RELEASE cycle.
Approved by: re (implicit)
Sponsored by: The FreeBSD Foundation |
#
237627 |
|
27-Jun-2012 |
alc |
MFC r236214 Replace all uses of the vm page queues lock by a r/w lock that is private to this pmap.c.
|
#
225736 |
|
22-Sep-2011 |
kensmith |
Copy head to stable/9 as part of 9.0-RELEASE release cycle.
Approved by: re (implicit)
|
#
224746 |
|
09-Aug-2011 |
kib |
- Move the PG_UNMANAGED flag from m->flags to m->oflags, renaming the flag to VPO_UNMANAGED (and also making the flag protected by the vm object lock, instead of vm page queue lock).
- Mark the fake pages with both PG_FICTITIOUS (as it is now) and VPO_UNMANAGED. As a consequence, pmap code now can use just VPO_UNMANAGED to decide whether the page is unmanaged.
Reviewed by: alc
Tested by: pho (x86, previous version), marius (sparc64), marcel (arm, ia64, powerpc), ray (mips)
Sponsored by: The FreeBSD Foundation
Approved by: re (bz)
|
#
217265 |
|
11-Jan-2011 |
jhb |
Remove unneeded includes of <sys/linker_set.h>. Other headers that use it internally contain nested includes.
Reviewed by: bde
|
#
217058 |
|
06-Jan-2011 |
marius |
Remove an unused variable accidentally added in r216803.
|
#
216803 |
|
29-Dec-2010 |
marius |
On UltraSPARC-III+ and greater take advantage of ASI_ATOMIC_QUAD_LDD_PHYS, which takes a physical address instead of a virtual one, for loading TTEs of the kernel TSB so we no longer need to lock the kernel TSB into the dTLB, which only has a very limited number of lockable dTLB slots. The net result is that we now basically can handle a kernel TSB of any size and no longer need to limit the kernel address space based on the number of dTLB slots available for locked entries.

Consequently, other parts of the trap handlers now also only access the kernel TSB via its physical address in order to avoid nested traps, as does the PMAP bootstrap code, as we haven't taken over the trap table at that point yet. Apart from that, the kernel TSB now is accessed via a direct mapping when we are otherwise taking advantage of ASI_ATOMIC_QUAD_LDD_PHYS, so no further code changes are needed. Most of this is implemented by extending the patching of the TSB addresses and mask as well as the ASIs used to load it into the trap table, so the runtime overhead of this change is rather low.

Currently the use of ASI_ATOMIC_QUAD_LDD_PHYS is not yet enabled on SPARC64 CPUs due to lack of testing and due to the fact it might require minor adjustments there.

Theoretically it should be possible to use the same approach also for the user TSB, which already is not locked into the dTLB, avoiding nested traps. However, for reasons I don't understand yet, OpenSolaris only does that with SPARC64 CPUs. On the other hand, I think that also addressing the user TSB physically and thus avoiding nested traps would get us closer to sharing this code with sun4v, which only supports trap level 0 and 1, so eventually we could have a single kernel which runs on both sun4u and sun4v (as do Linux and OpenBSD).
Developed at and committed from: 27C3
|
#
210334 |
|
21-Jul-2010 |
attilio |
The KTR_CTx classes have long been aliased by existing classes, so they can't serve their purpose anymore. Axe them out.
Sponsored by: Sandvine Incorporated
Discussed with: jhb, emaste
Possible MFC: TBD
|
#
174933 |
|
27-Dec-2007 |
alc |
Update two tracepoints, i.e., CTRx() invocations, to reflect the demise of page coloring a few months ago.
|
#
170249 |
|
03-Jun-2007 |
alc |
Prepare for the new physical memory allocator: Change the way that the physical page's color is obtained.
Approved by: re
|
#
162543 |
|
22-Sep-2006 |
alc |
The sparc64/sparc64/pmap.c implementations of pmap_remove(), pmap_protect(), and pmap_copy() have optimizations for regions larger than PMAP_TSB_THRESH (which works out to 16MB). This caused a panic in tsb_foreach for kernel mappings, since pm->pm_tsb is NULL in that case. This fix teaches tsb_foreach to use the kernel's tsb in that case.
Submitted by: Michael Plass
MFC after: 3 days
|
#
141712 |
|
12-Feb-2005 |
alc |
Add lock assertion.
Tested by: jhb
|
#
133451 |
|
10-Aug-2004 |
alc |
Add pmap locking to many of the functions.
Implement the protection check required by the pmap_extract_and_hold() specification.
Remove the acquisition and release of Giant from pmap_extract_and_hold() and pmap_protect().
Many thanks to Ken Smith for resolving a sparc64-specific initialization problem in my original patch.
Tested by: kensmith@
|
#
119291 |
|
22-Aug-2003 |
imp |
Prefer new location of pci include files (which have only been in the tree for two or more years now), except in a few places where there's code to be compatible with older versions of FreeBSD.
|
#
116417 |
|
15-Jun-2003 |
jake |
- Mirror vm_page_queue_mtx assertions added to the i386 pmap.
- Add vm page queue locking in certain places that are only needed on sparc64.
This should make pmap_qenter and pmap_qremove MP-safe.
Discussed with: alc
|
#
113238 |
|
08-Apr-2003 |
jake |
Use vm_paddr_t for physical addresses.
|
#
112879 |
|
31-Mar-2003 |
jake |
- Allow the physical memory size that will be actually used by the kernel to be overridden by setting hw.physmem.
- Fix a vm_map_find arg, we don't want to find space.
- Add tracing and statistics for off colored pages.
- Detect "stupid" pmap_kenters (same virtual and physical as existing mapping), and do nothing in that case.
|
#
112697 |
|
27-Mar-2003 |
jake |
Handle the fictitious pages created by the device pager. For fictitious pages which represent actual physical memory we must strip off the fake page in order to allow illegal aliases to be detected. Otherwise we map uncacheable in the virtual and physical caches and set the side effect bit, as is required for mapping device memory.
This fixes gstat on sparc64, which wants to mmap kernel memory through a character device.
|
#
108700 |
|
05-Jan-2003 |
jake |
- Reorganize PMAP_STATS to scale a little better.
- Add some more stats for things that are now considered interesting.
|
#
108166 |
|
21-Dec-2002 |
jake |
- Add a pmap pointer to struct md_page, and use this to find the pmap that a mapping belongs to by setting it in the vm_page_t structure that backs the tsb page that the tte for a mapping is in. This allows the pmap that a mapping belongs to to be found without keeping a pointer to it in the tte itself.
- Remove the pmap pointer from struct tte and use the space to make the tte pv lists doubly linked (TAILQs), like on other architectures. This makes entering or removing a mapping O(1) instead of O(n), where n is the number of pmaps a page is mapped by (including kernel_pmap).
- Use atomic ops for setting and clearing bits in the ttes, now that they return the old value and can be easily used for this purpose.
- Use __builtin_memset for zeroing ttes instead of bzero, so that gcc will inline it (4 inline stores using %g0 instead of a function call).
- Initially set the virtual colour for all the vm_page_ts to be equal to their physical colour. This will be more useful once uma_small_alloc is implemented, but basically pages with virtual colour equal to physical colour are easier to handle at the pmap level because they can be safely accessed through cachable direct virtual to physical mappings with that colour, without fear of causing illegal dcache aliases.
In total these changes give a minor performance improvement, about 1% reduction in system time during buildworld.
|
#
108140 |
|
20-Dec-2002 |
jake |
Add page queue locking around functions that call vm_page_flag_set. This fixes a failed assertion early in boot on sparc64.
Reported by: Roderick van Domburg <r.s.a.vandomburg@student.utwente.nl>
|
#
102040 |
|
18-Aug-2002 |
jake |
Add pmap support for user mappings of multiple page sizes (super pages). This supports all hardware page sizes (8K, 64K, 512K, 4MB), but only 8k pages are actually used as of yet.
|
#
101653 |
|
10-Aug-2002 |
jake |
Auto size available kernel virtual address space based on physical memory size. This avoids blowing out kva in kmeminit() on large memory machines (4 gigs or more).
Reviewed by: tmm
|
#
100718 |
|
26-Jul-2002 |
jake |
Remove the tlb argument to tlb_page_demap (itlb or dtlb), in order to better match the pmap_invalidate api.
|
#
99935 |
|
13-Jul-2002 |
jake |
Remove debug code.
|
#
97448 |
|
29-May-2002 |
jake |
Remove an unused variable.
|
#
97447 |
|
29-May-2002 |
jake |
Merge the code in pv.c into pmap.c directly. Place all page mappings onto the pv lists in the vm_page, even unmanaged kernel mappings. This is so that the virtual cachability of these mappings can be tracked when a page is mapped to more than one virtual address. All virtually cachable mappings of a physical page must have the same virtual colour, or illegal aliases can be created in the data cache. This is a bit tricky because we still have to recognize managed and unmanaged mappings, even though they are all on the pv lists.
|
#
97446 |
|
29-May-2002 |
jake |
Add pv list linkage and a pmap pointer to struct tte. Remove separately allocated pv entries and use the linkage in the tte for pv operations.
|
#
97030 |
|
21-May-2002 |
jake |
Rewrite pmap_enter to avoid copying ttes in all cases. Pass the tte data to tsb_tte_enter instead of a whole tte, also to avoid copying.
|
#
97027 |
|
20-May-2002 |
jake |
Redefine the tte accessor macros to take a pointer to a tte, instead of the value of the tag or data field. Add macros for getting the page shift, size and mask for the physical page that a tte maps (which may be one of several sizes). Use the new cache functions for invalidating single pages.
|
#
92850 |
|
21-Mar-2002 |
jeff |
Remove references to vm_zone.h and switch over to the new uma API.
Reviewed by: jake
|
#
91783 |
|
07-Mar-2002 |
jake |
Implement delivery of tlb shootdown ipis. This is currently more fine grained than the other implementations; we have complete control over the tlb, so we only demap specific pages. We take advantage of the ranged tlb flush api to send one ipi for a range of pages, and due to the pm_active optimization we rarely send ipis for demaps from user pmaps.
Remove now unused routines to load the tlb; this is only done once outside of the tlb fault handlers. Minor cleanups to the smp startup code.
This boots multi user with both cpus active on a dual ultra 60 and on a dual ultra 2.
|
#
91782 |
|
07-Mar-2002 |
jake |
Modify the tlb demap API to take a pmap instead of a tlb context number. Due to allocating tlb contexts on the fly, we only ever need to demap the primary context; non-primary contexts have already been implicitly flushed by context switching. All we really need to tell is whether it's a kernel demap or not, and it's easier just to compare against the kernel_pmap, which is a constant.
|
#
91288 |
|
26-Feb-2002 |
jake |
Convert pmap.pm_context to an array of contexts indexed by cpuid. This doesn't make sense for SMP right now, but it is a means to an end.
|
#
91224 |
|
25-Feb-2002 |
jake |
Modify the tte format to not include the tlb context number and to store the virtual page number in a much more convenient way; all in one piece. This greatly simplifies the comparison for a matching tte, and allows the fault handlers to be much simpler due to not having to load weird masks. Rewrite the tlb fault handlers to account for the new format. These are also written to allow faults on the user tsb inside of the fault handlers; the kernel fault handler must be aware of this and not clobber the other's registers. The faults do not yet occur due to other support that is needed (and still under my desk).
Bug fixes from: tmm
|
#
91177 |
|
23-Feb-2002 |
jake |
Make use of the ranged tlb demap operations wherever possible. Use pmap_qenter and pmap_qremove in preference to pmap_kenter/pmap_kremove. The former maps in multiple pages at a time, and so can do a ranged flush. Don't assume that pmap_kenter and pmap_kremove will flush the tlb, even though they still do. It will not once the MI code is updated to use pmap_qenter and pmap_qremove.
|
#
91168 |
|
23-Feb-2002 |
jake |
Adapt the tsb_foreach interface to take a source and a destination pmap so that it can be used for pmap_copy. Other consumers ignore the second pmap. Add statistics gathering for tsb_foreach. Implement pmap_copy.
|
#
91167 |
|
23-Feb-2002 |
jake |
Add statistic gathering for various tsb operations.
Submitted by: tmm
|
#
89046 |
|
08-Jan-2002 |
jake |
Catch up to change in compile time assertion interface.
|
#
88826 |
|
02-Jan-2002 |
tmm |
1. Implement an optimization for pmap_remove() and pmap_protect(): if a substantial fraction of the number of entries of tte's in the tsb would need to be looked up, traverse the tsb instead. This is crucial in some places, e.g. when swapping out a process, where a certain pmap_remove() call would otherwise take a very long time to complete.
2. Implement pmap_qenter_flags(), which will be used later.
3. Reactivate the instruction cache flush done when mapping as executable. This is required e.g. when executing files via NFS, but is known to cause problems on UltraSPARC-IIe CPUs. If you have such a CPU, you will need to comment this call out for now.
Submitted by: jake (3)
|
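The threshold heuristic in point 1 of r88826 can be sketched as follows: when a removal covers enough pages that per-page TSB lookups would cost more than one linear sweep, sweep the whole TSB instead. This is an illustrative C sketch; the structure layout, the threshold, and all names here are hypothetical simplifications, not the actual FreeBSD sparc64 code.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, drastically simplified direct-mapped TSB. */
#define TSB_ENTRIES	4096
#define PAGE_SIZE	8192UL	/* sparc64 base page size */

struct tte {
	unsigned long va;	/* virtual address this entry maps */
	bool valid;
};

static struct tte tsb[TSB_ENTRIES];

static void
tte_invalidate(struct tte *tp)
{
	tp->valid = false;
}

static struct tte *
tsb_lookup(unsigned long va)
{
	struct tte *tp = &tsb[(va / PAGE_SIZE) % TSB_ENTRIES];

	return ((tp->valid && tp->va == va) ? tp : NULL);
}

static void
pmap_remove_range(unsigned long start, unsigned long end)
{
	size_t npages = (end - start) / PAGE_SIZE;

	if (npages > TSB_ENTRIES / 2) {
		/*
		 * Large range: a single pass over all TSB entries is
		 * cheaper than npages individual lookups.
		 */
		for (size_t i = 0; i < TSB_ENTRIES; i++)
			if (tsb[i].valid && tsb[i].va >= start &&
			    tsb[i].va < end)
				tte_invalidate(&tsb[i]);
	} else {
		/* Small range: look up each page individually. */
		for (unsigned long va = start; va < end; va += PAGE_SIZE) {
			struct tte *tp = tsb_lookup(va);

			if (tp != NULL)
				tte_invalidate(tp);
		}
	}
}
```

The win shows up exactly where the commit message says: tearing down an entire large address space (e.g. on swap-out) becomes one bounded sweep instead of a lookup per mapped page.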
#
88649 |
|
29-Dec-2001 |
jake |
Remove support for multi level tsbs, making this code much simpler and much less magic, fragile, and broken. Use ttes rather than sttes. We still use the replacement scheme used by the original code, which is pretty cool.
Many crucial bug fixes from: tmm
|
#
85258 |
|
20-Oct-2001 |
jake |
Add missing includes.
|
#
85247 |
|
20-Oct-2001 |
jake |
Use KTR_PMAP instead of KTR_CT1.
|
#
84191 |
|
30-Sep-2001 |
jake |
Remove some debug code, add traces.
|
#
82903 |
|
03-Sep-2001 |
jake |
Implement pv_bit_count which is used by pmap_ts_referenced.
Remove the modified tte bit and add a softwrite bit. Mappings are only writeable if they have been written to, thus in general modify just duplicates the write bit. The softwrite bit makes it easier to distinguish mappings which should be writeable but are not yet modified.
Move the exec bit down one; it was being sign extended when used as an immediate operand.
Use the lock bit to mean tsb page and remove the tsb bit. These are the only form of locked (tsb) entries we support and we need to conserve bits where possible.
Implement pmap_copy_page and pmap_is_modified and friends.
Detect mappings that are being upgraded from read-only to read-write due to copy-on-write and update the write bit appropriately.
Make trap_mmu_fault do the right thing for protection faults, which is necessary to implement copy on write correctly. Also handle a bunch more userland trap types and add ktr traces.
|
#
81388 |
|
10-Aug-2001 |
jake |
Handle all types of mmu misses from user mode. Pass a context argument to tlb functions.
|
#
81186 |
|
06-Aug-2001 |
jake |
Handle dmmu protection faults as well as misses. Enable tracking of the modify and reference tte bits. Implement allocation of tsb pages. Make tsb_stte_lookup do the right thing with the kernel pmap.
|
#
80709 |
|
31-Jul-2001 |
jake |
Flesh out the sparc64 port considerably. This contains:
- mostly complete kernel pmap support, and tested but currently turned off userland pmap support
- low level assembly language trap, context switching and support code
- fully implemented atomic.h and supporting cpufunc.h
- some support for kernel debugging with ddb
- various header tweaks and filling out of machine dependent structures
|