#
r259065 | 07-Dec-2013 | gjb
- Copy stable/10 (r259064) to releng/10.0 as part of the 10.0-RELEASE cycle.
- Update __FreeBSD_version [1]
- Set branch name to -RC1
[1] The 10.0-CURRENT __FreeBSD_version value ended in '55', so releng/10.0 starts at '100' so that the branch begins with a value ending in zero.
Approved by: re (implicit)
Sponsored by: The FreeBSD Foundation
#
r256281 | 10-Oct-2013 | gjb
Copy head (r256279) to stable/10 as part of the 10.0-RELEASE cycle.
Approved by: re (implicit)
Sponsored by: The FreeBSD Foundation
#
r243040 | 14-Nov-2012 | kib
Flip the semantics of M_NOWAIT to only require that the allocation not sleep, and perform the page allocations with the VM_ALLOC_SYSTEM class. Previously, the allocation was also allowed to completely drain the reserve of free pages, being translated to the VM_ALLOC_INTERRUPT request class for vm_page_alloc() and similar functions.
Allow the caller of malloc* to request the 'deep drain' semantics by providing the M_USE_RESERVE flag, now translated to the VM_ALLOC_INTERRUPT class. Previously, it resulted in the less aggressive VM_ALLOC_SYSTEM allocation class.
Centralize the translation of the M_* malloc(9) flags in a single inline function, malloc2vm_flags().
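As a rough illustration, a minimal sketch of what such a centralized translation can look like, assuming the M_NOWAIT/M_USE_RESERVE mapping described above (the M_ZERO pass-through is an added assumption, not part of this entry):

    /* Sketch only; not the literal committed function. */
    static inline int
    malloc2vm_flags(int malloc_flags)
    {
        int pflags;

        /* M_USE_RESERVE requests the 'deep drain' semantics; plain
         * M_NOWAIT now only means "do not sleep". */
        if ((malloc_flags & M_USE_RESERVE) != 0)
            pflags = VM_ALLOC_INTERRUPT;
        else
            pflags = VM_ALLOC_SYSTEM;
        if ((malloc_flags & M_ZERO) != 0)   /* assumed pass-through */
            pflags |= VM_ALLOC_ZERO;
        return (pflags);
    }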
Discussion started by: "Sears, Steven" <Steven.Sears@netapp.com>
Reviewed by: alc, mdf (previous version)
Tested by: pho (previous version)
MFC after: 2 weeks
#
r234745 | 27-Apr-2012 | nwhitehorn
After switching mutexes to use lwsync, they no longer provide sufficient guarantees on acquire for the tlbie mutex. Conversely, the TLB invalidation sequence provides guarantees that do not need to be redundantly applied on release. Roll a small custom lock that is just right. Simultaneously, convert the SLB tree changes back to lwsync, as changing them to sync was a misdiagnosis of the tlbie barrier problem this commit actually fixes.
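A hedged sketch of the shape of such a lock (identifiers are illustrative; the essential points are the full sync on acquire and the bare store on release, since the invalidation sequence's own barriers already order the critical section):

    static volatile u_int tlbie_lk;     /* hypothetical lock word */

    static void
    tlbie_acquire(void)
    {
        while (!atomic_cmpset_int(&tlbie_lk, 0, 1))
            ;                       /* spin */
        __asm __volatile("sync");   /* heavyweight barrier tlbie needs */
    }

    static void
    tlbie_release(void)
    {
        /* eieio/tlbsync/ptesync in the tlbie sequence already order
         * the critical section, so release needs no extra barrier. */
        tlbie_lk = 0;
    }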
#
r234652 | 24-Apr-2012 | nwhitehorn
Revert r234581 for this file. The lockless SLB tree code does in fact need a heavyweight sync instead of a lightweight sync to function properly. Thanks to mdf for the clarification.
#
r234581 | 22-Apr-2012 | nwhitehorn
Use lwsync to provide memory barriers on systems that support it instead of sync (lwsync is an alternate encoding of sync on systems that do not support it, providing graceful fallback). This provides more than an order of magnitude reduction in the time required to acquire or release a mutex.
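In sketch form (the macro name is an assumption), the trick relies on lwsync ordering every access pair a mutex needs except store-then-load, while its encoding is a reserved variant of the sync opcode that older implementations execute as a full sync:

    /* Prefer lwsync; CPUs that lack it decode this as plain sync. */
    #define powerpc_lwsync()    __asm __volatile("lwsync" : : : "memory")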
MFC after: 2 months
#
r230123 | 14-Jan-2012 | nwhitehorn
Rework SLB trap handling so that double faults into an SLB trap handler are possible, while double faults within an SLB trap handler are not. The result is that it is possible to take an SLB fault at any time, on any address, for any reason, at any point in the kernel.
This lets us do two important things. First, it removes the (soft) 16 GB RAM ceiling on PPC64 as well as any architectural limitations on KVA space. Second, it lets the kernel tolerate poorly designed hypervisors that have a tendency to fail to restore the SLB properly after a hypervisor context switch.
MFC after: 6 weeks
#
r227568 | 16-Nov-2011 | alc
Refactor the code that performs physically contiguous memory allocation, yielding a new public interface, vm_page_alloc_contig(). This new function addresses some of the limitations of the current interfaces, contigmalloc() and kmem_alloc_contig(). For example, the physically contiguous memory that is allocated with those interfaces can only be allocated to the kernel vm object and must be mapped into the kernel virtual address space. It also provides functionality that vm_phys_alloc_contig() doesn't, such as wiring the returned pages. Moreover, unlike that function, it respects the low water marks on the paging queues and wakes up the page daemon when necessary. That said, at present, this new function can't be applied to all types of vm objects. However, that restriction will be eliminated in the coming weeks.
From a design standpoint, this change also addresses an inconsistency between vm_phys_alloc_contig() and the other vm_phys_alloc*() functions. Specifically, vm_phys_alloc_contig() manipulated vm_page fields that other functions in vm/vm_phys.c didn't. Moreover, vm_phys_alloc_contig() knew about vnodes and reservations. Now, vm_page_alloc_contig() is responsible for these things.
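A hedged usage sketch (the parameter list reflects the interface as described here; the concrete values and the fallback are hypothetical):

    /* Hypothetical caller: wire npages contiguous pages below 4 GB into
     * an arbitrary VM object, not just the kernel object. */
    vm_page_t m;

    m = vm_page_alloc_contig(object, pindex,
        VM_ALLOC_NORMAL | VM_ALLOC_WIRED,   /* respects low water marks */
        npages,
        0,                                  /* low physical address */
        (vm_paddr_t)1 << 32,                /* high physical address */
        PAGE_SIZE,                          /* alignment */
        0,                                  /* boundary: none */
        VM_MEMATTR_DEFAULT);
    if (m == NULL) {
        /* back off and retry, or fail the request */
    }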
Reviewed by: kib
Discussed with: jhb
#
r222620 | 02-Jun-2011 | nwhitehorn
The POWER7 has only 32 SLB slots, rather than the 64 found on other supported 64-bit PowerPC CPUs. Add infrastructure to support variable numbers of SLB slots and move the user slot from 63 to 0, so that it is always available.
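A sketch of the resulting convention (identifiers are assumptions):

    /* Slot count becomes a per-CPU-probed variable rather than a
     * constant, and the user segment claims slot 0, which exists on
     * every CPU regardless of how many slots it implements. */
    int n_slbs = 64;            /* default; a POWER7 probe would set 32 */
    #define USER_SLB_SLOT   0   /* was 63, absent on POWER7 */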
#
r217451 | 15-Jan-2011 | andreast
Remove unused variables. Spotted by a cppcheck (devel/cppcheck, http://sourceforge.net/projects/cppcheck) run.
Approved by: nwhitehorn (mentor)
#
r215159 | 12-Nov-2010 | nwhitehorn
Add some platform KOBJ extensions and continue integrating PowerPC hypervisor infrastructure support:
- Fix coexistence of multiple platform modules in the same kernel.
- Allow platform modules to provide an SMP topology.
- PowerPC hypervisors limit the amount of memory accessible in real mode. Allow platform modules to specify the maximum real-mode address, and modify the bits of the kernel that need to allocate real-mode-accessible buffers to respect this limit.
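A hedged sketch of how a consumer of the new limit might look (the hook name follows the description above; treat the call and its use as assumptions):

    /* Bound a buffer that must stay real-mode accessible by the
     * platform-reported ceiling, e.g. a hypervisor's RMA size. */
    vm_paddr_t realmax;
    void *buf;

    realmax = platform_real_maxaddr();
    buf = contigmalloc(size, M_DEVBUF, M_NOWAIT,
        0,              /* low */
        realmax,        /* high: keep it reachable in real mode */
        PAGE_SIZE,      /* alignment */
        0);             /* boundary */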
#
r214574 | 30-Oct-2010 | nwhitehorn
Restructure the way the copyin/copyout segment is stored to prevent a concurrency bug. Because all SLB/SR entries were invalidated during an exception, a decrementer exception could invalidate the user segment in the middle of a copyin()/copyout() without a thread switch to restore it from the PCB, potentially letting the operation continue on invalid memory. This is now handled by explicit restoration of segment 12 from the PCB on 32-bit systems and by a check in the Data Segment Exception handler on 64-bit systems.
While here, cause copyin()/copyout() to check whether the requested user segment is already installed, saving some pipeline flushes, and fix the synchronization primitives around the mtsr and slbmte instructions to prevent accessing stale segments.
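A hedged sketch of the 32-bit fast path (field and helper names are assumptions drawn from the description):

    /* Install the user segment for copyin()/copyout(), skipping the
     * mtsr and its pipeline flushes when the PCB already caches the
     * wanted VSID. */
    static __inline void
    set_user_sr(pmap_t pm, const void *addr)
    {
        register_t vsid;

        vsid = va_to_vsid(pm, (vm_offset_t)addr);   /* assumed helper */
        if (curthread->td_pcb->pcb_cpu.aim.usr_vsid == vsid)
            return;         /* segment already installed */

        curthread->td_pcb->pcb_cpu.aim.usr_vsid = vsid;
        __asm __volatile("isync; mtsr %0,%1; isync"
            :: "n"(USER_SR), "r"(vsid));
    }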
MFC after: 2 weeks
#
r212722 | 16-Sep-2010 | nwhitehorn
Split the SLB mirror cache into two kinds of objects: one for kernel maps, which are similar to the previous ones, and one for user maps, which are arrays of pointers into the SLB tree. This change makes user SLB updates atomic, closing a window for memory corruption. While here, rearrange the allocation functions to make context switches faster.
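A sketch of the split (layouts and names are assumptions):

    /* Kernel maps keep full SLB entry copies; user maps keep pointers
     * into the SLB tree, so a user update is one atomic pointer store
     * instead of a multi-word copy that readers could see half-done. */
    struct slb   slb_kern[64];      /* kernel map: entry copies */
    struct slbv *slb_user[64];      /* user map: tree pointers */

    /* Publish a new user entry atomically: */
    atomic_store_rel_ptr((volatile uintptr_t *)&slb_user[i],
        (uintptr_t)new_entry);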
#
r212715 | 15-Sep-2010 | nwhitehorn
Replace the SLB backing store splay tree used on 64-bit PowerPC AIM hardware with a lockless sparse tree design. This marginally improves the performance of PMAP and allows copyin()/copyout() to run without acquiring locks when used on wired mappings.
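A hedged sketch of the lockless descent (node layout and index math are assumptions): interior nodes publish child pointers with atomic stores, so readers can walk the tree without taking locks:

    struct slbtnode {
        int ua_level;                           /* 0 = leaf */
        struct slbtnode *volatile ua_child[16]; /* 4 ESID bits per level */
    };

    static struct slbtnode *
    slb_tree_lookup(struct slbtnode *node, uint64_t esid)
    {
        while (node != NULL && node->ua_level > 0)
            node = node->ua_child[
                (esid >> (4 * (node->ua_level - 1))) & 0xf];
        return (node);      /* leaf holding the SLB entry, or NULL */
    }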
Submitted by: mdf
#
r210704 | 31-Jul-2010 | nwhitehorn
Improve hash coverage for kernel page table entries by modifying the kernel ESID -> VSID map function. This makes ZFS run stably on PowerPC under heavy loads (repeated simultaneous SVN checkouts and updates).
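The shape of the fix, sketched (the rotate amounts, multiplier, and mask are illustrative, not the committed constants):

    /* Mix the kernel ESID bits instead of mapping them linearly so that
     * kernel PTEs spread evenly across the page-table hash buckets. */
    #define KERNEL_VSID(esid) \
        (((((esid) << 8) | ((esid) >> 28)) * 0x13bbUL) & VSID_MASK)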
#
r209975 | 13-Jul-2010 | nwhitehorn
MFppc64:
Kernel sources for 64-bit PowerPC, along with build-system changes to keep 32-bit kernels compiling (build system changes for 64-bit kernels are coming later). Existing 32-bit PowerPC kernel configurations must be updated after this change to specify their architecture.