#
302408 |
|
07-Jul-2016 |
gjb |
Copy head@r302406 to stable/11 as part of the 11.0-RELEASE cycle. Prune svn:mergeinfo from the new branch, as nothing has been merged here.
Additional commits post-branch will follow.
Approved by: re (implicit) Sponsored by: The FreeBSD Foundation |
#
257230 |
|
27-Oct-2013 |
kib |
Add a virtual table for the busdma methods on x86, to allow different busdma implementations to coexist. Copy busdma_machdep.c to busdma_bounce.c, which is still a single implementation of the busdma interface on x86 for now. busdma_machdep.c now contains only the common and dispatch code.
Tested by: pho (as part of the larger patch) Sponsored by: The FreeBSD Foundation MFC after: 1 month
|
#
257228 |
|
27-Oct-2013 |
kib |
Add bus_dmamap_load_ma() function to load map with the array of vm_pages. Provide trivial implementation which forwards the load to _bus_dmamap_load_phys() page by page. Right now all architectures use bus_dmamap_load_ma_triv().
Tested by: pho (as part of the functional patch) Sponsored by: The FreeBSD Foundation MFC after: 1 month
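The trivial forwarding described above can be sketched in userland as a loop that hands each page slice to a per-page loader; this is an illustrative model (names, types, and the segment-building shortcut are assumptions), not the kernel's _bus_dmamap_load_phys() interface:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096

/* Hypothetical model: one physical segment produced per loaded page. */
struct seg { uint64_t addr; size_t len; };

/*
 * Sketch of the bus_dmamap_load_ma_triv() idea: walk the page array and
 * load each page's slice individually.  Only the first page can start at
 * a non-zero offset.
 */
static int
load_ma_triv(const uint64_t *pages, int npages, size_t offset, size_t len,
    struct seg *segs)
{
	int nsegs = 0;

	for (int i = 0; i < npages && len > 0; i++) {
		size_t chunk = PAGE_SIZE - offset;

		if (chunk > len)
			chunk = len;
		segs[nsegs].addr = pages[i] + offset;
		segs[nsegs].len = chunk;
		nsegs++;
		len -= chunk;
		offset = 0;	/* subsequent pages start at their base */
	}
	return (nsegs);
}

/* Example input used below: two pages, load 0x1000 bytes at offset 0x100. */
static uint64_t example_pages[2] = { 0x10000, 0x20000 };
static struct seg example_segs[4];
```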
|
#
254025 |
|
07-Aug-2013 |
jeff |
Replace kernel virtual address space allocation with vmem. This provides transparent layering and better fragmentation.
- Normalize functions that allocate memory to use kmem_*
- Those that allocate address space are named kva_*
- Those that operate on maps are named kmap_*
- Implement recursive allocation handling for kmem_arena in vmem.
Reviewed by: alc Tested by: pho Sponsored by: EMC / Isilon Storage Division
|
#
251900 |
|
18-Jun-2013 |
rpaulo |
Fix a KTR_BUSDMA format string.
|
#
248968 |
|
01-Apr-2013 |
kib |
Record the correct error in the trace.
Sponsored by: The FreeBSD Foundation MFC after: 3 days
|
#
246713 |
|
12-Feb-2013 |
kib |
Reform the busdma API so that new types may be added without modifying every architecture's busdma_machdep.c. It is done by unifying the bus_dmamap_load_buffer() routines so that they may be called from MI code. The MD busdma is then given a chance to do any final processing in the complete() callback.
The cam changes unify the bus_dmamap_load* handling in cam drivers.
The arm and mips implementations are updated to track virtual addresses for sync(). Previously this was done in a type specific way. Now it is done in a generic way by recording the list of virtuals in the map.
Submitted by: jeff (sponsored by EMC/Isilon) Reviewed by: kan (previous version), scottl, mjacob (isp(4), no objections for target mode changes) Discussed with: ian (arm changes) Tested by: marius (sparc64), mips (jmallet), isci(4) on x86 (jharris), amd64 (Fabian Keil <freebsd-listen@fabiankeil.de>)
|
#
239354 |
|
17-Aug-2012 |
jhb |
Allow static DMA allocations that allow for enough segments to do page-sized segments for the entire allocation to use kmem_alloc_attr() to allocate KVM rather than using kmem_alloc_contig(). This avoids requiring a single physically contiguous chunk in this case.
Submitted by: Peter Jeremy (original version) MFC after: 1 month
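The decision this change enables can be modeled with a small helper (an illustrative sketch, not the kernel's code): a static allocation may be backed by non-contiguous pages when the tag allows at least one segment per page of the request.

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE 4096	/* x86 page size */

/*
 * Hypothetical helper: true when the tag's segment count is large enough
 * to describe the whole allocation one page at a time, so physically
 * contiguous memory is not required.
 */
static int
can_use_page_segments(size_t size, int nsegments)
{
	size_t pages = (size + PAGE_SIZE - 1) / PAGE_SIZE;

	return ((size_t)nsegments >= pages);
}
```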
|
#
239020 |
|
03-Aug-2012 |
jhb |
Correct function name in comment.
Submitted by: alc
|
#
239008 |
|
03-Aug-2012 |
jhb |
Improve the handling of static DMA buffers that use non-default memory attributes (currently just BUS_DMA_NOCACHE):
- Don't call pmap_change_attr() on the returned address; instead use kmem_alloc_contig() to ask the VM system for memory with the requested attribute.
- As a result, always use kmem_alloc_contig() for non-default memory attributes, even for sub-page allocations. This requires adjusting bus_dmamem_free()'s logic for determining which free routine to use.
- For x86, add a new dummy bus_dmamap that is used for static DMA buffers allocated via kmem_alloc_contig(). bus_dmamem_free() can then use the map pointer to determine which free routine to use.
- For powerpc, add a new flag to the allocated map (bus_dmamem_alloc() always creates a real map on powerpc) to indicate which free routine should be used.
Note that the BUS_DMA_NOCACHE handling in powerpc is currently #ifdef'd out. I have left it disabled but updated it to match x86.
Reviewed by: scottl MFC after: 1 month
|
#
233036 |
|
16-Mar-2012 |
jhb |
Revert the PCIe 4GB boundary issue workaround now that the proper fix is in HEAD.
Ok'd by: scottl
|
#
232356 |
|
01-Mar-2012 |
jhb |
- Change contigmalloc() to use the vm_paddr_t type instead of an unsigned long for specifying a boundary constraint.
- Change bus_dma tags to use bus_addr_t instead of bus_size_t for boundary constraints.
These allow boundary constraints to be fully expressed for cases where sizeof(bus_addr_t) != sizeof(bus_size_t). Specifically, it allows a driver to properly specify a 4GB boundary in a PAE kernel.
Note that this cannot be safely MFC'd without a lot of compat shims due to KBI changes, so I do not intend to merge it.
Reviewed by: scottl
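The width problem is easy to see with model types (illustrative typedefs, not the kernel's): a 4GB boundary stored in a 32-bit size type silently wraps to 0, which busdma would read as "no boundary".

```c
#include <assert.h>
#include <stdint.h>

/* Model of the PAE i386 situation: sizes are 32-bit, addresses 64-bit. */
typedef uint32_t model_bus_size_t;
typedef uint64_t model_bus_addr_t;

/* Store a boundary value in the narrow size type: 4GB truncates to 0. */
static model_bus_size_t
boundary_as_size(uint64_t b)
{
	return ((model_bus_size_t)b);
}

/* Store it in the address type: the full value survives. */
static model_bus_addr_t
boundary_as_addr(uint64_t b)
{
	return ((model_bus_addr_t)b);
}
```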
|
#
232267 |
|
28-Feb-2012 |
emaste |
Workaround for PCIe 4GB boundary issue
Enforce a boundary of no more than 4GB - transfers crossing a 4GB boundary can lead to data corruption due to PCIe limitations. This change is a less-intrusive workaround that can be quickly merged back to older branches; a cleaner implementation will arrive in HEAD later but may require KPI changes.
This change is based on a suggestion by jhb@.
Reviewed by: scottl, jhb Sponsored by: Sandvine Incorporated MFC after: 3 days
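The condition being guarded against can be expressed as a one-line check (a hypothetical helper, not code from this commit): a transfer crosses a 4GB line exactly when the first and last byte addresses differ in their upper 32 bits.

```c
#include <assert.h>
#include <stdint.h>

/* True when [addr, addr + len) straddles a 4GB boundary. */
static int
crosses_4g(uint64_t addr, uint64_t len)
{
	return ((addr >> 32) != ((addr + len - 1) >> 32));
}
```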
|
#
227309 |
|
07-Nov-2011 |
ed |
Mark all SYSCTL_NODEs static that have no corresponding SYSCTL_DECLs.
The SYSCTL_NODE macro defines a list that stores all child-elements of that node. If there's no SYSCTL_DECL macro anywhere else, there's no reason why it shouldn't be static.
|
#
217337 |
|
12-Jan-2011 |
mdf |
Revert to using bus_size_t for the bounce_zone's alignment member.
Requested by: jhb
|
#
217330 |
|
12-Jan-2011 |
mdf |
Fix a brain fart. Since this file is shared between i386 and amd64, a bus_size_t may be 32 or 64 bits. Change the bounce_zone alignment field to explicitly be 32 bits, as I can't really imagine a DMA device that needs anything close to 2GB alignment of data.
|
#
217326 |
|
12-Jan-2011 |
mdf |
sysctl(9) cleanup checkpoint: amd64 GENERIC builds cleanly.
Commit the kernel changes.
|
#
216316 |
|
09-Dec-2010 |
cperciva |
Replace i386/i386/busdma_machdep.c and amd64/amd64/busdma_machdep.c (which are identical) with a single x86/x86/busdma_machdep.c.
|
#
216308 |
|
08-Dec-2010 |
cperciva |
On amd64, we have (since r1.72, in December 2005) MAX_BPAGES=8192, while on i386 we have MAX_BPAGES=512. Implement this difference via '#ifdef __i386__'.
With this commit, the i386 and amd64 busdma_machdep.c files become identical; they will soon be replaced by a single file under sys/x86.
|
#
216194 |
|
05-Dec-2010 |
cperciva |
MFamd64 r204214: Enforce stronger alignment semantics (require that the end of segments be aligned, not just the start of segments) in order to allow Xen's blkfront driver to operate correctly.
PR: kern/152818 MFC after: 3 days
|
#
216191 |
|
04-Dec-2010 |
cperciva |
Remove gratuitous i386/amd64 inconsistency in favour of the less verbose version of declaring a variable initialized to zero.
|
#
216190 |
|
04-Dec-2010 |
cperciva |
Remove unnecessary #includes which seem to have been accidentally added as part of CVS r1.76 (in January 2006).
|
#
213282 |
|
29-Sep-2010 |
neel |
Fix bogus error message from bus_dmamem_alloc() about incorrect alignment.
The check for alignment should be made against the physical address and not the virtual address that maps it.
Sponsored by: NetApp Submitted by: Will McGovern (will at netapp dot com) Reviewed by: mjacob, jhb
|
#
191438 |
|
23-Apr-2009 |
jhb |
Reduce the number of bounce zones (and thus the number of bounce pages used in some cases):
- Ignore DMA tag boundaries when allocating bounce pages. The boundaries don't determine whether or not parts of a DMA request bounce. Instead, they are just used to carve up segments.
- Allow tags with sub-page alignment to share bounce pages, since bounce pages are always page aligned.
Reviewed by: scottl (amd64) MFC after: 1 month
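The "boundaries only carve up segments" point can be sketched as follows (an illustrative helper under the assumption of a power-of-two boundary, not the kernel's loader): a segment is simply cut at the next boundary line rather than bounced.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Return where the current segment must end: either at the natural end of
 * the buffer, or clamped at the next multiple of the boundary, in which
 * case loading continues in a fresh segment.  boundary == 0 means none.
 */
static uint64_t
seg_end_at_boundary(uint64_t addr, uint64_t len, uint64_t boundary)
{
	if (boundary != 0) {
		uint64_t next = (addr + boundary) & ~(boundary - 1);

		if (addr + len > next)
			return (next);	/* split here, don't bounce */
	}
	return (addr + len);
}
```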
|
#
191201 |
|
17-Apr-2009 |
jhb |
Restore bus DMA bounce pages to an offset of 0 when they are released by a tag that has BUS_DMA_KEEP_PG_OFFSET set. Otherwise the page could be reused with a non-zero offset by a tag that doesn't have BUS_DMA_KEEP_PG_OFFSET leading to data corruption.
Sleuthing by: avg Reviewed by: scottl
|
#
191011 |
|
13-Apr-2009 |
kib |
bus_dmamap_load_uio(9) shall use the pmap of the thread recorded in uio_td to extract pages from, instead of unconditionally using the kernel pmap.
Submitted by: Jason Harmening <jason.harmening gmail com> (amd64 version) PR: amd64/133592 Reviewed by: scottl (original patch), jhb MFC after: 2 weeks
|
#
188403 |
|
09-Feb-2009 |
cognet |
The bounce zone sees its page count increased if multiple DMA maps use it in the same DMA tag. However, it can happen that multiple DMA tags share the same bounce zone too, so add a per-bounce-zone map counter, and check it instead of the DMA tag's map counter, to know if we have to allocate more pages.
Reported by: miwi Reviewed by: scottl
|
#
188350 |
|
08-Feb-2009 |
imp |
When bouncing pages, allow a new option to preserve the intra-page offset. This is needed for the ehci hardware buffer rings that assume this behavior.
This is an interim solution, and a more general one is being worked on. This solution doesn't break anything that doesn't ask for it directly. The mbuf and uio variants with this flag likely don't work and haven't been tested.
Universe builds with these changes. I don't have a huge-memory machine to test these changes with, but will be happy to work with folks that do and hps if this change turns out not to be sufficient.
Submitted by: alfred@ from Hans Peter Selasky's original
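The offset-preserving behavior can be modeled in a few lines (an illustrative sketch; the flag handling and names are assumptions, not the kernel code): when the option is set, the bounce copy lands at the same offset within its page as the original buffer did.

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096

/*
 * Pick the address within a bounce page for the copied data.  With
 * keep_offset set (the new option), the original buffer's intra-page
 * offset is preserved, as ehci's hardware rings expect; otherwise the
 * copy starts at the page base.
 */
static uint64_t
bounce_addr(uint64_t bpage_base, uint64_t orig_addr, int keep_offset)
{
	return (bpage_base + (keep_offset ? (orig_addr & (PAGE_SIZE - 1)) : 0));
}
```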
|
#
181775 |
|
15-Aug-2008 |
kmacy |
Integrate support for xen in to i386 common code.
MFC after: 1 month
|
#
180533 |
|
15-Jul-2008 |
alc |
Update bus_dmamem_alloc()'s first call to malloc() such that M_WAITOK is specified when appropriate.
Reviewed by: scottl
|
#
177690 |
|
28-Mar-2008 |
emaste |
If we're returning successfully from bus_dmamem_alloc, don't record a KTR of error = ENOMEM.
|
#
176206 |
|
12-Feb-2008 |
scottl |
If busdma is being used to realign dynamic buffers and the alignment is set to PAGE_SIZE or less, the bounce page counting logic was flawed and wouldn't reserve any pages. Adjust to be correct. Review of other architectures is forthcoming.
Submitted by: Joseph Golio
|
#
173988 |
|
27-Nov-2007 |
jhb |
Remove the 'needbounce' variable from the _bus_dmamap_load_buffer() routine. It is not needed as the existing tests for segment coalescing already handle bounced addresses and it prevents legal segment coalescing in certain edge cases.
MFC after: 1 week Reviewed by: scottl
|
#
170564 |
|
11-Jun-2007 |
mjacob |
Check against maxsegsz being zero in bus_dma_tag_create and return EINVAL if it is.
Reviewed by: scott long
|
#
170086 |
|
29-May-2007 |
yongari |
Honor maxsegsz of less than a page size in a DMA tag. Previously it used to return PAGE_SIZE without respect to restrictions of a DMA tag. This affected all of the busdma load functions that use _bus_dmamap_load_buffer() as their back-end.
Reviewed by: scottl
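The fixed length computation amounts to clamping against three limits instead of two; this is an illustrative helper, not the kernel's _bus_dmamap_load_buffer():

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096

/*
 * Length of the next DMA segment: bounded by the end of the current page,
 * the bytes remaining in the buffer, and (the fix) the tag's maxsegsz,
 * which may be smaller than a page.
 */
static size_t
seg_len(uint64_t addr, size_t resid, size_t maxsegsz)
{
	size_t len = PAGE_SIZE - (addr & (PAGE_SIZE - 1));

	if (len > resid)
		len = resid;
	if (len > maxsegsz)
		len = maxsegsz;
	return (len);
}
```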
|
#
169799 |
|
20-May-2007 |
mjacob |
Initialize lastaddr to 0 in bus_dmamap_load_uio so that _bus_dmamap_load_buffer won't (potentially) be confused.
Discovered by: gcc 4.2
MFC after: 3 days
|
#
168822 |
|
17-Apr-2007 |
jhb |
Honor the BUS_DMA_NOCACHE flag to bus_dmamem_alloc() on amd64 and i386 by mapping the pages as UC (uncacheable) using pmap_change_attr().
MFC after: 1 week Requested by: ariff Reviewed by: scottl
|
#
167277 |
|
06-Mar-2007 |
scottl |
Don't increment total_bounced when doing no-op dmamap_sync ops.
|
#
162673 |
|
26-Sep-2006 |
scottl |
The need to run a filter also implies that bouncing could be possible, so just use the COULD_BOUNCE flag for both and retire the USE_FILTER flag. This fixes the problem that rev 1.81 introduced with the if_bfe driver (and possibly others).
|
#
162607 |
|
24-Sep-2006 |
imp |
Add a newline to the printf.
|
#
162275 |
|
13-Sep-2006 |
scottl |
Remove duplicated code. Declare functions non-static that shouldn't be inlined.
|
#
162211 |
|
11-Sep-2006 |
scottl |
The run_filter() procedure is a means of working around DMA engine bugs in old/broken hardware. Unfortunately, it adds cache pressure and possible mispredicted branches to the fast path of the bus_dmamap_load collection of functions. Since it's meant for slow path exception processing, de-inline it and allow its conditions to be pre-computed at tag_create time and thus short-circuited at runtime.
While here, cut down on the size of _bus_dmamap_load_buffer() by pushing the bounce page logic into a non-inlined function. Again, this helps with cache pressure and mispredicted branches.
According to the TSC, this shaves off a few cycles on average. Unfortunately, the data varies quite a bit due to interrupts and preemption, so it's hard to get a good measurement. Real world measurements of network PPS are welcomed. A merge to amd64 and other arches is pending more testing.
|
#
159130 |
|
01-Jun-2006 |
silby |
After much discussion with mjacob and scottl, change bus_dmamem_alloc so that it just warns the user with a printf when it misaligns a piece of memory that was requested through a busdma tag.
Some drivers (such as mpt, and probably others) were asking for alignments that could not be satisfied, but as far as driver operation was concerned, that did not matter. On the theory that other drivers will fall into this same category, we agreed that panicking or making the allocation fail will cause more hardship than is necessary. The printf should be sufficient motivation to get the driver glitch fixed.
|
#
159092 |
|
30-May-2006 |
mjacob |
Turn the panic on not being able to meet alignment constraints in bus_dmamem_alloc into the more reasonable EINVAL return.
Also, reclaim memory allocated but then not used if we had an error return.
|
#
159011 |
|
28-May-2006 |
silby |
Add a quick hack to ensure that bus_dmamem_alloc properly aligns small allocations with large alignment requirements.
Add a panic to detect cases where we've still failed to properly align.
|
#
158264 |
|
03-May-2006 |
scottl |
Allow bus_dmamap_load() to pass ENOMEM back to the caller. This puts it into conformance with the mbuf and uio load routines. ENOMEM can only happen when BUS_DMA_NOWAIT is passed in, thus deferrals are disabled. I don't like doing this, but fixing this fixes assumptions in other important drivers, which is a net benefit for now.
|
#
154367 |
|
14-Jan-2006 |
scottl |
Free the newtag if we exit with a failure from alloc_bounce_zone().
Found by: Coverity Prevent(tm)
|
#
152775 |
|
24-Nov-2005 |
le |
Fix typo.
|
#
143449 |
|
12-Mar-2005 |
scottl |
Guard against an integer underflow that could cause busdma to eat up all available RAM. This also results in the global bounce page limit being applied to zones instead of globally.
Submitted by: Petr Lampa (in part)
|
#
143293 |
|
08-Mar-2005 |
mux |
Oops, CTR*() macros are not variadic macros, and the number indicates the number of parameters. Fix my previous commit to use the correct CTR*() macros.
Pointy hat to: mux
|
#
143284 |
|
08-Mar-2005 |
mux |
Use __func__ in the KTR_BUSDMA traces. This avoids copy and paste errors like in the bus_dmamap_load_mbuf_sg() case where we were wrongly displaying the function name as bus_dmamap_load_mbuf.
|
#
143202 |
|
07-Mar-2005 |
scottl |
Remove dead code.
|
#
139840 |
|
07-Jan-2005 |
scottl |
Introduce bus_dmamap_load_mbuf_sg(). Instead of taking a callback arg, this cuts to the chase and fills in a provided s/g list. This is meant to optimize out the cost of the callback since the callback doesn't serve much purpose for mbufs since mbuf loads will never be deferred. This is just for amd64 and i386 at the moment, other arches will be coming shortly.
|
#
139724 |
|
05-Jan-2005 |
imp |
Start all license/copyright notice comments with /*-, per tradition
|
#
138194 |
|
29-Nov-2004 |
scottl |
Don't flag alignment constraints as a reason for bouncing. This fixes the trigger for other misbehaviour in the sym driver that was causing freezes at boot. Thanks to phk@ for reporting and testing this.
|
#
137966 |
|
21-Nov-2004 |
scottl |
Remove an extra #include
|
#
137965 |
|
21-Nov-2004 |
scottl |
MFC amd64: Consolidate all of the bounce tests into the BUS_DMA_COULD_BOUNCE flag. Allocate the bounce zone at either tag creation or map creation to help avoid null-pointer derefs later on. Track total pages per zone so that each zone can get a minimum allocation at tag creation time instead of being defeated by mis-behaving tags that suck up the max amount.
|
#
137894 |
|
19-Nov-2004 |
scottl |
Revert part of rev 1.57. The tag boundary is honored by splitting the segment, not by bouncing.
|
#
137460 |
|
09-Nov-2004 |
scottl |
Zero the tag when it's allocated. Also fix a printf format problem. This should fix the problems introduced several hours ago.
|
#
137445 |
|
09-Nov-2004 |
scottl |
First pass at replacing the single global bounce pool with sub-pools that are appropriate for different tag requirements. With the former global pool, bounce pages might get allocated that are appropriate for one tag, but not appropriate for another, but the system had no way to distinguish between them. Now zones with distinct attributes are created to hold pages, and each tag that requires bouncing is associated with a zone. New zones are created as needed if no existing zones can meet the requirements of the tag. Stats for each zone are tracked via the hw.busdma sysctl node.
This should help drivers that are failing with mysterious data corruption.
MFC After: 1 week
|
#
137142 |
|
02-Nov-2004 |
scottl |
Streamline busdma a bit. Inline _bus_dmamap_load_buffer, optimize some tests, replace a passed td with a passed pmap to eliminate some dereferences.
|
#
136805 |
|
23-Oct-2004 |
rwatson |
Add some basic KTR tracing to busdma on i386. This is likely not the final set of traces -- someone with more busdma background will probably want to review and expand this, as well as port to other platforms. This tracing is sufficient to identify key busdma events on i386, and in particular to draw attention to bounce buffering events that may have a substantial performance impact.
|
#
134934 |
|
08-Sep-2004 |
scottl |
Fix a problem with tag->boundary inheritance that has existed since day one and was propagated to nearly every platform. The boundary of the child needs to consider the boundary of the parent and pick the minimum of the two, not the maximum. However, if either is 0 then pick the appropriate one. This bug was exposed by a recent change to ATA, which should now be fixed by this change. The alignment and maxsegsz tag attributes likely also need a similar review in the near future.
This is an MT5 candidate.
Reviewed by: marcel Submitted by: sos (in part)
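The corrected inheritance rule is small enough to state directly (an illustrative helper mirroring the logic described above, not the kernel source): take the stricter, i.e. smaller, boundary, with 0 meaning "no boundary" and deferring to the other side.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Effective boundary for a child tag: the minimum of parent and child,
 * except that a zero on either side means that side imposes no limit.
 */
static uint64_t
inherit_boundary(uint64_t parent, uint64_t child)
{
	if (parent == 0)
		return (child);
	if (child == 0)
		return (parent);
	return (parent < child ? parent : child);
}
```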
|
#
132545 |
|
22-Jul-2004 |
scottl |
Arg! Revert local changes that were accidentally included in the previous version.
|
#
132544 |
|
22-Jul-2004 |
scottl |
Don't count needed bounce pages if loading a buffer that was created with bus_dmamem_alloc()
Submitted by: harti
|
#
131529 |
|
03-Jul-2004 |
scottl |
Commit the first of half of changes that allow busdma to transparently honor the alignment and boundary constraints in the dma tag when loading buffers. Previously, these constraints were only honored when allocating memory via bus_dmamem_alloc(). Now, bus_dmamap_load() will automatically use bounce buffers when needed.
Also add a set of sysctls to monitor the global busdma stats. These are:
hw.busdma.free_bpages
hw.busdma.reserved_bpages
hw.busdma.active_bpages
hw.busdma.total_bpages
hw.busdma.total_bounced
hw.busdma.total_deferred
|
#
126919 |
|
13-Mar-2004 |
scottl |
Now that contigfree() does not require Giant, don't grab it in busdma.
|
#
119133 |
|
19-Aug-2003 |
sam |
remove #define no longer used
|
#
118451 |
|
04-Aug-2003 |
scottl |
In _bus_dmamap_load_buffer(), only count the number of bounce pages needed if they haven't been counted before. This test was omitted when bus_dmamap_load() was merged into this function, and results in the pagesneeded field growing without bounds when multiple deferrals happen.
Thanks to Paul Saab for beating his head against this for a few hours =-)
|
#
118246 |
|
31-Jul-2003 |
scottl |
Allocate the S/G list in the tag, not on the stack. This enforces the rule that while many maps can exist and be loaded per tag, bus_dmamap_load() and friends can only be called on one map at a time from the tag. This is enforced via the mutex arguments in the tag.
Fixing this bug means that s/g lists can be arbitrarily long in length, and also removes an ugly GNU-ism from the code. No API or ABI change is incurred. Similar changes for other platforms are forthcoming.
|
#
118081 |
|
27-Jul-2003 |
mux |
- Introduce a new busdma flag BUS_DMA_ZERO to request zero'ed memory in bus_dmamem_alloc(). This is possible now that contigmalloc() supports the M_ZERO flag.
- Remove the locking of Giant around calls to contigmalloc() since contigmalloc() now grabs Giant itself.
|
#
117691 |
|
17-Jul-2003 |
scottl |
Now that the dust has settled, make dflt_lock() always panic.
|
#
117136 |
|
01-Jul-2003 |
mux |
Sync more things with other backends.
|
#
117129 |
|
01-Jul-2003 |
mux |
Honor the boundary of the busdma tag when allocating bounce pages. This was fixed in revision 1.5 of alpha/alpha/busdma_machdep.c and was never fixed in other busdma backends using bounce pages.
|
#
117126 |
|
01-Jul-2003 |
scottl |
Mega busdma API commit.
Add two new arguments to bus_dma_tag_create(): lockfunc and lockfuncarg. Lockfunc allows a driver to provide a function for managing its locking semantics while using busdma. At the moment, this is used for the asynchronous busdma_swi and callback mechanism. Two lockfunc implementations are provided: busdma_lock_mutex() performs standard mutex operations on the mutex that is specified from lockfuncarg. dflt_lock() is a panic implementation and is defaulted to when NULL, NULL are passed to bus_dma_tag_create(). The only time that NULL, NULL should ever be used is when the driver ensures that bus_dmamap_load() will not be deferred. Drivers that do not provide their own locking can pass busdma_lock_mutex, &Giant args in order to preserve the former behaviour.
sparc64 and powerpc do not provide real busdma_swi functions, so this is largely a noop on those platforms. The busdma_swi on ia64 is not properly locked yet, so warnings will be emitted on this platform when busdma callback deferrals happen.
If anyone gets panics or warnings from dflt_lock() being called, please let me know right away.
Reviewed by: tmm, gibbs
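The lockfunc/lockfuncarg pattern can be modeled in userland as a callback pair stored in the tag (all names here are illustrative; the NULL case stands in for dflt_lock(), whose panic is modeled as an error return):

```c
#include <assert.h>
#include <stddef.h>

/* Driver-supplied lock callback: how != 0 locks, how == 0 unlocks. */
typedef void lockfunc_model_t(void *arg, int how);

struct tag_model {
	lockfunc_model_t *lockfunc;
	void *lockfuncarg;
};

static int lock_depth;	/* stands in for a driver mutex */

static void
count_lock(void *arg, int how)
{
	int *depth = arg;

	*depth += how ? 1 : -1;
}

/*
 * Deferred-callback path: take or drop the driver's lock through the tag.
 * A tag created without a lockfunc must never reach here (dflt_lock()
 * panics); we model that as -1.
 */
static int
tag_lock(struct tag_model *t, int how)
{
	if (t->lockfunc == NULL)
		return (-1);
	t->lockfunc(t->lockfuncarg, how);
	return (0);
}

static struct tag_model locked_tag = { count_lock, &lock_depth };
static struct tag_model unlocked_tag = { NULL, NULL };
```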
|
#
116907 |
|
27-Jun-2003 |
scottl |
Do the first and mostly mechanical step of adding mutex support to the bus_dma async callback scheme. Note that sparc64 does not seem to do async callbacks. Note that ia64 callbacks might not be MPSAFE at the moment. Note that powerpc doesn't seem to do async callbacks due to the implementation being incomplete.
Reviewed by: mostly silence on arch@
|
#
115683 |
|
02-Jun-2003 |
obrien |
Use __FBSDID().
|
#
115343 |
|
27-May-2003 |
scottl |
Bring back bus_dmasync_op_t. It is now a typedef to an int, though the BUS_DMASYNC_ definitions remain as before. This does not change the ABI, and reverts the API to be a bit more compatible and flexible. This has survived a full 'make universe'.
Approved by: re (bmah)
|
#
115316 |
|
26-May-2003 |
scottl |
De-orbit bus_dmamem_alloc_size(). It's a hack and was never used anyways. No need for it to pollute the 5.x API any further.
Approved by: re (bmah)
|
#
113492 |
|
15-Apr-2003 |
mux |
style(9)
|
#
113472 |
|
14-Apr-2003 |
simokawa |
Restore delayed load support for the resource shortage case. It was missed in the previous change. Now, _bus_dmamap_load_buffer() accepts BUS_DMA_WAITOK/BUS_DMA_NOWAIT flags.
Original idea from: jake
|
#
113459 |
|
14-Apr-2003 |
simokawa |
* Use _bus_dmamap_load_buffer() and respect maxsegsz in bus_dmamap_load(). Ignoring maxsegsz may lead to fatal data corruption for some devices, e.g. SBP-2/FireWire. We should apply this change to other platforms except for sparc64.
MFC after: 1 week
|
#
113347 |
|
10-Apr-2003 |
mux |
Change the operation parameter of bus_dmamap_sync() from an enum to an int and redefine the BUS_DMASYNC_* constants as flags. This allows us to specify several operations in one call to bus_dmamap_sync() as in NetBSD.
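With flags instead of an enum, one sync call can name several operations at once; a minimal model (constants and names are illustrative, mirroring the BUS_DMASYNC_* idea rather than quoting the kernel's values):

```c
#include <assert.h>

/* Model flags standing in for the redefined BUS_DMASYNC_* constants. */
enum {
	MODEL_DMASYNC_PREREAD   = 0x01,
	MODEL_DMASYNC_PREWRITE  = 0x02,
	MODEL_DMASYNC_POSTREAD  = 0x04,
	MODEL_DMASYNC_POSTWRITE = 0x08
};

/* A sync implementation can now test each requested operation by mask. */
static int
sync_wants(int op, int flag)
{
	return ((op & flag) != 0);
}
```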
|
#
113228 |
|
07-Apr-2003 |
jake |
Add support for bounce buffers to _bus_dmamap_load_buffer, which is the backend for bus_dmamap_load_mbuf and bus_dmamap_load_uio.
- Increase MAX_BPAGES to 512. Less than this causes fxp to quickly run out of bounce pages.
- Add an argument to reserve_bounce_pages indicating whether this operation should fail or be queued for later processing if we run out of memory. The EINPROGRESS return value is not handled properly by consumers of bus_dmamap_load_mbuf.
- If bounce buffers are required, allocate a minimum of 1 bounce page at map creation time. If maxsize was small, previously this could get truncated to 0 and the drivers would quickly run out of bounce pages.
- Fix a bug handling the return value of alloc_bounce_pages at map creation time. It returns the number of pages allocated, not 0 on success.
- Use bus_addr_t for physical addresses to avoid truncation.
- Assert that the map is non-null and not the no-bounce map in add_bounce_pages.
Sponsored by: DARPA, Network Associates Laboratories
|
#
112569 |
|
24-Mar-2003 |
jake |
- Add vm_paddr_t, a physical address type. This is required for systems where physical addresses are larger than virtual addresses, such as i386s with PAE. - Use this to represent physical addresses in the MI vm system and in the i386 pmap code. This also changes the paddr parameter to d_mmap_t. - Fix printf formats to handle physical addresses >4G in the i386 memory detection code, and due to kvtop returning vm_paddr_t instead of u_long.
Note that this is a name change only; vm_paddr_t is still the same as vm_offset_t on all currently supported platforms.
Sponsored by: DARPA, Network Associates Laboratories Discussed with: re, phk (cdevsw change)
|
#
112436 |
|
20-Mar-2003 |
mux |
Use atomic operations to increment and decrement the refcount in busdma tags. There are currently no tags shared across different drivers so this isn't needed at the moment, but it will be required when we'll have a proper newbus method to get the parent busdma tag.
|
#
112346 |
|
17-Mar-2003 |
mux |
- Lock down the bounce pages structures. We use the same locking scheme as with the alpha backend because both implementations of bounce pages are identical.
- Remove useless splhigh()/splx() calls.
|
#
112196 |
|
13-Mar-2003 |
mux |
Grab Giant around calls to contigmalloc() and contigfree() so that drivers converted to be MP safe don't have to deal with it.
|
#
111524 |
|
26-Feb-2003 |
mux |
Correctly set BUS_SPACE_MAXSIZE in all the busdma backends. It was bogusly set to 64 * 1024 or 128 * 1024 because it was bogusly reused in the BUS_DMAMAP_NSEGS definition.
|
#
111119 |
|
19-Feb-2003 |
imp |
Back out M_* changes, per decision of the TRB.
Approved by: trb
|
#
110335 |
|
04-Feb-2003 |
harti |
Fix a problem in bus_dmamap_load_{mbuf,uio} when the first mbuf or the first uio segment is empty. In this case no dma segment is created by bus_dmamap_load_buffer, but the calling routine clears the first flag. Under certain combinations of addresses of the first and second mbuf/uio buffer this leads to corrupted DMA segment descriptors. This was already fixed by tmm in sparc64/sparc64/iommu.c.
PR: kern/47733 Reviewed by: sam Approved by: jake (mentor)
|
#
110232 |
|
02-Feb-2003 |
alfred |
Consolidate MIN/MAX macros into one place (param.h).
Submitted by: Hiten Pandya <hiten@unixdaemons.com>
|
#
110030 |
|
29-Jan-2003 |
scottl |
Implement bus_dmamem_alloc_size() and bus_dmamem_free_size() as counterparts to bus_dmamem_alloc() and bus_dmamem_free(). This allows the caller to specify the size of the allocation instead of it defaulting to the max_size field of the busdma tag.
This is intended to aid in converting drivers to busdma. Lots of hardware cannot understand scatter/gather lists, which forces the driver to copy the i/o buffers to a single contiguous region before sending it to the hardware. Without these new methods, this would require a new busdma tag for each operation, or a complex internal allocator/cache for each driver.
Allocations greater than PAGE_SIZE are rounded up to the next PAGE_SIZE by contigmalloc(), so this is not suitable for multiple static allocations that would be better served by a single fixed-length subdivided allocation.
Reviewed by: jake (sparc64)
|
#
109623 |
|
21-Jan-2003 |
alfred |
Remove M_TRYWAIT/M_WAITOK/M_WAIT. Callers should use 0. Merge M_NOWAIT/M_DONTWAIT into a single flag M_NOWAIT.
|
#
104486 |
|
04-Oct-2002 |
sam |
New bus_dma interfaces for use by crypto device drivers:
o bus_dmamap_load_mbuf
o bus_dmamap_load_uio
Tested on i386. Known to compile on alpha and sparc64, but not tested. Otherwise untried.
|
#
102241 |
|
21-Aug-2002 |
archie |
Don't use "NULL" when "0" is really meant.
|
#
95076 |
|
19-Apr-2002 |
alfred |
Clean up:
Comment run_filter() to explain what it does.
Remove chatty comments.
void busdma_swi() { } -> void busdma_swi(void) { }
|
#
88900 |
|
05-Jan-2002 |
jhb |
Change the preemption code for software interrupt thread schedules and mutex releases to not require flags for the cases when preemption is not allowed:
The purpose of the MTX_NOSWITCH and SWI_NOSWITCH flags is to prevent switching to a higher priority thread on mutex release and swi schedule, respectively, when that switch is not safe. Now that the critical section API maintains a per-thread nesting count, the kernel can easily check whether or not it should switch without relying on flags from the programmer. This fixes a few bugs in that all current callers of swi_sched() used SWI_NOSWITCH, when in fact, only the ones called from fast interrupt handlers and the swi_sched of softclock needed this flag. Note that to ensure that swi_sched()'s in clock and fast interrupt handlers do not switch, these handlers have to be explicitly wrapped in critical_enter/exit pairs. Presently, just wrapping the handlers is sufficient, but in the future with the fully preemptive kernel, the interrupt must be EOI'd before critical_exit() is called. (critical_exit() can switch due to a deferred preemption in a fully preemptive kernel.)
I've tested the changes to the interrupt code on i386 and alpha. I have not tested ia64, but the interrupt code is almost identical to the alpha code, so I expect it will work fine. PowerPC and ARM do not yet have interrupt code in the tree so they shouldn't be broken. Sparc64 is broken, but that's been ok'd by jake and tmm who will be fixing the interrupt code for sparc64 shortly.
Reviewed by: peter Tested on: i386, alpha
|
#
81711 |
|
15-Aug-2001 |
wpaul |
Teach bus_dmamem_free() about contigfree(). This is a bit of a hack, but it's better than the buggy behavior we have now. If we contigmalloc() buffers in bus_dmamem_alloc(), then we must contigfree() them in bus_dmamem_free(). Trying to free() them is wrong, and will cause a panic (at least, it does on the alpha.)
I tripped over this when trying to kldunload my busdma-ified if_rl driver.
|
#
79224 |
|
04-Jul-2001 |
dillon |
With Alfred's permission, remove vm_mtx in favor of a fine-grained approach (this commit is just the first stage). Also add various GIANT_ macros to formalize the removal of Giant, making it easy to test in a more piecemeal fashion. These macros will allow us to test fine-grained locks to a degree before removing Giant, and also after, and to remove Giant in a piecemeal fashion via sysctl's on those subsystems which the authors believe can operate without Giant.
|
#
76827 |
|
18-May-2001 |
alfred |
Introduce a global lock for the vm subsystem (vm_mtx).
vm_mtx does not recurse and is required for most low level vm operations.
Faults cannot be taken without holding Giant.
Memory subsystems can now call the base page allocators safely.
Almost all atomic ops were removed as they are covered under the vm mutex.
Alpha and ia64 now need to catch up to i386's trap handlers.
FFS and NFS have been tested, other filesystems will need minor changes (grabbing the vm lock when twiddling page properties).
Reviewed (partially) by: jake, jhb
|
#
72238 |
|
09-Feb-2001 |
jhb |
- Catch up to the new swi API changes:
  - Use swi_* function names.
  - Use void * to hold cookies to handlers instead of struct intrhand *.
- In sio.c, use 'driver_name' instead of "sio" as the name of the driver lock to minimize diffs with cy(4).
|
#
69781 |
|
08-Dec-2000 |
dwmalone |
Convert more malloc+bzero to malloc+M_ZERO.
Submitted by: josh@zipperup.org Submitted by: Robert Drehmel <robd@gmx.net>
|
#
67551 |
|
25-Oct-2000 |
jhb |
- Overhaul the software interrupt code to use interrupt threads for each type of software interrupt. Roughly, what used to be a bit in spending now maps to a swi thread. Each thread can have multiple handlers, just like a hardware interrupt thread.
- Instead of using a bitmask of pending interrupts, we schedule the specific software interrupt thread to run, so spending, NSWI, and the shandlers array are no longer needed. We can now have an arbitrary number of software interrupt threads. When you register a software interrupt thread via sinthand_add(), you get back a struct intrhand that you pass to sched_swi() when you wish to schedule your swi thread to run.
- Convert the name of 'struct intrec' to 'struct intrhand' as it is a bit more intuitive. Also, prefix all the members of struct intrhand with 'ih_'.
- Make swi_net() an MI function since there is now no point in it being MD.
Submitted by: cp
|
#
60938 |
|
26-May-2000 |
jake |
Back out the previous change to the queue(3) interface. It was not discussed and should probably not happen.
Requested by: msmith and others
|
#
60833 |
|
23-May-2000 |
jake |
Change the way that the queue(3) structures are declared; don't assume that the type argument to *_HEAD and *_ENTRY is a struct.
Suggested by: phk Reviewed by: phk Approved by: mdodd
|
#
52635 |
|
29-Oct-1999 |
phk |
useracc() the prequel:
Merge the contents (less some trivial, bordering on the silly, comments) of <vm/vm_prot.h> and <vm/vm_inherit.h> into <vm/vm.h>. This puts the #defines for the vm_inherit_t and vm_prot_t types next to their typedefs.
This paves the road for the commit to follow shortly: change useracc() to use VM_PROT_{READ|WRITE} rather than B_{READ|WRITE} as argument.
|
#
50477 |
|
27-Aug-1999 |
peter |
$Id$ -> $FreeBSD$
|
#
49859 |
|
15-Aug-1999 |
gibbs |
Fix a bug in bus_dmamem_free() where we were improperly checking the map associated with the region to free.
|
#
48449 |
|
02-Jul-1999 |
mjacob |
Correct some ugly formatting. Remember to initialize the alignment tag. Honor and pass a caller's request to contigmalloc if they had a non-zero alignment constraint.
|
#
41764 |
|
14-Dec-1998 |
dillon |
The author was assuming that nextpaddr, declared *inside* the do loop, would survive across iterations of the loop. This is not guaranteed by C. I have moved the nextpaddr declaration to outside the do loop.
|
#
40286 |
|
13-Oct-1998 |
dg |
Fixed two potentially serious classes of bugs:
1) The vnode pager wasn't properly tracking the file size due to "size" being page rounded in some cases and not in others. This sometimes resulted in corrupted files. First noticed by Terry Lambert. Fixed by changing the "size" pager_alloc parameter to be a 64bit byte value (as opposed to a 32bit page index) and changing the pagers and their callers to deal with this properly.
2) Fixed a bogus type cast in round_page() and trunc_page() that caused some 64bit offsets and sizes to be scrambled. Removing the cast required adding casts at a few dozen callers. There may be problems with other bogus casts in close-by macros. A quick check seemed to indicate that those were okay, however.
|
#
40029 |
|
07-Oct-1998 |
gibbs |
Fix a parent tag reference count bug during tag teardown.
Enable optimization for nobounce_dmamap clients by setting the map held by the client to NULL. This allows the macros in bus.h to check against a constant to avoid function calls.
Don't attempt to 'free()' contigmalloced pages in bus_dmamem_free(). We currently leak these pages, which is not ideal, but is better than a panic. The leak will be fixed when contigmalloc is merged into the bus dma framework after 3.0R.
|
#
39755 |
|
29-Sep-1998 |
bde |
Don't pretend to support ix86's with 16-bit ints by using longs just to ensure 32-bit variables. Doing so broke ix86's with 64-bit longs.
|
#
39243 |
|
15-Sep-1998 |
gibbs |
autoconf.c: Convert autoconf hooks from old SCSI system to CAM.
busdma_machdep.c: bus_dmamap_free() should expect the nobounce map, not a NULL one.
mountroot.c: swapgeneric.c: da and od changes.
symbols.raw: Nuke the old disk stat symbols.
userconfig.c: Disable the SCSI listing code until it can be converted to CAM.
|
#
37555 |
|
11-Jul-1998 |
bde |
Fixed printf format errors.
|
#
35767 |
|
05-May-1998 |
gibbs |
Implement bus_dmamem_* functions and correct a few nits reported by Peter Wemm.
|
#
35256 |
|
17-Apr-1998 |
des |
Seventy-odd "its" / "it's" typos in comments fixed as per kern/6108.
|
#
33676 |
|
20-Feb-1998 |
bde |
Removed unused #includes.
|
#
33134 |
|
06-Feb-1998 |
eivind |
Back out DIAGNOSTIC changes.
|
#
33108 |
|
04-Feb-1998 |
eivind |
Turn DIAGNOSTIC into a new-style option.
|
#
32516 |
|
15-Jan-1998 |
gibbs |
Implementation of Bus DMA for FreeBSD-x86. This is sufficient to do page level bounce buffering, but there are still some issues left to address.
|