#
331722 |
|
29-Mar-2018 |
eadler |
Revert r330897:
This was intended to be a non-functional change. It wasn't, so the commit message was wrong. In addition, it broke arm and merged crypto-related code.
Revert with prejudice.
This revert skips files touched in r316370, as that commit has since been MFCed. This revert also skips files that require $FreeBSD$ property changes.
Thank you to those who helped me get out of this mess including but not limited to gonzo, kevans, rgrimes.
Requested by: gjb (re)
|
#
330897 |
|
14-Mar-2018 |
eadler |
Partial merge of the SPDX changes
These changes are incomplete but are making it difficult to determine what other changes can/should be merged.
No objections from: pfg
|
#
318976 |
|
27-May-2017 |
hselasky |
MFC r318353: Avoid use of contiguous memory allocations in busdma when possible.
This patch improves the boundary checks in busdma to allow more cases to use the regular page-based kernel memory allocator, especially in the case of a non-zero boundary in the parent DMA tag. For example, AMD64-based platforms set the PCI DMA tag boundary to PCI_DMA_BOUNDARY (4GB), which before this patch caused contiguous memory allocations to be preferred when allocating more than PAGE_SIZE bytes, even if the required alignment was less than PAGE_SIZE bytes.
This patch also fixes the nsegments check for using kmem_alloc_attr() when the maximum segment size is less than PAGE_SIZE bytes.
Updated some comments describing the code in question.
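As an illustration of the rule being relaxed, here is a minimal sketch with stand-in types, not the committed code: a page from the regular allocator is PAGE_SIZE-aligned, so a boundary that is a multiple of PAGE_SIZE (such as the 4GB PCI_DMA_BOUNDARY) can never be crossed within a single page. The predicate name and check set are hypothetical.

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint64_t bus_addr_t;    /* stand-ins for the kernel types */
    typedef uint64_t bus_size_t;
    #define PAGE_SIZE 4096
    #define howmany(x, y) (((x) + ((y) - 1)) / (y))    /* as in sys/param.h */

    /* Hypothetical predicate in the spirit of this change; assumes
     * maxsegsz > 0. */
    static bool
    page_alloc_ok(bus_size_t size, bus_size_t alignment, bus_addr_t boundary,
        int nsegments, bus_size_t maxsegsz)
    {
        if (alignment > PAGE_SIZE)
            return (false);     /* pages guarantee at most PAGE_SIZE */
        if (boundary != 0 && (boundary % PAGE_SIZE) != 0)
            return (false);     /* sub-page boundary needs contig memory */
        /* The corrected nsegments check: when maxsegsz < PAGE_SIZE,
         * each page consumes several segments, not just one. */
        bus_size_t segsz = maxsegsz < PAGE_SIZE ? maxsegsz : PAGE_SIZE;
        return ((bus_size_t)nsegments >= howmany(size, segsz));
    }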
Differential Revision: https://reviews.freebsd.org/D10645 Reviewed by: kib, jhb, gallatin, scottl Sponsored by: Mellanox Technologies
|
#
314506 |
|
01-Mar-2017 |
ian |
MFC r306262, r306267, r310021: (needed to avoid conflicts on later merges)
Remove bus_dma_get_range and bus_dma_get_range_nb on armv6. We only need this on a few earlier arm SoCs.
Restrict where we need to define fdt_fixup_table to just PowerPC and Marvell.
Add the missing void to function signatures in much of the arm code.
|
#
307345 |
|
15-Oct-2016 |
mmel |
MFC r306759:
ARM: Remove ARMv4 #defines from busdma_machdep-v6.c, it's ARMv6 specific file. Consistently use BUSDMA_DCACHE_ALIGN for cache line alignment.
|
#
302408 |
|
07-Jul-2016 |
gjb |
Copy head@r302406 to stable/11 as part of the 11.0-RELEASE cycle. Prune svn:mergeinfo from the new branch, as nothing has been merged here.
Additional commits post-branch will follow.
Approved by: re (implicit) Sponsored by: The FreeBSD Foundation
|
#
291193 |
|
23-Nov-2015 |
skra |
Revert r291142.
The not-quite-consistent logic for bounce page allocation is relied on by the re(4) interface, which can hang now.
Approved by: kib (mentor)
|
#
291142 |
|
21-Nov-2015 |
skra |
Fix BUS_DMA_MIN_ALLOC_COMP flag logic. When bus_dmamap_t map is being created for bus_dma_tag_t tag, bounce pages should be allocated only if needed.
Before the fix, they were always allocated if the BUS_DMA_COULD_BOUNCE flag was set but BUS_DMA_MIN_ALLOC_COMP was not. As bounce pages are never freed, this could cause memory exhaustion when many such tags, together with their maps, were created.
Note that by current design there can be more than one map per tag. However, BUS_DMA_MIN_ALLOC_COMP is a tag flag, set after bounce pages are allocated. Thus, they are allocated only for the first of the tag's maps that needs them.
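In sketch form, the corrected decision looks like the following. The helper names, flag values, and page count are hypothetical stand-ins; the real code lives in the map-creation path.

    #include <stdbool.h>

    #define BUS_DMA_COULD_BOUNCE    0x01    /* illustrative flag values */
    #define BUS_DMA_MIN_ALLOC_COMP  0x02

    struct tag { int flags; };
    struct map { int unused; };

    /* Hypothetical stand-ins for the real checks and allocator. */
    static bool map_can_bounce(struct tag *t, struct map *m);
    static void alloc_bounce_pages(struct tag *t, int npages);

    /* Allocate the one-time minimum pool only for the first map of the
     * tag that can actually bounce; set the tag-wide flag only after
     * the allocation has happened. */
    static void
    bounce_setup_on_map_create(struct tag *tag, struct map *map)
    {
        if ((tag->flags & BUS_DMA_COULD_BOUNCE) != 0 &&
            (tag->flags & BUS_DMA_MIN_ALLOC_COMP) == 0 &&
            map_can_bounce(tag, map)) {
            alloc_bounce_pages(tag, 2);
            tag->flags |= BUS_DMA_MIN_ALLOC_COMP;
        }
    }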
Approved by: kib (mentor)
|
#
291018 |
|
18-Nov-2015 |
mmel |
ARM: Fix dma_dcache_sync() for early allocated memory. Drivers can request DMA to buffers that are not in memory represented in the vm page arrays. Because of this, store the KVA of the already-mapped buffer in the synclist and use it in dma_dcache_sync().
Reviewed by: jah Approved by: kib (mentor) Differential Revision: https://reviews.freebsd.org/D4120
|
#
290309 |
|
02-Nov-2015 |
ian |
Eliminate the last dregs of the old global arm_root_dma_tag.
In the old days, device drivers passed NULL for the parent tag when creating a new tag, and on arm platforms that resulted in a global tag representing overall platform constraints being substituted in the busdma code. Now all drivers use bus_get_dma_tag() and if there is a need to represent overall platform constraints they will be inherited from a tag supplied by nexus or some bus driver in the hierarchy.
The only arm platforms still relying on the old global-tag scheme were some xscale boards with special PCI-bus constraints. This change provides those constraints through a tag supplied by the xscale PCI bus driver, and eliminates the few remaining references to the old global var.
Reviewed by: cognet
|
#
289893 |
|
24-Oct-2015 |
ian |
Define a couple of macros to access cacheline size/mask in an arch-dependent way. This code should now work for all arm versions, v4 through v7.
|
#
289887 |
|
24-Oct-2015 |
ian |
Rename dcache_dma_preread() to dcache_inv_poc_dma() to make it clear that it is a dcache invalidate to point of coherency just like dcache_inv_poc(), but a slightly different version specific to dma operations. Elaborate the comment about how and why it's different.
|
#
289865 |
|
24-Oct-2015 |
ian |
A few more whitespace, style, and comment cleanups. No functional changes.
|
#
289858 |
|
23-Oct-2015 |
ian |
Instead of all memory allocations using M_DEVBUF, use new categories M_BUSDMA for allocations of metadata (tags, maps, segment tracking lists), and M_BOUNCE for bounce pages.
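For reference, the kernel idiom for introducing such malloc types looks like the sketch below; the description strings here are illustrative, not necessarily the committed ones.

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/malloc.h>

    static MALLOC_DEFINE(M_BUSDMA, "busdma", "busdma metadata");
    static MALLOC_DEFINE(M_BOUNCE, "bounce", "busdma bounce pages");

    /* Allocations then name their category explicitly, e.g.:
     *     map = malloc(sizeof(*map), M_BUSDMA, M_NOWAIT | M_ZERO);
     * so vmstat -m can attribute memory to busdma rather than devbuf. */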
|
#
289854 |
|
23-Oct-2015 |
ian |
Catch up to r232356: change the boundary constraint type to bus_addr_t. This code lived in the projects/armv6 branch when that change got applied to all the other arches.
|
#
289851 |
|
23-Oct-2015 |
ian |
Whitespace and style nits, no functional changes.
The goal is to make these two files cosmetically alike so that the actual implementation differences are visible. The only changes which aren't spaces<->tabs and rewrapping and reindenting lines are a couple fields shuffled around in the tag and map structs so that everything is in the same order in both versions (which should amount to no functional change).
|
#
289825 |
|
23-Oct-2015 |
jah |
Remove unclear comment about address truncation in busdma. Add (hopefully much clearer) comment at declaration of PHYS_TO_VM_PAGE().
Noted by: avg
|
#
289759 |
|
22-Oct-2015 |
jah |
Use pmap_quick* functions in armv6 busdma, for bounce buffers and cache maintenance. This makes it safe to sync buffers that have no VA mapping associated with the busdma map, but may have other mappings, possibly on different CPUs. This also makes it safe to sync unmapped bounce buffers in non-sleepable thread contexts.
Similar to r286787 for x86, this treats userspace buffers the same as unmapped buffers and no longer borrows the UVA for sync operations.
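The pmap_quick_* KPI referenced here provides short-lived, per-CPU private mappings. A hedged sketch of how a bounce copy can use it follows; the function name and surrounding variables are hypothetical.

    /* Copy DMA'd data out of a bounce page with no stable VA (sketch). */
    static void
    bounce_copy_out(vm_page_t bounce_pg, int offset, void *client_buf,
        size_t len)
    {
        vm_offset_t va;

        va = pmap_quick_enter_page(bounce_pg);  /* temporary per-CPU mapping */
        bcopy((char *)va + offset, client_buf, len);
        pmap_quick_remove_page(va);             /* safe in non-sleepable context */
    }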
Submitted by: Svatopluk Kraus <onwahe@gmail.com> (earlier revision) Tested by: Svatopluk Kraus Differential Revision: https://reviews.freebsd.org/D3869
|
#
286969 |
|
20-Aug-2015 |
ian |
Remove code left over from the armv4 days. On armv4, cache maintenance operations always had to be aligned and sized to cache lines. On armv6 and later, cache maintenance operates on a cache line if any part of the line is referenced in the operation, so we don't need extra code to align the edges of the sync range.
|
#
286968 |
|
20-Aug-2015 |
ian |
Minor comment and style fixes, no functional change.
Submitted by: Svatopluk Kraus <onwahe@gmail.com>
|
#
283366 |
|
24-May-2015 |
andrew |
Remove trailing whitespace from sys/arm/arm
|
#
282120 |
|
28-Apr-2015 |
hselasky |
The add_bounce_page() function can be called when loading physical pages, in which case the virtual address passed is NULL. If the BUS_DMA_KEEP_PG_OFFSET flag is set, use the physical address to compute the page offset instead. The physical address should always be valid when adding bounce pages and should contain the same page offset as the virtual address.
Submitted by: Svatopluk Kraus <onwahe@gmail.com> MFC after: 1 week Reviewed by: jhb@
|
#
278031 |
|
01-Feb-2015 |
ian |
Remove a stale comment. The logic that deleted the map before other resources was removed long ago, but the comment stuck somehow.
|
#
274839 |
|
22-Nov-2014 |
ian |
When doing a PREREAD sync of an mbuf-type dma buffer, do a writeback of the first cacheline if the buffer start address is not on a cacheline boundary. Normally a buffer which is not cacheline-aligned is bounced, but a special rule applies for mbufs, which are always misaligned due to the header. We know the cpu will not write to the header while dma is in progress (so we've been told anyway), but it may have written to the header shortly before starting a read, so we need to flush that write out to memory before invalidating the whole buffer.
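In outline, the rule described amounts to the sketch below. This is not the committed code: the dcache_* names follow this file's conventions, but the exact calls and the BUSDMA_DCACHE_ALIGN constant are used here illustratively.

    /* PREREAD sync of an mbuf buffer at va/pa, length len (sketch). */
    if ((va & (BUSDMA_DCACHE_ALIGN - 1)) != 0) {
        /* Buffer starts mid-line because of the mbuf header: write the
         * first line back so a recent CPU store to the header isn't
         * destroyed by the invalidate below. */
        dcache_wb_poc(va & ~(BUSDMA_DCACHE_ALIGN - 1),
            pa & ~(BUSDMA_DCACHE_ALIGN - 1), BUSDMA_DCACHE_ALIGN);
    }
    dcache_inv_poc(va, pa, len);    /* then invalidate for the read */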
In collaboration with Michal Meloun and Svatopluk Kraus.
|
#
274605 |
|
16-Nov-2014 |
ian |
No functional changes. Remove a couple outdated or inane comments and add new comment blocks describing why the cache maintenance sequences are done in the order they are for each case.
|
#
274604 |
|
16-Nov-2014 |
ian |
Correct the sequence of busdma sync ops involved with PRE/POSTREAD syncs.
We used to invalidate the cache for PREREAD alone, or writeback+invalidate for PREREAD with PREWRITE, then treat POSTREAD as a no-op. Prefetching on modern systems can lead to parts of a DMA buffer getting pulled into the caches while DMA is in progress (due to access of "nearby" data), so it's mandatory to invalidate during the POSTREAD sync even if a PREREAD invalidate also happened.
In the PREREAD case the invalidate is done to ensure that there are no dirty cache lines that might get automatically evicted during the DMA, corrupting the buffer. In a PREREAD+PREWRITE case the writeback which is required for PREWRITE handling is sufficient to avoid corruption caused by eviction, and no invalidate need be done until POSTREAD time.
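Putting r274602 through r274604 together, the resulting per-op cache maintenance is sketched below. The dcache_wb()/dcache_inv() helpers are illustrative shorthand for writeback and invalidate, not the committed function names.

    switch (op) {
    case BUS_DMASYNC_PREREAD:
        dcache_inv(buf, len);   /* a dirty line evicted mid-DMA would
                                   corrupt the buffer; clean out now */
        break;
    case BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE:
    case BUS_DMASYNC_PREWRITE:
        dcache_wb(buf, len);    /* the writeback alone prevents harmful
                                   evictions; skip the pre-invalidate */
        break;
    case BUS_DMASYNC_POSTREAD:
    case BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE:
        dcache_inv(buf, len);   /* mandatory: prefetch may have refilled
                                   lines while the DMA was in progress */
        break;
    }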
Submitted by: Michal Meloun <meloun@miracle.cz>
|
#
274603 |
|
16-Nov-2014 |
ian |
Do the cache invalidate sequence from the outermost to innermost, required for correct operation.
Submitted by: Michal Meloun <meloun@miracle.cz>
|
#
274602 |
|
16-Nov-2014 |
ian |
Do not do a cache invalidate on a PREREAD sync that is also a PREWRITE sync. The PREWRITE handling does a writeback of any dirty cachelines, so there's no danger of an eviction during the DMA corrupting the buffer. There will be an invalidate done during POSTREAD, so doing it before the read too is wasted time.
|
#
274596 |
|
16-Nov-2014 |
ian |
Indent a couple lines properly and expand a comment. No functional changes.
|
#
274545 |
|
15-Nov-2014 |
ian |
Whitespace and comment tweaks, no functional changes.
|
#
274538 |
|
15-Nov-2014 |
ian |
When doing busdma sync ops for BUSDMA_COHERENT memory, there is no need for cache maintenance operations, but ensure that all prior writes have reached memory when doing a PREWRITE sync.
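A sketch of the behavior described; map_is_coherent() is a hypothetical check, and the barrier placement is illustrative. dsb is the ARMv6/v7 data synchronization barrier.

    if (map_is_coherent(map)) {
        /* Coherent memory needs no dcache maintenance, but a PREWRITE
         * must still drain buffered CPU stores so the device sees them. */
        if ((op & BUS_DMASYNC_PREWRITE) != 0)
            dsb();
        return;
    }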
Submitted by: Michal Meloun <meloun@miracle.cz>
|
#
274536 |
|
15-Nov-2014 |
ian |
Use the standard powerof2() macro from param.h instead of a local one.
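The macro in question, as defined in sys/param.h:

    #define powerof2(x) ((((x)-1)&(x))==0)

    /* Note it evaluates true for x == 0, which is why a zero alignment
     * still has to be rejected explicitly (see r274191 below). */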
Pointed out by: jhb@
|
#
274191 |
|
06-Nov-2014 |
ian |
Strengthen the sanity checking of busdma tag parameters.
It turns out an alignment of zero can lead to an endless loop in the vm reservations code, so specifically disallow that. The manpage says hardware which can do dma at any address should use a value of one, which hints that zero is forbidden without exactly saying it. Several other conditions which could lead to insanity in working with the tag are also checked now.
Every existing call to bus_dma_tag_create() (about 680 of them) was eyeballed for violations of these things, and two alignment=0 glitches were fixed. It's possible something was missed, but overall this shouldn't lead to any arm users suddenly experiencing failures.
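A hedged sketch of this kind of validation; the function name and the exact set of checks are illustrative, while the real checks live in bus_dma_tag_create().

    #include <errno.h>
    #include <stdint.h>

    typedef uint64_t bus_size_t;                /* stand-in for the kernel type */
    #define powerof2(x) ((((x)-1)&(x))==0)      /* as in sys/param.h */

    static int
    validate_tag_params(bus_size_t alignment, bus_size_t maxsegsz, int nsegments)
    {
        if (alignment == 0 || !powerof2(alignment))
            return (EINVAL);    /* "any address" callers must pass 1 */
        if (maxsegsz == 0 || nsegments <= 0)
            return (EINVAL);    /* nonsense values poison later math */
        return (0);
    }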
|
#
273599 |
|
24-Oct-2014 |
loos |
Fix a bug where DMA maps created with bus_dmamap_create() didn't increment the map count; without an accurate count of outstanding maps, bus_dma_tag_destroy() would fail to proceed and return EBUSY even after all the maps had been correctly destroyed with bus_dmamap_destroy().
Found while testing the detach method of a NIC.
Tested on: BBB (am335x) Reviewed by: cognet, ian MFC after: 1 week
|
#
273377 |
|
21-Oct-2014 |
hselasky |
Fix multiple incorrect SYSCTL arguments in the kernel:
- Wrong integer type was specified.
- Wrong or missing "access" specifier. The "access" specifier sometimes included the SYSCTL type, which it should not, except for procedural SYSCTL nodes.
- Logical OR where binary OR was expected.
- Properly assert the "access" argument passed to all SYSCTL macros, using the CTASSERT macro. This applies to both static- and dynamically created SYSCTLs.
- Properly assert the data type for both static and dynamic SYSCTLs. In the case of static SYSCTLs we only assert that the data pointed to by the SYSCTL data pointer has the correct size, since there is no easy way to assert types in the C language outside a function.
- Rewrote some code which doesn't pass a constant "access" specifier when creating dynamic SYSCTL nodes, which is now a requirement.
- Updated "EXAMPLES" section in SYSCTL manual page.
MFC after: 3 days Sponsored by: Mellanox Technologies
|
#
269321 |
|
31-Jul-2014 |
ian |
Switch to using counter(9) for the new 64-bit stats kept by armv6 busdma.
|
#
269217 |
|
29-Jul-2014 |
ian |
Export some new busdma stats via sysctl for armv6. Added:
hw.busdma.tags_total: 46
hw.busdma.maps_total: 1302
hw.busdma.maps_dmamem: 851
hw.busdma.maps_coherent: 849
hw.busdma.maploads_total: 1568812
hw.busdma.maploads_bounced: 16750
hw.busdma.maploads_coherent: 920
hw.busdma.maploads_dmamem: 920
hw.busdma.maploads_mbuf: 1542766
hw.busdma.maploads_physmem: 0
|
#
269216 |
|
29-Jul-2014 |
ian |
A while back, the array of segments used for a load/mapping operation was moved from the stack into the tag structure. In retrospect that was a bad idea, because nothing protects that array from concurrent access by multiple threads.
This change moves the array to the map structure (actually it's allocated following the structure, but all in a single malloc() call).
This also establishes a "sane" limit of 4096 segments per map. This is mostly to prevent trying to allocate all of memory if someone accidentally uses a tag with nsegments set to BUS_SPACE_UNRESTRICTED. If there's ever a genuine need for more than 4096, don't hesitate to increase this (or maybe make it tunable).
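The single-allocation layout described can be sketched as below; the malloc type, field names, and error handling are illustrative.

    /* One malloc() covers the map struct plus its trailing segment
     * array, so each map gets a private, concurrency-safe copy. */
    map = malloc(sizeof(*map) + tag->nsegments * sizeof(bus_dma_segment_t),
        M_DEVBUF, M_NOWAIT | M_ZERO);
    if (map == NULL)
        return (ENOMEM);
    map->segments = (bus_dma_segment_t *)(map + 1);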
Reviewed by: cognet
|
#
269215 |
|
29-Jul-2014 |
ian |
We never need bounce pages for memory we allocate. We cleverly allocate memory that matches all the constraints of the dma tag so that bouncing will never be required.
Reviewed by: cognet
|
#
269214 |
|
29-Jul-2014 |
ian |
Replace a bunch of double-indirection with a local pointer var (that is, (*mapp)->something becomes map->something). No functional changes.
Reviewed by: cognet
|
#
269213 |
|
29-Jul-2014 |
ian |
Don't clear the DMAMAP_DMAMEM_ALLOC flag set a few lines earlier. Doh!
Reviewed by: cognet
|
#
269212 |
|
29-Jul-2014 |
ian |
Memory belonging to an mbuf, or allocated by bus_dmamem_alloc(), never triggers a need to bounce due to cacheline alignment. These buffers are always aligned to cacheline boundaries, and even when the DMA operation starts at an offset within the buffer or doesn't extend to the end of the buffer, it's safe to flush the complete cachelines that were only partially involved in the DMA. This is because there's a very strict rule on these types of buffers that there will not be concurrent access by the CPU and one or more DMA transfers within the buffer.
Reviewed by: cognet
|
#
269211 |
|
29-Jul-2014 |
ian |
The run_filter() function doesn't just run dma tag exclusion filter functions, it has evolved to make a variety of decisions about whether the DMA needs to bounce, so rename it to must_bounce(). Rewrite it to perform checks outside of the ancestor loop if they're based on information that's wholly contained within the original tag. Now the loop only checks exclusion zones in ancestor tags.
Also, add a new function, might_bounce() which does a fast inline check of flags within the tag and map to quickly eliminate the need to call the more expensive must_bounce() for each page in the DMA operation.
Within the mapping loops, use map->pagesneeded != 0 as a proxy for all the various checks on whether bouncing might be required. If no pages were reserved for bouncing during the checks before the mapping loop, then there's no need to re-check any of the conditions that can lead to bouncing -- all those checks already decided there would be no bouncing.
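A sketch of the resulting split follows; the struct layout, flag names (DMAMAP_DMAMEM_ALLOC aside, which appears elsewhere in this log), and helper signatures are hypothetical.

    /* Cheap test done once per load: can this tag/map bounce at all? */
    static int
    might_bounce(struct tag *tag, struct map *map)
    {
        return ((map->flags & DMAMAP_DMAMEM_ALLOC) == 0 &&
            (tag->flags & (TAG_EXCL_BOUNCE | TAG_ALIGN_BOUNCE)) != 0);
    }

    /* Full test done per page only when might_bounce() said yes. */
    static int
    must_bounce(struct tag *tag, struct map *map, bus_addr_t paddr,
        bus_size_t size)
    {
        if (alignment_bounce(tag, paddr) ||
            cacheline_bounce(map, paddr, size))
            return (1);
        /* Only the exclusion zones need the walk up the ancestors. */
        for (; tag != NULL; tag = tag->parent)
            if (exclusion_bounce_check(paddr, tag))
                return (1);
        return (0);
    }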
Reviewed by: cognet
|
#
269210 |
|
29-Jul-2014 |
ian |
Propagate any alignment restriction from the parent tag to a new tag, keeping the more restrictive of the two values.
Reviewed by: cognet
|
#
269209 |
|
29-Jul-2014 |
ian |
Reformat some continuation lines. No functional changes.
Reviewed by: cognet
|
#
269208 |
|
29-Jul-2014 |
ian |
Correct the comparison logic when looking for intersections between exclusion zones and physical memory. The phys_avail[i] entries are the address of the first byte of ram in the region, and phys_avail[i+1] entries are the address of the first byte of ram in the next region (i.e., they're not included in the region that starts at phys_avail[i]).
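The corrected interval arithmetic, in sketch form (not the committed code), treats each region as half-open:

    /* Inside a helper returning whether exclusion zone [lowaddr, highaddr]
     * overlaps any usable ram region (sketch): */
    for (i = 0; phys_avail[i + 1] != 0; i += 2) {
        /* region is [phys_avail[i], phys_avail[i+1]) -- half-open */
        if (lowaddr < phys_avail[i + 1] && highaddr >= phys_avail[i])
            return (1);
    }
    return (0);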
Reviewed by: cognet
|
#
269207 |
|
29-Jul-2014 |
ian |
The exclusion_bounce() routine compares unchanging values in the tag with unchanging values in the phys_avail array, so do the comparisons just once at tag creation time and set a flag to remember the result.
Reviewed by: cognet
|
#
269206 |
|
29-Jul-2014 |
ian |
Rename _bus_dma_can_bounce(), add new inline routines.
DMA on arm can bounce for several reasons, and _bus_dma_can_bounce() only checks for the lowaddr/highaddr exclusion ranges in the dma tag, so now it's named exclusion_bounce(). The other reasons for bouncing are checked by the new functions alignment_bounce() and cacheline_bounce().
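Sketches of the two new inline checks follow; the bodies are illustrative, reusing the BUSDMA_DCACHE_ALIGN cache line size named elsewhere in this log, and assume the tag's alignment is a power of two.

    /* Bounce if the address violates the tag's alignment. */
    static __inline int
    alignment_bounce(struct tag *tag, bus_addr_t addr)
    {
        return (addr & (tag->alignment - 1));
    }

    /* Bounce if the transfer covers partial cache lines that other code
     * might be using concurrently (cannot be invalidated safely on SMP). */
    static __inline int
    cacheline_bounce(bus_addr_t addr, bus_size_t size)
    {
        return ((addr | size) & (BUSDMA_DCACHE_ALIGN - 1));
    }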
Reviewed by: cognet
|
#
269136 |
|
26-Jul-2014 |
ian |
Pull in the armv4 "fast out" code for checking whether busdma can bounce due to an excluded region of physical memory.
|
#
269135 |
|
26-Jul-2014 |
ian |
Remove completely bogus alignment check -- it's the physical address that needs to be aligned, not the virtual, and it doesn't seem worth the cost of a vtophys() call just to see if kmem_alloc_contig() works properly.
|
#
267992 |
|
28-Jun-2014 |
hselasky |
Pull in r267961 and r267973 again. Fix for issues reported will follow.
|
#
267985 |
|
27-Jun-2014 |
gjb |
Revert r267961, r267973:
These changes prevent sysctl(8) from returning proper output, such as:
1) no output from sysctl(8)
2) erroneously returning ENOMEM with tools like truss(1) or uname(1):
   truss: can not get etype: Cannot allocate memory
|
#
267961 |
|
27-Jun-2014 |
hselasky |
Extend the meaning of the CTLFLAG_TUN flag to automatically check if there is an environment variable which shall initialize the SYSCTL during early boot. This works for all SYSCTL types, both statically and dynamically created ones, except for the SYSCTL NODE type and SYSCTLs which belong to VNETs. A new flag, CTLFLAG_NOFETCH, has been added to be used in the case a tunable sysctl has a custom initialisation function, allowing the sysctl to still be marked as a tunable. The kernel SYSCTL API is mostly the same, with a few exceptions for some special operations like iterating the children of a static/extern SYSCTL node. This operation should probably be made into a factored-out common macro, since some device drivers use it. The reason for changing the SYSCTL API was the need for a SYSCTL parent OID pointer, and not only the SYSCTL parent OID list pointer, in order to quickly generate the sysctl path. The motivation behind this patch is to avoid parameter-loading kludges inside the OFED driver subsystem. Instead of adding special code to the OFED driver subsystem to post-load tunables into dynamically created sysctls, we generalize this in the kernel.
Other changes:
- Corrected a possibly incorrect sysctl name from "hw.cbb.intr_mask" to "hw.pcic.intr_mask".
- Removed redundant TUNABLE statements throughout the kernel.
- Some minor code rewrites in connection with removing unneeded TUNABLE statements.
- Added a missing SYSCTL_DECL().
- Wrapped two very long lines.
- Avoid malloc()/free() inside sysctl string handling, in case it is called to initialize a sysctl from a tunable; malloc()/free() are not ready when sysctls from the sysctl dataset are registered.
- Bumped the FreeBSD version to indicate the SYSCTL API change.
MFC after: 2 weeks Sponsored by: Mellanox Technologies
|
#
261418 |
|
02-Feb-2014 |
cognet |
Invalidate cachelines for bounce pages on PREREAD too, there may still be stale entries from a previous transfer.
|
#
257228 |
|
27-Oct-2013 |
kib |
Add bus_dmamap_load_ma() function to load map with the array of vm_pages. Provide trivial implementation which forwards the load to _bus_dmamap_load_phys() page by page. Right now all architectures use bus_dmamap_load_ma_triv().
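The trivial forwarding implementation amounts to the sketch below. This follows the description above, but the exact argument list is an assumption and may differ from the committed signature.

    int
    bus_dmamap_load_ma_triv(bus_dma_tag_t dmat, bus_dmamap_t map,
        struct vm_page **ma, bus_size_t tlen, int ma_offs, int flags,
        bus_dma_segment_t *segs, int *segp)
    {
        bus_size_t done, len;
        int error, i;

        error = 0;
        for (i = 0, done = 0; done < tlen; done += len, i++) {
            len = MIN(PAGE_SIZE - ma_offs, tlen - done);
            error = _bus_dmamap_load_phys(dmat, map,
                VM_PAGE_TO_PHYS(ma[i]) + ma_offs, len, flags, segs, segp);
            if (error != 0)
                break;
            ma_offs = 0;    /* only the first page starts at an offset */
        }
        return (error);
    }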
Tested by: pho (as part of the functional patch) Sponsored by: The FreeBSD Foundation MFC after: 1 month
|
#
256638 |
|
16-Oct-2013 |
ian |
Add cases for the combinations of busdma sync op flags that we handle correctly by doing nothing, then add a panic for the default case, because that implies that some driver asked for a sync (probably incorrectly) and nothing was done.
|
#
256637 |
|
16-Oct-2013 |
ian |
When calculating the number of bounce pages needed, round the maxsize up to a multiple of PAGE_SIZE, and add one page because there can always be one more boundary crossing than the number of pages in the transfer.
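A worked example of the calculation, with PAGE_SIZE = 4096: maxsize = 9000 rounds up to 12288 bytes, i.e. 3 pages, plus 1 for the extra boundary crossing a misaligned buffer can add, so 4 bounce pages are reserved. In sketch form:

    /* Round maxsize up to whole pages, plus one for the extra crossing
     * of a buffer that doesn't start page-aligned. */
    maxpages = howmany(maxsize, PAGE_SIZE) + 1;   /* howmany() from sys/param.h */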
|
#
254229 |
|
11-Aug-2013 |
cognet |
Only allocate 2 bounce pages for maps that can use them only for buffers that are not aligned on a cache line boundary, as we will never need more.
|
#
254061 |
|
07-Aug-2013 |
cognet |
Don't bother trying to work around buffers which are not aligned on a cache line boundary. It has never been 100% correct, and it can't work on SMP, because nothing prevents another core from accessing data from an unrelated buffer in the same cache line while we invalidated it. Just use bounce pages instead.
Reviewed by: ian Approved by: mux (mentor) (implicit)
|
#
254025 |
|
07-Aug-2013 |
jeff |
Replace kernel virtual address space allocation with vmem. This provides transparent layering and better fragmentation.
- Normalize functions that allocate memory to use kmem_*
- Those that allocate address space are named kva_*
- Those that operate on maps are named kmap_*
- Implement recursive allocation handling for kmem_arena in vmem.
Reviewed by: alc Tested by: pho Sponsored by: EMC / Isilon Storage Division
|
#
253787 |
|
29-Jul-2013 |
cognet |
Remove useless cache operations.
|
#
252652 |
|
03-Jul-2013 |
gonzo |
Fix one of the INVARIANTS-related UMA panics on ARM.
Force the UMA zone to allocate service structures such as slabs using its own allocator. The uma_debug code performs atomic ops on uma_slab_t fields, and the safety of this operation is not guaranteed for write-back caches.
|
#
248655 |
|
23-Mar-2013 |
ian |
Don't check and warn about pmap mismatch on every call to busdma sync. With some recent busdma refactoring, sometimes it happens that a sync op gets called when bus_dmamap_load() never got called, which results in a spurious warning about a map mismatch when no sync operations will actually happen anyway. Now the check is done only if a sync operation is actually performed, and the result of the check is a panic, not just a printf.
Reviewed by: cognet (who prevented me from donning a pointy hat)
|
#
247776 |
|
04-Mar-2013 |
cognet |
If we're using a PIPT L2 cache, only merge 2 segments if both the virtual and the physical addresses are contiguous.
Submitted by: Thomas Skibo <ThomasSkibo@sbcglobal.net>
|
#
246881 |
|
16-Feb-2013 |
ian |
In _bus_dmamap_addseg(), the return value must be zero for error, or the size actually added to the segment (possibly smaller than the requested size if boundary crossings had to be avoided).
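In sketch form, the contract being fixed looks like the helper below; the function name is hypothetical and the boundary is assumed to be a power of two.

    /* Return 0 on error, else the size actually added, which may be
     * trimmed so the segment does not cross the tag's boundary. */
    static bus_size_t
    addseg_size(bus_addr_t curaddr, bus_size_t sgsize, bus_addr_t boundary)
    {
        bus_addr_t baddr;

        if (boundary > 0) {
            /* next boundary at or after curaddr */
            baddr = (curaddr + boundary) & ~(boundary - 1);
            if (sgsize > baddr - curaddr)
                sgsize = baddr - curaddr;
        }
        return (sgsize);
    }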
|
#
246859 |
|
15-Feb-2013 |
ian |
Set map->pmap before _bus_dmamap_count_pages() tries to use it.
Obtained from: Thomas Skibo <ThomasSkibo@sbcglobal.net>
|
#
246713 |
|
12-Feb-2013 |
kib |
Reform the busdma API so that new types may be added without modifying every architecture's busdma_machdep.c. It is done by unifying the bus_dmamap_load_buffer() routines so that they may be called from MI code. The MD busdma is then given a chance to do any final processing in the complete() callback.
The cam changes unify the bus_dmamap_load* handling in cam drivers.
The arm and mips implementations are updated to track virtual addresses for sync(). Previously this was done in a type specific way. Now it is done in a generic way by recording the list of virtuals in the map.
Submitted by: jeff (sponsored by EMC/Isilon) Reviewed by: kan (previous version), scottl, mjacob (isp(4), no objections for target mode changes) Discussed with: ian (arm changes) Tested by: marius (sparc64), mips (jmallet), isci(4) on x86 (jharris), amd64 (Fabian Keil <freebsd-listen@fabiankeil.de>)
|
#
244912 |
|
31-Dec-2012 |
gonzo |
Merge r234561 from busdma_machdep.c to ARMv6 version of busdma:
Interrupts must be disabled while handling a partial cache line flush, as otherwise the interrupt handling code may modify data in the non-DMA part of the cache line while we have it stashed away in the temporary stack buffer, then we end up restoring a stale value.
PR: 160431 Submitted by: Ian Lepore
|
#
244469 |
|
19-Dec-2012 |
cognet |
Use the new allocator in bus_dmamem_alloc().
|
#
243909 |
|
05-Dec-2012 |
cognet |
Don't write-back the cachelines if we really just want to invalidate them.
Spotted by: Ian Lepore <freebsd at damnhippie DOT dyndns dot org>
|
#
243108 |
|
15-Nov-2012 |
cognet |
Remove a useless printf
|
#
239597 |
|
22-Aug-2012 |
gonzo |
Do not change the "cachable" attribute for DMA memory allocated with the BUS_DMA_COHERENT attribute.
The minimum unit for changing the "cachable" attribute is a page, so a call to pmap_change_attr() effectively disables the cache for all pages that the newly allocated DMA memory region spans. The problem is that general-purpose memory could reside on these pages too, and disabling the cache might hurt performance. Moreover, ldrex/strex operations raise a Data Abort exception when accessing memory on a page with the "cachable" attribute off.
BUS_DMA_COHERENT does not require memory to be coherent. It just suggests making a best effort to reduce synchronization overhead.
|
#
239268 |
|
15-Aug-2012 |
gonzo |
Merging projects/armv6, part 1
Cumulative patch of changes that are not vendor-specific:
- ARMv6 and ARMv7 architecture support
- ARM SMP support
- VFP/Neon support
- ARM Generic Interrupt Controller driver
- Simplification of startup code for all platforms
|