History log of /linux-master/arch/microblaze/mm/consistent.c
Revision Date Author Comments
# 05cdf457 26-Nov-2020 Michal Simek <michal.simek@xilinx.com>

microblaze: Remove noMMU code

This configuration is obsolete and likely no one is really using it.
That's why it is removed, to simplify the code.

The note about CONFIG_MMU in hw_exception_handler.S is left in
intentionally so the comments remain understandable.

Cc: Mike Rapoport <rppt@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/r/43486cab370e0c0a79860120b71e0caac75a7e44.1606397528.git.michal.simek@xilinx.com


# 9f4df96b 22-Sep-2020 Christoph Hellwig <hch@lst.de>

dma-mapping: merge <linux/dma-noncoherent.h> into <linux/dma-map-ops.h>

Move more nitty gritty DMA implementation details into the common
internal header.

Signed-off-by: Christoph Hellwig <hch@lst.de>


# fa7e2247 21-Feb-2020 Christoph Hellwig <hch@lst.de>

dma-direct: make uncached_kernel_address more general

Rename the symbol to arch_dma_set_uncached, and pass a size to it as
well as allow an error return. That will allow reusing this hook for
in-place pagetable remapping.

As the in-place remap doesn't always require an explicit cache flush,
also detangle ARCH_HAS_DMA_PREP_COHERENT from ARCH_HAS_DMA_SET_UNCACHED.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
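
A minimal sketch of the reshaped hook described above, assuming a fixed
uncached alias offset (UNCACHED_OFFSET is a made-up name and the trivial
body is illustrative, not taken from the patch); the point is the new
shape: a size argument and an error return via ERR_PTR().

#include <linux/err.h>
#include <linux/types.h>

#define UNCACHED_OFFSET	0x10000000UL	/* hypothetical uncached alias offset */

/* formerly: void *uncached_kernel_address(void *addr) */
void *arch_dma_set_uncached(void *addr, size_t size)
{
	if (!size)			/* new: failure can now be reported */
		return ERR_PTR(-EINVAL);

	/* new: the size is available for arch-specific remapping decisions */
	return (void *)((unsigned long)addr + UNCACHED_OFFSET);
}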


# 4f8232bb 21-Feb-2020 Christoph Hellwig <hch@lst.de>

dma-direct: remove the cached_kernel_address hook

dma-direct now finds the kernel address for coherent allocations based
on the dma address, so the cached_kernel_address hook is unused and
can be removed entirely.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>


# 04e3543e 14-Aug-2019 Christoph Hellwig <hch@lst.de>

microblaze: use the generic dma coherent remap allocator

This switches to using common code for the DMA allocations, including
potential use of the CMA allocator if configured.

Switching to the generic code enables DMA allocations from atomic
context, which is required by the DMA API documentation, and also
adds various other minor features that drivers have started relying
upon. It also makes sure we have one tested code base for all
architectures that require uncached pte bits for coherent DMA
allocations.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
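
To illustrate the atomic-context point above, a driver-side call like
the following becomes legitimate once the generic remap allocator (with
its pre-populated atomic pool) is in place; the wrapper name and
parameters are made up for the example.

#include <linux/dma-mapping.h>
#include <linux/gfp.h>

/* Hypothetical helper: allocate a coherent buffer from atomic context. */
static void *alloc_coherent_atomic(struct device *dev, size_t size,
				   dma_addr_t *dma_handle)
{
	return dma_alloc_coherent(dev, size, dma_handle, GFP_ATOMIC);
}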


# d3b9f659 14-Aug-2019 Christoph Hellwig <hch@lst.de>

microblaze/nommu: use the generic uncached segment support

Stop providing our own arch alloc/free hooks for nommu platforms and
just expose the segment offset and use the generic dma-direct
allocator.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>


# d2912cb1 04-Jun-2019 Thomas Gleixner <tglx@linutronix.de>

treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500

Based on 2 normalized pattern(s):

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation #

extracted by the scancode license scanner the SPDX license identifier

GPL-2.0-only

has been chosen to replace the boilerplate/reference in 4122 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Enrico Weigelt <info@metux.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>


# 518a2f19 14-Dec-2018 Christoph Hellwig <hch@lst.de>

dma-mapping: zero memory returned from dma_alloc_*

If we want to map memory from the DMA allocator to userspace it must be
zeroed at allocation time to prevent stale data leaks. We already do
this on the most common architectures, but some architectures don't do
it yet. Fix them up, either by passing GFP_ZERO when we use the normal
page allocator or by doing a manual memset otherwise.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
Acked-by: Sam Ravnborg <sam@ravnborg.org> [sparc]
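
A sketch of the two fix-up patterns named above (illustrative helpers,
not the per-arch diffs): either ask the page allocator for zeroed pages
directly, or clear the buffer by hand where __GFP_ZERO cannot be used.

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/string.h>

/* Variant 1: let the page allocator zero the memory. */
static void *alloc_zeroed_dma_pages(gfp_t gfp, size_t size)
{
	struct page *page = alloc_pages(gfp | __GFP_ZERO, get_order(size));

	return page ? page_address(page) : NULL;
}

/* Variant 2: manual memset for allocators that cannot take __GFP_ZERO. */
static void zero_dma_buffer(void *vaddr, size_t size)
{
	memset(vaddr, 0, size);
}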


# 57c8a661 30-Oct-2018 Mike Rapoport <rppt@linux.vnet.ibm.com>

mm: remove include/linux/bootmem.h

Move remaining definitions and declarations from include/linux/bootmem.h
into include/linux/memblock.h and remove the redundant header.

The includes were replaced with the semantic patch below, followed by a
semi-automated removal of duplicated '#include <linux/memblock.h>' lines.

@@
@@
- #include <linux/bootmem.h>
+ #include <linux/memblock.h>

[sfr@canb.auug.org.au: dma-direct: fix up for the removal of linux/bootmem.h]
Link: http://lkml.kernel.org/r/20181002185342.133d1680@canb.auug.org.au
[sfr@canb.auug.org.au: powerpc: fix up for removal of linux/bootmem.h]
Link: http://lkml.kernel.org/r/20181005161406.73ef8727@canb.auug.org.au
[sfr@canb.auug.org.au: x86/kaslr, ACPI/NUMA: fix for linux/bootmem.h removal]
Link: http://lkml.kernel.org/r/20181008190341.5e396491@canb.auug.org.au
Link: http://lkml.kernel.org/r/1536927045-23536-30-git-send-email-rppt@linux.vnet.ibm.com
Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Ley Foon Tan <lftan@altera.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Serge Semin <fancer.lancer@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 58b04406 11-Sep-2018 Christoph Hellwig <hch@lst.de>

dma-mapping: consolidate the dma mmap implementations

The only functional difference (modulo a few missing fixes in the arch
code) is that architectures without coherent caches need a hook to
convert a virtual or dma address into a pfn, given that we don't have
the kernel linear mapping available for the otherwise easy virt_to_page
call. As a side effect we can support mmap of the per-device coherent
area even on architectures not providing the callback, and we make the
previously dangerous default method dma_common_mmap actually safe for
non-coherent architectures by rejecting it without the right helper.

In addition to that we need a hook so that some architectures can
override the protection bits when mmapping dma coherent allocations.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Paul Burton <paul.burton@mips.com> # MIPS parts


# ed207a74 19-Jul-2018 Christoph Hellwig <hch@lst.de>

microblaze: remove consistent_sync and consistent_sync_page

Both unused.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>


# 5411ad27 19-Jul-2018 Christoph Hellwig <hch@lst.de>

microblaze: use generic dma_noncoherent_ops

Switch to the generic noncoherent direct mapping implementation.

This removes the direction-based optimizations in
sync_{single,sg}_for_{cpu,device}, which were marked untested and
do not match the usually very well tested {un,}map_{single,sg}
implementations.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>


# 3a8e3265 03-Dec-2014 Lars-Peter Clausen <lars@metafoo.de>

microblaze: Fix mmap for cache coherent memory

When running in non-cache coherent configuration the memory that was
allocated with dma_alloc_coherent() has a custom mapping and so there is no
1-to-1 relationship between the kernel virtual address and the PFN. This
means that virt_to_pfn() will not work correctly for those addresses and the
default mmap implementation in the form of dma_common_mmap() will map some
random, but not the requested, memory area.

Fix this by providing a custom mmap implementation that looks up the PFN
from the page table rather than using virt_to_pfn.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
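
A sketch of the approach described above, using the modern
virt_to_kpte() helper for brevity (the 2014 patch predates it and may
open-code the page-table walk); the function names here are
illustrative, not quoted from the patch.

#include <linux/mm.h>
#include <linux/pgtable.h>

/* Resolve the PFN behind a remapped coherent buffer via the kernel page
 * table instead of virt_to_pfn(), which is wrong for such mappings. */
static unsigned long coherent_virt_to_pfn(void *vaddr)
{
	pte_t *ptep = virt_to_kpte((unsigned long)vaddr);

	if (!ptep || pte_none(*ptep) || !pte_present(*ptep))
		return 0;

	return pte_pfn(*ptep);
}

/* Custom mmap: hand the looked-up PFN to remap_pfn_range(). */
static int coherent_mmap(struct vm_area_struct *vma, void *cpu_addr)
{
	unsigned long pfn = coherent_virt_to_pfn(cpu_addr) + vma->vm_pgoff;

	return remap_pfn_range(vma, vma->vm_start, pfn,
			       vma->vm_end - vma->vm_start,
			       vma->vm_page_prot);
}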


# a66a6265 07-Feb-2013 Michal Simek <michal.simek@xilinx.com>

microblaze: Use asm-generic/io.h

Using the generic io.h narrows down code duplication in the
architecture's io.h.

- define PCI_IOBASE
- remove non existing pci_io_base extern

Signed-off-by: Michal Simek <michal.simek@xilinx.com>


# c1ce4b37 12-Nov-2013 Xishi Qiu <qiuxishi@huawei.com>

mm/arch: use __free_reserved_page() to simplify the code

Use __free_reserved_page() to simplify the code in arch.

Since split_page() was used in consistent_alloc()/__dma_alloc_coherent()/dma_alloc_coherent(),
page->_count == 1, and we can free the page safely.

__free_reserved_page()
	ClearPageReserved()
	init_page_count()	// it won't change the value
	__free_page()

Signed-off-by: Xishi Qiu <qiuxishi@huawei.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
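
Sketch of the simplification (the wrapper names are illustrative, not
the literal diff): the open-coded release of the reserved, split pages
collapses into the single helper, which performs exactly the expansion
quoted above.

#include <linux/mm.h>

/* Before: open-coded release of a reserved, split page. */
static void consistent_release_page_old(struct page *page)
{
	ClearPageReserved(page);
	init_page_count(page);
	__free_page(page);
}

/* After: one helper doing the same three steps. */
static void consistent_release_page_new(struct page *page)
{
	__free_reserved_page(page);
}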


# d64af918 01-Feb-2013 Michal Simek <michal.simek@xilinx.com>

microblaze: Do not use module.h in files which are not modules

Based on the patch:
"lib: reduce the use of module.h wherever possible"
(sha1: 8bc3bcc93a2b4e47d5d410146f6546bca6171663)
fix all microblaze files which are not modules.

Signed-off-by: Michal Simek <michal.simek@xilinx.com>


# 6bd55f0b 27-Dec-2012 Michal Simek <monstr@monstr.eu>

microblaze: Fix coding style issues

Fix coding style issues reported by checkpatch.pl.

Signed-off-by: Michal Simek <monstr@monstr.eu>


# cd44da15 07-Feb-2011 Michal Simek <monstr@monstr.eu>

microblaze: Fix sparse warning - consistent_alloc function

The warning in dma.c was caused by an incorrect type in the consistent_alloc function.

Warning log:
CHECK arch/microblaze/kernel/dma.c
arch/microblaze/kernel/dma.c:53:26: warning: incorrect type in argument 1 (different base types)
arch/microblaze/kernel/dma.c:53:26: expected int [signed] gfp
arch/microblaze/kernel/dma.c:53:26: got restricted unsigned int [usertype] flag

Signed-off-by: Michal Simek <monstr@monstr.eu>
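
The fix sparse asks for is of this shape (a sketch; the exact microblaze
prototype and parameter order are assumptions, not quoted from the
patch): gfp flags must carry the restricted gfp_t type, not plain int.

#include <linux/gfp.h>
#include <linux/types.h>

/* before: void *consistent_alloc(int gfp, size_t size, dma_addr_t *dma_handle); */
void *consistent_alloc(gfp_t gfp, size_t size, dma_addr_t *dma_handle);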


# 385e1efa 29-Apr-2010 Michal Simek <monstr@monstr.eu>

microblaze: Fix consistent-sync code

PCI_DMA_FROMDEVICE should trigger cache invalidation, not flushing.

Signed-off-by: Michal Simek <monstr@monstr.eu>
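
A sketch of the rule this fix enforces, written with the usual
writeback/invalidate range helpers (treat flush_dcache_range() and
invalidate_dcache_range() as assumed names here, not quotes from the
patch): buffers the device will write (DMA_FROM_DEVICE) get their stale
CPU cache lines invalidated, while CPU-written data headed to the
device gets flushed.

#include <linux/dma-direction.h>
#include <asm/cacheflush.h>

static void dma_cache_maint(unsigned long start, unsigned long end,
			    enum dma_data_direction dir)
{
	switch (dir) {
	case DMA_TO_DEVICE:
	case DMA_BIDIRECTIONAL:
		flush_dcache_range(start, end);		/* write back CPU data */
		break;
	case DMA_FROM_DEVICE:
		invalidate_dcache_range(start, end);	/* drop stale lines */
		break;
	default:
		break;
	}
}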


# f1525765 10-Apr-2010 Michal Simek <monstr@monstr.eu>

microblaze: Fix consistent code

This patch fixes the consistent code, which had problems with the
consistent_free function.
I am not sure if we need to call flush_tlb_all after it, but it keeps
the TLBs synced.
The noMMU and MMU versions are added together.

Uncached shadow feature is not tested.

Signed-off-by: Michal Simek <monstr@monstr.eu>


# 5a0e3ad6 24-Mar-2010 Tejun Heo <tj@kernel.org>

include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h

percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.

percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities to include
those headers directly instead of assuming availability. As this
conversion needs to touch a large number of source files, the following
script is used as the basis of the conversion.

http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following:

* Scan files for gfp and slab usages and update includes such that
only the necessary includes are there. i.e. if only gfp is used,
gfp.h; if slab is used, slab.h.

* When the script inserts a new include, it looks at the include
blocks and tries to put the new include such that its order conforms
to its surroundings. It's put in the include block which contains
core kernel includes, in the same order that the rest are ordered -
alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
doesn't seem to be any matching order.

* If the script can't find a place to put a new include (mostly
because the file doesn't have a fitting include block), it prints out
an error message indicating which .h file needs to be added to the
file.

The conversion was done in the following steps.

1. The initial automatic conversion of all .c files updated slightly
over 4000 files, deleting around 700 includes and adding ~480 gfp.h
and ~3000 slab.h inclusions. The script emitted errors for ~400
files.

2. Each error was manually checked. Some didn't need the inclusion,
some needed manual addition, while adding it to an implementation .h or
embedding .c file was more appropriate for others. This step added
inclusions to around 150 files.

3. The script was run again and the output was compared to the edits
from #2 to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed.
e.g. lib/decompress_*.c used malloc/free() wrappers around slab
APIs requiring slab.h to be added manually.

5. The script was run on all .h files but without automatically
editing them as sprinkling gfp.h and slab.h inclusions around .h
files could easily lead to inclusion dependency hell. Most gfp.h
inclusion directives were ignored as stuff from gfp.h was usually
widely available and often used in preprocessor macros. Each
slab.h inclusion directive was examined and added manually as
necessary.

6. percpu.h was updated not to include slab.h.

7. Build tests were done on the following configurations and failures
were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
distributed build env didn't work with gcov compiles) and a few
more options had to be turned off depending on archs to make things
build (like ipr on powerpc/64 which failed due to missing writeq).

* x86 and x86_64 UP and SMP allmodconfig and a custom test config.
* powerpc and powerpc64 SMP allmodconfig
* sparc and sparc64 SMP allmodconfig
* ia64 SMP allmodconfig
* s390 SMP allmodconfig
* alpha SMP allmodconfig
* um on x86_64 SMP allmodconfig

8. percpu.h modifications were reverted so that they could be applied as
a separate patch and serve as a bisection point.

Given the fact that I had only a couple of failures from tests on step
6, I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers, which should be easily discoverable on most builds of
the specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>


# 3a0d7a4d 21-Feb-2010 Michal Simek <monstr@monstr.eu>

microblaze: Add consistent code

Remove the ancient Kconfig option for the consistent code.
MMU uses cache-inhibited pages.

noMMU uses the UNCACHED SHADOW feature, which requires double the RAM size.
For example:
Physical RAM is 256MB and the caches are set up to cover the same size.
But if you set up in HW that the size is 512MB while the cache covers
256MB, then you can use addresses from 256-512MB without caches, and
they correspond to 0-256MB with cache. That's why I am using the dcache
base/high addresses to find the uncached area.

Signed-off-by: Michal Simek <monstr@monstr.eu>
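
A worked example of the address arithmetic described above (the numbers
come from the commit text; the macro and function names are made up for
illustration): with 256MB populated but the bus configured as if 512MB
were present, the upper half aliases the lower half without caching.

#define PHYS_RAM_SIZE	0x10000000UL	/* 256MB actually populated */

/* 0x00000000-0x0FFFFFFF: cached view, 0x10000000-0x1FFFFFFF: uncached alias */
static unsigned long to_uncached_shadow(unsigned long cached_addr)
{
	return cached_addr + PHYS_RAM_SIZE;
}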