History log of /linux-master/arch/arm/include/asm/memory.h
Revision Date Author Comments
# a9ff6961 02-Jun-2022 Linus Walleij <linus.walleij@linaro.org>

ARM: mm: Make virt_to_pfn() a static inline

Making virt_to_pfn() a static inline taking a strongly typed
(const void *) makes the contract of passing a pointer of that
type to the function explicit, and exposes any misuse of the
macro virt_to_pfn() acting polymorphic and accepting many types
such as (void *), (uintptr_t) or (unsigned long) as arguments
without warnings.
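
A minimal sketch of the resulting helper, using the definitions named
below (illustrative; the actual kernel code may differ in detail):

static inline unsigned long virt_to_pfn(const void *p)
{
	unsigned long kaddr = (unsigned long)p;

	/* PAGE_OFFSET/PAGE_SHIFT/PHYS_PFN_OFFSET come from <asm/page.h> */
	return ((kaddr - PAGE_OFFSET) >> PAGE_SHIFT) + PHYS_PFN_OFFSET;
}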

Doing this is a bit intrusive: virt_to_pfn() requires
PHYS_PFN_OFFSET and PAGE_SHIFT to be defined, and these are defined in
<asm/page.h>, so <asm/page.h> must be included *before* <asm/memory.h>.

The use of macros was obscuring the unclear inclusion order here,
as the macros would eventually be resolved, but a static inline
like this cannot be compiled with unresolved macros.

The naive solution of including <asm/page.h> at the top of
<asm/memory.h> does not work, because <asm/memory.h> sometimes
includes <asm/page.h> at the end of itself, which would create a
confusing inclusion loop. So instead, take the approach of always
unconditionally including <asm/page.h> at the end of <asm/memory.h>.

arch/arm uses <asm/memory.h> explicitly in a lot of places. However,
it turns out that if we just unconditionally include
<asm/memory.h> into <asm/page.h> and switch all inclusions of
<asm/memory.h> to <asm/page.h> instead, we enforce the right
order and <asm/memory.h> will always have access to the
definitions.

Put an inclusion guard in place making it impossible to include
<asm/memory.h> explicitly.

Link: https://lore.kernel.org/linux-mm/20220701160004.2ffff4e5ab59a55499f4c736@linux-foundation.org/
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>


# 6069b9ec 29-Jan-2023 Mike Rapoport (IBM) <rppt@kernel.org>

arm: include asm-generic/memory_model.h from page.h rather than memory.h

Patch series "mm, arch: add generic implementation of pfn_valid() for
FLATMEM", v2.

Every architecture that supports FLATMEM memory model defines its own
version of pfn_valid() that essentially compares a pfn to max_mapnr.

Use the mips/powerpc version, implemented as a static inline, as a
generic implementation of pfn_valid() and drop its per-architecture
definitions.
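
A sketch of the resulting generic FLATMEM helper in
asm-generic/memory_model.h (as described above; details may vary):

#ifndef pfn_valid
static inline int pfn_valid(unsigned long pfn)
{
	/* avoid <linux/mm.h> include hell */
	extern unsigned long max_mapnr;
	unsigned long pfn_offset = ARCH_PFN_OFFSET;

	return pfn >= pfn_offset && (pfn - pfn_offset) < max_mapnr;
}
#define pfn_valid pfn_valid
#endif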


This patch (of 4):

This makes arm consistent with other architectures and allows for a
generic definition of pfn_valid() in asm-generic/memory_model.h, with a
clear override in arch/arm/include/asm/page.h.

Link: https://lkml.kernel.org/r/20230129124235.209895-1-rppt@kernel.org
Link: https://lkml.kernel.org/r/20230129124235.209895-2-rppt@kernel.org
Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Brian Cain <bcain@quicinc.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Dinh Nguyen <dinguyen@kernel.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greg Ungerer <gerg@linux-m68k.org>
Cc: Guo Ren <guoren@kernel.org>
Cc: Helge Deller <deller@gmx.de>
Cc: Huacai Chen <chenhuacai@kernel.org>
Cc: Huacai Chen <chenhuacai@loongson.cn>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Vineet Gupta <vgupta@kernel.org>
Cc: WANG Xuerui <kernel@xen0n.name>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>


# be7f3f90 25-Aug-2022 Arnd Bergmann <arnd@arndb.de>

ARM: footbridge: remove custom DMA address handling

Footbridge is the last Arm platform that has its own
__virt_to_bus()/__bus_to_virt()/phys_to_dma()/dma_to_phys() abstraction,
but this is just a simple offset now.

For PCI devices, the offset that is programmed into the PCI bridge must
also be set in each device using dma_direct_set_offset(). As Arm does
not have a pcibios_bus_add_device() helper yet, just use a bus notifier
for this.
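
A hedged sketch of that notifier approach (fb_bus_sdram_offset() and the
SZ_256M window size are assumptions here, not necessarily the actual
footbridge code):

static int fb_dma_notifier_call(struct notifier_block *nb,
				unsigned long action, void *data)
{
	struct device *dev = data;

	/* apply the bridge's DMA offset to each newly added device */
	if (action == BUS_NOTIFY_ADD_DEVICE)
		dma_direct_set_offset(dev, fb_bus_sdram_offset(), /* assumed */
				      0, SZ_256M);                /* assumed */
	return NOTIFY_OK;
}

static struct notifier_block fb_dma_notifier = {
	.notifier_call = fb_dma_notifier_call,
};

/* registered once at init: bus_register_notifier(&pci_bus_type, &fb_dma_notifier); */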

For ISA DMA, drivers now pass a non-translated physical address into
set_dma_addr(), so the address has to be converted back with the
corresponding isa_bus_to_virt() function and then into the correct bus
address, applying the offset via the isa_dma_dev.

Tested-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>


# af6f23b8 19-Apr-2022 Christoph Hellwig <hch@lst.de>

ARM/dma-mapping: use the generic versions of dma_to_phys/phys_to_dma by default

Only the footbridge platforms provide their own DMA address translation
helpers, so switch to the generic version for all other platforms, and
consolidate the footbridge implementation to remove two levels of
indirection.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Tested-by: Marc Zyngier <maz@kernel.org>


# 463dbba4 08-Aug-2021 Linus Walleij <linus.walleij@linaro.org>

ARM: 9104/2: Fix Keystone 2 kernel mapping regression

This fixes a Keystone 2 regression discovered as a side effect of
defining and passing the physical start/end sections of the kernel
to the MMU remapping code.

As the Keystone applies an offset to all physical addresses,
including those identified and patched by phys2virt, we fail to
account for this offset in the kernel_sec_start and kernel_sec_end
variables.

Further, these offsets can extend into the 64bit range on LPAE
systems such as the Keystone 2.

Fix it like this:
- Extend kernel_sec_start and kernel_sec_end to be 64bit
- Add the offset also to kernel_sec_start and kernel_sec_end

As passing kernel_sec_start and kernel_sec_end as 64bit invariably
incurs BE8 endianness issues, I have attempted to dry-code around
these.

Tested on the Vexpress QEMU model both with and without LPAE
enabled.

Fixes: 6e121df14ccd ("ARM: 9090/1: Map the lowmem and kernel separately")
Reported-by: Nishanth Menon <nmenon@kernel.org>
Suggested-by: Russell King <rmk+kernel@armlinux.org.uk>
Tested-by: Grygorii Strashko <grygorii.strashko@ti.com>
Tested-by: Nishanth Menon <nmenon@kernel.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>


# e17362d6 17-Jun-2021 Linus Walleij <linus.walleij@linaro.org>

ARM: 9097/1: mmu: Declare section start/end correctly

The kernel test robot reported an interesting bug:

A debug print was using %08x with kernel_sec_start and kernel_sec_end
being phys_addr_t which can be either u32 or u64 (possibly more).

Actually these should just be declared as u32 to begin with: they are
declared as such in the assembly in head.S, and the kernel definitely
boots in a 32 bit physical address space. Redeclare kernel_sec_start
and kernel_sec_end as u32 to get rid of the bug.

Reported-by: kernel test robot <lkp@intel.com>
Fixes: 6e121df14ccd ("ARM: 9090/1: Map the lowmem and kernel separately")
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# a91da545 03-Jun-2021 Linus Walleij <linus.walleij@linaro.org>

ARM: 9089/1: Define kernel physical section start and end

When we are mapping the initial sections in head.S we
know very well where the start and end of the kernel image
in physical memory are placed. Later on it gets hard
to determine this.

Save the information into two variables named
kernel_sec_start and kernel_sec_end for convenience
for later work involving the physical start and end
of the kernel. These variables are section-aligned
corresponding to the early section mappings set up
in head.S.

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# b78f63f4 03-Jun-2021 Linus Walleij <linus.walleij@linaro.org>

ARM: 9088/1: Split KERNEL_OFFSET from PAGE_OFFSET

We want to be able to compile the kernel into an address different
from PAGE_OFFSET (start of lowmem) + TEXT_OFFSET, so start to pry
apart the address of where the kernel is located from the address
where the lowmem is located by defining and using KERNEL_OFFSET in
a few key places.

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# 95731b8e 11-Feb-2021 Ard Biesheuvel <ardb@kernel.org>

ARM: 9059/1: cache-v7: get rid of mini-stack

Now that we have reduced the number of registers that we need to
preserve when calling v7_invalidate_l1 from the boot code, we can use
scratch registers to preserve the remaining ones, and get rid of the
mini stack entirely. This works around any issues regarding cache
behavior in relation to the uncached accesses to this memory, which is
hard to get right in the general case (i.e., both bare metal and under
virtualization)

While at it, switch v7_invalidate_l1 to using ip as a scratch register
instead of r4. This makes the function AAPCS compliant, and removes the
need to stash r4 in ip across the call.

Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# 9443076e 18-Sep-2020 Ard Biesheuvel <ardb@kernel.org>

ARM: p2v: reduce p2v alignment requirement to 2 MiB

The ARM kernel's linear map starts at PAGE_OFFSET, which maps to a
physical address (PHYS_OFFSET) that is platform specific, and is
discovered at boot. Since we don't want to slow down translations
between physical and virtual addresses by keeping the offset in a
variable in memory, we implement this by patching the code performing
the translation, and putting the offset between PAGE_OFFSET and the
start of physical RAM directly into the instruction opcodes.

As we only patch up to 8 bits of offset, yielding 4 GiB >> 8 == 16 MiB
of granularity, we have to round up PHYS_OFFSET to the next multiple of
16 MiB if the start of physical RAM is not such a multiple. This wastes some
physical RAM, since the memory that was skipped will now live below
PAGE_OFFSET, making it inaccessible to the kernel.

We can improve this by changing the patchable sequences and the patching
logic to carry more bits of offset: 11 bits gives us 4 GiB >> 11 == 2 MiB
of granularity, and so we will never waste more than that amount by
rounding up the physical start of DRAM to the next multiple of 2 MiB.
(Note that 2 MiB granularity guarantees that the linear mapping can be
created efficiently, whereas less than 2 MiB may result in the linear
mapping needing another level of page tables)

This helps Zhen Lei's scenario, where the start of DRAM is known to be
occupied. It also helps EFI boot, which relies on the firmware's page
allocator to allocate space for the decompressed kernel as low as
possible. And if the KASLR patches ever land for 32-bit, it will give
us 3 more bits of randomization of the placement of the kernel inside
the linear region.

For the ARM code path, it simply comes down to using two add/sub
instructions instead of one for the carryless version, and patching
each of them with the correct immediate depending on the rotation
field. For the LPAE calculation, which has to deal with a carry, it
patches the MOVW instruction with up to 12 bits of offset (but we only
need 11 bits anyway)
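
Schematically, the patched non-LPAE ARM sequence then looks something
like this (a hedged sketch with placeholder registers and immediates,
not the exact __pv_stub macro):

1:	add	r0, r1, #0x81000000	@ patched with offset bits [31:24]
2:	add	r0, r0, #0x00810000	@ patched with offset bits [23:16]

where the addresses of 1: and 2: are recorded in the patch table so the
boot code can rewrite both immediates.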

For the Thumb2 code path, patching more than 11 bits of displacement
would be somewhat cumbersome, but the 11 bits we need fit nicely into
the second word of the u16[2] opcode, so we simply update the immediate
assignment and the left shift to create an addend of the right magnitude.

Suggested-by: Zhen Lei <thunder.leizhen@huawei.com>
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>


# e8e00f5a 20-Sep-2020 Ard Biesheuvel <ardb@kernel.org>

ARM: p2v: switch to MOVW for Thumb2 and ARM/LPAE

In preparation for reducing the phys-to-virt minimum relative alignment
from 16 MiB to 2 MiB, switch to patchable sequences involving MOVW
instructions that can more easily be manipulated to carry a 12-bit
immediate. Note that the non-LPAE ARM sequence is not updated: MOVW
may not be supported on non-LPAE platforms, and the sequence itself
can be updated more easily to apply the 12 bits of displacement.

For Thumb2, which has many more versions of opcodes, switch to a sequence
that can be patched by the same patching code for both versions. Note
that the Thumb2 opcodes for MOVW and MVN are unambiguous, and have no
rotation bits in their immediate fields, so there is no need to use
placeholder constants in the asm blocks.

While at it, drop the 'volatile' qualifiers from the asm blocks: the
code does not have any side effects that are invisible to the compiler,
so it is free to omit these sequences if the outputs are not used.

Suggested-by: Russell King <linux@armlinux.org.uk>
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>


# 2730e8ea 20-Sep-2020 Ard Biesheuvel <ardb@kernel.org>

ARM: p2v: use relative references in patch site arrays

Free up a register in the p2v patching code by switching to relative
references, which don't require keeping the phys-to-virt displacement
live in a register.

Acked-by: Nicolas Pitre <nico@fluxnic.net>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>


# 0869f3b9 18-Sep-2020 Ard Biesheuvel <ardb@kernel.org>

ARM: p2v: drop redundant 'type' argument from __pv_stub

We always pass the same value for 'type' so pull it into the __pv_stub
macro itself.

Acked-by: Nicolas Pitre <nico@fluxnic.net>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>


# fc2933c1 28-Oct-2020 Ard Biesheuvel <ardb@kernel.org>

ARM: 9020/1: mm: use correct section size macro to describe the FDT virtual address

Commit

149a3ffe62b9dbc3 ("9012/1: move device tree mapping out of linear region")

created a permanent, read-only section mapping of the device tree blob
provided by the firmware, and added a set of macros to get the base and
size of the virtually mapped FDT based on the physical address. However,
while the mapping code uses the SECTION_SIZE macro correctly, the macros
use PMD_SIZE instead, which means something entirely different on ARM when
using short descriptors, and is therefore not the right quantity to use
here. So replace PMD_SIZE with SECTION_SIZE. While at it, change the names
of the macro and its parameter to clarify that it returns the virtual
address of the start of the FDT, based on the physical address in memory.

Tested-by: Joel Stanley <joel@jms.id.au>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# c12366ba 25-Oct-2020 Linus Walleij <linus.walleij@linaro.org>

ARM: 9015/2: Define the virtual space of KASan's shadow region

Define KASAN_SHADOW_OFFSET,KASAN_SHADOW_START and KASAN_SHADOW_END for
the Arm kernel address sanitizer. We are "stealing" lowmem (the 4GB
addressable by a 32bit architecture) out of the virtual address
space to use as shadow memory for KASan as follows:

+----+ 0xffffffff
|    |
|    | |-> Static kernel image (vmlinux) BSS and page table
|    |/
+----+ PAGE_OFFSET
|    |
|    | |-> Loadable kernel modules virtual address space area
|    |/
+----+ MODULES_VADDR = KASAN_SHADOW_END
|    |
|    | |-> The shadow area of kernel virtual address.
|    |/
+----+-> TASK_SIZE (start of kernel space) = KASAN_SHADOW_START, the
|    |   shadow address of MODULES_VADDR
|    |   |
|    |   |
|    |   |-> The user space area in lowmem. The kernel address
|    |   |   sanitizer does not use this space, nor does it map it.
|    |   |
|    |   |
|    |   |
|    |   |
|    |/
------ 0

0 .. TASK_SIZE is the memory that can be used by shared
userspace/kernelspace. It is used for userspace processes and for
passing parameters and memory buffers in system calls etc. We do not
need to shadow this area.

KASAN_SHADOW_START:
This value begins at the MODULES_VADDR's shadow address. It is the
start of kernel virtual space. Since we have modules to load, we need
to cover that area with shadow memory as well, so we can find memory
bugs in modules.

KASAN_SHADOW_END:
This value is the 0x100000000's shadow address: the mapping that would
be after the end of the kernel memory at 0xffffffff. It is the end of
kernel address sanitizer shadow area. It is also the start of the
module area.

KASAN_SHADOW_OFFSET:
This value is used to map an address to the corresponding shadow
address by the following formula:

shadow_addr = (address >> 3) + KASAN_SHADOW_OFFSET;

As you would expect, >> 3 is equal to dividing by 8, meaning each
byte in the shadow memory covers 8 bytes of kernel memory, so one
bit of shadow memory per byte of kernel memory is used.

The KASAN_SHADOW_OFFSET is provided in a Kconfig option depending
on the VMSPLIT layout of the system: the kernel and userspace can
split up lowmem in different ways according to needs, so we calculate
the shadow offset depending on this.

When kasan is enabled, the definition of TASK_SIZE is not an 8-bit
rotated constant, so we need to modify the TASK_SIZE access code in the
*.s file.

The kernel and modules may use different amounts of memory,
according to the VMSPLIT configuration, which in turn
determines the PAGE_OFFSET.

We use the following KASAN_SHADOW_OFFSETs depending on how the
virtual memory is split up:

- 0x1f000000 if we have 1G userspace / 3G kernelspace split:
- The kernel address space is 3G (0xc0000000)
- PAGE_OFFSET is then set to 0x40000000 so the kernel static
image (vmlinux) uses addresses 0x40000000 .. 0xffffffff
- On top of that we have the MODULES_VADDR which under
the worst case (using ARM instructions) is
PAGE_OFFSET - 16M (0x01000000) = 0x3f000000
so the modules use addresses 0x3f000000 .. 0x3fffffff
- So the addresses 0x3f000000 .. 0xffffffff need to be
covered with shadow memory. That is 0xc1000000 bytes
of memory.
- 1/8 of that is needed for its shadow memory, so
0x18200000 bytes of shadow memory is needed. We
"steal" that from the remaining lowmem.
- The KASAN_SHADOW_START becomes 0x26e00000, to
KASAN_SHADOW_END at 0x3effffff.
- Now we can calculate the KASAN_SHADOW_OFFSET for any
kernel address as 0x3f000000 needs to map to the first
byte of shadow memory and 0xffffffff needs to map to
the last byte of shadow memory. Since:
SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
0x26e00000 = (0x3f000000 >> 3) + KASAN_SHADOW_OFFSET
KASAN_SHADOW_OFFSET = 0x26e00000 - (0x3f000000 >> 3)
KASAN_SHADOW_OFFSET = 0x26e00000 - 0x07e00000
KASAN_SHADOW_OFFSET = 0x1f000000

- 0x5f000000 if we have 2G userspace / 2G kernelspace split:
- The kernel space is 2G (0x80000000)
- PAGE_OFFSET is set to 0x80000000 so the kernel static
image uses 0x80000000 .. 0xffffffff.
- On top of that we have the MODULES_VADDR which under
the worst case (using ARM instructions) is
PAGE_OFFSET - 16M (0x01000000) = 0x7f000000
so the modules use addresses 0x7f000000 .. 0x7fffffff
- So the addresses 0x7f000000 .. 0xffffffff need to be
covered with shadow memory. That is 0x81000000 bytes
of memory.
- 1/8 of that is needed for its shadow memory, so
0x10200000 bytes of shadow memory is needed. We
"steal" that from the remaining lowmem.
- The KASAN_SHADOW_START becomes 0x6ee00000, to
KASAN_SHADOW_END at 0x7effffff.
- Now we can calculate the KASAN_SHADOW_OFFSET for any
kernel address as 0x7f000000 needs to map to the first
byte of shadow memory and 0xffffffff needs to map to
the last byte of shadow memory. Since:
SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
0x6ee00000 = (0x7f000000 >> 3) + KASAN_SHADOW_OFFSET
KASAN_SHADOW_OFFSET = 0x6ee00000 - (0x7f000000 >> 3)
KASAN_SHADOW_OFFSET = 0x6ee00000 - 0x0fe00000
KASAN_SHADOW_OFFSET = 0x5f000000

- 0x9f000000 if we have 3G userspace / 1G kernelspace split,
and this is the default split for ARM:
- The kernel address space is 1GB (0x40000000)
- PAGE_OFFSET is set to 0xc0000000 so the kernel static
image uses 0xc0000000 .. 0xffffffff.
- On top of that we have the MODULES_VADDR which under
the worst case (using ARM instructions) is
PAGE_OFFSET - 16M (0x01000000) = 0xbf000000
so the modules use addresses 0xbf000000 .. 0xbfffffff
- So the addresses 0xbf000000 .. 0xffffffff need to be
covered with shadow memory. That is 0x41000000 bytes
of memory.
- 1/8 of that is needed for its shadow memory, so
0x08200000 bytes of shadow memory is needed. We
"steal" that from the remaining lowmem.
- The KASAN_SHADOW_START becomes 0xb6e00000, to
KASAN_SHADOW_END at 0xbeffffff.
- Now we can calculate the KASAN_SHADOW_OFFSET for any
kernel address as 0xbf000000 needs to map to the first
byte of shadow memory and 0xffffffff needs to map to
the last byte of shadow memory. Since:
SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
0xb6e00000 = (0xbf000000 >> 3) + KASAN_SHADOW_OFFSET
KASAN_SHADOW_OFFSET = 0xb6e00000 - (0xbf000000 >> 3)
KASAN_SHADOW_OFFSET = 0xb6e00000 - 0x17e00000
KASAN_SHADOW_OFFSET = 0x9f000000

- 0x8f000000 if we have 3G userspace / 1G kernelspace with
full 1 GB low memory (VMSPLIT_3G_OPT):
- The kernel address space is 1GB (0x40000000)
- PAGE_OFFSET is set to 0xb0000000 so the kernel static
image uses 0xb0000000 .. 0xffffffff.
- On top of that we have the MODULES_VADDR which under
the worst case (using ARM instructions) is
PAGE_OFFSET - 16M (0x01000000) = 0xaf000000
so the modules use addresses 0xaf000000 .. 0xafffffff
- So the addresses 0xaf000000 .. 0xffffffff need to be
covered with shadow memory. That is 0x51000000 bytes
of memory.
- 1/8 of that is needed for its shadow memory, so
0x0a200000 bytes of shadow memory is needed. We
"steal" that from the remaining lowmem.
- The KASAN_SHADOW_START becomes 0xa4e00000, to
KASAN_SHADOW_END at 0xaeffffff.
- Now we can calculate the KASAN_SHADOW_OFFSET for any
kernel address as 0xaf000000 needs to map to the first
byte of shadow memory and 0xffffffff needs to map to
the last byte of shadow memory. Since:
SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
0xa4e00000 = (0xaf000000 >> 3) + KASAN_SHADOW_OFFSET
KASAN_SHADOW_OFFSET = 0xa4e00000 - (0xaf000000 >> 3)
KASAN_SHADOW_OFFSET = 0xa4e00000 - 0x15e00000
KASAN_SHADOW_OFFSET = 0x8f000000

- The default value of 0xffffffff for KASAN_SHADOW_OFFSET
is an error value. We should always match one of the
above shadow offsets.

When we do this, TASK_SIZE will sometimes get somewhat odd values
that will not fit into immediate mov assembly instructions.
To account for this, we need to rewrite some assembly using
TASK_SIZE like this:

- mov r1, #TASK_SIZE
+ ldr r1, =TASK_SIZE

or

- cmp r4, #TASK_SIZE
+ ldr r0, =TASK_SIZE
+ cmp r4, r0

this is done to avoid an immediate #TASK_SIZE that needs to
fit into a limited number of bits.

Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: kasan-dev@googlegroups.com
Cc: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G
Tested-by: Florian Fainelli <f.fainelli@gmail.com> # Brahma SoCs
Tested-by: Ahmad Fatoum <a.fatoum@pengutronix.de> # i.MX6Q
Reported-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# 7a1be318 11-Oct-2020 Ard Biesheuvel <ardb@kernel.org>

ARM: 9012/1: move device tree mapping out of linear region

On ARM, setting up the linear region is tricky, given the constraints
around placement and alignment of the memblocks, and how the kernel
itself as well as the DT are placed in physical memory.

Let's simplify matters a bit, by moving the device tree mapping to the
top of the address space, right between the end of the vmalloc region
and the start of the fixmap region, and create a read-only mapping
for it that is independent of the size of the linear region, and how it
is organized.

Since this region was formerly used as a guard region, which will now be
populated fully on LPAE builds by this read-only mapping (which will
still be able to function as a guard region for stray writes), bump the
start of the [underutilized] fixmap region by 512 KB as well, to ensure
that there is always a proper guard region here. Doing so still leaves
ample room for the fixmap space, even with NR_CPUS set to its maximum
value of 32.

Tested-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# d2912cb1 04-Jun-2019 Thomas Gleixner <tglx@linutronix.de>

treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500

Based on 2 normalized pattern(s):

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation #

extracted by the scancode license scanner the SPDX license identifier

GPL-2.0-only

has been chosen to replace the boilerplate/reference in 4122 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Enrico Weigelt <info@metux.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>


# 2dd8a62c 10-Apr-2018 Masahiro Yamada <yamada.masahiro@socionext.com>

linux/const.h: move UL() macro to include/linux/const.h

ARM, ARM64 and UniCore32 duplicate the definition of UL():

#define UL(x) _AC(x, UL)

This is not actually arch-specific, so it will be useful to move it to a
common header. Currently, we only have the uapi variant for
linux/const.h, so I am creating include/linux/const.h.

I also added _UL(), _ULL() and ULL() because _AC() is mostly used in
the form either _AC(..., UL) or _AC(..., ULL). I expect they will be
replaced in follow-up cleanups. The underscore-prefixed ones should
be used for exported headers.
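
A sketch of the new macros, assuming the _AC() helper named above
(details may differ):

#define _UL(x)		(_AC(x, UL))
#define _ULL(x)		(_AC(x, ULL))

#define UL(x)		(_UL(x))
#define ULL(x)		(_ULL(x))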

Link: http://lkml.kernel.org/r/1519301715-31798-4-git-send-email-yamada.masahiro@socionext.com
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Guan Xuetao <gxt@mprc.pku.edu.cn>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Cc: David Howells <dhowells@redhat.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 62d1c95d 27-Dec-2017 Vladimir Murzin <vladimir.murzin@arm.com>

ARM: 8739/1: NOMMU: Setup VBAR/Hivecs for secondary cores

With the switch to dynamic exception base address setting, VBAR/Hivecs
are set only for the boot CPU, but secondaries stay unaware of that.
That might lead to weird effects when trying to bring up secondaries.

Fixes: ad475117d201 ("ARM: 8649/2: nommu: remove Hivecs configuration is asm")
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: afzal mohammed <afzal.mohd.ma@gmail.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# 58c16709 01-Feb-2017 Afzal Mohammed <afzal.mohd.ma@gmail.com>

ARM: 8648/2: nommu: display vectors base

VECTORS_BASE displays the exception base address. Now on no-MMU, as
the exception base address is dynamically estimated, define
VECTORS_BASE to the variable holding it.

As that is the case, limit the VECTORS_BASE constant definition to MMU.

Suggested-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: afzal mohammed <afzal.mohd.ma@gmail.com>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# d2ca5f24 29-Jan-2017 Afzal Mohammed <afzal.mohd.ma@gmail.com>

ARM: 8646/1: mmu: decouple VECTORS_BASE from Kconfig

For MMU configurations, VECTORS_BASE is always 0xffff0000, a macro
definition will suffice.

For no-MMU, exception base address is dynamically determined in
subsequent patches. To preserve bisectability, now make the
macro applicable for no-MMU scenario too.

Thanks to the 0-DAY kernel test infrastructure that found the
bisectability issue. This macro will be restricted to the MMU case once
the exception base address is determined dynamically for no-MMU.

Once exception address is handled dynamically for no-MMU,
VECTORS_BASE can be removed from Kconfig.

Signed-off-by: afzal mohammed <afzal.mohd.ma@gmail.com>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# e377cd82 14-Jan-2017 Florian Fainelli <f.fainelli@gmail.com>

ARM: 8640/1: Add support for CONFIG_DEBUG_VIRTUAL

x86 has an option, CONFIG_DEBUG_VIRTUAL, to do additional checks on
virt_to_phys calls. The goal is to catch users who are calling
virt_to_phys on non-linear addresses immediately. This includes callers
using __virt_to_phys() on image addresses instead of __pa_symbol(). This
is a generally useful debug feature to spot bad code (particularly in
drivers).
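
A hedged sketch of the kind of out-of-line check this adds (the helper
names and WARN text are illustrative, not necessarily the exact ARM
code):

phys_addr_t __virt_to_phys(unsigned long x)
{
	/* catch translations of addresses outside the linear map */
	WARN(!virt_addr_valid(x),
	     "virt_to_phys used for non-linear address: %pK (%pS)\n",
	     (void *)x, (void *)x);

	return __virt_to_phys_nodebug(x);
}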

Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# a09975bf 14-Jan-2017 Florian Fainelli <f.fainelli@gmail.com>

ARM: 8639/1: Define KERNEL_START and KERNEL_END

In preparation for adding CONFIG_DEBUG_VIRTUAL support, define a set of
common constants: KERNEL_START and KERNEL_END which abstract
CONFIG_XIP_KERNEL vs. !CONFIG_XIP_KERNEL. Update the code where
relevant.
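
A sketch of the constants as described, assuming the usual
_stext/_sdata/_end linker symbols (details may vary):

#ifdef CONFIG_XIP_KERNEL
#define KERNEL_START	_sdata
#else
#define KERNEL_START	_stext
#endif
#define KERNEL_END	_end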

Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 7d281b62 28-Jul-2016 Nicolas Pitre <nico@fluxnic.net>

ARM: 8589/1: asm/memory.h: remove dead definitions

The last ad-hoc __phys_to_virt definition was removed in commit fd0053c9
("ARM: realview: remove sparsemem hack"). Therefore we can remove the
unneeded definitions and unduplicate the virt_to_pfn macro from
asm/memory.h.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 07a7056c 15-Mar-2016 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: provide arm_has_idmap_alias() helper

Provide a helper to indicate whether we need to perform special handling
for boot identity mapping aliases or not.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Reviewed-by: Pratyush Anand <panand@redhat.com>


# 981b6714 15-Mar-2016 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: provide improved virt_to_idmap() functionality

For kexec, we need more functionality from the IDMAP system. We need to
be able to convert physical addresses to their identity mapped versions
as well as virtual addresses.

Convert the existing arch_virt_to_idmap() to deal with physical
addresses instead.

Acked-by: Santosh Shilimkar <ssantosh@kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 9e0087e6 18-Feb-2016 Arnd Bergmann <arnd@arndb.de>

ARM: 8530/1: remove VIRT_TO_BUS

All drivers that are relevant for rpc or footbridge have stopped
using virt_to_bus a while ago, so we can remove it and avoid some
harmless randconfig warnings for drivers that we do not care about:

drivers/atm/zatm.c: In function 'poll_rx':
drivers/atm/zatm.c:401:18: warning: 'bus_to_virt' is deprecated [-Wdeprecated-declarations]
skb = ((struct rx_buffer_head *) bus_to_virt(here[2]))->skb;

FWIW, the remaining drivers using this are:

ATM: firestream, zatm, ambassador, horizon
ISDN: hisax/netjet
V4L: STA2X11, zoran
Net: Appletalk LTPC, Tulip DE4x5, Toshiba IrDA
WAN: comtrol sv11, cosa, lanmedia, sealevel
SCSI: DPT_I2O, buslogic
VME: CA91C142

My best guess is that all of the above are so hopelessly obsolete that
we are best off removing all of them from the kernel, but that can be
done another time.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 8ff97fa3 16-Feb-2016 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: make the physical-relative calculation more obvious

The physical-relative calculation between the XIP text and data sections
introduced by the previous patch was far from obvious. Let's simplify it
by turning it into a macro which takes the two (virtual) addresses.

This allows us to arrange the calculation in a more obvious manner - we
can make it two sub-expressions which calculate the physical address for
each symbol, and then take the difference of those physical addresses.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# d7811455 01-Feb-2016 Nicolas Pitre <nico@fluxnic.net>

ARM: 8512/1: proc-v7.S: Adjust stack address when XIP_KERNEL

When XIP_KERNEL is enabled, the virt to phys address translation for RAM
is not the same as the virt to phys address translation for .text.
The only way to know where physical RAM is located is to use
PLAT_PHYS_OFFSET.
The macro will be useful for other places where there is a similar problem.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Chris Brandt <chris.brandt@renesas.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 28410293 11-Jan-2016 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: make virt_to_idmap() return unsigned long

Make virt_to_idmap() return an unsigned long rather than phys_addr_t.

Returning phys_addr_t here makes no sense, because the definition of
virt_to_idmap() is that it shall return a physical address which maps
identically with the virtual address. Since virtual addresses are
limited to 32-bit, identity mapped physical addresses are as well.

Almost all users already had an implicit narrowing cast to unsigned long
so let's make this official and part of this interface.

Tested-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 803e3dbc 09-Sep-2015 Sergey Dyasly <dserrg@gmail.com>

ARM: 8430/1: use default ioremap alignment for SMP or LPAE

16MB alignment for ioremap mappings was added by commit a069c896d0d6 ("[ARM]
3705/1: add supersection support to ioremap()") in order to support supersection
mappings. But __arm_ioremap_pfn_caller uses section and supersection mappings
only in the !SMP && !LPAE case. There is no need for such a big alignment if
either SMP or LPAE is enabled.

After this change, ioremap will use default maximum alignment of 128 pages.

Link: https://lkml.kernel.org/g/1419328813-2211-1-git-send-email-d.safonov@partner.samsung.com

Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Cc: James Bottomley <JBottomley@parallels.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Arnd Bergmann <arnd.bergmann@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dmitry Safonov <d.safonov@partner.samsung.com>
Signed-off-by: Sergey Dyasly <s.dyasly@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 012dcef3 07-Aug-2015 Christoph Hellwig <hch@lst.de>

mm: move __phys_to_pfn and __pfn_to_phys to asm-generic/memory_model.h

Three architectures already define these, and we'll need them generically
soon.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>


# 0871b724 17-Jul-2015 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: fix __virt_to_idmap build error on !MMU

Fengguang Wu reports that building ARM with !MMU results in the
following build error:

arch/arm/kernel/built-in.o: In function `__soft_restart':
>> :(.text+0x1624): undefined reference to `arch_virt_to_idmap'

Fix this by adding an appropriate IS_ENABLED(CONFIG_MMU) into the
__virt_to_idmap() inline function.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# e4886664 26-Jun-2015 Vitaly Andrianov <vitalya@ti.com>

ARM: 8396/1: use phys_addr_t in pfn_to_kaddr()

This patch fixes pfn_to_kaddr() to use phys_addr_t. Without this,
the macro is broken on LPAE systems. For physical addresses above
the first 4GB, the result of shifting the pfn by PAGE_SHIFT may be
truncated.
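
The change itself is roughly a one-liner (sketch):

- #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
+ #define pfn_to_kaddr(pfn)	__va((phys_addr_t)(pfn) << PAGE_SHIFT)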

Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Santosh Shilimkar <ssantosh@kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# b2c3e38a 04-Apr-2015 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: redo TTBR setup code for LPAE

Re-engineer the LPAE TTBR setup code. Rather than passing some shifted
address in order to fit in a CPU register, pass either a full physical
address (in the case of r4, r5 for TTBR0) or a PFN (for TTBR1).

This removes the ARCH_PGD_SHIFT hack, and the last dangerous user of
cpu_set_ttbr() in the secondary CPU startup code path (which was there
to re-set TTBR1 to the appropriate high physical address space on
Keystone2.)

Tested-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 84c4d3a6 28-Jul-2014 Thierry Reding <treding@nvidia.com>

ARM: Use include/asm-generic/io.h

Include the generic I/O header file so that duplicate implementations
can be removed. This will also help to establish consistency across more
architectures regarding which accessors they support.

Signed-off-by: Thierry Reding <treding@nvidia.com>


# c6f54a9b 23-Jul-2014 Uwe Kleine-König <u.kleine-koenig@pengutronix.de>

ARM: 8113/1: remove remaining definitions of PLAT_PHYS_OFFSET from <mach/memory.h>

The platforms selecting NEED_MACH_MEMORY_H defined the start address of
their physical memory in the respective <mach/memory.h>. With
ARM_PATCH_PHYS_VIRT=y (which is quite common today) this is useless
though, because the definition isn't used; the value is determined
dynamically instead.

So remove the definitions from all <mach/memory.h> and provide the
Kconfig symbol PHYS_OFFSET with the respective defaults in case
ARM_PATCH_PHYS_VIRT isn't enabled.

This allows to drop the dependency of PHYS_OFFSET on !NEED_MACH_MEMORY_H
which prevents compiling an integrator nommu-kernel.
(CONFIG_PAGE_OFFSET which has "default PHYS_OFFSET if !MMU" expanded to
"0x" because CONFIG_PHYS_OFFSET doesn't exist as INTEGRATOR selects
NEED_MACH_MEMORY_H.)

Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 03eca200 03-Jun-2014 Uwe Kleine-König <u.kleine-koenig@pengutronix.de>

ARM: nommu: don't limit TASK_SIZE

With TASK_SIZE set to the maximal RAM address, booting in some XIP
configurations fails (e.g. on efm32 DK3750). The problem is that
strncpy_from_user et al. check for the address not being above TASK_SIZE
(since 8c56cc8be5b3 (ARM: 7449/1: use generic strnlen_user and
strncpy_from_user functions)) and this makes booting fail if the XIP
flash is above the RAM address space.

This change is in line with blackfin, frv and m68k which also use
0xffffffff for TASK_SIZE with !MMU.

Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>


# 64d3b6a3 09-Apr-2014 Nicolas Pitre <nico@fluxnic.net>

ARM: 8023/1: remove remnants of the static DMA mapping

It looks like the static mapping area for DMA was replaced by dynamic
allocation into the vmalloc area by commit e9da6e9905e6 but the
information in Documentation/arm/memory.txt was not removed accordingly.

CONSISTENT_END in arch/arm/include/asm/memory.h has no more users and
can be removed as well.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# e26a9e00 25-Mar-2014 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: Better virt_to_page() handling

virt_to_page() is incredibly inefficient when virt-to-phys patching is
enabled. This is because we end up with this calculation:

page = &mem_map[asm virt_to_phys(addr) >> 12 - __pv_phys_offset >> 12]

in assembly. The asm virt_to_phys() is equivalent to this operation:

addr - PAGE_OFFSET + __pv_phys_offset

and we can see that because this is assembly, the compiler has no chance
to optimise some of that away. This should reduce down to:

page = &mem_map[(addr - PAGE_OFFSET) >> 12]

for the common cases. Permit the compiler to make this optimisation by
giving it more of the information it needs - do this by providing a
virt_to_pfn() macro.
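
A sketch of the macros this introduces (illustrative; close to, but not
necessarily, the final form):

#define virt_to_pfn(kaddr) \
	((((unsigned long)(kaddr) - PAGE_OFFSET) >> PAGE_SHIFT) + \
	 PHYS_PFN_OFFSET)
#define virt_to_page(kaddr) \
	pfn_to_page(virt_to_pfn(kaddr))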

Another issue which makes this more complex is that __pv_phys_offset is
a 64-bit type on all platforms. This is needlessly wasteful - if we
store the physical offset as a PFN, we can save a lot of work having
to deal with 64-bit values, which sometimes ends up producing incredibly
horrid code:

a4c: e3009000 movw r9, #0
a4c: R_ARM_MOVW_ABS_NC __pv_phys_offset
a50: e3409000 movt r9, #0 ; r9 = &__pv_phys_offset
a50: R_ARM_MOVT_ABS __pv_phys_offset
a54: e3002000 movw r2, #0
a54: R_ARM_MOVW_ABS_NC __pv_phys_offset
a58: e3402000 movt r2, #0 ; r2 = &__pv_phys_offset
a58: R_ARM_MOVT_ABS __pv_phys_offset
a5c: e5999004 ldr r9, [r9, #4] ; r9 = high word of __pv_phys_offset
a60: e3001000 movw r1, #0
a60: R_ARM_MOVW_ABS_NC mem_map
a64: e592c000 ldr ip, [r2] ; ip = low word of __pv_phys_offset

Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 006fa259 26-Feb-2014 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: fix noMMU kallsyms symbol filtering

With noMMU, CONFIG_PAGE_OFFSET was not being set correctly. As there's
no MMU, PAGE_OFFSET should be equal to PHYS_OFFSET in all cases. This
commit makes that explicit.

Since we do this, we don't need to mess around in asm/memory.h with
ifdefs to sort this out, so let's get rid of that, and there's no point
offering the "Memory split" option for noMMU as that's meaningless
there.

Fixes: b9b32bf70f2f ("ARM: use linker magic for vectors and vector stubs")
Cc: <stable@vger.kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# efea3403 20-Dec-2013 Laura Abbott <lauraa@codeaurora.org>

ARM: 7931/1: Correct virt_addr_valid

The definition of virt_addr_valid is that virt_addr_valid should
return true if and only if virt_to_page returns a valid pointer.
The current definition of virt_addr_valid only checks against the
virtual address range. There's no guarantee that just because a
virtual address falls between PAGE_OFFSET and high_memory the
associated physical memory has a valid backing struct page. Follow
the example of other architectures and convert to pfn_valid to
verify that the virtual address is actually valid. The check for
an address between PAGE_OFFSET and high_memory is still necessary
as vmalloc/highmem addresses are not valid with virt_to_page.
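
The resulting definition, roughly (sketch):

#define virt_addr_valid(kaddr) \
	(((unsigned long)(kaddr) >= PAGE_OFFSET && \
	  (unsigned long)(kaddr) < (unsigned long)high_memory) && \
	 pfn_valid(virt_to_pfn(kaddr)))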

Cc: Will Deacon <will.deacon@arm.com>
Cc: Nicolas Pitre <nico@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# b713aa0b 10-Dec-2013 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: fix asm/memory.h build error

Jason Gunthorpe reports a build failure when ARM_PATCH_PHYS_VIRT is
not defined:

In file included from arch/arm/include/asm/page.h:163:0,
from include/linux/mm_types.h:16,
from include/linux/sched.h:24,
from arch/arm/kernel/asm-offsets.c:13:
arch/arm/include/asm/memory.h: In function '__virt_to_phys':
arch/arm/include/asm/memory.h:244:40: error: 'PHYS_OFFSET' undeclared (first use in this function)
arch/arm/include/asm/memory.h:244:40: note: each undeclared identifier is reported only once for each function it appears in
arch/arm/include/asm/memory.h: In function '__phys_to_virt':
arch/arm/include/asm/memory.h:249:13: error: 'PHYS_OFFSET' undeclared (first use in this function)

Fixes: ca5a45c06cd4 ("ARM: mm: use phys_addr_t appropriately in p2v and v2p conversions")
Tested-By: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 139cc2ba 07-Nov-2013 Victor Kamensky <victor.kamensky@linaro.org>

ARM: 7882/1: mm: fix __phys_to_virt to work with 64 bit phys_addr_t in BE case

Make sure that inline assembler that expects 'r' operand
receives 32 bit value.

Before this fix in case of CONFIG_ARCH_PHYS_ADDR_T_64BIT and
CONFIG_ARM_PATCH_PHYS_VIRT __phys_to_virt function passed 64 bit
value to __pv_stub inline assembler where 'r' operand is
expected. Compiler behavior in such a case is not well specified.
It worked in little endian case, but in big endian case
incorrect code was generated, where compiler confused which
part of 64 bit value it needed to modify. For example BE
snippet looked like this:

N:0x80904E08 : MOV r2,#0
N:0x80904E0C : SUB r2,r2,#0x81000000

when LE similar code looked like this

N:0x808FCE2C : MOV r2,r0
N:0x808FCE30 : SUB r2,r2,#0xc0, 8 ; #0xc0000000

Note the 'r0' register is the va that has to be translated into phys

To avoid this situation use an explicit cast to 'unsigned long',
which discards the upper part of the phys address and converts the
value to 32 bit. Also add a comment so the cast will not be removed
in the future.
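
The fixed conversion, roughly (sketch; __pv_stub and __PV_BITS_31_24 are
the existing patching helpers):

static inline unsigned long __phys_to_virt(phys_addr_t x)
{
	unsigned long t;

	/* cast to 32 bits so the 'r' operand only ever sees the low word */
	__pv_stub((unsigned long) x, t, "sub", __PV_BITS_31_24);
	return t;
}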

Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 5e4432d3 29-Oct-2013 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: fix misplaced arch_virt_to_idmap()

Olof Johansson reported:

In file included from arch/arm/include/asm/page.h:163:0,
from include/linux/mm_types.h:16,
from include/linux/sched.h:24,
from arch/arm/kernel/asm-offsets.c:13:
arch/arm/include/asm/memory.h: In function '__virt_to_idmap':
arch/arm/include/asm/memory.h:300:6: error: 'arch_virt_to_idmap' undeclared (first use in this function)

caused by arch_virt_to_idmap being placed inside a different
preprocessor conditional to its user. Move it alongside its user.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# f52bb722 29-Jul-2013 Sricharan R <r.sricharan@ti.com>

ARM: mm: Correct virt_to_phys patching for 64 bit physical addresses

The current phys_to_virt patching mechanism works only for 32 bit
physical addresses and this patch extends the idea for 64bit physical
addresses.

The 64bit v2p patching mechanism patches the higher 8 bits of the physical
address with a constant using a 'mov' instruction, and the lower 32bits are
patched using 'add'. While this is correct, on those platforms where the
lowmem addressable physical memory spans across the 4GB boundary, a carry
bit can be produced as a result of the addition of the lower 32bits. This
has to be taken into account and added into the upper 32bits. The patched
__pv_offset and va are added in the lower 32bits, where __pv_offset can be
in two's complement form when PA_START < VA_START, and that can result in
a false carry bit.

e.g
1) PA = 0x80000000; VA = 0xC0000000
__pv_offset = PA - VA = 0xC0000000 (2's complement)

2) PA = 0x2 80000000; VA = 0xC0000000
__pv_offset = PA - VA = 0x1 C0000000

So adding __pv_offset + VA should never result in a true overflow for (1).
So in order to differentiate a true carry, __pv_offset is extended
to 64bit and the upper 32bits will have 0xffffffff if __pv_offset is in
2's complement form. So 'mvn #0' is inserted instead of 'mov' while
patching, for the same reason. Since the mov, add, sub instructions are
to be patched with different constants inside the same stub, the rotation
field of the opcode is used to differentiate between them.

So the above examples for v2p translation becomes for VA=0xC0000000,
1) PA[63:32] = 0xffffffff
PA[31:0] = VA + 0xC0000000 --> results in a carry
PA[63:32] = PA[63:32] + carry

PA[63:0] = 0x0 80000000

2) PA[63:32] = 0x1
PA[31:0] = VA + 0xC0000000 --> results in a carry
PA[63:32] = PA[63:32] + carry

PA[63:0] = 0x2 80000000

The above ideas were suggested by Nicolas Pitre <nico@linaro.org> as
part of the review of first and second versions of the subject patch.

There is no corresponding change on the phys_to_virt() side, because
computations on the upper 32-bits would be discarded anyway.

Cc: Russell King <linux@arm.linux.org.uk>

Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Sricharan R <r.sricharan@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>


# 4dc9a817 30-Jul-2013 Santosh Shilimkar <santosh.shilimkar@ti.com>

ARM: mm: Introduce virt_to_idmap() with an arch hook

On some PAE systems (e.g. TI Keystone), memory is above the
32-bit addressable limit, and the interconnect provides an
aliased view of parts of physical memory in the 32-bit addressable
space. This alias is strictly for boot time usage, and is not
otherwise usable because of coherency limitations. On such systems,
the idmap mechanism needs to take this aliased mapping into account.

This patch introduces virt_to_idmap() and an arch function pointer which
can be populated by platforms which need it. Also populate the necessary
idmap spots with the now available virt_to_idmap(). The #ifdef approach
was avoided to stay compatible with multi-platform builds.

Most platforms won't touch it, and in that case virt_to_idmap()
falls back to the existing virt_to_phys() macro.

Cc: Russell King <linux@arm.linux.org.uk>

Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>


# ca5a45c0 30-Jul-2013 Santosh Shilimkar <santosh.shilimkar@ti.com>

ARM: mm: use phys_addr_t appropriately in p2v and v2p conversions

Fix the remaining types used when converting back and forth between
physical and virtual addresses.

Cc: Russell King <linux@arm.linux.org.uk>

Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>


# 5b84de34 03-Jul-2013 Jiang Liu <liuj97@gmail.com>

mm/ARM: fix stale comment about VALID_PAGE()

VALID_PAGE() has been removed from the kernel a long time ago,
so fix the comment.

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Nicolas Pitre <nico@linaro.org>
Cc: Stephen Boyd <sboyd@codeaurora.org>
Cc: Giancarlo Asnaghi <giancarlo.asnaghi@st.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 5b20c5b2 12-Sep-2012 Cyril Chemparathy <cyril@ti.com>

ARM: fix type of PHYS_PFN_OFFSET to unsigned long

On LPAE machines, PHYS_OFFSET evaluates to a phys_addr_t and this type is
inherited by the PHYS_PFN_OFFSET definition as well. Consequently, the kernel
build emits warnings of the form:

init/main.c: In function 'start_kernel':
init/main.c:588:7: warning: format '%lx' expects argument of type 'long unsigned int', but argument 2 has type 'phys_addr_t' [-Wformat]

This patch fixes this warning by pinning down the PFN type to unsigned long.
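
The fix, roughly (sketch):

#define PHYS_PFN_OFFSET	((unsigned long)(PHYS_OFFSET >> PAGE_SHIFT))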

Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>


# 4756dcbf 21-Jul-2012 Cyril Chemparathy <cyril@ti.com>

ARM: LPAE: accommodate >32-bit addresses for page table base

This patch redefines the early boot time use of the R4 register to steal a few
low order bits (ARCH_PGD_SHIFT bits) on LPAE systems. This allows for up to
38-bit physical addresses.

Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>


# 5d1c20bc 31-Jan-2013 Will Deacon <will@kernel.org>

ARM: 7637/1: memory: use SZ_ constants for defining the virtual memory layout

Parts of the virtual memory layout (mainly the modules area) are
described using open-coded immediate values.

Use the SZ_ definitions from linux/sizes.h instead to make the code
clearer.

Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# a5d533ee 12-Nov-2012 Arnd Bergmann <arnd@arndb.de>

ARM: disable virt_to_bus/bus_to_virt almost everywhere

We are getting a number of warnings about the use of the deprecated
bus_to_virt function in drivers using the ARM ISA DMA API:

drivers/parport/parport_pc.c: In function 'parport_pc_fifo_write_block_dma':
drivers/parport/parport_pc.c:622:3: warning: 'bus_to_virt' is deprecated
(declared at arch/arm/include/asm/memory.h:253) [-Wdeprecated-declarations]

This is only because that function gets used by the inline
set_dma_addr() helper. We know that any driver for the ISA DMA API
is correctly using the DMA addresses, so we can change this
to use the __bus_to_virt() function instead, which does not warn.

After this, there are no remaining drivers that are used on
any defconfigs on ARM using virt_to_bus or bus_to_virt, with
the exception of the OSS sound driver. That driver is only used
on RiscPC, NetWinder and Shark, so we can set ARCH_NO_VIRT_TO_BUS
on all other platforms and hide the deprecated functions, which
is far more effective than marking them as deprecated, in order
to avoid any new users of that code.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Russell King <linux@arm.linux.org.uk>


# 79d1f5c9 07-Feb-2013 Will Deacon <will@kernel.org>

ARM: 7641/1: memory: fix broken mmap by ensuring TASK_UNMAPPED_BASE is aligned

We have received multiple reports of mmap failures when running with a
2:2 vm split. These manifest as either -EINVAL with a non page-aligned
address (ending 0xaaa) or a SEGV, depending on the application. The
issue is commonly observed in children of make, which appears to use
bottom-up mmap (assumedly because it changes the stack rlimit).

Further investigation reveals that this regression was triggered by
394ef6403abc ("mm: use vm_unmapped_area() on arm architecture"), whereby
TASK_UNMAPPED_BASE is no longer page-aligned for bottom-up mmap, causing
get_unmapped_area to choke on misaligned addresses.

This patch fixes the problem by defining TASK_UNMAPPED_BASE in terms of
TASK_SIZE and explicitly aligns the result to 16M, matching the other
end of the heap.
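
The new definition, roughly (sketch):

#define TASK_UNMAPPED_BASE	ALIGN(TASK_SIZE / 3, SZ_16M)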

Acked-by: Nicolas Pitre <nico@linaro.org>
Reported-by: Steve Capper <steve.capper@arm.com>
Reported-by: Jean-Francois Moine <moinejf@free.fr>
Reported-by: Christoffer Dall <cdall@cs.columbia.edu>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 48aa820f 20-Aug-2012 Rob Herring <rob.herring@calxeda.com>

ARM: kill off arch_is_coherent

With ixp2xxx removed, there are no platforms that define arch_is_coherent,
so the last occurrences of arch_is_coherent can be removed. Any new
platform with coherent i/o should use coherent dma mapping functions.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>


# b4ad5155 04-Sep-2012 Stephen Boyd <sboyd@codeaurora.org>

ARM: 7512/1: Fix XIP build due to PHYS_OFFSET definition moving

During the p2v changes, the PHYS_OFFSET #define moved into a
!__ASSEMBLY__ section. This causes a XIP build to fail with

arch/arm/kernel/head.o: In function 'stext':
arch/arm/kernel/head.S:146: undefined reference to 'PHYS_OFFSET'

Momentarily leave the #ifndef __ASSEMBLY__ section so we can
define PHYS_OFFSET for all compilation units.

Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 158e8bfe 23-Jun-2012 Alessandro Rubini <rubini@gnudd.com>

ARM: 7432/1: use the new linux/sizes.h

Signed-off-by: Alessandro Rubini <rubini@gnudd.com>
Acked-by: Giancarlo Asnaghi <giancarlo.asnaghi@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Cc: Alan Cox <alan@linux.intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 38b4205a 14-Mar-2012 Uwe Kleine-König <u.kleine-koenig@pengutronix.de>

ARM: 7361/1: provide XIP_VIRT_ADDR for no-MMU builds

XIP_VIRT_ADDR is needed for XIP builds and currently only defined for
builds with CONFIG_MMU.

Also provide it for no-MMU builds to make it possible to build an XIP
kernel for MMU-less machines. As these lack an MMU it has to be an
identity mapping.

Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 0cdc8b92 02-Sep-2011 Nicolas Pitre <nico@fluxnic.net>

ARM: switch from NO_MACH_MEMORY_H to NEED_MACH_MEMORY_H

Given that we want the default to not have any <mach/memory.h>, and
given that there are now fewer cases where it is still provided than
cases where it is not at this point, it makes sense to invert the logic
and just identify the exception cases.

The word "need" instead of "have" was chosen to construct the config
symbol so not to suggest that having a mach/memory.h file is actually
a feature that one should aim for.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>


# 1b9f95f8 05-Jul-2011 Nicolas Pitre <nico@fluxnic.net>

ARM: prepare for removal of a bunch of <mach/memory.h> files

When the CONFIG_NO_MACH_MEMORY_H symbol is selected by a particular
machine class, the machine specific memory.h include file is no longer
used and can be removed. In that case the equivalent information can
be obtained dynamically at runtime by enabling CONFIG_ARM_PATCH_PHYS_VIRT
or by specifying the physical memory address at kernel configuration time.

If/when all instances of mach/memory.h are removed then this symbol could
be removed.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>


# 99d1717d 02-Aug-2011 Jon Medhurst <tixy@yxit.co.uk>

ARM: Add init_consistent_dma_size()

This function can be called during boot to increase the size of the consistent
DMA region above its default value of 2MB. It must be called before the memory
allocator is initialised, i.e. before any core_initcall.

Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
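
A hypothetical caller, to illustrate the intended use (the machine
hook name and the 4MB size are made up for this example):

    /* Must run before any core_initcall, e.g. from the machine's
     * early init hook, so the region is enlarged before the
     * consistent-memory allocator is set up.
     */
    static void __init mymach_init_early(void)
    {
    	init_consistent_dma_size(SZ_4M);
    }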


# daece596 11-Aug-2011 Nicolas Pitre <nico@fluxnic.net>

ARM: 7013/1: P2V: Remove ARM_PATCH_PHYS_VIRT_16BIT

This code can be removed now that MSM targets no longer need the 16-bit
offsets for P2V.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 022ae537 08-Jul-2011 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: dma: replace ISA_DMA_THRESHOLD with a variable

ISA_DMA_THRESHOLD has been unused by non-arch code, so let's now get
rid of it from ARM by replacing it with arm_dma_zone_mask. Move
dma_supported() and dma_set_mask() out of line, and have
dma_supported() check this new variable instead.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# be20902b 11-May-2011 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: use ARM_DMA_ZONE_SIZE to adjust the zone sizes

Rather than each platform providing its own function to adjust the
zone sizes, use the new ARM_DMA_ZONE_SIZE definition to perform this
adjustment. This ensures that the actual DMA zone size and the
ISA_DMA_THRESHOLD/MAX_DMA_ADDRESS definitions are consistent with
each other, and moves this complexity out of the platform code.

Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 2fb3ec5c 11-May-2011 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: Replace platform definition of ISA_DMA_THRESHOLD/MAX_DMA_ADDRESS

The values of ISA_DMA_THRESHOLD and MAX_DMA_ADDRESS are related; one is
the physical/bus address, the other is the virtual address. Both need
to be kept in step, so rather than having platforms define both, allow
them to define a single macro which sets both of these macros
appropriately.

Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
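
The derivation is along these lines: a platform defines only the zone
size, and both related macros fall out consistently (a sketch):

    /* Platform header: */
    #define ARM_DMA_ZONE_SIZE	SZ_64M

    /* asm/memory.h then derives both values from it: */
    #define ISA_DMA_THRESHOLD	(PHYS_OFFSET + ARM_DMA_ZONE_SIZE - 1)
    #define MAX_DMA_ADDRESS	(PAGE_OFFSET + ARM_DMA_ZONE_SIZE)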


# 3a6b1676 15-Feb-2011 Will Deacon <will@kernel.org>

ARM: 6675/1: use phys_addr_t instead of unsigned long in conversion code

The unsigned long datatype is not sufficient for mapping physical addresses
>= 4GB.

This patch ensures that the address conversion code in asm/memory.h casts
to the correct type when handling physical addresses. The internal v2p
macros only deal with lowmem addresses, so these do not need to be modified.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
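
After this change (together with the const volatile argument change
from commit 05b112ff below) the public helpers look roughly like:

    static inline phys_addr_t virt_to_phys(const volatile void *x)
    {
    	return __virt_to_phys((unsigned long)(x));
    }

    static inline void *phys_to_virt(phys_addr_t x)
    {
    	return (void *)__phys_to_virt((unsigned long)(x));
    }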


# cada3c08 04-Jan-2011 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: P2V: extend to 16-bit translation offsets

MSM's memory is aligned to 2MB, which is a finer alignment than our
existing method can express, as we're limited to the upper 8 bits.
Extend the offset to 16 bits by using two instructions, automatically
selected when MSM is enabled.

Acked-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# dc21af99 04-Jan-2011 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: P2V: introduce phys_to_virt/virt_to_phys runtime patching

This idea came from Nicolas; Eric Miao produced an initial version,
which was then rewritten into this.

Patch the physical to virtual translations at runtime. As we modify
the code, this makes it incompatible with XIP kernels, but allows us
to achieve this with minimal loss of performance.

As many translations are of the form:

physical = virtual + (PHYS_OFFSET - PAGE_OFFSET)
virtual = physical - (PHYS_OFFSET - PAGE_OFFSET)

we generate an 'add' instruction for __virt_to_phys(), and a 'sub'
instruction for __phys_to_virt(). We calculate at run time (PHYS_OFFSET
- PAGE_OFFSET) by comparing the address prior to MMU initialization with
where it should be once the MMU has been initialized, and place this
constant into the above add/sub instructions.

Once we have (PHYS_OFFSET - PAGE_OFFSET), we can calculate the real
PHYS_OFFSET as PAGE_OFFSET is a build-time constant, and save this for
the C-mode PHYS_OFFSET variable definition to use.

At present, we are unable to support Realview with Sparsemem enabled
as this uses a complex mapping function, and MSM as this requires a
constant which will not fit in our math instruction.

Add a module version magic string for this feature to prevent
incompatible modules being loaded.

Tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
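
The patched stub, in essence: each translation site emits a single
add (or sub) with a placeholder immediate and records its own address
in a .pv_table section, which the early boot code walks to rewrite the
immediates with the measured offset. A sketch close to the eventual
header (__PV_BITS_31_24 encodes which immediate bits get patched):

    #define __pv_stub(from, to, instr, type)		\
    	__asm__("@ __pv_stub\n"				\
    	"1:	" instr "	%0, %1, %2\n"		\
    	"	.pushsection .pv_table,\"a\"\n"		\
    	"	.long	1b\n"				\
    	"	.popsection\n"				\
    	: "=r" (to)					\
    	: "r" (from), "I" (type))

    static inline unsigned long __virt_to_phys(unsigned long x)
    {
    	unsigned long t;
    	__pv_stub(x, t, "add", __PV_BITS_31_24);
    	return t;
    }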


# f4117ac9 04-Jan-2011 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: P2V: separate PHYS_OFFSET from platform definitions

This uncouples PHYS_OFFSET from the platform definitions, thereby
facilitating run-time computation of the physical memory offset.

Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Acked-by: Magnus Damm <damm@opensource.se>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com>
Acked-by: Wan ZongShun <mcuos.com@gmail.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Acked-by: Eric Miao <eric.y.miao@gmail.com>
Acked-by: Jiandong Zheng <jdzheng@broadcom.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 05b112ff 25-Jan-2011 Catalin Marinas <catalin.marinas@arm.com>

ARM: 6637/1: Make the argument to virt_to_phys() "const volatile"

Changing the virt_to_phys() argument to "const volatile void *" avoids
compiler warnings in some situations where this function is used.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Stephen Boyd <sboyd@codeaurora.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 1dbd30e9 12-Jul-2010 Linus Walleij <linus.walleij@stericsson.com>

ARM: 6225/1: make TCM allocation static and common for all archs

This changes the TCM handling so that a fixed area is reserved at
0xfffe0000-0xfffeffff for TCM. This area is used by XScale, but
XScale does not have TCM, so the mechanisms are mutually exclusive.

This change is needed to make TCM detection more dynamic while
still being able to compile code into it, and is a must for the
unified ARM goals: the current TCM allocation at different places
in memory for each machine would be a nightmare if you want to
compile a single image for more than one machine with TCM, so it
has to be nailed down in one place.

Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# b65b4781 22-May-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: Remove 'node' argument from arch_adjust_zones()

Since we no longer support discontigmem, node is always zero, so
remove this argument.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# be370302 07-May-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: Remove DISCONTIGMEM support

Everything should now be using sparsemem rather than discontigmem, so
remove the code supporting discontigmem from ARM.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# c931b4f6 07-Feb-2010 Fenkart/Bostandzhyan <andreas.fenkart@streamunlimited.com>

ARM: 5928/1: Change type of VMALLOC_END to unsigned long.

Makes it consistent with VMALLOC_START

Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Signed-off-by: Andreas Fenkart <andreas.fenkart@streamunlimited.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# a7bd08c8 07-Feb-2010 Fenkart/Bostandzhyan <andreas.fenkart@streamunlimited.com>

ARM: 5927/1: Make delimiters of DMA area globally visible.

Adds DMA area to 'virtual memory map' startup message

Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Signed-off-by: Andreas Fenkart <andreas.fenkart@streamunlimited.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 1c4a4f48 31-Oct-2009 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: dma-mapping: simplify page_to_dma() and __pfn_to_bus()

The non-highmem() and the __pfn_to_bus() based page_to_dma() both
compile to the same code, so it's pointless having these two different
approaches. Use the __pfn_to_bus() based version.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>


# 719301ff 31-Oct-2009 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: provide phys_to_page() to complement page_to_phys()

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
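
The new macro simply mirrors its existing counterpart; the pair is
essentially:

    #define page_to_phys(page)	(__pfn_to_phys(page_to_pfn(page)))
    #define phys_to_page(phys)	(pfn_to_page(__phys_to_pfn(phys)))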


# a8cf81ff 19-Oct-2009 Russell King <rmk+kernel@arm.linux.org.uk>

Revert "[ARM] unconditionally define __virt_to_phys and __phys_to_virt"

This reverts commit 75f4aa15cf05ce6d99c8261cf57dcd749877fd1c.

We have a couple of platforms which require non-linear P:V mappings,
so we need these to be overridable.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# b7cfda9f 07-Sep-2009 Russell King <rmk@dyn-67.arm.linux.org.uk>

ARM: Fix pfn_valid() for sparse memory

On OMAP platforms, some people want to segment up the memory between
the kernel and a separate application such that there is a hole
in the middle of the memory as far as Linux is concerned. However,
they want to be able to mmap() the hole.

This currently causes problems, because update_mmu_cache() thinks that
there are valid struct pages for the "hole". Fix this by making
pfn_valid() slightly more expensive, by checking whether the PFN is
contained within the meminfo array.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Khasim Syed Mohammed <khasim@ti.com>
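
A simplified linear-scan sketch of the new check (the bank helpers are
from asm/setup.h of that era; the real code may search more cleverly):

    int pfn_valid(unsigned long pfn)
    {
    	struct meminfo *mi = &meminfo;
    	unsigned int i;

    	/* Valid only if some registered bank contains the pfn,
    	 * so holes punched between banks now fail the test.
    	 */
    	for (i = 0; i < mi->nr_banks; i++)
    		if (pfn >= bank_pfn_start(&mi->bank[i]) &&
    		    pfn < bank_pfn_end(&mi->bank[i]))
    			return 1;
    	return 0;
    }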


# adca6dc2 23-Jul-2009 Catalin Marinas <catalin.marinas@arm.com>

Thumb-2: Add support for loadable modules

Modules compiled to Thumb-2 have two additional relocations needing to
be resolved at load time, R_ARM_THM_CALL and R_ARM_THM_JUMP24, for BL
and B.W instructions. The maximum Thumb-2 addressing range is +/-2^24
(+/-16MB) therefore the MODULES_VADDR macro in asm/memory.h is set to
(MODULES_END - 8MB) for the Thumb-2 compiled kernel.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
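
Sketched as the header sees it (SZ_8M/SZ_16M stand in for the literal
values):

    /* Keep modules within B.W/BL range of the kernel text. */
    #ifdef CONFIG_THUMB2_KERNEL
    #define MODULES_VADDR	(MODULES_END - SZ_8M)
    #else
    #define MODULES_VADDR	(PAGE_OFFSET - SZ_16M)
    #endif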


# 58edb515 09-Sep-2008 Nicolas Pitre <nico@cam.org>

[ARM] make page_to_dma() highmem aware

If a machine class has a custom __virt_to_bus() implementation then it
must also provide a __arch_page_to_dma() implementation which is _not_
based on page_address(), in order to support highmem.

This patch fixes existing __arch_page_to_dma() and provides a default
implementation otherwise. The default implementation for highmem is
based on __pfn_to_bus() which is defined only when no custom
__virt_to_bus() is provided by the machine class.

That leaves only ebsa110 and footbridge which cannot support highmem
until they provide their own __arch_page_to_dma() implementation.
But highmem support on those legacy platforms with limited memory is
certainly not a priority.

Signed-off-by: Nicolas Pitre <nico@marvell.com>


# d73cd428 15-Sep-2008 Nicolas Pitre <nico@cam.org>

[ARM] kmap support

The kmap virtual area borrows a 2MB range at the top of the 16MB area
below PAGE_OFFSET currently reserved for kernel modules and/or the
XIP kernel. This 2MB corresponds to the range covered by 2 consecutive
second-level page tables, or a single pmd entry as seen by the Linux
page table abstraction. Because XIP kernels are unlikely to be seen
on systems needing highmem support, there shouldn't be any shortage of
VM space for modules (14 MB for modules is still way more than twice the
typical usage).

Because the virtual mapping of highmem pages can go away at any moment
after kunmap() is called on them, we need to bypass the delayed cache
flushing provided by flush_dcache_page() in that case.

The atomic kmap versions are based on fixmaps, and
__cpuc_flush_dcache_page() is used directly in that case.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
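
The geometry of the borrowed window, roughly (as in asm/highmem.h):

    /* One pmd's worth of address space (2MB) at the top of the
     * 16MB area below PAGE_OFFSET, covered by two consecutive
     * second-level page tables.
     */
    #define PKMAP_BASE	(PAGE_OFFSET - PMD_SIZE)
    #define LAST_PKMAP	PTRS_PER_PTE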


# b5ee9002 05-Sep-2008 Nicolas Pitre <nico@cam.org>

[ARM] remove a common set of __virt_to_bus definitions

Let's provide an overridable default instead of having every machine
class define __virt_to_bus and __bus_to_virt to the same thing. What
most platforms use is bus_addr == phys_addr, so that is the default.

One exception is ebsa110, which has no DMA whatsoever, so the actual
definition does not matter except for proper compilation. Also
added a comment about the special footbridge bus translation.

Let's also remove comments alluding to set_dma_addr, which is not
(and should not be) commonly used.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
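
The overridable default boils down to essentially this:

    #ifndef __virt_to_bus
    #define __virt_to_bus	__virt_to_phys
    #define __bus_to_virt	__phys_to_virt
    #endif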


# 75f4aa15 05-Sep-2008 Nicolas Pitre <nico@cam.org>

[ARM] unconditionally define __virt_to_phys and __phys_to_virt

There is no machine class overriding this. If non-linear translations
are implemented again for some machines then this could be restored at
that time.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
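
The now-unconditional linear translation, as it stood before the
revert recorded above:

    #define __virt_to_phys(x)	((x) - PAGE_OFFSET + PHYS_OFFSET)
    #define __phys_to_virt(x)	((x) - PHYS_OFFSET + PAGE_OFFSET)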


# ab4f2ee1 06-Nov-2008 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] fix naming of MODULE_START / MODULE_END

As of 73bdf0a60e607f4b8ecc5aec597105976565a84f, the kernel needs
to know where modules are located in the virtual address space.
On ARM, we located this region between MODULE_START and MODULE_END.
Unfortunately, everyone else calls it MODULES_VADDR and MODULES_END.
Update ARM to use the same naming, so is_vmalloc_or_module_addr()
can work properly. Also update the comment on mm/vmalloc.c to
reflect that ARM also places modules in a separate region from the
vmalloc space.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 3bca103a 07-Oct-2008 Nicolas Pitre <nico@cam.org>

[ARM] 5295/1: make ZONE_DMA optional

Most ARM machines don't need a special "DMA" memory zone, and
when configured out, the kernel becomes a bit smaller:

| text data bss dec hex filename
|3826182 102384 111700 4040266 3da64a vmlinux
|3823593 101616 111700 4036909 3d992d vmlinux.nodmazone

This is because the system now has only one zone in total, the effect
of which is to optimize away many conditionals in page allocation paths.

So let's configure this zone only on machines that need split zones.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 6c5da7ac 30-Sep-2008 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] mm: move vmalloc= parsing to arch/arm/mm/mmu.c

There's no point scattering this around the tree; the parsing
of the parameter might as well live beside the code which uses
it. That also means we can make vmalloc_reserve a static
variable.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
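
A sketch of what such a local handler looks like, using the generic
early_param mechanism for illustration (the era-specific hook in the
actual commit may differ):

    static unsigned long __initdata vmalloc_reserve = SZ_128M;

    /* Parse "vmalloc=<size>" from the kernel command line. */
    static int __init early_vmalloc(char *arg)
    {
    	vmalloc_reserve = memparse(arg, NULL);
    	return 0;
    }
    early_param("vmalloc", early_vmalloc);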


# 8d5796d2 25-Aug-2008 Lennert Buytenhek <buytenh@wantstofly.org>

[ARM] 5222/1: Allow configuring user:kernel split via Kconfig

This patch adds a config option (CONFIG_VMSPLIT_*) to allow choosing
between 3:1, 2:2 and 1:3 user:kernel memory splits.

Tested-by: Riku Voipio <riku.voipio@iki.fi>
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
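
How the choice lands in the header, roughly (the 16MB gap below
PAGE_OFFSET for modules is an approximation of the era's layout):

    /* Kconfig computes CONFIG_PAGE_OFFSET (0x40000000, 0x80000000
     * or 0xC0000000) from the chosen VMSPLIT option.
     */
    #define PAGE_OFFSET	UL(CONFIG_PAGE_OFFSET)
    #define TASK_SIZE	(UL(CONFIG_PAGE_OFFSET) - UL(0x01000000))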


# 98ed7d4b 09-Aug-2008 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] dma-mapping: improve type-safeness of DMA translations

OMAP at least gets the return type(s) for the DMA translation functions
wrong, which can lead to subtle errors. Avoid this by moving the DMA
translation functions to asm/dma-mapping.h, and converting them to
inline functions.

Fix the OMAP DMA translation macros to use the correct argument and
result types.

Also, remove the unnecessary casts in dmabounce.c.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
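
The inline-function form turns a mismatched type into a compile-time
warning instead of a silent truncation; a sketch of one such helper:

    static inline dma_addr_t virt_to_dma(struct device *dev, void *addr)
    {
    	return (dma_addr_t)__virt_to_bus((unsigned long)(addr));
    }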


# 60296c71 04-Aug-2008 Lennert Buytenhek <buytenh@wantstofly.org>

[ARM] prevent crashing when too much RAM installed

This patch will truncate and/or ignore memory banks if their kernel
direct mappings would (partially) overlap with the vmalloc area or
the mappings between the vmalloc area and the address space top, to
prevent crashing during early boot if there happens to be more RAM
installed than we are expecting.

Since the start of the vmalloc area is not at a fixed address (but
the vmalloc end address is, via the per-platform VMALLOC_END define),
a default area of 128M is reserved for vmalloc mappings, which can
be shrunk or enlarged by passing an appropriate vmalloc= command line
option as it is done on x86.

On a board with a 3:1 user:kernel split, VMALLOC_END at 0xfe000000,
two 512M RAM banks and vmalloc=128M (the default), this patch gives:

Truncating RAM at 20000000-3fffffff to -35ffffff (vmalloc region overlap).
Memory: 512MB 352MB = 864MB total

On a board with a 3:1 user:kernel split, VMALLOC_END at 0xfe800000,
two 256M RAM banks and vmalloc=768M, this patch gives:

Truncating RAM at 00000000-0fffffff to -0e7fffff (vmalloc region overlap).
Ignoring RAM at 10000000-1fffffff (vmalloc region overlap).

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Tested-by: Riku Voipio <riku.voipio@iki.fi>


# a09e64fb 05-Aug-2008 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] Move include/asm-arm/arch-* to arch/arm/*/include/mach

This just leaves include/asm-arm/plat-* to deal with.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 4baa9922 02-Aug-2008 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] move include/asm-arm to arch/arm/include/asm

Move platform independent header files to arch/arm/include/asm, leaving
those in asm/arch* and asm/plat* alone.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>