History log of /linux-master/arch/arm/kernel/head.S
Revision Date Author Comments
# a9ff6961 02-Jun-2022 Linus Walleij <linus.walleij@linaro.org>

ARM: mm: Make virt_to_pfn() a static inline

Making virt_to_pfn() a static inline taking a strongly typed
(const void *) makes the contract of passing a pointer of that
type to the function explicit, and exposes any misuse of the
macro virt_to_pfn() acting polymorphically and accepting many types
such as (void *), (uintptr_t) or (unsigned long) as arguments
without warnings.

Doing this is a bit intrusive: virt_to_pfn() requires
PHYS_PFN_OFFSET and PAGE_SHIFT to be defined, and this is defined in
<asm/page.h>, so this must be included *before* <asm/memory.h>.

The use of macros was obscuring the unclear inclusion order here:
the macros would eventually be resolved, but a static inline
like this cannot be compiled with unresolved macros.

The naive solution of including <asm/page.h> at the top of
<asm/memory.h> does not work, because <asm/memory.h> sometimes
includes <asm/page.h> at the end of itself, which would create a
confusing inclusion loop. So instead, take the approach of always
unconditionally including <asm/page.h> at the end of <asm/memory.h>.

arch/arm uses <asm/memory.h> explicitly in a lot of places;
however, it turns out that if we just unconditionally include
<asm/memory.h> into <asm/page.h> and switch all inclusions of
<asm/memory.h> to <asm/page.h> instead, we enforce the right
order and <asm/memory.h> will always have access to the
definitions.

Put an inclusion guard in place making it impossible to include
<asm/memory.h> explicitly.
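
A minimal sketch of the resulting helper, assuming the generic ARM
definitions of PAGE_OFFSET, PAGE_SHIFT and PHYS_PFN_OFFSET (the body
shown here is an approximation, not the committed code):

    static inline unsigned long virt_to_pfn(const void *p)
    {
        unsigned long kaddr = (unsigned long)p;

        /* lowmem virtual address -> physical frame number */
        return ((kaddr - PAGE_OFFSET) >> PAGE_SHIFT) + PHYS_PFN_OFFSET;
    }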

Link: https://lore.kernel.org/linux-mm/20220701160004.2ffff4e5ab59a55499f4c736@linux-foundation.org/
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>


# 50f6f34e 29-Sep-2022 Arnd Bergmann <arnd@arndb.de>

ARM: footbridge: remove CATS

Nobody seems to have a CATS machine any more, so remove
it now, leaving only NetWinder and EBSA285.

Cc: Russell King <linux@armlinux.org.uk>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>


# 39114538 05-Jul-2022 Mike Rapoport <rppt@kernel.org>

ARM: head.S: rename PMD_ORDER to PMD_ENTRY_ORDER

PMD_ORDER denotes the order of magnitude of a PMD entry, i.e. PMD entry
size is 2 ^ PMD_ORDER.

Rename PMD_ORDER to PMD_ENTRY_ORDER to allow a generic definition of
PMD_ORDER as order of a PMD allocation: (PMD_SHIFT - PAGE_SHIFT).
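
An illustrative sketch of the two meanings (the values below are
assumptions based on the classic 2-level and LPAE table formats):

    /* order of magnitude of one PMD entry: 1 << 2 = 4 bytes (3 on LPAE) */
    #define PMD_ENTRY_ORDER 2

    /* the name freed up for the generic meaning: order of a PMD allocation */
    #define PMD_ORDER (PMD_SHIFT - PAGE_SHIFT)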

Link: https://lkml.kernel.org/r/20220705154708.181258-16-rppt@kernel.org
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>


# 8b806b82 24-Jan-2022 Ard Biesheuvel <ardb@kernel.org>

ARM: mm: switch to swapper_pg_dir early for vmap'ed stack

When onlining a CPU, switch to swapper_pg_dir as soon as possible so
that it is guaranteed that the vmap'ed stack is mapped before it is
used.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>


# 19f29aeb 18-Sep-2021 Keith Packard <keithpac@amazon.com>

ARM: smp: Pass task to secondary_start_kernel

This avoids needing to compute the task pointer in this function, which
will no longer be possible once we move thread_info off the stack.
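
A hedged sketch of the changed entry point (the exact parameter name is
an assumption):

    /* before: void secondary_start_kernel(void), task found via the stack */
    asmlinkage void secondary_start_kernel(struct task_struct *task);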

Signed-off-by: Keith Packard <keithpac@amazon.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>


# 00568b8a 21-Oct-2021 LABBE Corentin <clabbe.montjoie@gmail.com>

ARM: 9148/1: handle CONFIG_CPU_ENDIAN_BE32 in arch/arm/kernel/head.S

My intel-ixp42x-welltech-epbx100 no longer boots since 4.14.
This is due to commit 463dbba4d189 ("ARM: 9104/2: Fix Keystone 2 kernel
mapping regression"),
which forgot to handle CONFIG_CPU_ENDIAN_BE32 as a possible BE config.
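
Since head.S is fed through the C preprocessor, the shape of such a fix
is roughly the following (a sketch, not the literal hunk):

    #if defined(CONFIG_CPU_ENDIAN_BE8) || defined(CONFIG_CPU_ENDIAN_BE32)
        /* big-endian kernel: take byte order into account here */
    #endif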

Suggested-by: Krzysztof Hałasa <khalasa@piap.pl>
Fixes: 463dbba4d189 ("ARM: 9104/2: Fix Keystone 2 kernel mapping regression")
Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>


# 463dbba4 08-Aug-2021 Linus Walleij <linus.walleij@linaro.org>

ARM: 9104/2: Fix Keystone 2 kernel mapping regression

This fixes a Keystone 2 regression discovered as a side effect of
defining and passing the physical start/end sections of the kernel
to the MMU remapping code.

As the Keystone applies an offset to all physical addresses,
including those identified and patched by phys2virt, we fail to
account for this offset in the kernel_sec_start and kernel_sec_end
variables.

Further, these offsets can extend into the 64-bit range on LPAE
systems such as the Keystone 2.

Fix it like this:
- Extend kernel_sec_start and kernel_sec_end to be 64bit
- Add the offset also to kernel_sec_start and kernel_sec_end

As passing kernel_sec_start and kernel_sec_end as 64-bit values
invariably incurs BE8 endianness issues, I have attempted to dry-code
around these.

Tested on the Vexpress QEMU model both with and without LPAE
enabled.

Fixes: 6e121df14ccd ("ARM: 9090/1: Map the lowmem and kernel separately")
Reported-by: Nishanth Menon <nmenon@kernel.org>
Suggested-by: Russell King <rmk+kernel@armlinux.org.uk>
Tested-by: Grygorii Strashko <grygorii.strashko@ti.com>
Tested-by: Nishanth Menon <nmenon@kernel.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>


# a91da545 03-Jun-2021 Linus Walleij <linus.walleij@linaro.org>

ARM: 9089/1: Define kernel physical section start and end

When we are mapping the initial sections in head.S we
know very well where the start and end of the kernel image
in physical memory are placed. Later on it gets hard
to determine this.

Save the information into two variables named
kernel_sec_start and kernel_sec_end for convenience
for later work involving the physical start and end
of the kernel. These variables are section-aligned
corresponding to the early section mappings set up
in head.S.
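
The C-side view of the new variables, as a sketch (the 32-bit width is
an assumption; a later fix widens them to 64-bit for LPAE):

    /* physical, section-aligned bounds of the kernel image, set by head.S */
    extern u32 kernel_sec_start;
    extern u32 kernel_sec_end;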

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# b78f63f4 03-Jun-2021 Linus Walleij <linus.walleij@linaro.org>

ARM: 9088/1: Split KERNEL_OFFSET from PAGE_OFFSET

We want to be able to compile the kernel into an address different
from PAGE_OFFSET (start of lowmem) + TEXT_OFFSET, so start to pry
apart the address of where the kernel is located from the address
where the lowmem is located by defining and using KERNEL_OFFSET in
a few key places.
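
A sketch of the new symbol; with the default configuration the two
addresses still coincide:

    /* virtual address of the kernel image, no longer assumed to equal
     * the start of lowmem */
    #define KERNEL_OFFSET (PAGE_OFFSET)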

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# 10fce53c 17-Nov-2020 Ard Biesheuvel <ardb@kernel.org>

ARM: 9027/1: head.S: explicitly map DT even if it lives in the first physical section

The early ATAGS/DT mapping code uses SECTION_SHIFT to mask low order
bits of R2, and decides that no ATAGS/DTB were provided if the resulting
value is 0x0.

This means that on systems where DRAM starts at 0x0 (such as Raspberry
Pi), no explicit mapping of the DT will be created if R2 points into the
first 1 MB section of memory. This was not a problem before, because the
decompressed kernel is loaded at the base of DRAM and mapped using
sections as well, and so as long as the DT is referenced via a virtual
address that uses the same translation (the linear map, in this case),
things work fine.

However, commit 7a1be318f579 ("9012/1: move device tree mapping out of
linear region") changes this, and now the DT is referenced via a virtual
address that is disjoint from the linear mapping of DRAM, and so we need
the early code to create the DT mapping unconditionally.

So let's create the early DT mapping for any value of R2 != 0x0.
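
In C terms, the old and new predicates look roughly like this (a
sketch; the real code is assembly in head.S):

    /* old: "no ATAGS/DTB" if masking R2 down to a section boundary
     * yields zero -- wrong when DRAM, and hence the DTB, starts at 0x0 */
    skip = ((r2 >> SECTION_SHIFT) << SECTION_SHIFT) == 0;

    /* new: only a literal zero in R2 means "no ATAGS/DTB provided" */
    skip = (r2 == 0);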

Reported-by: "kernelci.org bot" <bot@kernelci.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# 3bcf906b 14-Sep-2020 Ard Biesheuvel <ardb@kernel.org>

ARM: head.S: use PC relative insn sequence to calculate PHYS_OFFSET

Replace the open coded arithmetic with a simple adr_l/sub pair. This
removes some open coded arithmetic involving virtual addresses, avoids
literal pools on v7+, and slightly reduces the footprint of the code.

Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>


# 59d2f282 14-Sep-2020 Ard Biesheuvel <ardb@kernel.org>

ARM: head: use PC-relative insn sequence for __smp_alt

Now that calling __do_fixup_smp_on_up() can be done without passing
the physical-to-virtual offset in r3, we can replace the open coded
PC relative offset calculations with a pair of adr_l invocations. This
removes some open coded arithmetic involving virtual addresses, avoids
literal pools on v7+, and slightly reduces the footprint of the code.

Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>


# 450abd38 14-Sep-2020 Ard Biesheuvel <ardb@kernel.org>

ARM: kernel: use relative references for UP/SMP alternatives

Currently, the .alt.smp.init section contains the virtual addresses
of the patch sites. Since patching may occur both before and after
switching into virtual mode, this requires some manual handling of
the address when applying the UP alternative.

Let's simplify this by using relative offsets in the table entries:
this allows us to simply add each entry's address to its contents,
regardless of whether we are running in virtual mode or not.
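
A sketch of how an entry is resolved under the new scheme (the types
and function name here are illustrative, not from the patch):

    /* each table entry holds (patch_site - entry), not an absolute address */
    static u32 *resolve_alt_entry(s32 *entry)
    {
        return (u32 *)((unsigned long)entry + *entry);
    }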

Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>


# 91580f0d 14-Sep-2020 Ard Biesheuvel <ardb@kernel.org>

ARM: head.S: use PC-relative insn sequence for secondary_data

Replace the open coded PC relative offset calculations with adr_l
and ldr_l invocations. This removes some open coded arithmetic
involving virtual addresses, avoids literal pools on v7+, and slightly
reduces the footprint of the code.

Note that it also removes a stale comment about the contents of r6.

Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>


# 172c34c9 14-Sep-2020 Ard Biesheuvel <ardb@kernel.org>

ARM: head-common.S: use PC-relative insn sequence for idmap creation

Replace the open coded PC relative offset calculations involving
__turn_mmu_on and __turn_mmu_on_end with a pair of adr_l invocations.
This removes some open coded arithmetic involving virtual addresses,
avoids literal pools on v7+, and slightly reduces the footprint of the
code.

Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>


# eae78e1a 20-Sep-2020 Ard Biesheuvel <ardb@kernel.org>

ARM: p2v: move patching code to separate assembler source file

Move the phys2virt patching code into a separate .S file before doing
some work on it.

Suggested-by: Nicolas Pitre <nico@fluxnic.net>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>


# 4e79f021 20-Sep-2020 Ard Biesheuvel <ardb@kernel.org>

ARM: p2v: fix handling of LPAE translation in BE mode

When running in BE mode on LPAE hardware with a PA-to-VA translation
that exceeds 4 GB, we patch bits 39:32 of the offset into the wrong
byte of the opcode. So fix that, by rotating the offset in r0 to the
right by 8 bits, which will put the 8-bit immediate in bits 31:24.

Note that this will also move bit #22 in its correct place when
applying the rotation to the constant #0x400000.
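
In C terms the fix amounts to a rotate of the offset before it is
patched in, something like this sketch:

    #include <linux/bitops.h>

    /* move the 8-bit immediate (bits 39:32 of the offset) into bits 31:24 */
    static u32 rotate_for_be8(u32 offset)
    {
        return ror32(offset, 8);
    }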

Fixes: d9a790df8e984 ("ARM: 7883/1: fix mov to mvn conversion in case of 64 bit phys_addr_t and BE")
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>


# 7a1be318 11-Oct-2020 Ard Biesheuvel <ardb@kernel.org>

ARM: 9012/1: move device tree mapping out of linear region

On ARM, setting up the linear region is tricky, given the constraints
around placement and alignment of the memblocks, and how the kernel
itself as well as the DT are placed in physical memory.

Let's simplify matters a bit, by moving the device tree mapping to the
top of the address space, right between the end of the vmalloc region
and the start of the fixmap region, and create a read-only mapping
for it that is independent of the size of the linear region, and how it
is organized.

Since this region was formerly used as a guard region, which will now be
populated fully on LPAE builds by this read-only mapping (which will
still be able to function as a guard region for stray writes), bump the
start of the [underutilized] fixmap region by 512 KB as well, to ensure
that there is always a proper guard region here. Doing so still leaves
ample room for the fixmap space, even with NR_CPUS set to its maximum
value of 32.

Tested-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# 65fddcfc 08-Jun-2020 Mike Rapoport <rppt@kernel.org>

mm: reorder includes after introduction of linux/pgtable.h

The replacement of <asm/pgtable.h> with <linux/pgtable.h> left the include
of the latter in the middle of the asm includes. Fix this up with the aid of
the script below and manual adjustments here and there.

import sys
import re

if len(sys.argv) != 3:
    print "USAGE: %s <file> <header>" % (sys.argv[0])
    sys.exit(1)

hdr_to_move = "#include <linux/%s>" % sys.argv[2]
moved = False
in_hdrs = False

with open(sys.argv[1], "r") as f:
    lines = f.readlines()
    for _line in lines:
        line = _line.rstrip('\n')
        if line == hdr_to_move:
            continue
        if line.startswith("#include <linux/"):
            in_hdrs = True
        elif not moved and in_hdrs:
            moved = True
            print hdr_to_move
        print line

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Cain <bcain@codeaurora.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Greg Ungerer <gerg@linux-m68k.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Guo Ren <guoren@kernel.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Ley Foon Tan <ley.foon.tan@intel.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Nick Hu <nickhu@andestech.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vincent Chen <deanbo422@gmail.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: http://lkml.kernel.org/r/20200514170327.31389-4-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# ca5999fd 08-Jun-2020 Mike Rapoport <rppt@kernel.org>

mm: introduce include/linux/pgtable.h

include/linux/pgtable.h is going to be the home of the generic page table
manipulation functions.

Start with moving asm-generic/pgtable.h to include/linux/pgtable.h and
make the latter include asm/pgtable.h.
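
A sketch of the new header's initial shape, per the description above
(the comments are placeholders, not the committed content):

    /* include/linux/pgtable.h -- home of generic page table helpers */
    #include <asm/pgtable.h>
    /* ... generic definitions formerly in asm-generic/pgtable.h ... */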

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Cain <bcain@codeaurora.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Greg Ungerer <gerg@linux-m68k.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Guo Ren <guoren@kernel.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Ley Foon Tan <ley.foon.tan@intel.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Nick Hu <nickhu@andestech.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vincent Chen <deanbo422@gmail.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: http://lkml.kernel.org/r/20200514170327.31389-3-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# d2912cb1 04-Jun-2019 Thomas Gleixner <tglx@linutronix.de>

treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500

Based on 2 normalized pattern(s):

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation #

extracted by the scancode license scanner the SPDX license identifier

GPL-2.0-only

has been chosen to replace the boilerplate/reference in 4122 file(s).
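
In other words, each boilerplate block is replaced by a single
identifier line, e.g. for a C or assembly source file:

    /* SPDX-License-Identifier: GPL-2.0-only */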

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Enrico Weigelt <info@metux.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>


# bc2eca9a 08-Nov-2018 Nicolas Pitre <nico@fluxnic.net>

ARM: 8811/1: always list both ldrd/strd registers explicitly

The ldrd and strd instructions work on a pair of consecutive registers.
It is possible to specify either the first register in the pair, or both
registers explicitly. Let's always do the latter to make things clearer.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Suggested-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# 1abd3502 25-Jul-2017 Russell King <rmk+kernel@armlinux.org.uk>

ARM: align .data section

Robert Jarzmik reports that his PXA25x system fails to boot with 4.12,
failing at __flush_whole_cache in arch/arm/mm/proc-xscale.S:215:

0xc0019e20 <+0>: ldr r1, [pc, #788]
0xc0019e24 <+4>: ldr r0, [r1] <== here

with r1 containing 0xc06f82cd, which is the address of "clean_addr".
Examination of the System.map shows:

c06f22c8 D user_pmd_table
c06f22cc d __warned.19178
c06f22cd d clean_addr

indicating that a .data.unlikely section has appeared just before the
.data section from proc-xscale.S. According to objdump -h, it appears
that our assembly files default their .data alignment to 2**0, which
is bad news if the preceding .data section size is not power-of-2
aligned at link time.

Add the appropriate .align directives to all assembly files in arch/arm
that are missing them where we require an appropriate alignment.

Reported-by: Robert Jarzmik <robert.jarzmik@free.fr>
Tested-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# 8478132a 23-Nov-2016 Russell King <rmk+kernel@armlinux.org.uk>

Revert "arm: move exports to definitions"

This reverts commit 4dd1837d7589f468ed109556513f476e7a7f9121.

Moving the exports for assembly code into the assembly files breaks
KSYM trimming, but also breaks modversions.

While fixing the KSYM trimming is trivial, fixing modversions brings
us to a technically worse position than we had prior to the above
change:

- We end up with the prototype definitions divorced from everything
else, which means that adding or removing assembly level ksyms
becomes more fragile:
* if adding a new assembly ksyms export, a missed prototype in
asm-prototypes.h results in a successful build if no module in
the selected configuration makes use of the symbol.
* when removing a ksyms export, asm-prototypes.h will get forgotten,
whereas with armksyms.c you'd get a build error if you forgot to touch
the file.

- We end up with the same number of include files and prototypes;
they're just in a header file instead of a .c file with their
exports.

As for lines of code, we don't get much of a size reduction:
(original commit)
47 files changed, 131 insertions(+), 208 deletions(-)
(fix for ksyms trimming)
7 files changed, 18 insertions(+), 5 deletions(-)
(two fixes for modversions)
1 file changed, 34 insertions(+)
3 files changed, 7 insertions(+), 2 deletions(-)
which results in a net total of only 25 lines deleted.

As there does not seem to be much benefit from this change of approach,
revert the change.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# 4dd1837d 13-Jan-2016 Al Viro <viro@zeniv.linux.org.uk>

arm: move exports to definitions

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>


# 0171356a 21-Aug-2015 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: domains: move initial domain setting value to asm/domains.h

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 998ef5d8 06-Aug-2015 Gregory CLEMENT <gregory.clement@bootlin.com>

ARM: 8408/1: Fix the secondary_startup function in Big Endian case

In commit b2c3e38a5471 ("ARM: redo TTBR setup code for LPAE") the setup
code was reworked. As a result the secondary CPUs
failed to come online in Big Endian.

As explained by Russell, the new code expects the value in r4/r5 to
be the least significant 32 bits in r4 and the most significant 32 bits
in r5. However, in the secondary code, we load this using ldrd, which
on BE reverses that.

This patch swaps r4/r5 after the ldrd, using xor
instructions in order not to use a temporary register.
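
The classic three-XOR swap, shown here in C for clarity (a sketch of
the idea, not the actual assembly):

    /* swap two values without a scratch register; a and b must not alias */
    static inline void xor_swap(u32 *a, u32 *b)
    {
        *a ^= *b;
        *b ^= *a;
        *a ^= *b;
    }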

Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# c07b5fd0 18-May-2015 Yingjoe Chen <yingjoe.chen@mediatek.com>

ARM: 8359/1: correct secondary_startup_arm mode

secondary_startup_arm is used as the ARM mode secondary start up function
when the kernel is compiled in THUMB mode; however, the label itself
is still in .thumb mode. readelf shows:

160979: c020a581 120 FUNC GLOBAL DEFAULT 2 secondary_startup_arm

Make sure the label is in ARM mode as well.

Signed-off-by: Yingjoe Chen <yingjoe.chen@mediatek.com>
Tested-by: Matthias Brugger <matthias.bgg@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# b2c3e38a 04-Apr-2015 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: redo TTBR setup code for LPAE

Re-engineer the LPAE TTBR setup code. Rather than passing some shifted
address in order to fit in a CPU register, pass either a full physical
address (in the case of r4, r5 for TTBR0) or a PFN (for TTBR1).

This removes the ARCH_PGD_SHIFT hack, and the last dangerous user of
cpu_set_ttbr() in the secondary CPU startup code path (which was there
to re-set TTBR1 to the appropriate high physical address space on
Keystone2.)

Tested-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 14327c66 21-Apr-2015 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: replace BSYM() with badr assembly macro

BSYM() was invented to allow us to work around a problem with the
assembler, where local symbols resolved by the assembler for the 'adr'
instruction did not take account of their ISA.

Since we don't want BSYM() used elsewhere, replace BSYM() with a new
macro 'badr', which is like the 'adr' pseudo-op, but with the BSYM()
mechanics integrated into it. This ensures that the BSYM()-ification
is only used in conjunction with 'adr'.

Acked-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# bf35706f 18-Mar-2015 Ard Biesheuvel <ardb@kernel.org>

ARM: 8314/1: replace PROCINFO embedded branch with relative offset

This patch replaces the 'branch to setup()' instructions embedded
in the PROCINFO structs with the offset to that setup function
relative to the base of the struct. This preserves the position
independent nature of that field, but uses a data item rather
than an instruction.

This is mainly done to prevent linker failures on large kernels,
where the setup function is out of reach for the branch.

Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# bafe5865 30-Jan-2015 Stephen Boyd <sboyd@codeaurora.org>

ARM: 8302/1: Add a secondary_startup that assumes ARM mode

Some platforms always enter the kernel in ARM mode even if the
kernel is compiled for THUMB2. Add a small wrapper on top of
secondary_startup() that switches into THUMB2 mode.

Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 7a061928 19-Jan-2015 Masahiro Yamada <yamada.m@jp.panasonic.com>

ARM: 8291/1: replace magic number with PAGE_SHIFT macro in fixup_pv code

This line converts PHYS_OFFSET into PHYS_PFN_OFFSET.
It is better to use PAGE_SHIFT rather than the magic number 12.
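
The equivalent conversion expressed in C (a sketch):

    /* shift by PAGE_SHIFT rather than the magic number 12 */
    static unsigned long phys_to_pfn_offset(unsigned long phys_offset)
    {
        return phys_offset >> PAGE_SHIFT;
    }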

Signed-off-by: Masahiro Yamada <yamada.m@jp.panasonic.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 6ebbf2ce 30-Jun-2014 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: convert all "mov.* pc, reg" to "bx reg" for ARMv6+

ARMv6 and greater introduced a new instruction ("bx") which can be used
to return from function calls. Recent CPUs perform better when the
"bx lr" instruction is used rather than the "mov pc, lr" instruction,
and this sequence is strongly recommended to be used by the ARM
architecture manual (section A.4.1.1).

We provide a new macro "ret" with all its variants for the condition
code which will resolve to the appropriate instruction.

Rather than doing this piecemeal, and miss some instances, change all
the "mov pc" instances to use the new macro, with the exception of
the "movs" instruction and the kprobes code. This allows us to detect
the "mov pc, lr" case and fix it up - and also gives us the possibility
of deploying this for other registers depending on the CPU selection.

Reported-by: Will Deacon <will.deacon@arm.com>
Tested-by: Stephen Warren <swarren@nvidia.com> # Tegra Jetson TK1
Tested-by: Robert Jarzmik <robert.jarzmik@free.fr> # mioa701_bootresume.S
Tested-by: Andrew Lunn <andrew@lunn.ch> # Kirkwood
Tested-by: Shawn Guo <shawn.guo@freescale.com>
Tested-by: Tony Lindgren <tony@atomide.com> # OMAPs
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com> # Armada XP, 375, 385
Acked-by: Sekhar Nori <nsekhar@ti.com> # DaVinci
Acked-by: Christoffer Dall <christoffer.dall@linaro.org> # kvm/hyp
Acked-by: Haojian Zhuang <haojian.zhuang@gmail.com> # PXA3xx
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> # Xen
Tested-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> # ARMv7M
Tested-by: Simon Horman <horms+renesas@verge.net.au> # Shmobile
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 1dc5455f 16-Apr-2014 Rob Herring <robh@kernel.org>

ARM: 8028/1: move __fixup_smp out of init section

With large kernel builds such as allyesconfig exceeding maximum relative
branch offsets, the init section will be too far away to branch to
directly. This causes veneers to be added by the linker, but veneers
don't work before the MMU is enabled. Fix this by moving __fixup_smp to
the .head.text section as it is not very big.

Signed-off-by: Rob Herring <robh@kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# e3892e91 21-Apr-2014 Victor Kamensky <victor.kamensky@linaro.org>

ARM: 8033/1: fix big endian __pv_phys_pfn_offset size related issue

Commit e26a9e00afc482b971afcaef1db8c9034d4d6d7c ("ARM: Better
virt_to_page() handling") replaced __pv_phys_offset with
__pv_phys_pfn_offset. Note that the size of __pv_phys_offset
was a quad, while the size of __pv_phys_pfn_offset is a word. The
instruction that used to update __pv_phys_offset, whose address is in r6,
had to update the low word of __pv_phys_offset, so it used the #LOW_OFFSET
macro for the store offset. Now that the size of __pv_phys_pfn_offset is
a word, no difference between little endian and big endian should
exist - i.e. no offset should be used when __pv_phys_pfn_offset
is stored.

Note that for a little endian image the proposed change is a noop,
since in the little endian case #LOW_OFFSET is defined as 0 anyway.

Reported-by: Taras Kondratiuk <taras.kondratiuk@linaro.org>
Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# e26a9e00 25-Mar-2014 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: Better virt_to_page() handling

virt_to_page() is incredibly inefficient when virt-to-phys patching is
enabled. This is because we end up with this calculation:

page = &mem_map[asm virt_to_phys(addr) >> 12 - __pv_phys_offset >> 12]

in assembly. The asm virt_to_phys() is equivalent to this operation:

addr - PAGE_OFFSET + __pv_phys_offset

and we can see that because this is assembly, the compiler has no chance
to optimise some of that away. This should reduce down to:

page = &mem_map[(addr - PAGE_OFFSET) >> 12]

for the common cases. Permit the compiler to make this optimisation by
giving it more of the information it needs - do this by providing a
virt_to_pfn() macro.
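
The likely shape of such a macro, assuming the usual __pa() helper (a
sketch):

    /* let the compiler see the arithmetic instead of hiding it in asm */
    #define virt_to_pfn(kaddr) (__pa(kaddr) >> PAGE_SHIFT)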

Another issue which makes this more complex is that __pv_phys_offset is
a 64-bit type on all platforms. This is needlessly wasteful - if we
store the physical offset as a PFN, we can save a lot of work having
to deal with 64-bit values, which sometimes ends up producing incredibly
horrid code:

a4c: e3009000 movw r9, #0
a4c: R_ARM_MOVW_ABS_NC __pv_phys_offset
a50: e3409000 movt r9, #0 ; r9 = &__pv_phys_offset
a50: R_ARM_MOVT_ABS __pv_phys_offset
a54: e3002000 movw r2, #0
a54: R_ARM_MOVW_ABS_NC __pv_phys_offset
a58: e3402000 movt r2, #0 ; r2 = &__pv_phys_offset
a58: R_ARM_MOVT_ABS __pv_phys_offset
a5c: e5999004 ldr r9, [r9, #4] ; r9 = high word of __pv_phys_offset
a60: e3001000 movw r1, #0
a60: R_ARM_MOVW_ABS_NC mem_map
a64: e592c000 ldr ip, [r2] ; ip = low word of __pv_phys_offset

Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# b3634575 18-Feb-2014 Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

ARM: 7980/1: kernel: improve error message when LPAE config doesn't match CPU

Currently, when the kernel is configured with LPAE support, but the
CPU doesn't support it, the error message is fairly cryptic:

Error: unrecognized/unsupported processor variant (0x561f5811).

This message is normally shown when there is an issue comparing
the processor ID (CP15 0, c0, c0) with the values/masks described in
proc-v7.S. However, the same message is displayed when LPAE support is
enabled in the kernel configuration, but not available in the CPU,
after looking at ID_MMFR0 (CP15 0, c0, c1, 4). Having the same error
message is highly misleading.

This commit improves this by showing a different error message when
this situation occurs:

Error: Kernel with LPAE support, but CPU does not support LPAE.

Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 2ab4e8c0 21-Jan-2014 Christopher Covington <cov@codeaurora.org>

ARM: 7947/1: Make pgtbl macro more robust

The pgtbl macro couldn't handle the specific
(TEXT_OFFSET - PG_DIR_SIZE) value that the combination of
MSM platforms and LPAE created:

head.S:163: Error: invalid constant (203000) after fixup

Regardless of whether this combination of configuration options
will work on currently supported platforms at run time, make it
at least assemble properly.

Signed-off-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# b713aa0b 10-Dec-2013 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: fix asm/memory.h build error

Jason Gunthorpe reports a build failure when ARM_PATCH_PHYS_VIRT is
not defined:

In file included from arch/arm/include/asm/page.h:163:0,
from include/linux/mm_types.h:16,
from include/linux/sched.h:24,
from arch/arm/kernel/asm-offsets.c:13:
arch/arm/include/asm/memory.h: In function '__virt_to_phys':
arch/arm/include/asm/memory.h:244:40: error: 'PHYS_OFFSET' undeclared (first use in this function)
arch/arm/include/asm/memory.h:244:40: note: each undeclared identifier is reported only once for each function it appears in
arch/arm/include/asm/memory.h: In function '__phys_to_virt':
arch/arm/include/asm/memory.h:249:13: error: 'PHYS_OFFSET' undeclared (first use in this function)

Fixes: ca5a45c06cd4 ("ARM: mm: use phys_addr_t appropriately in p2v and v2p conversions")
Tested-By: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# d9a790df 07-Nov-2013 Victor Kamensky <victor.kamensky@linaro.org>

ARM: 7883/1: fix mov to mvn conversion in case of 64 bit phys_addr_t and BE

Fix the patching code to convert a mov instruction into an mvn instruction
in the case of CONFIG_ARCH_PHYS_ADDR_T_64BIT and CONFIG_ARM_PATCH_PHYS_VIRT.

In the BE case, store the proper bits into r0 so that the byte-swapped
instruction can be modified correctly.

Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Reviewed-by: R Sricharan <r.sricharan@ti.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 10593b2e 07-Nov-2013 Victor Kamensky <victor.kamensky@linaro.org>

ARM: 7881/1: __fixup_smp read of SCU config should do byteswap in BE case

Commit "bc41b8724f24b9a27d1dcc6c974b8f686b38d554 ARM: 7846/1:
Update SMP_ON_UP code to detect A9MPCore with 1 CPU devices"
added read of SCU config register into __fixup_smp function.
Such read should be followed by byteswap, if kernel runs in
BE mode.

Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 830fd4d6 29-Oct-2013 Sricharan R <r.sricharan@ti.com>

ARM: 7870/1: head: Fix the missing underscore in __ARMEB__ macro and .align keyword

Commit f52bb722547f43caeaecbcc62db9f3c3b80ead9b ("ARM: mm: Correct
virt_to_phys patching for 64 bit physical addresses") from
Sricharan R <r.sricharan@ti.com>

introduced a __ARMEB__ macro usage in a new place, but missed the second
underscore. So correct it here.

Also, an explicit .align directive is needed for the label with the
.long data type to be aligned on a 4-byte boundary; otherwise this can
cause problems for a thumb2 build. So add it here.

Signed-off-by: Sricharan R <r.sricharan@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 97bcb0fe 01-Feb-2013 Ben Dooks <ben.dooks@codethink.co.uk>

ARM: set BE8 if LE in head code

If we are booting in LE and compiled for BE8, then add code to
set the state to BE8. Since the instruction stream is always LE,
we do not need to do anything special to the instruction.

Also ensure that the secondary processors are started in the same mode.

Note, we do add about 20 bytes to the kernel image, but it seems easier
to do this than adding another configuration to change.

Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>


# 2f9bf9be 01-Feb-2013 Ben Dooks <ben.dooks@codethink.co.uk>

ARM: fixup_pv_table bug when CPU_ENDIAN_BE8

The fixup_pv_table assumes that the instructions are in the same
endian configuration as the data, but when the CPU is running in
BE8 the instructions stay in little-endian format.

Make sure that if CONFIG_CPU_ENDIAN_BE8 is set we do all the
alterations to the instructions taking into account that the LDR/STR
will be swapping the data endian-ness.

Since the code is only modifying a byte, we avoid dual-swapping
the data, and just change the bits we clear and ORR in (in the
case where the code is not thumb2).

For thumb2, we add the necessary rev16 instructions to ensure that
the instructions are processed in the correct format, as it was
easier than re-writing the code to contain a mask and shift.

Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>


# f52bb722 29-Jul-2013 Sricharan R <r.sricharan@ti.com>

ARM: mm: Correct virt_to_phys patching for 64 bit physical addresses

The current phys_to_virt patching mechanism works only for 32 bit
physical addresses, and this patch extends the idea to 64 bit physical
addresses.

The 64bit v2p patching mechanism patches the higher 8 bits of the physical
address with a constant using a 'mov' instruction, and the lower 32 bits are
patched using 'add'. While this is correct, on those platforms where the
lowmem addressable physical memory spans across the 4GB boundary, a carry bit
can be produced as a result of the addition of the lower 32 bits. This has to
be taken into account and added into the upper 32 bits. The patched
__pv_offset and va are added in the lower 32 bits, where __pv_offset can be
in two's complement form when PA_START < VA_START, and that can result in a
false carry bit.

e.g.
1) PA = 0x80000000; VA = 0xC0000000
__pv_offset = PA - VA = 0xC0000000 (2's complement)

2) PA = 0x2 80000000; VA = 0xC0000000
__pv_offset = PA - VA = 0x1 C0000000

So adding __pv_offset + VA should never result in a true overflow for (1).
In order to differentiate a true carry, __pv_offset is extended
to 64 bits and the upper 32 bits will hold 0xffffffff if __pv_offset is in
2's complement form. For the same reason, 'mvn #0' is inserted instead of
'mov' while patching. Since the mov, add and sub instructions are to be
patched with different constants inside the same stub, the rotation field
of the opcode is used to differentiate between them.

So the above examples for v2p translation become, for VA=0xC0000000:
1) PA[63:32] = 0xffffffff
PA[31:0] = VA + 0xC0000000 --> results in a carry
PA[63:32] = PA[63:32] + carry

PA[63:0] = 0x0 80000000

2) PA[63:32] = 0x1
PA[31:0] = VA + 0xC0000000 --> results in a carry
PA[63:32] = PA[63:32] + carry

PA[63:0] = 0x2 80000000

The above ideas were suggested by Nicolas Pitre <nico@linaro.org> as
part of the review of the first and second versions of the subject patch.

There is no corresponding change on the phys_to_virt() side, because
computations on the upper 32-bits would be discarded anyway.
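
A C model of the patched v2p sequence described above (a sketch; the
real code consists of run-time patched ARM instructions):

    /* stand-ins for the kernel's u32/u64 */
    typedef unsigned int u32;
    typedef unsigned long long u64;

    static u64 v2p(u32 va, u64 pv_offset)
    {
        u32 lo = va + (u32)pv_offset;        /* patched 'add', may carry */
        u32 carry = lo < va;                 /* carry out of bit 31 */
        /* patched 'mov' (or 'mvn #0' when the offset is 2's complement) */
        u32 hi = (u32)(pv_offset >> 32) + carry;

        return ((u64)hi << 32) | lo;
    }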

Cc: Russell King <linux@arm.linux.org.uk>

Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Sricharan R <r.sricharan@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>


# bc41b872 27-Sep-2013 Santosh Shilimkar <santosh.shilimkar@ti.com>

ARM: 7846/1: Update SMP_ON_UP code to detect A9MPCore with 1 CPU devices

The generic code is well equipped to differentiate between
SMP and UP configurations. However, there are some devices which
use the Cortex-A9 MPCore IP configured with a single CPU. To let
these SoCs co-exist in a CONFIG_SMP=y build by leveraging
the SMP_ON_UP support, we need to additionally check the
number of cores in the Cortex-A9 MPCore configuration. Without
such a check in place, the startup code tries to execute the
ALT_SMP() set of instructions, which leads to CPU faults.

The issue was spotted on TI's Aegis device and this patch
now makes the device work with omap2plus_defconfig, which
enables SMP by default. The change is kept limited to only the
Cortex-A9 MPCore detection code.

Note that if any future SoC *does* use 0x0 as the PERIPH_BASE, then
the SCU address check code needs to be #ifdef'd for the Aegis
platform.

Acked-by: Sricharan R <r.sricharan@ti.com>
Signed-off-by: Vaibhav Bedia <vaibhav.bedia@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 2449189b 31-Jul-2013 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: Add .text annotations where required after __CPUINIT removal

Commit 8bd26e3a7 (arm: delete __cpuinit/__CPUINIT usage from all ARM
users) caused some code to leak into sections which are discarded
through the removal of __CPUINIT annotations. Add appropriate .text
annotations to bring these back into the kernel text.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 8bd26e3a 17-Jun-2013 Paul Gortmaker <paul.gortmaker@windriver.com>

arm: delete __cpuinit/__CPUINIT usage from all ARM users

The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. For example, the fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.

After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.

Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
and are flagged as __cpuinit -- so if we remove the __cpuinit from
the arch specific callers, we will also get section mismatch warnings.
As an intermediate step, we intend to turn the linux/init.h cpuinit
related content into no-ops as early as possible, since that will get
rid of these warnings. In any case, they are temporary and harmless.

This removes all the ARM uses of the __cpuinit macros from C code,
and all __CPUINIT from assembly code. It also had two ".previous"
section statements that were paired off against __CPUINIT
(aka .section ".cpuinit.text") that also get removed here.

[1] https://lkml.org/lkml/2013/5/20/589

Cc: Russell King <linux@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>


# 4756dcbf 21-Jul-2012 Cyril Chemparathy <cyril@ti.com>

ARM: LPAE: accommodate >32-bit addresses for page table base

This patch redefines the early boot time use of the R4 register to steal a few
low order bits (ARCH_PGD_SHIFT bits) on LPAE systems. This allows for up to
38-bit physical addresses.

Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>


# 4e1db26a 02-Apr-2013 Paul Bolle <pebolle@tiscali.nl>

ARM: 7690/1: mm: fix CONFIG_LPAE typos

CONFIG_LPAE doesn't exist: the correct option is CONFIG_ARM_LPAE, so fix
up the two typos under arch/arm/.

The fix to head.S is slightly scary, but this is just for setting up
an early io-mapping for the serial port when running on a big-endian,
LPAE system. Since these systems don't exist in the wild (at least, I
have no access to one outside of kvmtool, which doesn't provide a serial
port suitable for earlyprintk), then we can revisit the code later if it
causes any problems.

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# d61947a1 28-Feb-2013 Will Deacon <will@kernel.org>

ARM: 7657/1: head: fix swapper and idmap population with LPAE and big-endian

The LPAE page table format uses 64-bit descriptors, so we need to take
endianness into account when populating the swapper and idmap tables
during early initialisation.

This patch ensures that we store the two words making up each page table
entry in the correct order when running big-endian.

Cc: <stable@vger.kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 6f16f499 15-Jan-2013 Nicolas Pitre <nico@fluxnic.net>

ARM: 7628/1: head.S: map one extra section for the ATAG/DTB area

We currently use a temporary 1MB section aligned to a 1MB boundary for
mapping the provided device tree until the final page table is created.
However, if the device tree happens to cross that 1MB boundary, the end
of it remains unmapped and the kernel crashes when it attempts to access
it. Given no restriction on the location of that DTB, it could end up
with only a few bytes mapped at the end of a section.

Solve this issue by mapping two consecutive sections.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Sascha Hauer <s.hauer@pengutronix.de>
Tested-by: Tomasz Figa <t.figa@samsung.com>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 6e484be1 04-Jan-2013 Marc Zyngier <maz@kernel.org>

ARM: virt: boot secondary CPUs through the right entry point

Secondary CPUs should use the __hyp_stub_install_secondary entry
point, so boot mode inconsistencies can be detected.

Cc: <stable@vger.kernel.org>
Acked-by: Dave Martin <dave.martin@linaro.org>
Reported-by: Ian Molton <ian.molton@collabora.co.uk>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>


# 80c59daf 09-Feb-2012 Dave Martin <dave.martin@linaro.org>

ARM: virt: allow the kernel to be entered in HYP mode

This patch does two things:

* Ensure that asynchronous aborts are masked at kernel entry.
The bootloader should be masking these anyway, but this reduces
the damage window just in case it doesn't.

* Enter svc mode via exception return to ensure that CPU state is
properly serialised. This does not matter when switching from
an ordinary privileged mode ("PL1" modes in ARMv7-AR rev C
parlance), but it potentially does matter when switching from
another privileged mode such as hyp mode.

This should allow the kernel to boot safely either from svc mode or
hyp mode, even if no support for use of the ARM Virtualization
Extensions is built into the kernel.

Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>


# 91a9fec0 30-Aug-2012 Rob Herring <rob.herring@calxeda.com>

ARM: move debug macros to common location

Based on suggestion by Russell King, create a common location for debug
macros and select the included debug macro file using config option.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>


# 9fa16b77 03-Jul-2012 Nicolas Pitre <nico@fluxnic.net>

ARM: 7439/1: head.S: simplify initial page table mapping

Let's map the initial RAM up to the end of the kernel .bss instead of
the strict kernel image area. This simplifies the code as the kernel
image only needs to be handled specially in the XIP case. That covers
the legacy ATAG location as well.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# f67860a7 18-Mar-2012 Nicolas Pitre <nico@fluxnic.net>

ARM: 7363/1: DEBUG_LL: limit early mapping to the minimum

There is just no point mapping up to 512MB for a serial port.
Using a single 1MB entry is more than sufficient for all users.
This will create less interference for the following debugging patch.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 15d07dc9 28-Mar-2012 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: move CP15 definitions to separate header file

Avoid namespace conflicts with drivers over the CP15 definitions by
moving CP15 related prototypes and definitions to a private header
file.

Acked-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Stephen Warren <swarren@nvidia.com> [Tegra]
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com> [EP93xx]
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: David Howells <dhowells@redhat.com>


# 9b5a146a 22-Feb-2012 Nicolas Pitre <nico@fluxnic.net>

ARM: 7338/1: add support for early console output via semihosting

This is a very simple method for code running in an emulator, or under
the supervision of a debugger, to use I/O facilities on the controlling
host.

Tested with OpenOCD, and ARM's Fast Models.

Details on semihosting can be found in chapter 8 of
DUI0203I_rvct_developer_guide.pdf from ARM Ltd.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 195864cf 19-Jan-2012 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: move CP15 definitions to separate header file

Avoid namespace conflicts with drivers over the CP15 definitions by
moving CP15 related prototypes and definitions to a private header
file.

Acked-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Stephen Warren <swarren@nvidia.com> [Tegra]
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com> [EP93xx]
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 294064f5 08-Jan-2012 Catalin Marinas <catalin.marinas@arm.com>

ARM: 7275/1: LPAE: Check the CPU support for the long descriptor format

This patch adds a check for the presence of the LPAE feature during the
CPU initialisation. If not present, it reports an error when
CONFIG_DEBUG_LL is enabled.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 1b6ba46b 22-Nov-2011 Catalin Marinas <catalin.marinas@arm.com>

ARM: LPAE: MMU setup for the 3-level page table format

This patch adds the MMU initialisation for the LPAE page table format.
The swapper_pg_dir size with LPAE is 5 rather than 4 pages. A new
proc-v7-3level.S file contains the TTB initialisation, context switch
and PTE setting code with the LPAE. The TTBRx split is based on the
PAGE_OFFSET with TTBR1 used for the kernel mappings. The 36-bit mappings
(supersections) and a few other memory types in mmu.c are conditionally
compiled.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>


# d675d0bc 22-Nov-2011 Will Deacon <will@kernel.org>

ARM: LPAE: add ISBs around MMU enabling code

Before we enable the MMU, we must ensure that the TTBR registers contain
sane values. After the MMU has been enabled, we jump to the *virtual*
address of the following function, so we also need to ensure that the
SCTLR write has taken effect.

This patch adds ISB instructions around the SCTLR write to ensure the
visibility of the above.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>


# 4e8ee7de 22-Nov-2011 Will Deacon <will@kernel.org>

ARM: SMP: use idmap_pgd for mapping MMU enable during secondary booting

The ARM SMP booting code allocates a temporary set of page tables
containing an identity mapping of the kernel image and provides this
to secondary CPUs for initial booting.

In reality, we only need to include the __turn_mmu_on function in the
identity mapping since the rest of the kernel is executing from virtual
addresses after this point.

This patch adds __turn_mmu_on to the .idmap.text section, allowing the
SMP booting code to use the idmap_pgd directly and not have to populate
its own set of page tables.

As a result of this patch, we can make the identity_mapping_add function
static (since it is only used within mm/idmap.c) and also remove the
identity_mapping_del function. The identity map population is moved to
an early initcall so that it is setup in time for secondary CPU bringup.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>


# 72662e01 22-Nov-2011 Will Deacon <will@kernel.org>

ARM: head.S: only include __turn_mmu_on in the initial identity mapping

__create_page_tables identity maps the region of memory from
__enable_mmu to the end of __turn_mmu_on.

In preparation for including __turn_mmu_on in the .idmap.text section,
this patch modifies the identity mapping so that it only includes the
__turn_mmu_on code.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>


# 8428e84d 07-Nov-2011 Catalin Marinas <catalin.marinas@arm.com>

ARM: 7150/1: Allow kernel unaligned accesses on ARMv6+ processors

Recent gcc versions generate unaligned accesses by default on ARMv6 and
later processors. This patch ensures that the SCTLR.A bit is always
cleared on such processors to avoid the kernel trapping before
alignment_init() is called.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: John Linn <John.Linn@xilinx.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 1b9f95f8 05-Jul-2011 Nicolas Pitre <nico@fluxnic.net>

ARM: prepare for removal of a bunch of <mach/memory.h> files

When the CONFIG_NO_MACH_MEMORY_H symbol is selected by a particular
machine class, the machine specific memory.h include file is no longer
used and can be removed. In that case the equivalent information can
be obtained dynamically at runtime by enabling CONFIG_ARM_PATCH_PHYS_VIRT
or by specifying the physical memory address at kernel configuration time.

If/when all instances of mach/memory.h are removed then this symbol could
be removed.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>


# 639da5ee 31-Aug-2011 Nicolas Pitre <nico@fluxnic.net>

ARM: add an extra temp register to the low level debugging addruart macro

Some platforms (like OMAP not to name it) are doing rather complicated
hacks just to determine the base UART address to use. Let's give their
addruart macro some slack by providing an extra work register which will
allow for much needed cleanups.

This is basically a no-op as this commit is only adding the extra argument
to the macro but no one is using it yet.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Reviewed-by: Kevin Hilman <khilman@ti.com>


# e73fc88e 23-Aug-2011 Catalin Marinas <catalin.marinas@arm.com>

ARM: 7059/1: LPAE: Use PMD_(SHIFT|SIZE|MASK) instead of PGDIR_*

PGDIR_SHIFT and PMD_SHIFT for the classic 2-level page table format have
the same value (21). This patch converts the PGDIR_* uses in the kernel
to the PMD_* equivalent so that LPAE builds can reuse the same code.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# daece596 11-Aug-2011 Nicolas Pitre <nico@fluxnic.net>

ARM: 7013/1: P2V: Remove ARM_PATCH_PHYS_VIRT_16BIT

This code can be removed now that MSM targets no longer need the 16-bit
offsets for P2V.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 540b5738 13-Jul-2011 Dave Martin <dave.martin@linaro.org>

ARM: 6999/1: head, zImage: Always Enter the kernel in ARM state

Currently, the documented kernel entry requirements are not
explicit about whether the kernel should be entered in ARM or
Thumb, leading to an ambiguity about how to enter Thumb-2
kernels. As a result, the kernel is reliant on the zImage
decompressor to enter the kernel proper in the correct instruction
set state.

This patch changes the boot entry protocol for head.S and Image to
be the same as for zImage: in all cases, the kernel is now entered
in ARM.

Documentation/arm/Booting is updated to reflect this new policy.

A different rule will be needed for Cortex-M class CPUs as and when
support for those lands in mainline, since these CPUs don't support
the ARM instruction set at all: a note is added to the effect that
the kernel must be entered in Thumb on such systems.

Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# d427958a 26-May-2011 Catalin Marinas <catalin.marinas@arm.com>

ARM: 6942/1: mm: make TTBR1 always point to swapper_pg_dir on ARMv6/7

This patch makes TTBR1 point to swapper_pg_dir so that global, kernel
mappings can be used exclusively on v6 and v7 cores where they are
needed.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 4c2896e8 28-Apr-2011 Grant Likely <grant.likely@secretlab.ca>

arm/dt: Make __vet_atags also accept a dtb image

The dtb is passed to the kernel via register r2, which is the same
method that is used to pass an atags pointer. This patch modifies
__vet_atags to not clear r2 when it encounters a dtb image.
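
A hedged C sketch of the check (the real code is assembly; ATAG_CORE is
the standard first-tag value, OF_DT_MAGIC the flattened-device-tree
magic, and the real routine also sanity-checks the tag size):

#include <stdint.h>

#define ATAG_CORE	0x54410001	/* first tag of a valid ATAG list */
#define OF_DT_MAGIC	0xd00dfeed	/* dtb magic, stored big-endian */

static int vet_boot_data(const uint32_t *r2)
{
	if (r2[1] == ATAG_CORE)			/* words are: size, tag */
		return 1;			/* looks like ATAGs */
	if (__builtin_bswap32(r2[0]) == OF_DT_MAGIC)	/* LE CPU view */
		return 1;			/* looks like a dtb */
	return 0;				/* unknown: clear r2 */
}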

v2: fixed bugs pointed out by Nicolas Pitre

Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>


# b511d75d 20-Feb-2011 Nicolas Pitre <nico@fluxnic.net>

ARM: 6747/1: P2V: Thumb2 support

Adding Thumb2 support to the runtime patching of the virt_to_phys and
phys_to_virt opcodes.

Tested both the 8-bit and the 16-bit fixups, using different placements
in memory to exercise all code paths.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Reviewed-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 4d901c42 02-Feb-2011 Rob Herring <rob.herring@calxeda.com>

ARM: 6648/1: map ATAGs when not in first 1MB of RAM

If the ATAGs or DTB pointer is not within the first 1MB of RAM, the boot
params will not be mapped early enough, so map the 1MB region that r2
points to. Only map the first 1MB when r2 is 0.
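
The idea in C, as a sketch (one first-level section entry covers 1MB;
'prot' stands in for the boot-time section flags, and the mapping is
shown as an identity mapping for simplicity):

#include <stdint.h>

static void map_boot_params(uint32_t *pgd, uint32_t atags_phys,
			    uint32_t prot)
{
	uint32_t base = atags_phys & 0xfff00000;	/* 1MB section base */

	pgd[base >> 20] = base | prot | 0x2;	/* bits [1:0] = 0b10: section */
}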

Some assembly improvements from Nicolas Pitre.

Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# cada3c08 04-Jan-2011 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: P2V: extend to 16-bit translation offsets

MSM's memory is aligned to 2MB, which is more than we can do with our
existing method as we're limited to the upper 8 bits. Extend this by
using two instructions to 16 bits, automatically selected when MSM is
enabled.
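
A C sketch of the split (an ARM ALU immediate is an 8-bit value with an
even rotation, so a single instruction can only cover one byte of the
offset; two instructions together cover bits 31..16):

static unsigned int p2v_imm_hi(unsigned int offset)
{
	return offset & 0xff000000;	/* encodable as 8 bits, ror 8 */
}

static unsigned int p2v_imm_lo(unsigned int offset)
{
	return offset & 0x00ff0000;	/* encodable as 8 bits, ror 16 */
}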

Acked-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# dc21af99 04-Jan-2011 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: P2V: introduce phys_to_virt/virt_to_phys runtime patching

This idea came from Nicolas; Eric Miao produced an initial version,
which was then rewritten into this.

Patch the physical to virtual translations at runtime. As we modify
the code, this makes it incompatible with XIP kernels, but allows us
to achieve this with minimal loss of performance.

As many translations are of the form:

physical = virtual + (PHYS_OFFSET - PAGE_OFFSET)
virtual = physical - (PHYS_OFFSET - PAGE_OFFSET)

we generate an 'add' instruction for __virt_to_phys(), and a 'sub'
instruction for __phys_to_virt(). We calculate at run time (PHYS_OFFSET
- PAGE_OFFSET) by comparing the address prior to MMU initialization with
where it should be once the MMU has been initialized, and place this
constant into the above add/sub instructions.

Once we have (PHYS_OFFSET - PAGE_OFFSET), we can calculate the real
PHYS_OFFSET as PAGE_OFFSET is a build-time constant, and save this for
the C-mode PHYS_OFFSET variable definition to use.
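
In C terms the patched code behaves like this (a sketch: the kernel
patches the immediate of real add/sub instructions rather than loading
a variable):

static unsigned long p2v_delta;	/* PHYS_OFFSET - PAGE_OFFSET, found at boot */

static inline unsigned long sketch_virt_to_phys(unsigned long v)
{
	return v + p2v_delta;		/* emitted as a patched 'add' */
}

static inline unsigned long sketch_phys_to_virt(unsigned long p)
{
	return p - p2v_delta;		/* emitted as a patched 'sub' */
}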

At present, we are unable to support Realview with Sparsemem enabled
as this uses a complex mapping function, and MSM as this requires a
constant which will not fit in our math instruction.

Add a module version magic string for this feature to prevent
incompatible modules being loaded.

Tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 72a20e22 04-Jan-2011 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: P2V: eliminate head.S use of PHYS_OFFSET for !XIP_KERNEL

head.S makes use of PHYS_OFFSET. When it becomes a variable, the
assembler won't understand this. Compute PHYS_OFFSET by the following
method instead. This code is linked at its virtual address, but runs
before the MMU is enabled, so at its physical address.

1:	.long	.
	.long	PAGE_OFFSET

	adr	r0, 1b			@ r0 = physical address of '.'
	ldmia	r0, {r1, r2}		@ r1 = virtual '.', r2 = PAGE_OFFSET
	sub	r1, r0, r1		@ r1 = physical - virtual
	add	r2, r2, r1		@ r2 = PAGE_OFFSET + (physical - virtual)
					@    = PHYS_OFFSET

Switch XIP users of PHYS_OFFSET to use PLAT_PHYS_OFFSET - we can't
use this method for XIP kernels as the code doesn't execute in RAM.

Tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 6fc31d54 12-Jan-2011 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: Defer lookup of machine_type to setup.c

Since the debug macros no longer depend on the machine type information,
the machine type lookup can be deferred to setup_arch() in setup.c which
simplifies the code somewhat.

We also move the __error_a functionality into setup.c, which displays a
message via the LL debug code when a bad machine ID is passed to the
kernel. We also log this into the kernel ring buffer, which makes it
possible to retrieve the message via a debugger.

Original idea from Grant Likely.

Acked-by: Grant Likely <grant.likely@secretlab.ca>
Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 4a9cb360 10-Feb-2011 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: fixup SMP alternatives in modules

With certain configurations, we inline the unlock functions in modules,
which results in SMP alternatives being created in modules. We need to
fix those up when loading a module to prevent undefined instruction
faults.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# e98ff0f5 30-Jan-2011 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: smp_on_up: allow non-ARM SMP processors

Allow non-ARM SMP processors to use the SMP_ON_UP feature. CPUs
supporting SMP must have the new CPU ID format, so check for this first.
Then check for ARM11MPCore, which fails the MPIDR check. Lastly, check
that the MPIDR reports multiprocessing extensions and that the CPU is
part of a multiprocessing system.
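
Roughly, in C (a sketch with inline assembly; the real check lives in
head.S and also special-cases ARM11MPCore as described above):

static int cpu_is_part_of_mp_system(void)
{
	unsigned int mpidr;

	asm("mrc p15, 0, %0, c0, c0, 5" : "=r" (mpidr));	/* MPIDR */
	/* bit 31: new register format / MP extensions; bit 30: uniprocessor */
	return (mpidr & (1u << 31)) && !(mpidr & (1u << 30));
}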

Cc: <stable@kernel.org>
Reported-and-Tested-by: Stephen Boyd <sboyd@codeaurora.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# ed3768a8 01-Dec-2010 Dave Martin <dave.martin@linaro.org>

ARM: 6516/1: Allow SMP_ON_UP to work with Thumb-2 kernels.

* __fixup_smp_on_up has been modified with support for the
THUMB2_KERNEL case. For THUMB2_KERNEL only, fixups are split
into halfwords in case of misalignment, since we can't rely on
unaligned accesses working before turning the MMU on (see the
sketch after this list).

No attempt is made to optimise the aligned case, since the
number of fixups is typically small, and it seems best to keep
the code as simple as possible.

* Add a rotate in the fixup_smp code in order to support
CPU_BIG_ENDIAN, as suggested by Nicolas Pitre.

* Add an assembly-time sanity-check to ALT_UP() to ensure that
the content really is the right size (4 bytes).

(No check is done for ALT_SMP(). Possibly, this could be fixed
by splitting the two uses of ALT_SMP() (ALT_SMP...SMP_UP versus
ALT_SMP...SMP_UP_B) into two macros. In the first case,
ALT_SMP needs to expand to >= 4 bytes, not == 4.)

* smp_mpidr.h (which implements ALT_SMP()/ALT_UP() manually due
to macro limitations) has not been modified: the affected
instruction (mov) has no 16-bit encoding, so the correct
instruction size is satisfied in this case.

* A "mode" parameter has been added to smp_dmb:

smp_dmb arm @ assumes 4-byte instructions (for ARM code, e.g. kuser)
smp_dmb @ uses W() to ensure 4-byte instructions for ALT_SMP()

This avoids assembly failures due to use of W() inside smp_dmb,
when assembling pure-ARM code in the vectors page.

There might be a better way to achieve this.

* Kconfig: make SMP_ON_UP depend on
(!THUMB2_KERNEL || !BIG_ENDIAN) i.e., THUMB2_KERNEL is now
supported, but only if !BIG_ENDIAN (The fixup code for Thumb-2
currently assumes little-endian order.)
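
A C sketch of the halfword-split store (the real fixup loop is
assembly, and it currently assumes little-endian order, hence the
Kconfig restriction above):

#include <stdint.h>

static void patch_thumb2_insn(uint16_t *loc, uint32_t insn)
{
	loc[0] = insn >> 16;		/* first halfword: bits [31:16] */
	loc[1] = insn & 0xffff;		/* second halfword: bits [15:0] */
}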

Tested using a single generic realview kernel on:
ARM RealView PB-A8 (CONFIG_THUMB2_KERNEL={n,y})
ARM RealView PBX-A9 (SMP)

Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 0eb0511d 21-Nov-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: SMP: use more sane register allocation for __fixup_smp_on_up

Use r0,r3-r6 rather than r0,r3,r4,r6,r7, which makes it easier to
understand which registers can be modified. Also document which
registers hold values which must be preserved.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# a75e5248 29-Nov-2010 Dave Martin <dave.martin@linaro.org>

ARM: 6504/1: Thumb-2: Fix long-distance conditional branches in head.S for Thumb-2.

The 32-bit conditional branches in Thumb-2 have a shorter range
(+/-512K) than their ARM counterparts (+/-32MB). The linker does
not currently generate trampolines to extend the range of these
Thumb-2 conditional branches, resulting in link errors when vmlinux
is sufficiently large, e.g.:

head.o:(.text+0x464): relocation truncated to fit: R_ARM_THM_JUMP19

This patch forces the longer-range, unconditional branch encoding
by use of an explicit IT instruction. The resulting branches are
triggered on the same conditions as before.

Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 4f79a5dd 29-Nov-2010 Dave Martin <dave.martin@linaro.org>

ARM: 6500/1: Thumb-2: Correct data alignment for CONFIG_THUMB2_KERNEL in kernel/head.S

Directives such as .long and .word do not magically cause the
assembler location counter to become aligned in gas. As a result,
using these directives in code sections can result in misaligned
data words when building a Thumb-2 kernel (CONFIG_THUMB2_KERNEL).

This is a Bad Thing, since the ABI permits the compiler to assume
that fundamental types of word size or above are word-aligned when
accessing them from C. If the data is not really word-aligned,
this can cause impaired performance and stray alignment faults in
some circumstances.

In general, the following rules should be applied when using data
word declaration directives inside code sections:

* .quad and .double:
	.align 3

* .long, .word, .single, .float:
	.align (or .align 2)

* .short:
	No explicit alignment required, since Thumb-2
	instructions are always 2 or 4 bytes in size, so a
	2-byte datum is already correctly aligned
	immediately after an instruction.

Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# c293393f 06-Jul-2010 Jeremy Kerr <jeremy.kerr@canonical.com>

arm: use addruart macro to establish debug mappings

Since we can get both physical and virtual addresses from the addruart
macro, we can use this to establish the debug mappings.

In the case of CONFIG_DEBUG_ICEDCC, we don't need any mappings, but
may still need to set up r7 correctly.

Incorporating ASM changes from Nicolas Pitre <npitre@fluxnic.net>.

Signed-off-by: Jeremy Kerr <jeremy.kerr@canonical.com>
Tested-by: Kevin Hilman <khilman@deeprootsystems.com>


# 865a4fae 04-Oct-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: add register documentation for __enable_mmu

Add some additional documentation on register usage in __enable_mmu
to help complete the overall picture.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 00945010 04-Oct-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: hotplug cpu: move secondary_startup, __enable_mmu to cpuinit

Move these two functions, both of which are required for secondary
CPU booting, into the cpuinit section. Ensure bad processors call
__error_p for better diagnostics, rather than just __error.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 786f1b73 04-Oct-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: hotplug cpu: ensure that __enable_mmu is identity mapped

__enable_mmu is required to be executed in an identity mapped region
to ensure that variances in CPUs do not cause a crash. We currently
achieve this by assuming that it will be co-located with
__create_page_tables. With hotplug CPU support, this assumption
becomes invalid. Implement a better solution which ensures that
it will be appropriately mapped no matter where it is placed.
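
The idea, as a C sketch (1MB first-level section entries; 'prot' stands
in for the boot-time flags):

#include <stdint.h>

/* identity-map [start, end) so the code enabling the MMU keeps
 * executing at the same addresses once translation is on */
static void identity_map(uint32_t *pgd, uint32_t start, uint32_t end,
			 uint32_t prot)
{
	uint32_t addr;

	for (addr = start & 0xfff00000; addr < end; addr += 0x00100000)
		pgd[addr >> 20] = addr | prot | 0x2;	/* section entry */
}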

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# a4ae4134 04-Oct-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: cleanup boot cpu calling __mmap_switched

This allows us to relocate __mmap_switched and associated data away
from the head section.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# f00ec48f 04-Sep-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: Allow SMP kernels to boot on UP systems

UP systems do not implement all the instructions that SMP systems have,
so in order to boot an SMP kernel on a UP system, we need to rewrite
parts of the kernel.

Do this using an 'alternatives' scheme, where the kernel code and data
is modified prior to initialization to replace the SMP instructions,
thereby rendering the problematic code ineffectual. We use the linker
to generate a list of 32-bit word locations and their replacement
values, and run through these replacements when we detect a UP system.
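
Conceptually (a C sketch; the table and symbol names here are made up,
the real list is emitted via the linker as described):

struct smp_alt {			/* hypothetical layout */
	unsigned int *loc;		/* SMP instruction to rewrite */
	unsigned int up_insn;		/* UP replacement value */
};

extern struct smp_alt __smpalt_begin[], __smpalt_end[];

static void fixup_smp_on_up(void)
{
	struct smp_alt *a;

	for (a = __smpalt_begin; a < __smpalt_end; a++)
		*a->loc = a->up_insn;	/* rewrite before init runs */
}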

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 2abc1c50 02-Oct-2009 Tim Abbott <tabbott@ksplice.com>

ARM: convert to use __HEAD and HEAD_TEXT macros.

This has the consequence of changing the section name used for head
code from ".text.head" to ".head.text". Since this commit changes all
users in the architecture, this change should be harmless.

The .text.head output section is eliminated and the head text code is
included at the start of the .init output section.

Signed-off-by: Tim Abbott <tabbott@ksplice.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# b86040a5 23-Jul-2009 Catalin Marinas <catalin.marinas@arm.com>

Thumb-2: Implementation of the unified start-up and exceptions code

This patch implements the ARM/Thumb-2 unified kernel start-up and
exception handling code.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>


# 93ed3970 28-Aug-2008 Catalin Marinas <catalin.marinas@arm.com>

[ARM] 5227/1: Add the ENDPROC declarations to the .S files

This declaration specifies the "function" type and size for various
assembly functions, mainly needed for generating the correct branch
instructions in Thumb-2.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 9c4c9f38 04-Mar-2008 Greg Ungerer <gerg@snapgear.com>

[ARM] 4849/1: move ATAGS asm definitions

Move the definitions of ATAG_CORE and ATAG_CORE_SIZE in head.S to
head-common.S. There is no use of these in head.S itself, but they
are used in head-common.S. When building for the !CONFIG_MMU case
these were not defined when compiling head-nommu.S (which includes
head-common.S).

Signed-off-by: Greg Ungerer <gerg@uclinux.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 9d20fdd5 31-May-2007 Bill Gatliff <bgat@billgatliff.com>

[ARM] 4423/1: add ATAGS support

Examine the ATAGS pointer (r2) at boot, and interpret a nonzero value
as a reference to an ATAGS structure. A suitable ATAGS structure
replaces the kernel's command line.
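
The list framing being interpreted, sketched in C (standard ATAG
layout: each tag starts with a word count and a tag id, and the list is
terminated by ATAG_NONE):

#include <stddef.h>
#include <stdint.h>

#define ATAG_NONE	0x00000000
#define ATAG_CMDLINE	0x54410009

struct tag_header {
	uint32_t size;			/* words, including this header */
	uint32_t tag;
};

static const char *find_cmdline(const struct tag_header *t)
{
	for (; t->tag != ATAG_NONE;
	     t = (const void *)((const uint32_t *)t + t->size))
		if (t->tag == ATAG_CMDLINE)
			return (const char *)(t + 1);
	return NULL;
}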

Signed-off-by: Bill Gatliff <bgat@billgatliff.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 08fdffd4 08-May-2007 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] Ensure head text is always placed at the start of kernel

Commit 86c0baf123e474b6eb404798926ecf62b426bf3a highlighted that we
may end up with the head text placed elsewhere in the kernel image.
Introduce a new .text.head section to contain the initial kernel
startup code, and always place this section at the beginning of the
kernel image.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 40435792 21-Feb-2007 Nicolas Pitre <nico@cam.org>

[ARM] 4227/1: minor head.S fixups

Let's surround constructs like:

orr r3, r3, #(KERNEL_RAM_PADDR & 0x00f00000)

with .if/.endif, since (KERNEL_RAM_PADDR & 0x00f00000) is 0 in 99% of
all cases.

Also, let's mask PHYS_OFFSET with 0x00f00000 instead of 0x00e00000.
Section mappings are really 1MB, not 2MB, and the 2MB grouping is
a higher-level issue already much better enforced with

#if (PHYS_OFFSET & 0x001fffff)
#error "PHYS_OFFSET must be at an even 2MiB boundary!"
#endif

at the top of the file.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# ec3622d9 21-Feb-2007 Nicolas Pitre <nico@cam.org>

[ARM] 4226/1: initial .data and .bss mappings of XIP kernel should be
TEXT_OFFSET aware

Since TEXT_OFFSET is meant to determine the RAM location for kernel use,
it should affect the initial .data and .bss mapping in the XIP case.
Otherwise an XIP kernel would crash if TEXT_OFFSET gets somewhat larger
than 2MB.

Corresponding code is also moved up a bit, to be near the similar .text
mapping code, making the whole a bit more straightforward to understand.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# e98ff7f6 22-Feb-2007 Nicolas Pitre <nico@cam.org>

[ARM] 4224/2: allow XIP kernel to boot again

Since commit 2552fc27ff79b10b9678d92bcaef21df38bb7bb6 XIP kernels failed
to boot because (_end - PAGE_OFFSET - 1) is much smaller than the size
of the kernel text and data in the XIP case, causing the kernel not to
be entirely mapped.

Even in the non-XIP case, the use of (_end - PAGE_OFFSET - 1) is wrong
because it produces too large a value if TEXT_OFFSET is larger than 1MB.

Finally the original code was performing one loop too many.

Let's break the loop when the section pointer has passed the last byte
of the kernel instead.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# d4e1c889 21-Jan-2007 Linus Walleij <triad@df.lth.se>

[ARM] 4102/1: Allow for PHYS_OFFSET on any valid 2MiB address

This patch allows the offset to the first page of
physical memory to be on any 2MB boundary, whereas
the previous code could only handle a physical
offset on any 16MB boundary (0xNN000000) or any 1MB
boundary below 0x01000000 (e.g. 0x00N00000). The
problem is a consequence of the one-byte orr
immediate syntax, so we fix this and can place the
first bank of memory at 0x28e00000. I have also
included an explicit check that disallows compilation
when PHYS_OFFSET is not on a 2MiB boundary. head.S is
the proper place for this, since it is the first file
that attempts to use PHYS_OFFSET at compile time.

Signed-off-by: Linus Walleij <triad@df.lth.se>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# f06b97ff 11-Dec-2006 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] Clean up KERNEL_RAM_ADDR

Clean up the KERNEL_RAM_ADDR stuff in arch/arm/kernel/head.S to
make it clearer what's referring to what. In doing so, remove
the usage of __virt_to_phys(), which is not guaranteed to be
something that the assembler can parse.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# ee90dabc 09-Nov-2006 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] Include asm/elf.h instead of asm/procinfo.h

These files want to provide/access ELF hwcap information, so should
be including asm/elf.h rather than asm/procinfo.h

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 2552fc27 29-Sep-2006 Lennert Buytenhek <buytenh@org.rmk.(none)>

[ARM] 3809/3: get rid of 4 megabyte kernel image size limit

We currently have a hardcoded 4 megabyte uncompressed kernel image
size limit, which is easily exceeded by, for example, enabling some of
the various kernel debugging options.

When setting up the initial page tables (which is where this 4M limit
is hardcoded), it's actually relatively easy to find out the true size
of the uncompressed kernel image and create enough page table entries
for things to fit, so this patch makes it so.

In the decompressor, we also need to know the size of the uncompressed
kernel image, to figure out whether there is any chance that uncompressing
the kernel might overwrite the compressed kernel image stored elsewhere
in memory. We don't have that info at this boot stage, though, so we
approximate the size of the uncompressed kernel by taking the compressed
kernel image size and allowing for a maximum 4x expansion.
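
Both calculations are simple, sketched in C (1MB sections for the page
tables, and the worst-case 4x guess used by the decompressor):

#define SECTION_SIZE	0x00100000UL	/* 1MB per first-level entry */

static unsigned long sections_needed(unsigned long image_bytes)
{
	return (image_bytes + SECTION_SIZE - 1) / SECTION_SIZE;
}

static unsigned long estimated_uncompressed(unsigned long zimage_bytes)
{
	return zimage_bytes * 4;	/* allow for maximum 4x expansion */
}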

Signed-off-by: Lennert Buytenhek <buytenh@wantstofly.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 90af774a 18-Aug-2006 Catalin Marinas <catalin.marinas@arm.com>

[ARM] 3757/1: Use PROCINFO_INITFUNC in head.S

Patch from Catalin Marinas

This is instead of a magic number.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 34d92626 26-Jul-2006 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] Fix SMP booting

Processor support files now use r6 in their CPU setup code, so
we can't rely on r6 being preserved. Use r7 instead.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 6ab3d562 30-Jun-2006 Jörn Engel <joern@wohnheim.fh-wedel.de>

Remove obsolete #include <linux/config.h>

Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
Signed-off-by: Adrian Bunk <bunk@stusta.de>


# 8799ee9f 29-Jun-2006 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] Set bit 4 on section mappings correctly depending on CPU

On some CPUs, bit 4 of section mappings means "update the
cache when written to". On others, this bit is required to
be one, and on yet others it is required to be zero. Finally,
on ARMv6 and above, setting it turns on "no execute" and
prevents speculative prefetches.

With all these combinations, no one value fits all CPUs, so we
have to pick a value depending on the CPU type, and the area
we're mapping.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 801194e3 24-Jun-2006 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] Remove MODE_(SVC|IRQ|FIQ|USR) and DEFAULT_FIQ

DEFAULT_FIQ was entirely unused. MODE_* are just redefinitions
of *_MODE. Use *_MODE instead.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 2eb9d315 05-May-2006 Uwe Zeisberger <Uwe_Zeisberger@digi.com>

[ARM] 3496/1: more constants for asm-offsets.h

Patch from Uwe Zeisberger

added the following constants:
- MACHINFO_TYPE
- MACHINFO_NAME
- MACHINFO_PHYSIO
- MACHINFO_PGOFFIO
- PROCINFO_INITFUNC
- PROCINFO_MMUFLAGS

and removed their definition from head.S and head-nommu.S

Signed-off-by: Uwe Zeisberger <Uwe_Zeisberger@digi.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 75d90832 27-Mar-2006 Hyok S. Choi <hyok.choi@samsung.com>

[ARM] nommu: start-up code

This patch adds the nommu version of the start-up code, head-nommu.S.
The common part of the start-up code is moved to head-common.S.

Signed-off-by: Hyok S. Choi <hyok.choi@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# a61ea932 20-Mar-2006 Nicolas Pitre <nico@cam.org>

[ARM] 3261/2: remove phys_ram from struct machine_desc (part 3)

Patch from Nicolas Pitre

This field is redundant since it must be equal to PHYS_OFFSET anyway.

There is no reference to it anymore so remove it at last.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 0f44ba1d 24-Feb-2006 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] Move read of processor ID out of lookup_processor_type()

Read the processor ID at boot, and save it in "processor_id" as we
did before. Later, when we re-parse the CPU type in the setup.c code,
re-use the value stored in "processor_id".

This allows a cleaner work-around for noMMU devices without CP#15.
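
The read itself, as a C sketch (the Main ID Register is CP15 c0, c0, 0;
CPUs without CP#15 take the separate work-around path mentioned above):

static unsigned int read_processor_id(void)
{
	unsigned int midr;

	asm("mrc p15, 0, %0, c0, c0, 0" : "=r" (midr));
	return midr;
}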

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 2df96b34 13-Jan-2006 Nicolas Pitre <nico@cam.org>

[ARM] 3259/1: remove phys_ram from struct machine_desc (part 1)

Patch from Nicolas Pitre

This field is redundant since it must be equal to PHYS_OFFSET anyway.

First, let's use PHYS_OFFSET directly instead.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 9d4f13e5 03-Jan-2006 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] Make kernel link address depend on PAGE_OFFSET

We are coding the kernel link address into the makefiles, which is
invisibly dependent on PAGE_OFFSET. If PAGE_OFFSET is changed, the
makefiles also need to be changed.

Make adjustments such that the makefiles encode just the offset from
PAGE_OFFSET for the kernel link address, and use PAGE_OFFSET in the
linker scripts directly.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 3c0bdac3 25-Nov-2005 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] Remove mach-types.h from head.S

We don't really need to check whether the machine type is Netwinder
or CATS before setting up the PCI IO mapping for debugging. This
allows us to eliminate asm/mach-types.h from head.S

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 37d07b72 29-Oct-2005 Nicolas Pitre <nico@cam.org>

[ARM] 3061/1: cleanup the XIP link address mess

Patch from Nicolas Pitre

Since vmlinux.lds.S is preprocessed, we can use the defines already
present in asm/memory.h (allowed by patch #3060) for the XIP kernel link
address instead of relying on a duplicated Makefile hardcoded value, and
also get rid of its dependency on awk to handle it at the same time.

While at it let's clean XIP stuff even further and make things clearer
in head.S with a nice code reduction.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# f09b9979 29-Oct-2005 Nicolas Pitre <nico@cam.org>

[ARM] 3060/1: allow constants found in asm/memory.h to be used in asm code

Patch from Nicolas Pitre

This patch allows for assorted type of cleanups by letting assembly code
use the same set of defines for constant values and avoid duplicated
definitions that might not always be in sync, or that might simply be
confusing due to the different names for the same thing.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# e6ae744d 09-Sep-2005 Sam Ravnborg <sam@mars.(none)>

kbuild: arm - use generic asm-offsets.h support

Delete obsoleted stuff from arch Makefile and rename
constants.h to asm-offsets.h

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>


# c77b0427 01-Jul-2005 Russell King <rmk@dyn-67.arm.linux.org.uk>

[PATCH] ARM: Make the magic values in head.S more obvious

Make the magic address values in head.S more obvious as to where
they came from. Wrap all debug code in CONFIG_DEBUG_LL.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# e65f38ed 18-Jun-2005 Russell King <rmk@dyn-67.arm.linux.org.uk>

[PATCH] ARM SMP: Add support for startup of secondary processors

Create a temporary page table to startup secondary processors. This
page table must have a 1:1 virtual/physical mapping for the kernel
in addition to the standard mappings to ensure that the secondary
CPU can enable its MMU safely.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 4f7a1812 05-May-2005 Russell King <rmk@dyn-67.arm.linux.org.uk>

[PATCH] ARM: Fix kernel stack offset calculations

Various places in the ARM kernel implicitly assumed that kernel
stacks are always 8K due to hard coded constants. Replace these
constants with definitions.

Correct the allowable range of kernel stack pointer values within
the allocation. Arrange for the entire kernel stack to be zeroed,
not just the upper 4K if CONFIG_DEBUG_STACK_USAGE is set.
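
For reference, definitions of the kind that replace the hard-coded
constants (a sketch with the era's values, not the verbatim header):

#define PAGE_SIZE		4096UL
#define THREAD_SIZE_ORDER	1
#define THREAD_SIZE		(PAGE_SIZE << THREAD_SIZE_ORDER)	/* 8K */
#define THREAD_START_SP		(THREAD_SIZE - 8)	/* initial sp offset */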

Signed-off-by: Russell King <rmk@arm.linux.org.uk>


# 1da177e4 16-Apr-2005 Linus Torvalds <torvalds@ppc970.osdl.org>

Linux-2.6.12-rc2

Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.

Let it rip!