History log of /linux-master/arch/arm/kernel/head-common.S
Revision Date Author Comments
# 9c46929e 24-Nov-2021 Ard Biesheuvel <ardb@kernel.org>

ARM: implement THREAD_INFO_IN_TASK for uniprocessor systems

On UP systems, only a single task can be 'current' at the same time,
which means we can use a global variable to track it. This means we can
also enable THREAD_INFO_IN_TASK for those systems, as in that case,
thread_info is accessed via current rather than the other way around,
removing the need to store thread_info at the base of the task stack.
This, in turn, permits us to enable IRQ stacks and vmap'ed stacks on UP
systems as well.

To partially mitigate the performance overhead of this arrangement, use
an ADD/ADD/LDR sequence with the appropriate PC-relative group
relocations to load the value of current when needed. This means that
accessing current will still only require a single load as before,
avoiding the need for a literal to carry the address of the global
variable in each function. However, accessing thread_info will now
require this load as well.
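
As a rough illustration of the arrangement described above, here is a
minimal C sketch (not the kernel's exact asm/current.h; the '__current'
variable name is illustrative): on UP there is a single global task
pointer, and with THREAD_INFO_IN_TASK the thread_info sits at the start
of task_struct, so it is reached through current rather than the other
way around.

  /* Hedged sketch only; the real kernel code differs in detail. */
  struct task_struct;
  struct thread_info;

  extern struct task_struct *__current;   /* illustrative name */

  static inline struct task_struct *get_current(void)
  {
          /* a single ordinary load; the assembly version achieves the
           * same single load via the ADD/ADD/LDR group-relocation
           * sequence mentioned above */
          return __current;
  }
  #define current get_current()

  /* with THREAD_INFO_IN_TASK, thread_info lives at the start of task_struct */
  #define current_thread_info()   ((struct thread_info *)current)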

Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Marc Zyngier <maz@kernel.org>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M


# 50596b75 18-Sep-2021 Ard Biesheuvel <ardb@kernel.org>

ARM: smp: Store current pointer in TPIDRURO register if available

Now that the user space TLS register is assigned on every return to user
space, we can use it to keep the 'current' pointer while running in the
kernel. This removes the need to access it via thread_info, which is
located at the base of the stack, but will be moved out of there in a
subsequent patch.

Use the __builtin_thread_pointer() helper when available - this will
help GCC understand that reloading the value within the same function is
not necessary, even when using the per-task stack protector (which also
generates accesses via the TLS register). For example, the generated
code below loads TPIDRURO only once, and uses it to access both the
stack canary and the preempt_count fields.

<do_one_initcall>:
e92d 41f0 stmdb sp!, {r4, r5, r6, r7, r8, lr}
ee1d 4f70 mrc 15, 0, r4, cr13, cr0, {3}
4606 mov r6, r0
b094 sub sp, #80 ; 0x50
f8d4 34e8 ldr.w r3, [r4, #1256] ; 0x4e8 <- stack canary
9313 str r3, [sp, #76] ; 0x4c
f8d4 8004 ldr.w r8, [r4, #4] <- preempt count
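
As a hedged sketch of how the helper ties in (not the exact asm/current.h
code), get_current() can read the task pointer straight from the TLS
register through the builtin, which is what lets GCC reuse the value as
in the listing above:

  struct task_struct;

  static inline struct task_struct *get_current(void)
  {
          /* reads the thread pointer (TPIDRURO when built with -mtp=cp15);
           * because GCC understands the builtin, it can keep the value in
           * a register instead of reloading it for every access */
          return (struct task_struct *)__builtin_thread_pointer();
  }
  #define current get_current()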

Co-developed-by: Keith Packard <keithpac@amazon.com>
Signed-off-by: Keith Packard <keithpac@amazon.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>


# 62c4a2e2 14-Sep-2020 Ard Biesheuvel <ardb@kernel.org>

ARM: head-common.S: use PC-relative insn sequence for __proc_info

Replace the open coded PC relative offset calculations with a pair of
adr_l invocations. This removes some open coded arithmetic involving
virtual addresses, avoids literal pools on v7+, and slightly reduces
the footprint of the code.

Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>


# 5615f69b 25-Oct-2020 Linus Walleij <linus.walleij@linaro.org>

ARM: 9016/2: Initialize the mapping of KASan shadow memory

This patch initializes the KASan shadow region's page table and memory.
There are two stages to KASan initialization:

1. At the early boot stage the whole shadow region is mapped to just
one physical page (kasan_zero_page). This is done by the function
kasan_early_init, which is called from __mmap_switched
(arch/arm/kernel/head-common.S).

2. After paging_init has been called, we use kasan_zero_page as the
zero shadow for memory that KASan does not need to track, and we
allocate a new shadow space for the memory that KASan does need to
track. This is done by the function kasan_init, which is called by
setup_arch.

When using KASan we also need to increase the THREAD_SIZE_ORDER
from 1 to 2 as the extra calls for shadow memory uses quite a bit
of stack.

As we need to make a temporary copy of the PGD when setting up
shadow memory we create a helpful PGD_SIZE definition for both
LPAE and non-LPAE setups.

The KASan core code unconditionally calls pud_populate() so this
needs to be changed from BUG() to do {} while (0) when building
with KASan enabled.
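
A minimal sketch of that change (the real definition lives in the arch
page-table headers and may be guarded differently):

  /*
   * Hedged sketch; before the change the definition was effectively
   * "#define pud_populate(mm, pud, pmd) BUG()", which the KASan shadow
   * setup would trip over.
   */
  #define pud_populate(mm, pud, pmd)   do { } while (0)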

After the initial development by Andrey Ryabinin, several modifications
have been made to this code:

Abbott Liu <liuwenliang@huawei.com>
- Add ARM LPAE support: if LPAE is enabled, the KASan shadow region's
mapping table needs to be copied in the pgd_alloc() function.
- Change kasan_pte_populate, kasan_pmd_populate, kasan_pud_populate and
kasan_pgd_populate from the .meminit.text section to the .init.text section.
Reported by Florian Fainelli <f.fainelli@gmail.com>

Linus Walleij <linus.walleij@linaro.org>:
- Drop the custom manipulation of TTBR0 and just use
cpu_switch_mm() to switch the pgd table.
- Adapt to handle 4th level page table folding.
- Rewrite the entire page directory and page entry initialization
sequence to be recursive, based on arm64's kasan_init.c.

Ard Biesheuvel <ardb@kernel.org>:
- Necessary underlying fixes.
- Crucial bug fixes to the memory set-up code.

Co-developed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Co-developed-by: Abbott Liu <liuwenliang@huawei.com>
Co-developed-by: Ard Biesheuvel <ardb@kernel.org>

Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: kasan-dev@googlegroups.com
Cc: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G
Tested-by: Florian Fainelli <f.fainelli@gmail.com> # Brahma SoCs
Tested-by: Ahmad Fatoum <a.fatoum@pengutronix.de> # i.MX6Q
Reported-by: Russell King - ARM Linux <rmk+kernel@armlinux.org.uk>
Reported-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# d6d51a96 25-Oct-2020 Linus Walleij <linus.walleij@linaro.org>

ARM: 9014/2: Replace string mem* functions for KASan

Functions like memset()/memmove()/memcpy() do a lot of memory
accesses.

If a bad pointer is passed to one of these functions it is important
to catch this. Compiler instrumentation cannot do this since these
functions are written in assembly.

KASan replaces these memory functions with instrumented variants.

The original functions are declared as weak symbols so that
the strong definitions in mm/kasan/kasan.c can replace them.

The original functions have aliases with a '__' prefix in their
name, so we can call the non-instrumented variant if needed.

We must use __memcpy()/__memset() in place of memcpy()/memset()
when we copy .data to RAM and when we clear .bss, because
kasan_early_init cannot be called before the initialization of
.data and .bss.

The kernel decompression code and the EFI libstub have their own
custom string libraries, and they need a special quirk: even though
they are built without KASan enabled, they include the global headers,
which means that e.g. memcpy() will be defined to __memcpy() and we
get link failures. Since these implementations are written in C rather
than assembly, we use e.g. __alias(memcpy) to redirect any users back
to the local implementation.
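
The pattern can be sketched in C roughly as follows (illustrative only;
the kernel's real string routines are written in assembly and the exact
spelling of the attributes differs):

  #include <stddef.h>

  /* the uninstrumented implementation keeps its '__'-prefixed name ... */
  void *__memcpy(void *dest, const void *src, size_t n)
  {
          char *d = dest;
          const char *s = src;

          while (n--)
                  *d++ = *s++;
          return dest;
  }

  /*
   * ... and the exported memcpy is only a weak alias for it, so the
   * strong, instrumented definition provided by KASan replaces it at
   * link time. Code that must not be instrumented (early boot, .data
   * copy, .bss clearing) calls the __-prefixed variant directly.
   */
  void *memcpy(void *dest, const void *src, size_t n)
          __attribute__((weak, alias("__memcpy")));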

Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: kasan-dev@googlegroups.com
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G
Tested-by: Florian Fainelli <f.fainelli@gmail.com> # Brahma SoCs
Tested-by: Ahmad Fatoum <a.fatoum@pengutronix.de> # i.MX6Q
Reported-by: Russell King - ARM Linux <rmk+kernel@armlinux.org.uk>
Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# 4c0742f6 10-Oct-2019 Vladimir Murzin <vladimir.murzin@arm.com>

ARM: 8914/1: NOMMU: Fix exc_ret for XIP

It was reported that 72cd4064fcca "NOMMU: Toggle only bits in
EXC_RETURN we are really care of" breaks the NOMMU+XIP combination.
It happens because the saved EXC_RETURN value gets overwritten when
the data section is relocated.

The fix is to propagate EXC_RETURN via a register and let the
relocation code commit that value to memory.

Fixes: 72cd4064fcca ("ARM: 8830/1: NOMMU: Toggle only bits in EXC_RETURN we are really care of")
Reported-by: afzal mohammed <afzal.mohd.ma@gmail.com>
Tested-by: afzal mohammed <afzal.mohd.ma@gmail.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# d2912cb1 04-Jun-2019 Thomas Gleixner <tglx@linutronix.de>

treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500

Based on 2 normalized pattern(s):

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation #

extracted by the scancode license scanner the SPDX license identifier

GPL-2.0-only

has been chosen to replace the boilerplate/reference in 4122 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Enrico Weigelt <info@metux.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>


# 899a42f8 19-Jul-2018 Russell King <rmk+kernel@armlinux.org.uk>

ARM: make lookup_processor_type() non-__init

Move lookup_processor_type() out of the __init section so it is callable
from (eg) the secondary startup code during hotplug.

Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# ff5fdafc 19-Jan-2018 Nicolas Pitre <nico@fluxnic.net>

ARM: 8745/1: get rid of __memzero()

The __memzero assembly code is almost identical to memset's except for
two orr instructions. The runtime performance of __memzero(p, n) and
memset(p, 0, n) is accordingly almost identical.

However, the memset() macro, used to guard against a zero length and to
call __memzero at compile time when the fill value is a constant zero,
interferes with compiler optimizations.

Arnd found that the test against a zero length brings up some new
warnings with gcc v8:

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82103

And successively removing the test against a zero length and the
call-to-__memzero optimization produces the following kernel sizes for
defconfig with gcc 6:

text data bss dec hex filename
12248142 6278960 413588 18940690 1210312 vmlinux.orig
12244474 6278960 413588 18937022 120f4be vmlinux.no_zero_test
12239160 6278960 413588 18931708 120dffc vmlinux.no_memzero

So it is probably not worth keeping __memzero around given that the
compiler can do a better job at inlining trivial memset(p,0,n) on its
own. And the memset code already handles a zero length just fine.
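
For instance, with the guard macro gone, a trivial constant-zero clear
like the hedged example below is something the compiler can expand
inline on its own, with no __memzero (or even memset) call emitted:

  #include <string.h>

  struct buf { unsigned int words[16]; };

  /* illustrative only: GCC will typically inline this fixed-size,
   * constant-zero memset() rather than emit a library call */
  void clear_buf(struct buf *b)
  {
          memset(b, 0, sizeof(*b));
  }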

Suggested-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# 59b6359d 03-Oct-2017 Geert Uytterhoeven <geert@linux-m68k.org>

ARM: 8702/1: head-common.S: Clear lr before jumping to start_kernel()

If CONFIG_DEBUG_LOCK_ALLOC=y, the kernel log is spammed with a few
hundred identical messages:

unwind: Unknown symbol address c0800300
unwind: Index not found c0800300

c0800300 is the return address from the last subroutine call (to
__memzero()) in __mmap_switched(). Apparently having this address in
the link register confuses the unwinder.

To fix this, reset the link register to zero before jumping to
start_kernel().

Fixes: 9520b1a1b5f7a348 ("ARM: head-common.S: speed up startup code")
Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# ca8b5d97 24-Aug-2017 Nicolas Pitre <nico@fluxnic.net>

ARM: XIP kernel: store .data compressed in ROM

The .data segment stored in ROM is only copied to RAM once at boot time
and never referenced afterwards. This is arguably a suboptimal usage of
ROM resources.

This patch allows for compressing the .data segment before storing it
into ROM and decompressing it to RAM rather than simply copying it,
saving on precious ROM space.

Because global data is not available yet (obviously) we must allocate
decompressor workspace memory on the stack. The .bss area is used as a
stack area for that purpose before it is cleared. The required stack
frame is 9568 bytes for __inflate_kernel_data() alone, so make sure
the .bss is large enough to cope with that plus extra room for called
functions or fail the build.

Those numbers were picked arbitrarily based on the above 9568 byte
stack frame:

10240 (2.5 * PAGE_SIZE): used to override -Wframe-larger-than whose
default value is 1024.
12288 (3 * PAGE_SIZE): minimum .bss size to contain the stack.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Chris Brandt <Chris.Brandt@renesas.com>


# 9520b1a1 24-Aug-2017 Nicolas Pitre <nico@fluxnic.net>

ARM: head-common.S: speed up startup code

Let's use optimized routines such as memcpy to copy .data and memzero
to clear .bss in the startup code instead of doing it one word at a
time. Those routines don't use any global data so they're safe to use
even if .data and .bss segments are not initialized.

In the .data copy case a temporary stack is installed in the .bss area
as the actual kernel stack is located within the copied data area. The
XIP kernel linker script ensures an 8-byte alignment for that purpose.

Finally, surround the .data copy and related pointers with
CONFIG_XIP_KERNEL guards to make it obvious what this is all about. This will
allow for further cleanups in the non-XIP linker script.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Chris Brandt <Chris.Brandt@renesas.com>


# 6ebbf2ce 30-Jun-2014 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: convert all "mov.* pc, reg" to "bx reg" for ARMv6+

ARMv6 and greater introduced a new instruction ("bx") which can be used
to return from function calls. Recent CPUs perform better when the
"bx lr" instruction is used rather than the "mov pc, lr" instruction,
and this sequence is strongly recommended to be used by the ARM
architecture manual (section A.4.1.1).

We provide a new macro "ret" with all its variants for the condition
code which will resolve to the appropriate instruction.

Rather than doing this piecemeal, and miss some instances, change all
the "mov pc" instances to use the new macro, with the exception of
the "movs" instruction and the kprobes code. This allows us to detect
the "mov pc, lr" case and fix it up - and also gives us the possibility
of deploying this for other registers depending on the CPU selection.

Reported-by: Will Deacon <will.deacon@arm.com>
Tested-by: Stephen Warren <swarren@nvidia.com> # Tegra Jetson TK1
Tested-by: Robert Jarzmik <robert.jarzmik@free.fr> # mioa701_bootresume.S
Tested-by: Andrew Lunn <andrew@lunn.ch> # Kirkwood
Tested-by: Shawn Guo <shawn.guo@freescale.com>
Tested-by: Tony Lindgren <tony@atomide.com> # OMAPs
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com> # Armada XP, 375, 385
Acked-by: Sekhar Nori <nsekhar@ti.com> # DaVinci
Acked-by: Christoffer Dall <christoffer.dall@linaro.org> # kvm/hyp
Acked-by: Haojian Zhuang <haojian.zhuang@gmail.com> # PXA3xx
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> # Xen
Tested-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> # ARMv7M
Tested-by: Simon Horman <horms+renesas@verge.net.au> # Shmobile
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 0aeb3408 13-Apr-2014 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: remove global cr_no_alignment

cr_no_alignment is really only used by the alignment code. Since we no
longer change the setting of cr_alignment after boot, we can localise
this to alignment.c

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# b3634575 18-Feb-2014 Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

ARM: 7980/1: kernel: improve error message when LPAE config doesn't match CPU

Currently, when the kernel is configured with LPAE support, but the
CPU doesn't support it, the error message is fairly cryptic:

Error: unrecognized/unsupported processor variant (0x561f5811).

This message is normally shown when there is an issue comparing
the processor ID (CP15 0, c0, c0) with the values/masks described in
proc-v7.S. However, the same message is displayed when LPAE support is
enabled in the kernel configuration, but not available in the CPU,
after looking at ID_MMFR0 (CP15 0, c0, c1, 4). Having the same error
message is highly misleading.

This commit improves this by showing a different error message when
this situation occurs:

Error: Kernel with LPAE support, but CPU does not support LPAE.

Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 8bd26e3a 17-Jun-2013 Paul Gortmaker <paul.gortmaker@windriver.com>

arm: delete __cpuinit/__CPUINIT usage from all ARM users

The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. For example, the fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.

After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.

Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
and are flagged as __cpuinit -- so if we remove the __cpuinit from
the arch specific callers, we will also get section mismatch warnings.
As an intermediate step, we intend to turn the linux/init.h cpuinit
related content into no-ops as early as possible, since that will get
rid of these warnings. In any case, they are temporary and harmless.

This removes all the ARM uses of the __cpuinit macros from C code,
and all __CPUINIT from assembly code. There were also two ".previous"
section statements paired off against __CPUINIT
(aka .section ".cpuinit.text"); those get removed here as well.

[1] https://lkml.org/lkml/2013/5/20/589

Cc: Russell King <linux@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>


# 8c69d7af 05-Jul-2013 Stephen Warren <swarren@nvidia.com>

ARM: 7780/1: add missing linker section markup to head-common.S

Macro __INIT is used to place various code in head-common.S into the init
section. This should be matched by a closing __FINIT. Also, add an
explicit ".text" to ensure subsequent code is placed into the correct
section; __FINIT is simply a closing marker to match __INIT and doesn't
guarantee to revert to .text.

This historically caused no problem, because macro __CPUINIT was used at
the exact location where __FINIT was missing, which then placed following
code into the cpuinit section. However, with commit 22f0a2736 "init.h:
remove __cpuinit sections from the kernel" applied, __CPUINIT becomes a
no-op, thus leaving all this code in the init section, rather than the
regular text section. This caused issues such as secondary CPU boot
failures or crashes.

Signed-off-by: Stephen Warren <swarren@nvidia.com>
Acked-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# b849a60e 16-Jan-2012 Uwe Kleine-König <u.kleine-koenig@pengutronix.de>

ARM: make cr_alignment read-only #ifndef CONFIG_CPU_CP15

This makes cr_alignment a constant 0 to break code that tries to modify
the value, as such code is likely built on wrong assumptions when
CONFIG_CPU_CP15 isn't defined. For code that only reads the value,
0 is more or less a fine value to report.

Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Message-Id: 1358413196-5609-2-git-send-email-u.kleine-koenig@pengutronix.de (v8)


# 4c2896e8 28-Apr-2011 Grant Likely <grant.likely@secretlab.ca>

arm/dt: Make __vet_atags also accept a dtb image

The dtb is passed to the kernel via register r2, which is the same
method that is used to pass an atags pointer. This patch modifies
__vet_atags to not clear r2 when it encounters a dtb image.

v2: fixed bugs pointed out by Nicolas Pitre

Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>


# 6fc31d54 12-Jan-2011 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: Defer lookup of machine_type to setup.c

Since the debug macros no longer depend on the machine type information,
the machine type lookup can be deferred to setup_arch() in setup.c which
simplifies the code somewhat.

We also move the __error_a functionality into setup.c for displaying a
message when a bad machine ID is passed to the kernel via the LL debug
code. We also log this into the kernel ring buffer which makes it
possible to retrieve the message via a debugger.

Original idea from Grant Likely.

Acked-by: Grant Likely <grant.likely@secretlab.ca>
Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# cb4d3eae 15-Jan-2011 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: fix missing branch in __error_a

When DEBUG_LL is not set, we don't want __error_a re-entering
__lookup_machine_type - we want it to go to the error function. This
used to be the case before we reorganized the layout for hotplug cpu,
as we used to fall through to __error. With the changed layout, we
need an explicit branch here instead.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 80924ac5 04-Oct-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: cleanup lookup_machine_type data and ensure these are placed in __HEAD

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# c083c660 04-Oct-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: hotplug cpu: move __error and __error_p to cpuinit section

__error and __error_p may be used by secondary CPUs, so these
need to be in the cpuinit section.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 17bb5e2c 04-Oct-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: move __mmap_switched, C-API functions to init section

Move these functions, which are only ever used during boot CPU
initialization, to the init section.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# a4ae4134 04-Oct-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: cleanup boot cpu calling __mmap_switched

This allows us to relocate __mmap_switched and associated data away
from the head section.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 5085f3ff 01-Oct-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: hotplug cpu: Keep processor information, startup code & __lookup_processor_type

When hotplug CPU is enabled, we need to keep the list of supported CPUs,
their setup functions, and __lookup_processor_type in place so that we
can find and initialize secondary CPUs. Move these into the __CPUINIT
section.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 842eab40 01-Oct-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: vmlinux.lds: Refer to start of .data using _sdata rather than _data

Use _sdata as the start of the data section, rather than _data.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 158bc5af 06-Nov-2009 Nicolas Pitre <nico@fluxnic.net>

ARM: 5784/1: fix early boot machine ID mismatch error display

That code was refactored a long time ago, but one particular label
didn't get adjusted properly which broke the listing of supported
machines.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 31abdb74 01-Oct-2009 David Brown <davidb@codeaurora.org>

ARM: 5739/1: ARM: allow empty ATAG_CORE

From: David Brown <davidb@quicinc.com>

The ATAG_CORE is allowed to be empty. Although this is handled
by parse_tag_core(), __vet_atags during startup rejects this tag
unless it contains data. Allow the initial tag to be either the
full size, or empty.

Signed-off-by: David Brown <davidb@quicinc.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# b86040a5 23-Jul-2009 Catalin Marinas <catalin.marinas@arm.com>

Thumb-2: Implementation of the unified start-up and exceptions code

This patch implements the ARM/Thumb-2 unified kernel start-up and
exception handling code.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>


# 88987ef9 23-Jul-2009 Catalin Marinas <catalin.marinas@arm.com>

Thumb-2: Add some .align statements to the .S files

Since the Thumb-2 instructions can be 16-bit wide, data in the .text
sections may not be aligned to a 32-bit word and this leads to unaligned
exceptions. This patch does not affect the ARM code generation.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>


# 37efe642 01-Dec-2008 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] use asm/sections.h

Update to use the asm/sections.h header rather than declaring these
symbols ourselves. Change __data_start to _data to conform with the
naming found within asm/sections.h.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 93ed3970 28-Aug-2008 Catalin Marinas <catalin.marinas@arm.com>

[ARM] 5227/1: Add the ENDPROC declarations to the .S files

This declaration specifies the "function" type and size for various
assembly functions, mainly needed for generating the correct branch
instructions in Thumb-2.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 4baa9922 02-Aug-2008 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] move include/asm-arm to arch/arm/include/asm

Move platform independent header files to arch/arm/include/asm, leaving
those in asm/arch* and asm/plat* alone.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 84081bd2 28-Mar-2008 Lennert Buytenhek <buytenh@wantstofly.org>

[ARM] 4881/1: print unrecognised processor ID as part of failure message

If we fail to boot due to an unsupported processor ID, print the
processor ID as part of the failure message.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Acked-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 9c4c9f38 04-Mar-2008 Greg Ungerer <gerg@snapgear.com>

[ARM] 4849/1: move ATAGS asm definitions

Move the definitions of ATAG_CORE and ATAG_CORE_SIZE in head.S to
head-common.S. There is no use of these in head.S itself, but they
are used in head-common.S. When building for the !CONFIG_MMU case
these were not defined when compiling head-nommu.S (which includes
head-common.S).

Signed-off-by: Greg Ungerer <gerg@uclinux.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 9d20fdd5 31-May-2007 Bill Gatliff <bgat@billgatliff.com>

[ARM] 4423/1: add ATAGS support

Examines the ATAGS pointer (r2) at boot, and interprets
a nonzero value as a reference to an ATAGS structure. A
suitable ATAGS structure replaces the kernel's command line.

Signed-off-by: Bill Gatliff <bgat@billgatliff.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 75d90832 27-Mar-2006 Hyok S. Choi <hyok.choi@samsung.com>

[ARM] nommu: start-up code

This patch adds the nommu version of the start-up code, head-nommu.S.
The common part of the start-up code is moved to head-common.S.

Signed-off-by: Hyok S. Choi <hyok.choi@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>