History log of /linux-master/arch/arm/kernel/process.c
Revision Date Author Comments
# 6ee1e677 19-Mar-2023 Ard Biesheuvel <ardb@kernel.org>

ARM: kernel: Get rid of thread_info::used_cp[] array

We keep track of which coprocessor triggered a fault in the used_cp[]
array in thread_info, but this data is never used anywhere. So let's
remove it.

Linus did some digging and found out that the last user of this field
was removed in commit bb1a773d5b6b ("kill unused dump_fpu() instances").

Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>


# 1c71222e 26-Jan-2023 Suren Baghdasaryan <surenb@google.com>

mm: replace vma->vm_flags direct modifications with modifier calls

Replace direct modifications to vma->vm_flags with calls to modifier
functions to be able to track flag changes and to keep vma locking
correctness.
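
For illustration, a minimal sketch of the intended transformation, assuming the
vm_flags_set()/vm_flags_clear() helpers introduced by this series (the flags and
the call site are hypothetical):

	/* before: open-coded bit operations on vma->vm_flags */
	vma->vm_flags |= VM_READ | VM_EXEC;
	vma->vm_flags &= ~VM_MAYWRITE;

	/* after: modifier helpers, so flag changes can be tracked and
	 * VMA locking rules enforced in one place */
	vm_flags_set(vma, VM_READ | VM_EXEC);
	vm_flags_clear(vma, VM_MAYWRITE);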

[akpm@linux-foundation.org: fix drivers/misc/open-dice.c, per Hyeonggon Yoo]
Link: https://lkml.kernel.org/r/20230126193752.297968-5-surenb@google.com
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>
Acked-by: Sebastian Reichel <sebastian.reichel@collabora.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arjun Roy <arjunroy@google.com>
Cc: Axel Rasmussen <axelrasmussen@google.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: David Rientjes <rientjes@google.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jann Horn <jannh@google.com>
Cc: Joel Fernandes <joelaf@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Kent Overstreet <kent.overstreet@linux.dev>
Cc: Laurent Dufour <ldufour@linux.ibm.com>
Cc: Lorenzo Stoakes <lstoakes@gmail.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Minchan Kim <minchan@google.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Oskolkov <posk@google.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Punit Agrawal <punit.agrawal@bytedance.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Soheil Hassas Yeganeh <soheil@google.com>
Cc: Song Liu <songliubraving@fb.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>


# 89b30987 12-Jan-2023 Peter Zijlstra <peterz@infradead.org>

arch/idle: Change arch_cpu_idle() behavior: always exit with IRQs disabled

Currently, arch_cpu_idle() is called with IRQs disabled, but it returns
with IRQs enabled.

However, the very first thing the generic code does after calling
arch_cpu_idle() is raw_local_irq_disable(). This means that
architectures that can idle with IRQs disabled end up doing a
pointless 'enable-disable' dance.

Therefore, push this IRQ disabling into the idle function, meaning
that those architectures can avoid the pointless IRQ state flipping.
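
As a minimal sketch (assuming an ARM-style idle routine; not verbatim from this
file), the change amounts to dropping the trailing IRQ enable so the function
returns with IRQs still disabled:

	void arch_cpu_idle(void)
	{
		if (arm_pm_idle)
			arm_pm_idle();
		else
			cpu_do_idle();
		/* previously ended with raw_local_irq_enable();
		 * now the generic idle loop keeps IRQs disabled */
	}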

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com>
Acked-by: Mark Rutland <mark.rutland@arm.com> [arm64]
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Guo Ren <guoren@kernel.org>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195540.618076436@infradead.org


# 8032bf12 09-Oct-2022 Jason A. Donenfeld <Jason@zx2c4.com>

treewide: use get_random_u32_below() instead of deprecated function

This is a simple mechanical transformation done by:

@@
expression E;
@@
- prandom_u32_max
+ get_random_u32_below
(E)
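
As a hedged usage sketch, the replacement helper returns a value uniformly
distributed in [0, ceil):

	/* e.g. a random byte offset within a page */
	unsigned long off = get_random_u32_below(PAGE_SIZE);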

Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Acked-by: Darrick J. Wong <djwong@kernel.org> # for xfs
Reviewed-by: SeongJae Park <sj@kernel.org> # for damon
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> # for infiniband
Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> # for arm
Acked-by: Ulf Hansson <ulf.hansson@linaro.org> # for mmc
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>


# 81895a65 05-Oct-2022 Jason A. Donenfeld <Jason@zx2c4.com>

treewide: use prandom_u32_max() when possible, part 1

Rather than incurring a division or requesting too many random bytes for
the given range, use the prandom_u32_max() function, which only takes
the minimum required bytes from the RNG and avoids divisions. This was
done mechanically with this coccinelle script:

@basic@
expression E;
type T;
identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32";
typedef u64;
@@
(
- ((T)get_random_u32() % (E))
+ prandom_u32_max(E)
|
- ((T)get_random_u32() & ((E) - 1))
+ prandom_u32_max(E * XXX_MAKE_SURE_E_IS_POW2)
|
- ((u64)(E) * get_random_u32() >> 32)
+ prandom_u32_max(E)
|
- ((T)get_random_u32() & ~PAGE_MASK)
+ prandom_u32_max(PAGE_SIZE)
)

@multi_line@
identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32";
identifier RAND;
expression E;
@@

- RAND = get_random_u32();
... when != RAND
- RAND %= (E);
+ RAND = prandom_u32_max(E);

// Find a potential literal
@literal_mask@
expression LITERAL;
type T;
identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32";
position p;
@@

((T)get_random_u32()@p & (LITERAL))

// Add one to the literal.
@script:python add_one@
literal << literal_mask.LITERAL;
RESULT;
@@

value = None
if literal.startswith('0x'):
value = int(literal, 16)
elif literal[0] in '123456789':
value = int(literal, 10)
if value is None:
print("I don't know how to handle %s" % (literal))
cocci.include_match(False)
elif value == 2**32 - 1 or value == 2**31 - 1 or value == 2**24 - 1 or value == 2**16 - 1 or value == 2**8 - 1:
print("Skipping 0x%x for cleanup elsewhere" % (value))
cocci.include_match(False)
elif value & (value + 1) != 0:
print("Skipping 0x%x because it's not a power of two minus one" % (value))
cocci.include_match(False)
elif literal.startswith('0x'):
coccinelle.RESULT = cocci.make_expr("0x%x" % (value + 1))
else:
coccinelle.RESULT = cocci.make_expr("%d" % (value + 1))

// Replace the literal mask with the calculated result.
@plus_one@
expression literal_mask.LITERAL;
position literal_mask.p;
expression add_one.RESULT;
identifier FUNC;
@@

- (FUNC()@p & (LITERAL))
+ prandom_u32_max(RESULT)

@collapse_ret@
type T;
identifier VAR;
expression E;
@@

{
- T VAR;
- VAR = (E);
- return VAR;
+ return E;
}

@drop_var@
type T;
identifier VAR;
@@

{
- T VAR;
... when != VAR
}

Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Yury Norov <yury.norov@gmail.com>
Reviewed-by: KP Singh <kpsingh@kernel.org>
Reviewed-by: Jan Kara <jack@suse.cz> # for ext4 and sbitmap
Reviewed-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com> # for drbd
Acked-by: Jakub Kicinski <kuba@kernel.org>
Acked-by: Heiko Carstens <hca@linux.ibm.com> # for s390
Acked-by: Ulf Hansson <ulf.hansson@linaro.org> # for mmc
Acked-by: Darrick J. Wong <djwong@kernel.org> # for xfs
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>


# 2be9880d 18-Aug-2022 Kefeng Wang <wangkefeng.wang@huawei.com>

kernel: exit: cleanup release_thread()

Only x86 has its own release_thread(); introduce a new weak release_thread()
function to clean up the empty definitions in the other ARCHs.
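
A minimal sketch of the new fallback, assuming it lives in kernel/exit.c as a
weak symbol that x86 continues to override:

	void __weak release_thread(struct task_struct *dead_task)
	{
	}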

Link: https://lkml.kernel.org/r/20220819014406.32266-1-wangkefeng.wang@huawei.com
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Acked-by: Guo Ren <guoren@kernel.org> [csky]
Acked-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Brian Cain <bcain@quicinc.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au> [powerpc]
Acked-by: Stafford Horne <shorne@gmail.com> [openrisc]
Acked-by: Catalin Marinas <catalin.marinas@arm.com> [arm64]
Acked-by: Huacai Chen <chenhuacai@kernel.org> [LoongArch]
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Anton Ivanov <anton.ivanov@cambridgegreys.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Chris Zankel <chris@zankel.net>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Dinh Nguyen <dinguyen@kernel.org>
Cc: Guo Ren <guoren@kernel.org> [csky]
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Vineet Gupta <vgupta@kernel.org>
Cc: Will Deacon <will@kernel.org>
Cc: Xuerui Wang <kernel@xen0n.name>
Cc: Yoshinori Sato <ysato@users.osdn.me>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>


# 09cffeca 03-Aug-2022 Zhen Lei <thunder.leizhen@huawei.com>

ARM: 9224/1: Dump the stack traces based on the parameter 'regs' of show_regs()

The show_regs() function is usually called from an interrupt or exception
handler: it prints the registers specified by the parameter 'regs' and then
dumps the stack traces. Although not explicitly documented, dumping the stack
traces based on 'regs' seems to make the most sense, since 'regs' was saved at
the entry of the current interrupt or exception. dump_stack() does eventually
print the desired content, but as the following example shows: 1) the backtrace
of the interrupt or exception handler itself is unexpected and causes
confusion, and 2) some information is printed twice: the line with the kernel
version "CPU: 0 PID: 70 Comm: test0 Not tainted 5.19.0+ #8", and the registers
saved in the "Exception stack", which is what 'regs' actually points to.

For example:
rcu: INFO: rcu_sched self-detected stall on CPU
rcu: 0-....: (499 ticks this GP) idle=379/1/0x40000002 softirq=91/91 fqs=249
(t=500 jiffies g=-911 q=13 ncpus=4)
CPU: 0 PID: 70 Comm: test0 Not tainted 5.19.0+ #8
Hardware name: ARM-Versatile Express
PC is at ktime_get+0x4c/0xe8
LR is at ktime_get+0x4c/0xe8
pc : 8019a474 lr : 8019a474 psr: 60000013
sp : cabd1f28 ip : 00000001 fp : 00000005
r10: 527bf1b8 r9 : 431bde82 r8 : d7b634db
r7 : 0000156e r6 : 61f234f8 r5 : 00000001 r4 : 80ca86c0
r3 : ffffffff r2 : fe5bce0b r1 : 00000000 r0 : 01a431f4
Flags: nZCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment none
Control: 10c5387d Table: 6121406a DAC: 00000051
CPU: 0 PID: 70 Comm: test0 Not tainted 5.19.0+ #8 <-----------start----------
Hardware name: ARM-Versatile Express |
unwind_backtrace from show_stack+0x10/0x14 |
show_stack from dump_stack_lvl+0x40/0x4c |
dump_stack_lvl from rcu_dump_cpu_stacks+0x10c/0x134 |
rcu_dump_cpu_stacks from rcu_sched_clock_irq+0x780/0xaf4 |
rcu_sched_clock_irq from update_process_times+0x54/0x74 |
update_process_times from tick_periodic+0x3c/0xd4 |
tick_periodic from tick_handle_periodic+0x20/0x80 worthless
tick_handle_periodic from twd_handler+0x30/0x40 or
twd_handler from handle_percpu_devid_irq+0x8c/0x1c8 duplicated
handle_percpu_devid_irq from generic_handle_domain_irq+0x24/0x34 |
generic_handle_domain_irq from gic_handle_irq+0x74/0x88 |
gic_handle_irq from generic_handle_arch_irq+0x34/0x44 |
generic_handle_arch_irq from call_with_stack+0x18/0x20 |
call_with_stack from __irq_svc+0x98/0xb0 |
Exception stack(0xcabd1ed8 to 0xcabd1f20) |
1ec0: 01a431f4 00000000 |
1ee0: fe5bce0b ffffffff 80ca86c0 00000001 61f234f8 0000156e d7b634db 431bde82 |
1f00: 527bf1b8 00000005 00000001 cabd1f28 8019a474 8019a474 60000013 ffffffff |
__irq_svc from ktime_get+0x4c/0xe8 <---------end--------------
ktime_get from test_task+0x44/0x110
test_task from kthread+0xd8/0xf4
kthread from ret_from_fork+0x14/0x2c
Exception stack(0xcabd1fb0 to 0xcabd1ff8)
1fa0: 00000000 00000000 00000000 00000000
1fc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
1fe0: 00000000 00000000 00000000 00000000 00000013 00000000

After replacing dump_stack() with dump_backtrace():
rcu: INFO: rcu_sched self-detected stall on CPU
rcu: 0-....: (500 ticks this GP) idle=8f7/1/0x40000002 softirq=129/129 fqs=241
(t=500 jiffies g=-915 q=13 ncpus=4)
CPU: 0 PID: 69 Comm: test0 Not tainted 5.19.0+ #9
Hardware name: ARM-Versatile Express
PC is at ktime_get+0x4c/0xe8
LR is at ktime_get+0x4c/0xe8
pc : 8019a494 lr : 8019a494 psr: 60000013
sp : cabddf28 ip : 00000001 fp : 00000002
r10: 0779cb48 r9 : 431bde82 r8 : d7b634db
r7 : 00000a66 r6 : e835ab70 r5 : 00000001 r4 : 80ca86c0
r3 : ffffffff r2 : ff337d39 r1 : 00000000 r0 : 00cc82c6
Flags: nZCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment none
Control: 10c5387d Table: 611d006a DAC: 00000051
ktime_get from test_task+0x44/0x110
test_task from kthread+0xd8/0xf4
kthread from ret_from_fork+0x14/0x2c
Exception stack(0xcabddfb0 to 0xcabddff8)
dfa0: 00000000 00000000 00000000 00000000
dfc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
dfe0: 00000000 00000000 00000000 00000000 00000013 00000000

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>


# 5bd2e97c 12-Apr-2022 Eric W. Biederman <ebiederm@xmission.com>

fork: Generalize PF_IO_WORKER handling

Add fn and fn_arg members into struct kernel_clone_args and test for
them in copy_thread (instead of testing for PF_KTHREAD | PF_IO_WORKER).
This allows any task that wants to be a user space task that only runs
in kernel mode to use this functionality.

The code on x86 is an exception and still retains a PF_KTHREAD test
because x86, unlike everything else, handles kthreads slightly
differently than user space tasks that start with a function.

The functions that created tasks that start with a function
have been updated to set ".fn" and ".fn_arg" instead of
".stack" and ".stack_size". These functions are fork_idle(),
create_io_thread(), kernel_thread(), and user_mode_thread().
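
A hedged sketch of the new test in copy_thread() (illustrative; the details
differ per architecture):

	if (args->fn) {
		/* in-kernel task: arrange to start at args->fn(args->fn_arg) */
	} else {
		/* ordinary fork/clone: copy the parent's user register state */
	}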

Link: https://lkml.kernel.org/r/20220506141512.516114-4-ebiederm@xmission.com
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>


# c5febea0 08-Apr-2022 Eric W. Biederman <ebiederm@xmission.com>

fork: Pass struct kernel_clone_args into copy_thread

With io_uring we have started supporting tasks that are for most
purposes user space tasks that exclusively run code in kernel mode.

The kernel task that exec's init and tasks that exec user mode
helpers are also user mode tasks that just run kernel code
until they call kernel execve.

Pass kernel_clone_args into copy_thread so these oddball
tasks can be supported more cleanly and easily.
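
A hedged before/after of the prototype implied by this change (argument names
are illustrative):

	/* before */
	int copy_thread(unsigned long clone_flags, unsigned long stack_start,
			unsigned long stack_size, struct task_struct *p,
			unsigned long tls);

	/* after */
	int copy_thread(struct task_struct *p, const struct kernel_clone_args *args);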

v2: Fix spelling of kenrel_clone_args on h8300
Link: https://lkml.kernel.org/r/20220506141512.516114-2-ebiederm@xmission.com
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>


# 9c46929e 24-Nov-2021 Ard Biesheuvel <ardb@kernel.org>

ARM: implement THREAD_INFO_IN_TASK for uniprocessor systems

On UP systems, only a single task can be 'current' at the same time,
which means we can use a global variable to track it. This means we can
also enable THREAD_INFO_IN_TASK for those systems, as in that case,
thread_info is accessed via current rather than the other way around,
removing the need to store thread_info at the base of the task stack.
This, in turn, permits us to enable IRQ stacks and vmap'ed stacks on UP
systems as well.

To partially mitigate the performance overhead of this arrangement, use
an ADD/ADD/LDR sequence with the appropriate PC-relative group
relocations to load the value of current when needed. This means that
accessing current will still only require a single load as before,
avoiding the need for a literal to carry the address of the global
variable in each function. However, accessing thread_info will now
require this load as well.

Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Marc Zyngier <maz@kernel.org>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M


# 50596b75 18-Sep-2021 Ard Biesheuvel <ardb@kernel.org>

ARM: smp: Store current pointer in TPIDRURO register if available

Now that the user space TLS register is assigned on every return to user
space, we can use it to keep the 'current' pointer while running in the
kernel. This removes the need to access it via thread_info, which is
located at the base of the stack, but will be moved out of there in a
subsequent patch.

Use the __builtin_thread_pointer() helper when available - this will
help GCC understand that reloading the value within the same function is
not necessary, even when using the per-task stack protector (which also
generates accesses via the TLS register). For example, the generated
code below loads TPIDRURO only once, and uses it to access both the
stack canary and the preempt_count fields.

<do_one_initcall>:
e92d 41f0 stmdb sp!, {r4, r5, r6, r7, r8, lr}
ee1d 4f70 mrc 15, 0, r4, cr13, cr0, {3}
4606 mov r6, r0
b094 sub sp, #80 ; 0x50
f8d4 34e8 ldr.w r3, [r4, #1256] ; 0x4e8 <- stack canary
9313 str r3, [sp, #76] ; 0x4c
f8d4 8004 ldr.w r8, [r4, #4] <- preempt count

Co-developed-by: Keith Packard <keithpac@amazon.com>
Signed-off-by: Keith Packard <keithpac@amazon.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>


# dfbdcda2 18-Sep-2021 Ard Biesheuvel <ardb@kernel.org>

gcc-plugins: arm-ssp: Prepare for THREAD_INFO_IN_TASK support

We will be enabling THREAD_INFO_IN_TASK support for ARM, which means
that we can no longer load the stack canary value by masking the stack
pointer and taking the copy that lives in thread_info. Instead, we will
be able to load it from the task_struct directly, by using the TPIDRURO
register which will hold the current task pointer when
THREAD_INFO_IN_TASK is in effect. This is much more straight-forward,
and allows us to declutter this code a bit while at it.

Note that this means that ARMv6 (non-v6K) SMP systems can no longer use
this feature, but those are quite rare to begin with, so this is a
reasonable trade off.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>


# 42a20f86 29-Sep-2021 Kees Cook <keescook@chromium.org>

sched: Add wrapper for get_wchan() to keep task blocked

Having a stable wchan means the process must be blocked, and it must stay
that way while stack unwinding is performed.
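
A minimal sketch of the wrapper idea, assuming a generic get_wchan() that pins
the task via its pi_lock and defers the unwinding to an arch-provided
__get_wchan():

	unsigned long get_wchan(struct task_struct *p)
	{
		unsigned long ip = 0;

		if (!p || p == current)
			return 0;

		raw_spin_lock_irq(&p->pi_lock);	/* keep the task from being woken */
		if (!task_is_running(p))
			ip = __get_wchan(p);	/* arch-specific stack unwind */
		raw_spin_unlock_irq(&p->pi_lock);

		return ip;
	}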

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> [arm]
Tested-by: Mark Rutland <mark.rutland@arm.com> [arm64]
Link: https://lkml.kernel.org/r/20211008111626.332092234@infradead.org


# 8ac6f5d7 11-Aug-2021 Arnd Bergmann <arnd@arndb.de>

ARM: 9113/1: uaccess: remove set_fs() implementation

There are no remaining callers of set_fs(), so just remove it
along with all associated code that operates on
thread_info->addr_limit.

There are still further optimizations that can be done:

- In get_user(), the address check could be moved entirely
into the out of line code, rather than passing a constant
as an argument,

- I assume the DACR handling can be simplified as we now
only change it during user access when CONFIG_CPU_SW_DOMAIN_PAN
is set, but not during set_fs().

Acked-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>


# 39f75da7 02-Aug-2021 Alexey Dobriyan <adobriyan@gmail.com>

isystem: trim/fixup stdarg.h and other headers

Delete/fixup few includes in anticipation of global -isystem compile
option removal.

Note: crypto/aegis128-neon-inner.c keeps <stddef.h> due to redefinition
of uintptr_t error (one definition comes from <stddef.h>, another from
<linux/types.h>).

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>


# b03fbd4f 11-Jun-2021 Peter Zijlstra <peterz@infradead.org>

sched: Introduce task_is_running()

Replace a bunch of 'p->state == TASK_RUNNING' with a new helper:
task_is_running(p).
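
At this point in the series the helper is essentially a one-liner (sketch;
tasks still used ->state rather than the later ->__state):

	#define task_is_running(task)	(READ_ONCE((task)->state) == TASK_RUNNING)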

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Davidlohr Bueso <dave@stgolabs.net>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210611082838.222401495@infradead.org


# 5aa6b70e 06-May-2021 Maninder Singh <maninder1.s@samsung.com>

arm: print alloc free paths for address in registers

In case of a use-after-free kernel oops, the freeing path of the object
is required to debug further. In most cases the object address is
present in one of the registers.

Thus check the register's address and if it belongs to slab, print its
alloc and free path.

e.g. in the below issue register r6 belongs to slab, and a use after
free issue occurred on one of its dereferenced values:

Unable to handle kernel paging request at virtual address 6b6b6b6f
....
pc : [<c0538afc>] lr : [<c0465674>] psr: 60000013
sp : c8927d40 ip : ffffefff fp : c8aa8020
r10: c8927e10 r9 : 00000001 r8 : 00400cc0
r7 : 00000000 r6 : c8ab0180 r5 : c1804a80 r4 : c8aa8008
r3 : c1a5661c r2 : 00000000 r1 : 6b6b6b6b r0 : c139bf48
.....
Register r6 information: slab kmalloc-64 start c8ab0140 data offset 64 pointer offset 0 size 64 allocated at meminfo_proc_show+0x40/0x4fc
meminfo_proc_show+0x40/0x4fc
seq_read_iter+0x18c/0x4c4
proc_reg_read_iter+0x84/0xac
generic_file_splice_read+0xe8/0x17c
splice_direct_to_actor+0xb8/0x290
do_splice_direct+0xa0/0xe0
do_sendfile+0x2d0/0x438
sys_sendfile64+0x12c/0x140
ret_fast_syscall+0x0/0x58
0xbeeacde4
Free path:
meminfo_proc_show+0x5c/0x4fc
seq_read_iter+0x18c/0x4c4
proc_reg_read_iter+0x84/0xac
generic_file_splice_read+0xe8/0x17c
splice_direct_to_actor+0xb8/0x290
do_splice_direct+0xa0/0xe0
do_sendfile+0x2d0/0x438
sys_sendfile64+0x12c/0x140
ret_fast_syscall+0x0/0x58
0xbeeacde4

Link: https://lkml.kernel.org/r/1615891032-29160-3-git-send-email-maninder1.s@samsung.com
Co-developed-by: Vaneet Narang <v.narang@samsung.com>
Signed-off-by: Vaneet Narang <v.narang@samsung.com>
Signed-off-by: Maninder Singh <maninder1.s@samsung.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Dmitry Safonov <0x7f454c46@gmail.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 4727dc20 17-Feb-2021 Jens Axboe <axboe@kernel.dk>

arch: setup PF_IO_WORKER threads like PF_KTHREAD

PF_IO_WORKER threads are kernel threads too, but they aren't PF_KTHREAD in the
sense that we don't assign ->set_child_tid with our own structure. Just
ensure that every arch sets up the PF_IO_WORKER threads like kthreads
in the arch implementation of copy_thread().
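
A hedged sketch of the per-architecture change in copy_thread() (illustrative):

	/* was: if (unlikely(p->flags & PF_KTHREAD)) */
	if (unlikely(p->flags & (PF_KTHREAD | PF_IO_WORKER))) {
		/* kernel thread or io_uring worker: zero the user register
		 * frame and start at the supplied thread function */
	}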

Signed-off-by: Jens Axboe <axboe@kernel.dk>


# 58c644ba 20-Nov-2020 Peter Zijlstra <peterz@infradead.org>

sched/idle: Fix arch_cpu_idle() vs tracing

We call arch_cpu_idle() with RCU disabled, but then use
local_irq_{en,dis}able(), which invokes tracing, which relies on RCU.

Switch all arch_cpu_idle() implementations to use
raw_local_irq_{en,dis}able() and carefully manage the
lockdep,rcu,tracing state like we do in entry.

(XXX: we really should change arch_cpu_idle() to not return with
interrupts enabled)

Reported-by: Sven Schnelle <svens@linux.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lkml.kernel.org/r/20201120114925.594122626@infradead.org


# 15107230 15-Jun-2020 Al Viro <viro@zeniv.linux.org.uk>

arm: kill dump_task_regs()

the last user had been fdpic

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>


# bb1a773d 22-May-2020 Al Viro <viro@zeniv.linux.org.uk>

kill unused dump_fpu() instances

dump_fpu() is used only on the architectures that support elf
and have neither CORE_DUMP_USE_REGSET nor ELF_CORE_COPY_FPREGS
defined.

Currently that's csky, m68k, microblaze, nds32 and unicore32. The rest
of the instances are dead code.

NB: THIS MUST GO AFTER ELF_FDPIC CONVERSION

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>


# 714acdbd 11-Jun-2020 Christian Brauner <christian.brauner@ubuntu.com>

arch: rename copy_thread_tls() back to copy_thread()

Now that HAVE_COPY_THREAD_TLS has been removed, rename copy_thread_tls()
back to simply copy_thread(). It's a simpler name, and doesn't imply that only
tls is copied here. This finishes an outstanding chunk of internal process
creation work since we've added clone3().

Cc: linux-arch@vger.kernel.org
Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Acked-by: Stafford Horne <shorne@gmail.com>
Acked-by: Greentime Hu <green.hu@gmail.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>


# d8ed45c5 08-Jun-2020 Michel Lespinasse <walken@google.com>

mmap locking API: use coccinelle to convert mmap_sem rwsem call sites

This change converts the existing mmap_sem rwsem calls to use the new mmap
locking API instead.

The change is generated using coccinelle with the following rule:

// spatch --sp-file mmap_lock_api.cocci --in-place --include-headers --dir .

@@
expression mm;
@@
(
-init_rwsem
+mmap_init_lock
|
-down_write
+mmap_write_lock
|
-down_write_killable
+mmap_write_lock_killable
|
-down_write_trylock
+mmap_write_trylock
|
-up_write
+mmap_write_unlock
|
-downgrade_write
+mmap_write_downgrade
|
-down_read
+mmap_read_lock
|
-down_read_killable
+mmap_read_lock_killable
|
-down_read_trylock
+mmap_read_trylock
|
-up_read
+mmap_read_unlock
)
-(&mm->mmap_sem)
+(mm)

Signed-off-by: Michel Lespinasse <walken@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Reviewed-by: Laurent Dufour <ldufour@linux.ibm.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Davidlohr Bueso <dbueso@suse.de>
Cc: David Rientjes <rientjes@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Liam Howlett <Liam.Howlett@oracle.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ying Han <yinghan@google.com>
Link: http://lkml.kernel.org/r/20200520052908.204642-5-walken@google.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 167ee0b8 02-Jan-2020 Amanieu d'Antras <amanieu@gmail.com>

arm: Implement copy_thread_tls

This is required for clone3 which passes the TLS value through a
struct rather than a register.

Signed-off-by: Amanieu d'Antras <amanieu@gmail.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: <stable@vger.kernel.org> # 5.3.x
Link: https://lore.kernel.org/r/20200102172413.654385-4-amanieu@gmail.com
Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>


# 83dc1d99 11-Oct-2019 Ben Dooks (Codethink) <ben.dooks@codethink.co.uk>

ARM: 8920/1: share get_signal_page from signal.c to process.c

The get_signal_page() function is defined in signal.c and used in
process.c but there is no shared definition. Add one in signal.h to
silence the following warning:

arch/arm/kernel/signal.c:683:13: warning: symbol 'get_signal_page' was not declared. Should it be static?

Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# dba79c3d 23-Sep-2019 Alexandre Ghiti <alex@ghiti.fr>

arm: use generic mmap top-down layout and brk randomization

arm uses a top-down mmap layout by default that exactly fits the generic
functions, so get rid of arch specific code and use the generic version by
selecting ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT.

As ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT selects ARCH_HAS_ELF_RANDOMIZE,
use the generic version of arch_randomize_brk since it also fits. Note
that this commit also removes the possibility for arm to have elf
randomization and no MMU: without MMU, the security added by randomization
is worth nothing.

Note that it is safe to remove STACK_RND_MASK since it matches the default
value.

Link: http://lkml.kernel.org/r/20190730055113.23635-9-alex@ghiti.fr
Signed-off-by: Alexandre Ghiti <alex@ghiti.fr>
Acked-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Hogan <jhogan@kernel.org>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# d2912cb1 04-Jun-2019 Thomas Gleixner <tglx@linutronix.de>

treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500

Based on 2 normalized pattern(s):

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation #

extracted by the scancode license scanner the SPDX license identifier

GPL-2.0-only

has been chosen to replace the boilerplate/reference in 4122 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Enrico Weigelt <info@metux.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>


# 736706be 04-Mar-2019 Linus Torvalds <torvalds@linux-foundation.org>

get rid of legacy 'get_ds()' function

Every in-kernel use of this function defined it to KERNEL_DS (either as
an actual define, or as an inline function). It's an entirely
historical artifact, and long long long ago used to actually read the
segment selector value of '%ds' on x86.

Which in the kernel is always KERNEL_DS.

Inspired by a patch from Jann Horn that just did this for a very small
subset of users (the ones in fs/), along with Al who suggested a script.
I then just took it to the logical extreme and removed all the remaining
gunk.

Roughly scripted with

git grep -l '(get_ds())' -- :^tools/ | xargs sed -i 's/(get_ds())/(KERNEL_DS)/'
git grep -lw 'get_ds' -- :^tools/ | xargs sed -i '/^#define get_ds()/d'

plus manual fixups to remove a few unusual usage patterns, the couple of
inline function cases and to fix up a comment that had become stale.

The 'get_ds()' function remains in an x86 kvm selftest, since in user
space it actually does something relevant.

Inspired-by: Jann Horn <jannh@google.com>
Inspired-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 189af465 06-Dec-2018 Ard Biesheuvel <ardb@kernel.org>

ARM: smp: add support for per-task stack canaries

On ARM, we currently only change the value of the stack canary when
switching tasks if the kernel was built for UP. On SMP kernels, this
is impossible since the stack canary value is obtained via a global
symbol reference, which means
a) all running tasks on all CPUs must use the same value
b) we can only modify the value when no kernel stack frames are live
on any CPU, which is effectively never.

So instead, use a GCC plugin to add an RTL pass that replaces each
reference to the address of the __stack_chk_guard symbol with an
expression that produces the address of the 'stack_canary' field
that is added to struct thread_info. This way, each task will use
its own randomized value.

Cc: Russell King <linux@armlinux.org.uk>
Cc: Kees Cook <keescook@chromium.org>
Cc: Emese Revfy <re.emese@gmail.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Laura Abbott <labbott@redhat.com>
Cc: kernel-hardening@lists.openwall.com
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Kees Cook <keescook@chromium.org>


# a670468f 21-Aug-2018 Andrew Morton <akpm@linux-foundation.org>

mm: zero out the vma in vma_init()

Rather than in vm_area_alloc(). To ensure that the various oddball
stack-based vmas are in a good state. Some of the callers were zeroing
them out, others were not.

Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 2c4541e2 26-Jul-2018 Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

mm: use vma_init() to initialize VMAs on stack and data segments

Make sure to initialize all VMAs properly, not only those which come
from vm_area_cachep.

Link: http://lkml.kernel.org/r/20180724121139.62570-3-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 050e9baa 13-Jun-2018 Linus Torvalds <torvalds@linux-foundation.org>

Kbuild: rename CC_STACKPROTECTOR[_STRONG] config variables

The changes to automatically test for working stack protector compiler
support in the Kconfig files removed the special STACKPROTECTOR_AUTO
option that picked the strongest stack protector that the compiler
supported.

That was all a nice cleanup - it makes no sense to have the AUTO case
now that the Kconfig phase can just determine the compiler support
directly.

HOWEVER.

It also meant that doing "make oldconfig" would now _disable_ the strong
stackprotector if you had AUTO enabled, because in a legacy config file,
the sane stack protector configuration would look like

CONFIG_HAVE_CC_STACKPROTECTOR=y
# CONFIG_CC_STACKPROTECTOR_NONE is not set
# CONFIG_CC_STACKPROTECTOR_REGULAR is not set
# CONFIG_CC_STACKPROTECTOR_STRONG is not set
CONFIG_CC_STACKPROTECTOR_AUTO=y

and when you ran this through "make oldconfig" with the Kbuild changes,
it would ask you about the regular CONFIG_CC_STACKPROTECTOR (that had
been renamed from CONFIG_CC_STACKPROTECTOR_REGULAR to just
CONFIG_CC_STACKPROTECTOR), but it would think that the STRONG version
used to be disabled (because it was really enabled by AUTO), and would
disable it in the new config, resulting in:

CONFIG_HAVE_CC_STACKPROTECTOR=y
CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
CONFIG_CC_STACKPROTECTOR=y
# CONFIG_CC_STACKPROTECTOR_STRONG is not set
CONFIG_CC_HAS_SANE_STACKPROTECTOR=y

That's dangerously subtle - people could suddenly find themselves with
the weaker stack protector setup without even realizing.

The solution here is to rename not just the old REGULAR stack
protector option, but also the strong one. This does that by just
removing the CC_ prefix entirely for the user choices, because it really
is not about the compiler support (the compiler support now instead
automatically impacts _visibility_ of the options to users).

This results in "make oldconfig" actually asking the user for their
choice, so that we don't have any silent subtle security model changes.
The end result would generally look like this:

CONFIG_HAVE_CC_STACKPROTECTOR=y
CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
CONFIG_STACKPROTECTOR=y
CONFIG_STACKPROTECTOR_STRONG=y
CONFIG_CC_HAS_SANE_STACKPROTECTOR=y

where the "CC_" versions really are about internal compiler
infrastructure, not the user selections.

Acked-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 3ea70d7d 11-Dec-2017 Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>

arm: do not use print_symbol()

print_symbol() is a very old API that has been obsoleted by the %pS format
specifier in a normal printk() call.

Replace print_symbol() with a direct printk("%pS") call.
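
A hedged before/after sketch (the call site is illustrative, not verbatim from
this file):

	/* old API */
	print_symbol("PC is at %s\n", instruction_pointer(regs));
	/* new: %pS resolves the symbol directly */
	printk("PC is at %pS\n", (void *)instruction_pointer(regs));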

Link: http://lkml.kernel.org/r/20171211125025.2270-2-sergey.senozhatsky@gmail.com
To: Andrew Morton <akpm@linux-foundation.org>
To: Russell King <linux@armlinux.org.uk>
To: Catalin Marinas <catalin.marinas@arm.com>
To: Mark Salter <msalter@redhat.com>
To: Tony Luck <tony.luck@intel.com>
To: David Howells <dhowells@redhat.com>
To: Yoshinori Sato <ysato@users.sourceforge.jp>
To: Guan Xuetao <gxt@mprc.pku.edu.cn>
To: Borislav Petkov <bp@alien8.de>
To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: Thomas Gleixner <tglx@linutronix.de>
To: Peter Zijlstra <peterz@infradead.org>
To: Vineet Gupta <vgupta@synopsys.com>
To: Fengguang Wu <fengguang.wu@intel.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Petr Mladek <pmladek@suse.com>
Cc: LKML <linux-kernel@vger.kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-c6x-dev@linux-c6x.org
Cc: linux-ia64@vger.kernel.org
Cc: linux-am33-list@redhat.com
Cc: linux-sh@vger.kernel.org
Cc: linux-edac@vger.kernel.org
Cc: x86@kernel.org
Cc: linux-snps-arc@lists.infradead.org
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
[pmladek@suse.com: updated commit message, fixed compilation warning]
Signed-off-by: Petr Mladek <pmladek@suse.com>


# 280e87e9 19-Jun-2017 Dmitry Safonov <0x7f454c46@gmail.com>

ARM: 8683/1: ARM32: Support mremap() for sigpage/vDSO

CRIU restores application mappings on the same place where they
were before Checkpoint. That means, that we need to move vDSO
and sigpage during restore on exactly the same place where
they were before C/R.

Make mremap() code update mm->context.{sigpage,vdso} pointers
during VMA move. Sigpage is used for landing after handling
a signal - if the pointer is not updated during moving, the
application might crash on any signal after mremap().

vDSO pointer on ARM32 is used only for setting auxv at this moment,
update it during mremap() in case of future usage.

Without those updates, current work of CRIU on ARM32 is not reliable.
Historically, we fail Checkpointing if we find a vDSO page on ARM32
and suggest that the user disable CONFIG_VDSO.
But that's not correct - this behaviour comes from x86, where signal
processing ends in the vDSO blob. On arm32 it's the sigpage, which is not
disabled by `CONFIG_VDSO=n'.

Looks like C/R was working by luck - because userspace on ARM32 at
this moment always sets SA_RESTORER.

Signed-off-by: Dmitry Safonov <dsafonov@virtuozzo.com>
Acked-by: Andy Lutomirski <luto@amacapital.net>
Cc: linux-arm-kernel@lists.infradead.org
Cc: Will Deacon <will.deacon@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Pavel Emelyanov <xemul@virtuozzo.com>
Cc: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# 801f19b9 04-May-2017 Joe Perches <joe@perches.com>

ARM: 8673/1: Fix __show_regs output timestamps

Multiple line formats are not preferred as the second and
subsequent lines may not have timestamps.

Lacking timestamps makes reading the output a bit difficult.
This also makes arm/arm64 output more similar.

Previous:

[ 1514.093231] pc : [<bf79c304>] lr : [<bf79ced8>] psr: a00f0013
sp : ecdd7e20 ip : 00000000 fp : ffffffff

New:

[ 1514.093231] pc : [<bf79c304>] lr : [<bf79ced8>] psr: a00f0013
[ 1514.105316] sp : ecdd7e20 ip : 00000000 fp : ffffffff

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# 68db0cf1 08-Feb-2017 Ingo Molnar <mingo@kernel.org>

sched/headers: Prepare for new header dependencies before moving code to <linux/sched/task_stack.h>

We are going to split <linux/sched/task_stack.h> out of <linux/sched.h>, which
will have to be picked up from other headers and a couple of .c files.

Create a trivial placeholder <linux/sched/task_stack.h> file that just
maps to <linux/sched.h> to make this patch obviously correct and
bisectable.

Include the new header in the files that are going to need it.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>


# 29930025 08-Feb-2017 Ingo Molnar <mingo@kernel.org>

sched/headers: Prepare for new header dependencies before moving code to <linux/sched/task.h>

We are going to split <linux/sched/task.h> out of <linux/sched.h>, which
will have to be picked up from other headers and a couple of .c files.

Create a trivial placeholder <linux/sched/task.h> file that just
maps to <linux/sched.h> to make this patch obviously correct and
bisectable.

Include the new header in the files that are going to need it.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>


# b17b0153 08-Feb-2017 Ingo Molnar <mingo@kernel.org>

sched/headers: Prepare for new header dependencies before moving code to <linux/sched/debug.h>

We are going to split <linux/sched/debug.h> out of <linux/sched.h>, which
will have to be picked up from other headers and a couple of .c files.

Create a trivial placeholder <linux/sched/debug.h> file that just
maps to <linux/sched.h> to make this patch obviously correct and
bisectable.

Include the new header in the files that are going to need it.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>


# c984cbf2 11-Oct-2016 Jason Cooper <jason@lakedaemon.net>

ARM: use simpler API for random address requests

Currently, all callers to randomize_range() set the length to 0 and
calculate end by adding a constant to the start address. We can simplify
the API to remove a bunch of needless checks and variables.

Use the new randomize_addr(start, range) call to set the requested
address.

Link: http://lkml.kernel.org/r/20160803233913.32511-4-jason@lakedaemon.net
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
Acked-by: Kees Cook <keescook@chromium.org>
Cc: "Russell King - ARM Linux" <linux@arm.linux.org.uk>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# e6978e4b 13-May-2016 Russell King <rmk+kernel@armlinux.org.uk>

ARM: save and reset the address limit when entering an exception

When we enter an exception, the current address limit should not apply
to the exception context: if the exception context wishes to access
kernel space via the user accessors (eg, perf code), it must explicitly
request such access.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# 5fa9da50 13-May-2016 Russell King <rmk+kernel@armlinux.org.uk>

ARM: get rid of horrible *(unsigned int *)(regs + 1)

Get rid of the horrible "*(unsigned int *)(regs + 1)" to get at the
parent context domain access register value, instead using the newly
introduced svc_pt_regs structure.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# 69048176 23-May-2016 Michal Hocko <mhocko@suse.com>

vdso: make arch_setup_additional_pages wait for mmap_sem for write killable

Most architectures rely on mmap_sem for write in their
arch_setup_additional_pages. If the waiting task gets killed by the OOM
killer, it would block the oom_reaper from asynchronous address space
reclaim and reduce the chances of timely OOM resolving. Wait for the lock
in killable mode and return with EINTR if the task got killed while
waiting.

Signed-off-by: Michal Hocko <mhocko@suse.com>
Acked-by: Andy Lutomirski <luto@amacapital.net> [x86 vdso]
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# e6464694 20-May-2016 Jiri Slaby <jirislaby@kernel.org>

exit_thread: accept a task parameter to be exited

We need to call exit_thread from copy_process in a fail path. So make it
accept task_struct as a parameter.

[v2]
* s390: exit_thread_runtime_instr doesn't make sense to be called for
non-current tasks.
* arm: fix the comment in vfp_thread_copy
* change 'me' to 'tsk' for task_struct
* now we can change only archs that actually have exit_thread

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chen Liqin <liqin.linux@gmail.com>
Cc: Chris Metcalf <cmetcalf@mellanox.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: David Howells <dhowells@redhat.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Jesper Nilsson <jesper.nilsson@axis.com>
Cc: Jiri Slaby <jslaby@suse.cz>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
Cc: Lennox Wu <lennox.wu@gmail.com>
Cc: Ley Foon Tan <lftan@altera.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Mikael Starvik <starvik@axis.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Rich Felker <dalias@libc.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: Richard Weinberger <richard@nod.at>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Steven Miao <realmz6@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 77f1b959 03-Dec-2015 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: report proper DACR value in oops dumps

When printing the DACR value, we print the domain register value.
This is incorrect, as with SW_PAN enabled, that is the current setting,
rather than the faulting context's setting. Arrange to print the
faulting domain's saved DACR value instead.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# af4cb25d 09-Sep-2015 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: uaccess: fix undefined instruction on ARMv7M/noMMU

The use of get_domain() in copy_thread() results in an oops on
ARMv7M/noMMU systems. The thread cpu_domain value is only used when
CONFIG_CPU_USE_DOMAINS is enabled, so there's no need to save the
value in copy_thread() except when this is enabled, and this option
will never be enabled on these platforms.

Unhandled exception: IPSR = 00000006 LR = fffffff1
CPU: 0 PID: 0 Comm: swapper Not tainted 4.2.0-next-20150909-00001-gb8ec5ad #41
Hardware name: NXP LPC18xx/43xx (Device Tree)
task: 2823fbe0 ti: 2823c000 task.ti: 2823c000
PC is at copy_thread+0x18/0x92
LR is at copy_thread+0x19/0x92
pc : [<2800a46e>] lr : [<2800a46f>] psr: 4100000b
sp : 2823df00 ip : 00000000 fp : 287c81c0
r10: 00000000 r9 : 00800300 r8 : 287c8000
r7 : 287c8000 r6 : 2818908d r5 : 00000000 r4 : 287ca000
r3 : 00000000 r2 : 00000000 r1 : fffffff0 r0 : 287ca048
xPSR: 4100000b

Reported-by: Ariel D'Alessandro <ariel@vanguardiasur.com.ar>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# a5e090ac 19-Aug-2015 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: software-based privileged-no-access support

Provide a software-based implementation of the privileged no-access
support found in ARMv8.1.

Userspace pages are mapped using a different domain number from the
kernel and IO mappings. If we switch the user domain to "no access"
when we enter the kernel, we can prevent the kernel from touching
userspace.

However, the kernel needs to be able to access userspace via the
various user accessor functions. With the wrapping in the previous
patch, we can temporarily enable access when the kernel needs user
access, and re-disable it afterwards.

This allows us to trap non-intended accesses to userspace, eg, caused
by an inadvertent dereference of the LIST_POISON* values, which, with
appropriate user mappings setup, can be made to succeed. This in turn
can allow use-after-free bugs to be further exploited than would
otherwise be possible.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 9205b797 24-Aug-2015 Stephen Boyd <sboyd@codeaurora.org>

ARM: 8421/1: smp: Collapse arch_cpu_idle_dead() into cpu_die()

The only caller of cpu_die() on ARM is arch_cpu_idle_dead(), so
let's simplify the code by renaming cpu_die() to
arch_cpu_idle_dead(). While we're here, drop the __ref annotation
because __cpuinit is gone nowadays.

Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 1eef5d2f 19-Aug-2015 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: domains: switch to keeping domain value in register

Rather than modifying both the domain access control register and our
per-thread copy, modify only the domain access control register, and
use the per-thread copy to save and restore the register over context
switches. We can also avoid the explicit initialisation of the
init thread_info structure.

This allows us to avoid needing to gain access to the thread information
at the uaccess control sites.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 045ab94e 01-Apr-2015 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: move reboot code to arch/arm/kernel/reboot.c

Move shutdown and reboot related code to a separate file, out of
process.c. This helps to avoid polluting process.c with non-process
related code.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 767bf7e7 01-Apr-2015 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: fix broken hibernation

Normally, when a CPU wants to clear a cache line to zero in the external
L2 cache, it would generate bus cycles to write each word as it would do
with any other data access.

However, a Cortex A9 connected to a L2C-310 has a specific feature where
the CPU can detect this operation, and signal that it wants to zero an
entire cache line. This feature, known as Full Line of Zeros (FLZ),
involves a non-standard AXI signalling mechanism which only the L2C-310
can properly interpret.

There are separate enable bits in both the L2C-310 and the Cortex A9 -
the L2C-310 needs to be enabled and have the FLZ enable bit set in the
auxiliary control register before the Cortex A9 has this feature
enabled.

Unfortunately, the suspend code was not respecting this - it's not
obvious from the code:

swsusp_arch_suspend()
cpu_suspend() /* saves the Cortex A9 auxiliary control register */
arch_save_image()
soft_restart() /* turns off FLZ in Cortex A9, and disables L2C */
cpu_resume() /* restores the Cortex A9 registers, inc auxcr */

At this point, we end up with the L2C disabled, but the Cortex A9 with
FLZ enabled - which means any memset() or zeroing of a full cache line
will fail to take effect.

A similar issue exists in the resume path, but it's slightly more
complex:

swsusp_arch_suspend()
cpu_suspend() /* saves the Cortex A9 auxiliary control register */
arch_save_image() /* image with A9 auxcr saved */
...
swsusp_arch_resume()
call_with_stack()
arch_restore_image() /* restores image with A9 auxcr saved above */
soft_restart() /* turns off FLZ in Cortex A9, and disables L2C */
cpu_resume() /* restores the Cortex A9 registers, inc auxcr */

Again, here we end up with the L2C disabled, but Cortex A9 FLZ enabled.

There's no need to turn off the L2C in either of these two paths; there
are benefits from not doing so - for example, the page copies will be
faster with the L2C enabled.

Hence, fix this by providing a variant of soft_restart() which can be
used without turning the L2 cache controller off, and use it in both
of these paths to keep the L2C enabled across the respective resume
transitions.

Fixes: 8ef418c7178f ("ARM: l2c: trial at enabling some Cortex-A9 optimisations")
Reported-by: Sean Cross <xobs@kosagi.com>
Tested-by: Sean Cross <xobs@kosagi.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# ecf99a43 25-Mar-2015 Nathan Lynch <nathan_lynch@mentor.com>

ARM: 8331/1: VDSO initialization, mapping, and synchronization

Initialize the VDSO page list at boot, install the VDSO mapping at
exec time, and update the data page during timer ticks. This code is
not built if CONFIG_VDSO is not enabled.

Account for the VDSO length when randomizing the offset from the
stack. The [vdso] and [vvar] pages are placed immediately following
the sigpage with separate _install_special_mapping calls.

We want to "penalize" systems lacking the arch timer as little
as possible. Previous versions of this code installed the VDSO
unconditionally and unmodified, making it a measurably slower way for
glibc to invoke the real syscalls on such systems. E.g. calling
gettimeofday via glibc goes from ~560ns to ~630ns on i.MX6Q.

If we can indicate to glibc that the time-related APIs in the VDSO are
not accelerated, glibc can continue to invoke the syscalls directly
instead of dispatching through the VDSO only to fall back to the slow
path.

Thus, if the architected timer is unusable for whatever reason, patch
the VDSO at boot time so that symbol lookups for gettimeofday and
clock_gettime return NULL. (This is similar to what powerpc does and
borrows code from there.) This allows glibc to perform the syscall
directly instead of passing control to the VDSO, which minimizes the
penalty. In my measurements the time taken for a gettimeofday call
via glibc goes from ~560ns to ~580ns (again on i.MX6Q), and this is
solely due to adding a test and branch to glibc's gettimeofday syscall
wrapper.

An alternative to patching the VDSO at boot would be to not install
the VDSO at all when the arch timer isn't usable. Another alternative
is to include a separate "dummy" vdso.so without gettimeofday and
clock_gettime, which would be selected at boot time. Either of these
would get cumbersome if the VDSO were to gain support for an API such
as getcpu which is unrelated to arch timer support.

Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# f3a04202 01-Dec-2014 Stephen Boyd <sboyd@codeaurora.org>

ARM: 8241/1: Update processor_modes for hyp and monitor mode

If the kernel is running in hypervisor mode or monitor mode we'll
print UK6_32 or UK10_32 if we call into __show_regs(). Let's
update these strings to indicate the new modes that didn't exist
when this code was written.

Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 389522b0 22-Sep-2014 Nathan Lynch <nathan_lynch@mentor.com>

ARM: 8155/1: place sigpage at a random offset above stack

The sigpage is currently placed alongside shared libraries etc in the
address space. Similar to what x86_64 does for its VDSO, place the
sigpage at a randomized offset above the stack so that learning the
base address of the sigpage doesn't help expose where shared libraries
are loaded in the address space (and vice versa).

Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 02e0409a 22-Sep-2014 Nathan Lynch <nathan_lynch@mentor.com>

ARM: 8154/1: use _install_special_mapping for sigpage

_install_special_mapping allows the VMA to be identified in
/proc/pid/maps without the use of arch_vma_name, providing a
slight net reduction in object size:

text data bss dec hex filename
2996 96 144 3236 ca4 arch/arm/kernel/process.o (before)
2956 104 144 3204 c84 arch/arm/kernel/process.o (after)
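
For reference, the API takes a struct vm_special_mapping whose .name is what
shows up in /proc/<pid>/maps; a hedged sketch of installing the sigpage
through it (not the literal patch):

    static struct page *signal_page;

    static const struct vm_special_mapping sigpage_mapping = {
            .name   = "[sigpage]",  /* reported in /proc/<pid>/maps */
            .pages  = &signal_page,
    };

    /* In arch_setup_additional_pages(), roughly: */
    vma = _install_special_mapping(mm, addr, PAGE_SIZE,
                                   VM_READ | VM_EXEC | VM_MAYREAD | VM_MAYEXEC,
                                   &sigpage_mapping);
    if (IS_ERR(vma))
            return PTR_ERR(vma);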

Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 6cd6d94d 25-Sep-2014 Guenter Roeck <linux@roeck-us.net>

arm/arm64: unexport restart handlers

Implementing a restart handler in a module doesn't make sense as there would
be no guarantee that the module is loaded when a restart is needed.
Unexport arm_pm_restart to ensure that no one gets the idea to do it
anyway.

Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Heiko Stuebner <heiko@sntech.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jonas Jensen <jonas.jensen@gmail.com>
Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Tomasz Figa <t.figa@samsung.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Wim Van Sebroeck <wim@iguana.be>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>


# 1a9607a3 25-Sep-2014 Guenter Roeck <linux@roeck-us.net>

arm: support restart through restart handler call chain

The kernel core now supports a restart handler call chain for system
restart functions.

With this change, the arm_pm_restart callback is now optional, so drop its
initialization and check if it is set before calling it. Only call the
kernel restart handler if arm_pm_restart is not set.
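
A hedged sketch of that fallback logic, with do_kernel_restart() as the entry
into the new restart handler chain (error handling abridged):

    void machine_restart(char *cmd)
    {
            local_irq_disable();
            smp_send_stop();

            if (arm_pm_restart)
                    arm_pm_restart(reboot_mode, cmd); /* board hook, if any */
            else
                    do_kernel_restart(cmd); /* restart handler chain */

            /* If we get here, the restart failed. */
            mdelay(1000);
            pr_err("Reboot failed -- System halted\n");
            while (1);
    }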

Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Heiko Stuebner <heiko@sntech.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jonas Jensen <jonas.jensen@gmail.com>
Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Tomasz Figa <t.figa@samsung.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Wim Van Sebroeck <wim@iguana.be>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>


# 7f038073 03-Sep-2014 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: remove extraneous newline in show_regs()

Remove an unnecessary newline in show_regs().

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# fbfb872f 10-Sep-2014 Nathan Lynch <nathan_lynch@mentor.com>

ARM: 8148/1: flush TLS and thumbee register state during exec

The TPIDRURO and TPIDRURW registers need to be flushed during exec;
otherwise TLS information is potentially leaked. TPIDRURO in
particular needs careful treatment. Since flush_thread basically
needs the same code used to set the TLS in arm_syscall, pull that into
a common set_tls helper in tls.h and use it in both places.

Similarly, TEEHBR needs to be cleared during exec as well. Clearing
its save slot in thread_info isn't right as there is no guarantee
that a thread switch will occur before the new program runs. Just
setting the register directly is sufficient.
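
A minimal sketch of the register writes involved (the helper name below is
hypothetical, and the real set_tls() in tls.h also handles CPUs without the
TLS registers and updates thread_info):

    /* Sketch: wipe the user-visible TLS and ThumbEE state on exec. */
    static inline void flush_user_tls_state(void)
    {
            /* TPIDRURO: user read-only TLS register */
            asm("mcr p15, 0, %0, c13, c0, 3" : : "r" (0L));
            /* TPIDRURW: user read/write TLS register */
            asm("mcr p15, 0, %0, c13, c0, 2" : : "r" (0L));
            /* TEEHBR: ThumbEE Handler Base Register, written directly */
            asm("mcr p14, 6, %0, c1, c0, 0" : : "r" (0L));
    }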

Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 779dd959 06-Apr-2014 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: add missing system_misc.h include to process.c

arm_pm_restart(), arm_pm_idle() and soft_restart() are all declared in
system_misc.h, but this file is not included in process.c. Add this
missing include. Found via sparse:

arch/arm/kernel/process.c:98:6: warning: symbol 'soft_restart' was not declared. Should it be static?
arch/arm/kernel/process.c:127:6: warning: symbol 'arm_pm_restart' was not declared. Should it be static?
arch/arm/kernel/process.c:134:6: warning: symbol 'arm_pm_idle' was not declared. Should it be static?

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# c7d442f4 24-Mar-2014 Sebastian Capella <sebastian.capella@linaro.org>

ARM: 8010/1: avoid tracers in soft_restart

Use of tracers in local_irq_disable() causes abort loops when it is called
with irqs disabled while running on a temporary stack. Replace
local_irq_disable() with raw_local_irq_disable() to avoid the tracers.

Signed-off-by: Sebastian Capella <sebastian.capella@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# ad68cc7a 28-Jan-2014 Nicolas Pitre <nico@fluxnic.net>

sched/idle, ARM: Remove redundant cpuidle_idle_call()

The core idle loop now takes care of it.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-sh@vger.kernel.org
Cc: linux-pm@vger.kernel.org
Cc: Russell King <linux@arm.linux.org.uk>
Cc: linaro-kernel@lists.linaro.org
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/n/tip-y2nbw5j3ma5siy5584919z5i@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>


# e2e55fde 16-Dec-2013 Uwe Kleine-König <u.kleine-koenig@pengutronix.de>

ARM: show_regs: on v7-M there are no FIQs, different processor modes, ...

no indication about irqs in PSR and only a single ISA. So skip the whole
decoding and just print the xPSR on v7-M.

Also mark two static variables as __maybe_unused to prevent the compiler
from emitting:

arch/arm/kernel/process.c:51:20: warning: 'processor_modes' defined but not used [-Wunused-variable]
arch/arm/kernel/process.c:58:20: warning: 'isa_modes' defined but not used [-Wunused-variable]

Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>


# 1b15ec7a 05-Dec-2013 Konstantin Khlebnikov <koct9i@gmail.com>

ARM: 7912/1: check stack pointer in get_wchan

get_wchan() is lockless. The task may wake up at any time and change its own
stack, so each successive stack frame may be overwritten and filled with
random stuff.

The /proc/$pid/stack interface has already been disabled for non-current
tasks, see [1], but 'wchan' still allows stack frame unwinding to be
triggered on a volatile stack.

This patch fixes an oops in unwind_frame() by adding stack pointer validation
on each step (as the x86 code does); unwind_frame() already checks the frame
pointer.
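
A sketch of the resulting loop, roughly following ARM's get_wchan() with the
added bounds check on the saved stack pointer:

    unsigned long get_wchan(struct task_struct *p)
    {
            struct stackframe frame;
            unsigned long stack_page;
            int count = 0;

            if (!p || p == current || p->state == TASK_RUNNING)
                    return 0;

            frame.fp = thread_saved_fp(p);
            frame.sp = thread_saved_sp(p);
            frame.lr = 0;                   /* recovered from the stack */
            frame.pc = thread_saved_pc(p);

            stack_page = (unsigned long)task_stack_page(p);
            do {
                    /* Bail out if the saved sp no longer points into the
                     * task's stack: the task may have woken up and
                     * overwritten these frames. */
                    if (frame.sp < stack_page ||
                        frame.sp >= stack_page + THREAD_SIZE ||
                        unwind_frame(&frame) < 0)
                            return 0;
                    if (!in_sched_functions(frame.pc))
                            return frame.pc;
            } while (count++ < 16);

            return 0;
    }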

Also I've found another report of this oops on stackoverflow (irony).

Link: http://www.spinics.net/lists/arm-kernel/msg110589.html [1]
Link: http://stackoverflow.com/questions/18479894/unwind-frame-cause-a-kernel-paging-error

Cc: <stable@vger.kernel.org>
Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 1d0bbf42 06-Aug-2013 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: Fix the world famous typo with is_gate_vma()

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# e0d40756 03-Aug-2013 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: fix a cockup in 48be69a02 (ARM: move signal handlers into a vdso-like page)

Unfortunately, I never committed the fix to a nasty oops which can
occur as a result of that commit:

------------[ cut here ]------------
kernel BUG at /home/olof/work/batch/include/linux/mm.h:414!
Internal error: Oops - BUG: 0 [#1] PREEMPT SMP ARM
Modules linked in:
CPU: 0 PID: 490 Comm: killall5 Not tainted 3.11.0-rc3-00288-gabe0308 #53
task: e90acac0 ti: e9be8000 task.ti: e9be8000
PC is at special_mapping_fault+0xa4/0xc4
LR is at __do_fault+0x68/0x48c

This doesn't show up unless you do quite a bit of testing; a simple
boot test does not do this, so all my nightly tests were passing fine.

The reason for this is that install_special_mapping() expects the
page array to stick around; as this code inserted only one page, whose
pointer lived on the kernel stack, the mapping blew up later on.

Reported-by: Olof Johansson <olof@lixom.net>
Tested-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 44424c34 30-Jul-2013 Stephen Boyd <sboyd@codeaurora.org>

ARM: 7803/1: Fix deadlock scenario with smp_send_stop()

If one process calls sys_reboot and that process then stops other
CPUs while those CPUs are within a spin_lock() region we can
potentially encounter a deadlock scenario like below.

    CPU 0                          CPU 1
    -----                          -----
                                   spin_lock(my_lock)
    smp_send_stop()
     <send IPI>                    handle_IPI()
                                    disable_preemption/irqs
                                     while(1);
    <PREEMPT>
    spin_lock(my_lock) <--- Waits forever

We shouldn't attempt to run any other tasks after we send a stop
IPI to a CPU so disable preemption so that this task runs to
completion. We use local_irq_disable() here for cross-arch
consistency with x86.
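
A sketch of the resulting ordering in the halt path (restart and power-off
follow the same pattern; details abridged):

    void machine_halt(void)
    {
            /* No interrupts, hence no preemption, once the stop IPI goes
             * out: this task must run to completion. */
            local_irq_disable();
            smp_send_stop();

            while (1);
    }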

Reported-by: Sundarajan Srinivasan <sundaraj@codeaurora.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# a5463cd3 31-Jul-2013 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: make vectors page inaccessible from userspace

If kuser helpers are not provided by the kernel, disable user access to
the vectors page. With the kuser helpers gone, there is no reason for
this page to be visible to userspace.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 48be69a0 23-Jul-2013 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: move signal handlers into a vdso-like page

Move the signal handlers into a VDSO page rather than keeping them in
the vectors page. This allows us to place them randomly within this
page, and also map the page at a random location within userspace
further protecting these code fragments from ROP attacks. The new
VDSO page is also poisoned in the same way as the vector page.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 1b3a5d02 08-Jul-2013 Robin Holt <holt@sgi.com>

reboot: move arch/x86 reboot= handling to generic kernel

Merge together the unicore32, arm, and x86 reboot= command line
parameter handling.

Signed-off-by: Robin Holt <holt@sgi.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Russ Anderson <rja@sgi.com>
Cc: Robin Holt <holt@sgi.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Guan Xuetao <gxt@mprc.pku.edu.cn>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 7b6d864b 08-Jul-2013 Robin Holt <holt@sgi.com>

reboot: arm: change reboot_mode to use enum reboot_mode

Preparing to move the parsing of reboot= to generic kernel code forces
the change in reboot_mode handling to use the enum.

[akpm@linux-foundation.org: fix arch/arm/mach-socfpga/socfpga.c]
Signed-off-by: Robin Holt <holt@sgi.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Russ Anderson <rja@sgi.com>
Cc: Robin Holt <holt@sgi.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 16d6d5b0 08-Jul-2013 Robin Holt <holt@sgi.com>

reboot: arm: prepare reboot_mode for moving to generic kernel code

Prepare for moving the parsing of reboot= to the generic kernel code
by making reboot_mode into a more generic form.

Signed-off-by: Robin Holt <holt@sgi.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Russ Anderson <rja@sgi.com>
Cc: Robin Holt <holt@sgi.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# a4780ade 18-Jun-2013 André Hentschel <nerv@dawncrow.de>

ARM: 7735/2: Preserve the user r/w register TPIDRURW on context switch and fork

Since commit 6a1c53124aa1 the user writeable TLS register was zeroed to
prevent it from being used as a covert channel between two tasks.

There are more and more applications coming to Windows RT;
Wine could support them, but mostly they expect to have
the thread environment block (TEB) in TPIDRURW.

This patch preserves that register per thread instead of clearing it.
Unlike the TPIDRURO, which is already switched, the TPIDRURW
can be updated from userspace so needs careful treatment in the case that we
modify TPIDRURW and call fork(). To avoid this we must always read
TPIDRURW in copy_thread.

Signed-off-by: André Hentschel <nerv@dawncrow.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 19ab428f 14-Jun-2013 Stephen Warren <swarren@nvidia.com>

ARM: 7759/1: decouple CPU offlining from reboot/shutdown

Add comments to machine_shutdown()/halt()/power_off()/restart() that
describe their purpose and/or requirements re: CPUs being active/not.

In machine_shutdown(), replace the call to smp_send_stop() with a call to
disable_nonboot_cpus(). This completely disables all but one CPU, thus
satisfying the requirement that only a single CPU be active for kexec.
Adjust Kconfig dependencies for this change.

In machine_halt()/power_off()/restart(), call smp_send_stop() directly,
rather than via machine_shutdown(); these functions don't need to
completely de-activate all CPUs using hotplug, but rather just quiesce
them.

Remove smp_kill_cpus(), and its call from smp_send_stop().
smp_kill_cpus() was indirectly calling smp_ops.cpu_kill() without calling
smp_ops.cpu_die() on the target CPUs first. At least some implementations
of smp_ops had issues with this; it caused cpu_kill() to hang on Tegra,
for example. Since smp_send_stop() is only used for shutdown, halt, and
power-off, there is no need to attempt any kind of CPU hotplug here.

Adjust Kconfig to reflect that machine_shutdown() (and hence kexec)
relies upon disable_nonboot_cpus(). However, this alone doesn't guarantee
that hotplug will work, or even that hotplug is implemented for a
particular piece of HW that a multi-platform zImage runs on. Hence, add
error-checking to machine_kexec() to determine whether it did work.

Suggested-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Zhangfei Gao <zhangfei.gao@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 4ca46c5e 16-May-2013 Steven Capper <steve.capper@linaro.org>

ARM: 7727/1: remove the .vm_mm value from gate_vma

If one reads /proc/$PID/smaps, the mmap_sem belonging to the
address space of the task being examined is locked for reading.
All the pages of the vmas belonging to the task's address space
are then walked with this lock held.

If a gate_vma is present in the architecture, it too is examined
by the fs/proc/task_mmu.c code. As gate_vma doesn't belong to the
address space of the task though, its pages are not walked.

A recent cleanup (commit f6604efe) of the gate_vma initialisation
code set the vm_mm value to &init_mm. Unfortunately a non-NULL
vm_mm value in the gate_vma will cause the task_mmu code to attempt
to walk the pages of the gate_vma (with no mmap-sem lock held). If
one enables Transparent Huge Page support and vm debugging, this
will then cause OOPses as pmd_trans_huge_lock is called without
mmap_sem being locked.

This patch removes the .vm_mm value from gate_vma, restoring the
original behaviour of the task_mmu code.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# a43cb95d 30-Apr-2013 Tejun Heo <tj@kernel.org>

dump_stack: unify debug information printed by show_regs()

show_regs() is inherently arch-dependent but it does make sense to print
generic debug information and some archs already do albeit in slightly
different forms. This patch introduces a generic function to print debug
information from show_regs() so that different archs print out the same
information and it's much easier to modify what's printed.

show_regs_print_info() prints out the same debug info as dump_stack()
does plus task and thread_info pointers.

* Archs which didn't print debug info now do.

alpha, arc, blackfin, c6x, cris, frv, h8300, hexagon, ia64, m32r,
metag, microblaze, mn10300, openrisc, parisc, score, sh64, sparc,
um, xtensa

* Already prints debug info. Replaced with show_regs_print_info().
The printed information is superset of what used to be there.

arm, arm64, avr32, mips, powerpc, sh32, tile, unicore32, x86

* s390 is special in that it used to print arch-specific information
along with generic debug info. Heiko and Martin think that the
arch-specific extra isn't worth keeping an s390-specific implementation.
Converted to use the generic version.

Note that now all archs print the debug info before actual register
dumps.

An example BUG() dump follows.

kernel BUG at /work/os/work/kernel/workqueue.c:4841!
invalid opcode: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
Modules linked in:
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.9.0-rc1-work+ #7
Hardware name: empty empty/S3992, BIOS 080011 10/26/2007
task: ffff88007c85e040 ti: ffff88007c860000 task.ti: ffff88007c860000
RIP: 0010:[<ffffffff8234a07e>] [<ffffffff8234a07e>] init_workqueues+0x4/0x6
RSP: 0000:ffff88007c861ec8 EFLAGS: 00010246
RAX: ffff88007c861fd8 RBX: ffffffff824466a8 RCX: 0000000000000001
RDX: 0000000000000046 RSI: 0000000000000001 RDI: ffffffff8234a07a
RBP: ffff88007c861ec8 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000001 R11: 0000000000000000 R12: ffffffff8234a07a
R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
FS: 0000000000000000(0000) GS:ffff88007dc00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: ffff88015f7ff000 CR3: 00000000021f1000 CR4: 00000000000007f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Stack:
ffff88007c861ef8 ffffffff81000312 ffffffff824466a8 ffff88007c85e650
0000000000000003 0000000000000000 ffff88007c861f38 ffffffff82335e5d
ffff88007c862080 ffffffff8223d8c0 ffff88007c862080 ffffffff81c47760
Call Trace:
[<ffffffff81000312>] do_one_initcall+0x122/0x170
[<ffffffff82335e5d>] kernel_init_freeable+0x9b/0x1c8
[<ffffffff81c47760>] ? rest_init+0x140/0x140
[<ffffffff81c4776e>] kernel_init+0xe/0xf0
[<ffffffff81c6be9c>] ret_from_fork+0x7c/0xb0
[<ffffffff81c47760>] ? rest_init+0x140/0x140
...

v2: Typo fix in x86-32.

v3: CPU number dropped from show_regs_print_info() as
dump_stack_print_info() has been updated to print it. s390
specific implementation dropped as requested by s390 maintainers.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Jesper Nilsson <jesper.nilsson@axis.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Fengguang Wu <fengguang.wu@intel.com>
Cc: Mike Frysinger <vapier@gentoo.org>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Sam Ravnborg <sam@ravnborg.org>
Acked-by: Chris Metcalf <cmetcalf@tilera.com> [tile bits]
Acked-by: Richard Kuo <rkuo@codeaurora.org> [hexagon bits]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# f7b861b7 21-Mar-2013 Thomas Gleixner <tglx@linutronix.de>

arm: Use generic idle loop

Use the generic idle loop and replace enable/disable_hlt with the
respective core functions.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Cc: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Cc: Magnus Damm <magnus.damm@gmail.com>
Cc: Russell King <linux@arm.linux.org.uk>
Tested-by: Kevin Hilman <khilman@linaro.org> # OMAP
Link: http://lkml.kernel.org/r/20130321215233.826238797@linutronix.de


# 6546327a 21-Mar-2013 Thomas Gleixner <tglx@linutronix.de>

arch: Cleanup enable/disable_hlt

enable/disable_hlt() does not need to be exported and can be killed on
architectures which do not use it at all.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Cc: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Cc: Magnus Damm <magnus.damm@gmail.com>
Link: http://lkml.kernel.org/r/20130321215233.377959540@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>


# 80bbe9f2 06-Mar-2013 Thomas Gleixner <tglx@linutronix.de>

arm: Use tick broadcast expired check

Avoid going back into deep idle if the tick broadcast IPI is about to
fire.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: LAK <linux-arm-kernel@lists.infradead.org>
Cc: John Stultz <john.stultz@linaro.org>
Cc: Arjan van de Veen <arjan@infradead.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: Jason Liu <liu.h.jason@gmail.com>
Link: http://lkml.kernel.org/r/20130306111537.640722922@linutronix.de


# f6604efe 23-Feb-2013 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: cleanup gate_vma initialization

There's no need to have code initializing this by hand; it's more
efficient to initialize the constant structure members directly.
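
For illustration, the initializer then looks roughly like the high-vectors
gate_vma below (addresses and flags as used for the 0xffff0000 mapping; treat
as a sketch):

    static struct vm_area_struct gate_vma = {
            .vm_start       = 0xffff0000,
            .vm_end         = 0xffff0000 + PAGE_SIZE,
            .vm_flags       = VM_READ | VM_EXEC | VM_MAYREAD | VM_MAYEXEC,
    };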

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# b0ea1149 09-Feb-2013 Len Brown <len.brown@intel.com>

ARM idle: delete pm_idle

pm_idle() on ARM was a synonym for default_idle(),
so simply invoke default_idle() directly.

Signed-off-by: Len Brown <len.brown@intel.com>
Reviewed-by: Kevin Hilman <khilman@linaro.org>
Tested-by: Kevin Hilman <khilman@linaro.org>


# afa86fc4 22-Oct-2012 Al Viro <viro@zeniv.linux.org.uk>

flagday: don't pass regs to copy_thread()

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>


# 38a61b6b 21-Oct-2012 Al Viro <viro@zeniv.linux.org.uk>

arm: switch to generic fork/vfork/clone

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>


# 9ecb47de 08-Nov-2012 Nicolas Pitre <nico@fluxnic.net>

ARM: 7574/1: kernel/process.c: include idmap.h instead of redeclaring setup_mm_for_reboot()

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 871df85a 28-Sep-2012 fwu <fwu@marvell.com>

ARM: 7544/1: Add BUG_ON when hlt counter is wrongly used

1. On ARM platforms, "nohlt" can be used to keep the core out of the idle
process, making it return immediately.
2. Two interfaces, "disable_hlt" and "enable_hlt", are exported for other
modules to enable/disable the cpuidle mechanism by increasing/decreasing
"hlt_counter". They are paired operations: calling disable_hlt first and
enable_hlt afterwards gives the correct semantics.
3. There is no constraint preventing user (driver/module) code from calling
enable_hlt ahead of disable_hlt, which is a fatal change of kernel state
triggered from a user, and the current kernel code gives no WARNING or
other notification when this happens.
This patch reports a BUG when that case happens, just as the kernel does
when enable_irq is called ahead of disable_irq.

Link: https://patchwork.kernel.org/patch/1527881/

Signed-off-by: fwu <fwu@marvell.com>
Signed-off-by: YiLu Mao <ylmao@marvell.com>
Signed-off-by: Ning Jiang <ning.jiang@marvell.com>
Acked-by: Nicolas Pitre
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 9fff2fa0 10-Oct-2012 Al Viro <viro@zeniv.linux.org.uk>

arm: switch to saner kernel_execve() semantics

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>


# 9e14f828 09-Sep-2012 Al Viro <viro@zeniv.linux.org.uk>

arm: split ret_from_fork, simplify kernel_thread() [based on patch by rmk]

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>


# fa8bbb13 13-Mar-2012 Bryan Wu <bryan.wu@canonical.com>

ARM: use new LEDS CPU trigger stub to replace old one

Cc: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Bryan Wu <bryan.wu@canonical.com>


# 98bd8b96 13-Jul-2012 Shawn Guo <shawn.guo@linaro.org>

ARM: 7466/1: disable interrupt before spinning endlessly

The CPU will spin endlessly at the end of the machine_halt and
machine_restart calls. However, this leads to a soft lockup warning
after about 20 seconds, if CONFIG_LOCKUP_DETECTOR is enabled, as the
system timer is still alive.

Disable interrupts before spinning endlessly, so that the lockup
warning is never seen.

Cc: <stable@vger.kernel.org>
Reported-by: Marek Vasut <marex@denx.de>
Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 448eca90 07-May-2012 Thomas Gleixner <tglx@linutronix.de>

arm: Remove unused cpu_idle_wait()

cpuidle uses a generic function now. Remove the unused code.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@arm.linux.org.uk>
Link: http://lkml.kernel.org/r/20120507175652.260797846@linutronix.de


# 9f97da78 28-Mar-2012 David Howells <dhowells@redhat.com>

Disintegrate asm/system.h for ARM

Disintegrate asm/system.h for ARM.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Russell King <linux@arm.linux.org.uk>
cc: linux-arm-kernel@lists.infradead.org


# f9d4861f 19-Jan-2012 Will Deacon <will@kernel.org>

ARM: 7294/1: vectors: use gate_vma for vectors user mapping

The current user mapping for the vectors page is inserted as a `horrible
hack vma' into each task via arch_setup_additional_pages. This causes
problems with the MM subsystem and vm_normal_page, as described here:

https://lkml.org/lkml/2012/1/14/55

Following the suggestion from Hugh in the above thread, this patch uses
the gate_vma for the vectors user mapping, therefore consolidating
the horrible hack VMAs into one.

Acked-and-Tested-by: Nicolas Pitre <nico@linaro.org>

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 909af768 23-Mar-2012 Jason Baron <jbaron@redhat.com>

coredump: remove VM_ALWAYSDUMP flag

The motivation for this patchset was that I was looking at a way for a
qemu-kvm process, to exclude the guest memory from its core dump, which
can be quite large. There are already a number of filter flags in
/proc/<pid>/coredump_filter, however, these allow one to specify 'types'
of kernel memory, not specific address ranges (which is needed in this
case).

Since there are no more vma flags available, the first patch eliminates
the need for the 'VM_ALWAYSDUMP' flag. The flag is used internally by
the kernel to mark vdso and vsyscall pages. However, it is simple
enough to check if a vma covers a vdso or vsyscall page without the need
for this flag.

The second patch then replaces the 'VM_ALWAYSDUMP' flag with a new
'VM_NODUMP' flag, which can be set by userspace using new madvise flags:
'MADV_DONTDUMP', and unset via 'MADV_DODUMP'. The core dump filters
continue to work the same as before unless 'MADV_DONTDUMP' is set on the
region.

The qemu code which implements this features is at:

http://people.redhat.com/~jbaron/qemu-dump/qemu-dump.patch

In my testing the qemu core dump shrunk from 383MB -> 13MB with this
patch.

I also believe that the 'MADV_DONTDUMP' flag might be useful for
security sensitive apps, which might want to select which areas are
dumped.

This patch:

The VM_ALWAYSDUMP flag is currently used by the coredump code to
indicate that a vma is part of a vsyscall or vdso section. However, we
can determine if a vma is in one these sections by checking it against
the gate_vma and checking for a non-NULL return value from
arch_vma_name(). Thus, freeing a valuable vma bit.

Signed-off-by: Jason Baron <jbaron@redhat.com>
Acked-by: Roland McGrath <roland@hack.frob.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Avi Kivity <avi@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# bd2f5536 20-Mar-2011 Thomas Gleixner <tglx@linutronix.de>

sched/rt: Use schedule_preempt_disabled()

Coccinelle based conversion.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-24swm5zut3h9c4a6s46x8rws@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>


# ae940913 19-Dec-2011 Nicolas Pitre <nico@fluxnic.net>

ARM: substitute arch_idle()

Now that all implementations of arch_idle() are equivalent to cpu_do_idle()
we can just use the latter directly and stop including mach/system.h.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Acked-and-tested-by: Jamie Iles <jamie@jamieiles.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Tested-by: Stephen Warren <swarren@nvidia.com>


# 4fa20439 01-Aug-2011 Nicolas Pitre <nico@fluxnic.net>

ARM: clean up idle handlers

Let's factor out the need_resched() check instead of having it duplicated
in every pm_idle implementation to avoid inconsistencies (omap2_pm_idle
is missing it already).

The forceful re-enablement of IRQs after pm_idle has returned can go.
The warning certainly doesn't trigger for existing users.

To get rid of the pm_idle calling convention oddity, let's introduce
arm_pm_idle() allowing for the local_irq_enable() to be factored out
from SOC specific implementations. The default pm_idle function becomes
a wrapper for arm_pm_idle and it takes care of enabling IRQs closer to
where they are initially disabled.

And finally move the comment explaining the reason for that turning off
of IRQs to a more proper location.
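
A hedged sketch of the resulting default idle path:

    static void default_idle(void)
    {
            if (arm_pm_idle)
                    arm_pm_idle();  /* SoC-specific idle, entered with IRQs off */
            else
                    cpu_do_idle();  /* plain WFI */
            local_irq_enable();     /* IRQs re-enabled in one place */
    }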

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-and-tested-by: Jamie Iles <jamie@jamieiles.com>


# f88b8979 05-Nov-2011 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: restart: remove the now empty arch_reset()

Remove the now empty arch_reset() from all the mach/system.h includes,
and remove its callsite. Remove arm_machine_restart() as this function
no longer does anything useful.

For samsung platforms, remove the include of mach/system-reset.h and
plat/system-reset.h from their respective mach/system.h headers as these
just define their arch_reset functions. As a result, the s3c2410 and
plat-samsung system-reset.h files are no longer referenced, so remove
these files entirely.

Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Acked-by: Jamie Iles <jamie@jamieiles.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 290130a1 05-Jun-2011 Will Deacon <will@kernel.org>

ARM: reset: implement soft_restart for jumping to a physical address

Tools such as kexec and CPU hotplug require a way to reset the processor
and branch to some code in physical space. This requires various bits of
jiggery pokery with the caches and MMU which, when it goes wrong, tends
to lock up the system.

This patch fleshes out the soft_restart implementation so that it
branches to the reset code using the identity mapping. This requires us
to change to a temporary stack, held within the kernel image as a static
array, to avoid conflicting with the new view of memory.

Signed-off-by: Will Deacon <will.deacon@arm.com>


# 1268fbc7 17-Nov-2011 Frederic Weisbecker <fweisbec@gmail.com>

nohz: Remove tick_nohz_idle_enter_norcu() / tick_nohz_idle_exit_norcu()

Those two APIs were provided to optimize the calls of
tick_nohz_idle_enter() and rcu_idle_enter() into a single
irq disabled section. This way no interrupt happening in-between would
needlessly process any RCU job.

Now we are talking about an optimization for which benefits
have yet to be measured. Let's start simple and completely decouple
idle rcu and dyntick idle logics to simplify.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>


# 2bbb6817 08-Oct-2011 Frederic Weisbecker <fweisbec@gmail.com>

nohz: Allow rcu extended quiescent state handling seperately from tick stop

It is assumed that rcu won't be used once we switch to tickless
mode and until we restart the tick. However this is not always
true, as in x86-64 where we dereference the idle notifiers after
the tick is stopped.

To prepare for fixing this, add two new APIs:
tick_nohz_idle_enter_norcu() and tick_nohz_idle_exit_norcu().

If no use of RCU is made in the idle loop between
tick_nohz_enter_idle() and tick_nohz_exit_idle() calls, the arch
must instead call the new *_norcu() version such that the arch doesn't
need to call rcu_idle_enter() and rcu_idle_exit().

Otherwise the arch must call tick_nohz_enter_idle() and
tick_nohz_exit_idle() and also call explicitly:

- rcu_idle_enter() after its last use of RCU before the CPU is put
to sleep.
- rcu_idle_exit() before the first use of RCU after the CPU is woken
up.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Frysinger <vapier@gentoo.org>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: David Miller <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>


# 280f0677 07-Oct-2011 Frederic Weisbecker <fweisbec@gmail.com>

nohz: Separate out irq exit and idle loop dyntick logic

The tick_nohz_stop_sched_tick() function, which tries to delay
the next timer tick as long as possible, can be called from two
places:

- From the idle loop to start the dyntick idle mode
- From interrupt exit if we have interrupted the dyntick
idle mode, so that we reprogram the next tick event in
case the irq changed some internal state that requires this
action.

There are only few minor differences between both that
are handled by that function, driven by the ts->inidle
cpu variable and the inidle parameter. The whole guarantees
that we only update the dyntick mode on irq exit if we actually
interrupted the dyntick idle mode, and that we enter in RCU extended
quiescent state from idle loop entry only.

Split this function into:

- tick_nohz_idle_enter(), which sets ts->inidle to 1, enters
dynticks idle mode unconditionally if it can, and enters into RCU
extended quiescent state.

- tick_nohz_irq_exit() which only updates the dynticks idle mode
when ts->inidle is set (ie: if tick_nohz_idle_enter() has been called).

To maintain symmetry, tick_nohz_restart_sched_tick() has been renamed
into tick_nohz_idle_exit().

This simplifies the code and micro-optimizes the irq exit path (no need
for local_irq_save there). This also prepares for the split between
dynticks and rcu extended quiescent state logics. We'll need this split to
further fix illegal uses of RCU in extended quiescent states in the idle
loop.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Frysinger <vapier@gentoo.org>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: David Miller <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>


# 11ed0ba1 14-Nov-2011 Will Deacon <will@kernel.org>

ARM: 7161/1: errata: no automatic store buffer drain

This patch implements a workaround for PL310 erratum 769419. On
revisions of the PL310 prior to r3p2, the Store Buffer does not
automatically drain. This can cause normal, non-cacheable writes to be
retained when the memory system is idle, leading to suboptimal I/O
performance for drivers using coherent DMA.

This patch adds an optional wmb() call to the cpu_idle loop. On systems
with an outer cache, this causes an explicit flush of the store buffer.

Cc: stable@vger.kernel.org
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# e879c862 01-Nov-2011 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: restart: only perform setup for restart when soft-restarting

We only need to set the system up for a soft-restart if we're going to
be doing a soft-restart. Provide a new function (soft_restart()) which
does the setup and final call for this, and make platforms use it.
Eliminate the call to setup_restart() from the default handler.

This means that a platform's arch_reset() function is no longer called with
the page tables prepared for a soft-restart, and caches will still be
enabled.

Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Acked-by: Sascha Hauer <s.hauer@pengutronix.de>
Acked-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Krzysztof Hałasa <khc@pm.waw.pl>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Acked-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Acked-by: Wan ZongShun <mcuos.com@gmail.com>
Acked-by: Eric Miao <eric.y.miao@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 5aafec15 01-Nov-2011 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: restart: remove argument to setup_mm_for_reboot()

setup_mm_for_reboot() doesn't make use of its argument, so remove it.

Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# ac15e00b 31-Oct-2011 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: restart: move reboot failure handing into machine_restart()

Move the failure to reboot into machine_restart() to always catch
this condition, even if a platform decides to hook the restarting
via arm_pm_restart().

Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# ecea4ab6 22-Jul-2011 Paul Gortmaker <paul.gortmaker@windriver.com>

arm: convert core files from module.h to export.h

Many of the core ARM kernel files are not modules, but just
including module.h for exporting symbols. Now these files can
use the lighter footprint export.h for this role.

There are probably lots more, but ARM files of mach-* and plat-*
don't get coverage via a simple yesconfig build. They will have
to be cleaned up and tested via using their respective configs.

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>


# b380ab4f 30-Aug-2011 Laura Abbott <lauraa@codeaurora.org>

ARM: 7068/1: process: change from __backtrace to dump_stack in show_regs

Currently, show_regs calls __backtrace which does
nothing if CONFIG_FRAME_POINTER is not set. Switch to
dump_stack which handles both CONFIG_FRAME_POINTER and
CONFIG_ARM_UNWIND correctly.

__backtrace is now superseded by dump_stack in general
and show_regs was the last caller so remove __backtrace
as well.

Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# cbc158d6 04-Aug-2011 David Brown <davidb@codeaurora.org>

cpuidle: Consistent spelling of cpuidle_idle_call()

Commit a0bfa1373859e9d11dc92561a8667588803e42d8 misspells
cpuidle_idle_call() on ARM and SH code. Fix this to be consistent.

Cc: Kevin Hilman <khilman@deeprootsystems.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: x86@kernel.org
Cc: Len Brown <len.brown@intel.com>
Signed-off-by: David Brown <davidb@codeaurora.org>
[ Also done by Mark Brown - the bug has been around forever, and was
noticed in -next, but the idle tree never picked it up. Bad bad bad ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# a0bfa137 01-Apr-2011 Len Brown <len.brown@intel.com>

cpuidle: stop depending on pm_idle

cpuidle users should call cpuidle_call_idle() directly
rather than via (pm_idle)() function pointer.

Architecture may choose to continue using (pm_idle)(),
but cpuidle need not depend on it:

my_arch_cpu_idle()
	...
	if(cpuidle_call_idle())
		pm_idle();

cc: Kevin Hilman <khilman@deeprootsystems.com>
cc: Paul Mundt <lethal@linux-sh.org>
cc: x86@kernel.org
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>


# 2e82669a 06-Apr-2011 Catalin Marinas <catalin.marinas@arm.com>

ARM: 6867/1: Introduce THREAD_NOTIFY_COPY for copy_thread() hooks

This patch adds THREAD_NOTIFY_COPY for calling registered handlers
during the copy_thread() function call. It also changes the VFP handler
to use a switch statement rather than if..else and ignore this event.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 6cde6d42 11-Jan-2011 Will Deacon <will@kernel.org>

ARM: 6619/1: nommu: avoid mapping vectors page when !CONFIG_MMU

When running without an MMU, we do not need to install a mapping for the
vectors page. Attempting to do so causes a compile-time error because
install_special_mapping is not defined.

This patch adds compile-time guards to the vector mapping functions
so that we can build nommu configurations once more.

Acked-by: Greg Ungerer <gerg@uclinux.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# c7b0aff4 01-Oct-2010 Kevin Hilman <khilman@deeprootsystems.com>

ARM: 6428/1: add cpu_idle_wait() to support CPUidle on SMP systems.

In order for CPUidle to work on SMP systems, an implementation of
cpu_idle_wait() is needed.

This patch duplicates the x86 implementation of cpu_idle_wait() for
ARM.

Tested-by: Colin Cross <ccross@android.com>
Signed-off-by: Kevin Hilman <khilman@deeprootsystems.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# ec706dab 26-Aug-2010 Nicolas Pitre <nico@fluxnic.net>

ARM: add a vma entry for the user accessible vector page

The kernel makes the high vector page visible to user space. This page
contains (amongst others) small code segments that can be executed in
user space. Make this page visible through ptrace and /proc/<pid>/mem
in order to let gdb perform code parsing needed for proper unwinding.

For example, the ERESTART_RESTARTBLOCK handler actually has a stack
frame -- it returns to a PC value stored on the user's stack. To
unwind after a "sleep" system call was interrupted twice, GDB would
have to recognize this situation and understand that stack frame
layout -- which it currently cannot do.

We could fix this by hard-coding addresses in the vector page range into
GDB, but that isn't really portable as not all of those addresses are
guaranteed to remain stable across kernel releases. And having the gdb
process make an exception for this page and get content from its own
address space for it looks strange, and it is not future proof either.

Being located above PAGE_OFFSET, this vma cannot be deleted by
user space code.
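
A sketch of how the page can be exposed as a named vma (the flag set is
abbreviated and may not match the patch exactly):

    int vectors_user_mapping(void)
    {
            struct mm_struct *mm = current->mm;

            /* The vectors page itself is a fixed kernel mapping, so no
             * page array is needed here. */
            return install_special_mapping(mm, 0xffff0000, PAGE_SIZE,
                                           VM_READ | VM_EXEC |
                                           VM_MAYREAD | VM_MAYEXEC,
                                           NULL);
    }

    const char *arch_vma_name(struct vm_area_struct *vma)
    {
            return (vma->vm_start == 0xffff0000) ? "[vectors]" : NULL;
    }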

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>


# 864232fa 03-Sep-2010 Will Deacon <will@kernel.org>

ARM: 6357/1: hw-breakpoint: add new ptrace requests for hw-breakpoint interaction

For debuggers to take advantage of the hw-breakpoint framework in the kernel,
it is necessary to expose the API calls via a ptrace interface.

This patch exposes the hardware breakpoints framework as a collection of
virtual registers, accessible using PTRACE_SETHBPREGS and PTRACE_GETHBPREGS
requests. The breakpoints are stored in the debug_info struct of the running
thread.

Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: S. Karthikeyan <informkarthik@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 3d3f78d7 26-Jul-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: call machine_shutdown() from machine_halt(), etc

x86 calls machine_shutdown() from the various machine_*() calls which
take the machine down ready for halting, restarting, etc, and uses
this to bring the system safely to a point where those actions can be
performed. One such action is stopping the secondary CPUs.

So, change the ARM implementation of these to reflect what x86 does.

This solves kexec problems on ARM SMP platforms, where the secondary
CPUs were left running across the kexec call.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 9ca03a21 25-Jul-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: Factor out common code from cpu_proc_fin()

All implementations of cpu_proc_fin() start by disabling interrupts
and then flush caches. Rather than have every processors proc_fin()
implementation do this, move it out into generic code - and move the
cache flush past setup_mm_for_reboot() (so it can benefit from having
caches still enabled.)

This allows cpu_proc_fin() to become independent of the L1/L2 cache
types, and eventually move the L2 cache flushing into the L2 support
code.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# ac78884e 10-Jul-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: lockdep: fix unannotated irqs-on

CPU: Testing write buffer coherency: ok
------------[ cut here ]------------
WARNING: at kernel/lockdep.c:3145 check_flags+0xcc/0x1dc()
Modules linked in:
[<c0035120>] (unwind_backtrace+0x0/0xf8) from [<c0355374>] (dump_stack+0x20/0x24)
[<c0355374>] (dump_stack+0x20/0x24) from [<c0060c04>] (warn_slowpath_common+0x58/0x70)
[<c0060c04>] (warn_slowpath_common+0x58/0x70) from [<c0060c3c>] (warn_slowpath_null+0x20/0x24)
[<c0060c3c>] (warn_slowpath_null+0x20/0x24) from [<c008f224>] (check_flags+0xcc/0x1dc)
[<c008f224>] (check_flags+0xcc/0x1dc) from [<c00945dc>] (lock_acquire+0x50/0x140)
[<c00945dc>] (lock_acquire+0x50/0x140) from [<c0358434>] (_raw_spin_lock+0x50/0x88)
[<c0358434>] (_raw_spin_lock+0x50/0x88) from [<c00fd114>] (set_task_comm+0x2c/0x60)
[<c00fd114>] (set_task_comm+0x2c/0x60) from [<c007e184>] (kthreadd+0x30/0x108)
[<c007e184>] (kthreadd+0x30/0x108) from [<c0030104>] (kernel_thread_exit+0x0/0x8)
---[ end trace 1b75b31a2719ed1c ]---
possible reason: unannotated irqs-on.
irq event stamp: 3
hardirqs last enabled at (2): [<c0059bb0>] finish_task_switch+0x48/0xb0
hardirqs last disabled at (3): [<c002f0b0>] ret_slow_syscall+0xc/0x1c
softirqs last enabled at (0): [<c005f3e0>] copy_process+0x394/0xe5c
softirqs last disabled at (0): [<(null)>] (null)

Fix this by ensuring that the lockdep interrupt state is manipulated in
the appropriate places. We essentially treat userspace as an entirely
separate environment which isn't relevant to lockdep (lockdep doesn't
monitor userspace.) We don't tell lockdep that IRQs will be enabled
in that environment.

Instead, when creating kernel threads (which is a rare event compared
to entering/leaving userspace) we have to update the lockdep state. Do
this by starting threads with IRQs disabled, and in the kthread helper,
tell lockdep that IRQs are enabled, and enable them.

This provides lockdep with a consistent view of the current IRQ state
in kernel space.
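
From memory, the helper ends up looking roughly like the sketch below; the
trace_hardirqs_on call is what keeps lockdep's view consistent before the
thread's real PSR (with IRQs enabled) is installed. Register assignments are
illustrative and may not match the patch exactly:

    extern void kernel_thread_helper(void);
    asm(    ".pushsection .text\n"
    "       .align\n"
    "       .type   kernel_thread_helper, #function\n"
    "kernel_thread_helper:\n"
    #ifdef CONFIG_TRACE_IRQFLAGS
    "       bl      trace_hardirqs_on\n"    /* tell lockdep IRQs come on */
    #endif
    "       msr     cpsr_c, r7\n"           /* switch to the real PSR */
    "       mov     r0, r4\n"               /* argument */
    "       mov     lr, r6\n"               /* return address when fn() exits */
    "       mov     pc, r5\n"               /* jump to the thread function */
    "       .size   kernel_thread_helper, . - kernel_thread_helper\n"
    "       .popsection");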

This also reverts portions of 0d928b0b616d1c5c5fe76019a87cba171ca91633
which didn't fix the problem.

Tested-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# c743f380 24-May-2010 Nicolas Pitre <nico@fluxnic.net>

ARM: initial stack protector (-fstack-protector) support

This is the very basic stuff without the changing canary upon
task switch yet. Just the Kconfig option and a constant canary
value initialized at boot time.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>


# 990cb8ac 14-Jun-2010 Nicolas Pitre <nico@fluxnic.net>

[ARM] implement arch_randomize_brk()

For this feature to take effect, CONFIG_COMPAT_BRK must be turned
off. This can safely be turned off for any EABI user space versions.
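
For illustration, a sketch of the usual arch_randomize_brk() shape from that
era, using the then-current randomize_range() helper (the 32MB window mirrors
what x86 used; treat it as an assumption):

    unsigned long arch_randomize_brk(struct mm_struct *mm)
    {
            /* Place the randomized brk somewhere in a window above mm->brk. */
            unsigned long range_end = mm->brk + 0x02000000;

            return randomize_range(mm->brk, range_end, 0) ? : mm->brk;
    }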

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>


# 4260415f 19-Apr-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: fix build error in arch/arm/kernel/process.c

/tmp/ccJ3ssZW.s: Assembler messages:
/tmp/ccJ3ssZW.s:1952: Error: can't resolve `.text' {.text section} - `.LFB1077'

This is caused because:

.section .data
.section .text
.section .text
.previous

does not return us to the .text section, but the .data section; this
makes use of .previous dangerous if the ordering of previous sections
is not known.

Fix up the other users of .previous; .pushsection and .popsection are
a safer pairing to use than .section and .previous.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 5a0e3ad6 24-Mar-2010 Tejun Heo <tj@kernel.org>

include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h

percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.

percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities to include those
headers directly instead of assuming availability. As this conversion
needs to touch a large number of source files, the following script is
used as the basis of conversion.

http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following.

* Scan files for gfp and slab usages and update includes such that
only the necessary includes are there. ie. if only gfp is used,
gfp.h, if slab is used, slab.h.

* When the script inserts a new include, it looks at the include
blocks and try to put the new include such that its order conforms
to its surrounding. It's put in the include block which contains
core kernel includes, in the same order that the rest are ordered -
alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
doesn't seem to be any matching order.

* If the script can't find a place to put a new include (mostly
because the file doesn't have fitting include block), it prints out
an error message indicating which .h file needs to be added to the
file.

The conversion was done in the following steps.

1. The initial automatic conversion of all .c files updated slightly
over 4000 files, deleting around 700 includes and adding ~480 gfp.h
and ~3000 slab.h inclusions. The script emitted errors for ~400
files.

2. Each error was manually checked. Some didn't need the inclusion,
some needed manual addition while adding it to implementation .h or
embedding .c file was more appropriate for others. This step added
inclusions to around 150 files.

3. The script was run again and the output was compared to the edits
from #2 to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed.
e.g. lib/decompress_*.c used malloc/free() wrappers around slab
APIs requiring slab.h to be added manually.

5. The script was run on all .h files but without automatically
editing them as sprinkling gfp.h and slab.h inclusions around .h
files could easily lead to inclusion dependency hell. Most gfp.h
inclusion directives were ignored as stuff from gfp.h was usually
wildly available and often used in preprocessor macros. Each
slab.h inclusion directive was examined and added manually as
necessary.

6. percpu.h was updated not to include slab.h.

7. Build test were done on the following configurations and failures
were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
distributed build env didn't work with gcov compiles) and a few
more options had to be turned off depending on archs to make things
build (like ipr on powerpc/64 which failed due to missing writeq).

* x86 and x86_64 UP and SMP allmodconfig and a custom test config.
* powerpc and powerpc64 SMP allmodconfig
* sparc and sparc64 SMP allmodconfig
* ia64 SMP allmodconfig
* s390 SMP allmodconfig
* alpha SMP allmodconfig
* um on x86_64 SMP allmodconfig

8. percpu.h modifications were reverted so that it could be applied as
a separate patch and serve as bisection point.

Given the fact that I had only a couple of failures from tests on step
6, I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers which should be easily discoverable on most builds of
the specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>


# 22325525 08-Jan-2010 Rabin Vincent <rabin@rab.in>

ARM: 5868/1: ARM: fix "BUG: using smp_processor_id() in preemptible code"

Fix the following warning, which appears when the register dump for a
faulting process is printed in a kernel with SMP, DEBUG_PREEMPT, and
DEBUG_USER (with user_debug=31) enabled:

BUG: using smp_processor_id() in preemptible [00000000] code: init/1
caller is __show_regs+0x18/0x234
Backtrace:
[<c0159e5c>] (dump_backtrace+0x0/0x114) from [<c01faf30>] (dump_stack+0x18/0x1c)
r6:c781a000 r5:c0157544 r4:00000001 r3:00000000
[<c01faf18>] (dump_stack+0x0/0x1c) from [<c01e5230>] (debug_smp_processor_id+0xc4/0xf8)
[<c01e516c>] (debug_smp_processor_id+0x0/0xf8) from [<c0157544>] (__show_regs+0x18/0x234)
r6:c781bfb0 r5:00000000 r4:c781bfb0 r3:00000000
[<c015752c>] (__show_regs+0x0/0x234) from [<c01577a0>] (show_regs+0x40/0x50)
[<c0157760>] (show_regs+0x0/0x50) from [<c015c968>] (__do_user_fault+0x5c/0xa4)
r4:c781c000 r3:00000000
[<c015c90c>] (__do_user_fault+0x0/0xa4) from [<c015cbe0>] (do_page_fault+0x1b4/0x1e4)
r7:00000000 r6:00010000 r5:c781bfb0 r4:c781c000
[<c015ca2c>] (do_page_fault+0x0/0x1e4) from [<c01554c8>] (do_DataAbort+0x3c/0xa0)
[<c015548c>] (do_DataAbort+0x0/0xa0) from [<c01560c4>] (ret_from_exception+0x0/0x10)
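
The usual ways to quiet this class of warning are either to pin the task
to a CPU for the duration of the access, or to use the raw accessor when
the value is only informational, as in a register dump. A minimal sketch
of both patterns (illustrative, not necessarily the exact fix applied
here; do_something_percpu() is a hypothetical helper):

    /* Pattern 1: disable preemption while the CPU number is used. */
    int cpu = get_cpu();            /* disables preemption */
    do_something_percpu(cpu);
    put_cpu();                      /* re-enables preemption */

    /* Pattern 2: the value is purely informational, so the unchecked
     * accessor is acceptable even in preemptible context. */
    printk("CPU: %d\n", raw_smp_processor_id());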

Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 797245f5 18-Dec-2009 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: Convert VFP/Crunch/XscaleCP thread_release() to exit_thread()

This avoids races in the VFP code where the dead thread may have
state on another CPU. By moving this code to exit_thread(), we
will be running as the thread, and therefore be running on the
current CPU.

This means that we can ensure that only local state is accessed
in the thread notifiers.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# cde3f860 13-Oct-2009 Artem Bityutskiy <dedekind1@gmail.com>

ARM: 5759/1: Add register information of threads to coredump

Defines ELF_CORE_COPY_TASK_REGS so that CPU register information
of every thread is included in coredump. Without this, only the faulting
thread is coredumped.
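
The hook is a per-architecture macro in asm/elf.h; a sketch of its
typical shape (the helper name is illustrative):

    /* Let the ELF core dumper collect the registers of every thread,
     * not only the one that took the fault. */
    #define ELF_CORE_COPY_TASK_REGS(tsk, elf_regs) \
            dump_task_regs(tsk, elf_regs)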

Cc: Roger Quadros <ext-roger.quadros@nokia.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Mika Westerberg <mika.westerberg@iki.fi>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# b86040a5 23-Jul-2009 Catalin Marinas <catalin.marinas@arm.com>

Thumb-2: Implementation of the unified start-up and exceptions code

This patch implements the ARM/Thumb-2 unified kernel start-up and
exception handling code.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>


# 9ccdac36 22-Jun-2009 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] idle: clean up pm_idle calling, obey hlt_counter

pm_idle is used by infrastructure (eg, cpuidle) which expects architectures
to call it in a certain way. Arrange for ARM to follow x86's lead on this
and call pm_idle() with interrupts already disabled. However, we expect
pm_idle() to enable interrupts before it returns.

Also, OMAP wants to be able to disable hlt-ing, so allow hlt_counter to
prevent all calls to pm_idle.
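
In rough terms the resulting idle loop has the shape below: interrupts
are disabled before pm_idle() is invoked, pm_idle() is expected to
re-enable them, and a non-zero hlt_counter falls back to busy-waiting
(a simplified sketch, not the verbatim process.c code):

    while (!need_resched()) {
            if (hlt_counter) {
                    /* hlt-ing disabled (e.g. by OMAP): just spin */
                    cpu_relax();
            } else {
                    local_irq_disable();
                    if (!need_resched())
                            pm_idle();      /* returns with IRQs enabled */
                    local_irq_enable();
            }
    }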

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# feb97c36 19-Jun-2009 Catalin Marinas <catalin.marinas@arm.com>

[ARM] 5559/1: Limit the stack unwinding caused by a kthread exit

When a kthread function returns, it branches to do_exit(). However, the
unwinding information isn't valid anymore and any stack trace caused by
do_exit() may be incorrect. This patch adds a kernel_thread_exit()
function annotated with '.cantunwind' so that the unwinder stops
when it reaches it.
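
The annotation amounts to a small non-unwindable assembly stub between
the kthread function and do_exit(); a sketch of the idea (directive
details may differ from the actual patch):

    asm(    ".section .text\n"
    "kernel_thread_exit:\n"
    "       .fnstart\n"
    "       .cantunwind\n"          /* unwinder stops at this frame */
    "       bl      do_exit\n"
    "       .fnend\n"
    "       .previous");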

Tested-by: Tony Lindgren <tony@atomide.com>

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 26584853 30-May-2009 Catalin Marinas <catalin.marinas@arm.com>

Add core support for ARMv6/v7 big-endian

Starting with ARMv6, the CPUs support the BE-8 variant of big-endian
(byte-invariant). This patch adds the core support:

- setting of the BE-8 mode via the CPSR.E register for both kernel and
user threads
- big-endian page table walking
- REV is used to rotate instructions read from memory during fault
processing, as they are still in little-endian format
- Kconfig and Makefile support for BE-8. The --be8 option must be passed
to the final linking stage to convert the instructions to
little-endian

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>


# 6f2c55b8 02-Apr-2009 Alexey Dobriyan <adobriyan@gmail.com>

Simplify copy_thread()

First argument unused since 2.3.11.

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# be093beb 19-Mar-2009 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] pass reboot command line to arch_reset()

OMAP wishes to pass state to the boot loader upon reboot in order to
instruct it whether to wait for USB-based reflashing or not. There is
already a facility to do this via the reboot() syscall, except we ignore
the string passed to machine_restart().

This patch fixes things to pass this string to arch_reset(). This means
that we keep the reboot mode limited to telling the kernel _how_ to
perform the reboot which should be independent of what we request the
boot loader to do.
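
Concretely, the machine-level hook grows a second parameter carrying
the user-supplied string (a sketch; exact prototypes vary per machine
class):

    /* before: only the reboot mode reached the machine code */
    void arch_reset(char mode);

    /* after: the string given to reboot(LINUX_REBOOT_CMD_RESTART2, ..., cmd)
     * is passed through so the boot loader can act on it */
    void arch_reset(char mode, const char *cmd);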

Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 2d7c11bf 11-Feb-2009 Catalin Marinas <catalin.marinas@arm.com>

[ARM] 5382/1: unwind: Reorganise the stacktrace support

This patch changes the walk_stacktrace and its callers for easier
integration of stack unwinding. The arch/arm/kernel/stacktrace.h file is
also moved to arch/arm/include/asm/stacktrace.h.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 33fa9b13 06-Sep-2008 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] Convert asm/uaccess.h to linux/uaccess.h

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 1de765c1 06-Sep-2008 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] remove pc_pointer()

pc_pointer() was a function to mask the PC for 26-bit ARMs, which
we no longer support. Remove it.
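
For reference, the removed helper just masked the flag bits that 26-bit
CPUs kept in the PC word; roughly (illustrative):

    /* On 26-bit ARM the PSR flags lived in r15, so the "real" PC had to
     * be masked out; with only 32-bit CPUs supported, PCMASK is 0 and
     * the helper is a no-op, hence its removal. */
    #define pc_pointer(v)   ((v) & ~PCMASK)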

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 09d9bae0 05-Sep-2008 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] sparse: fix several warnings

arch/arm/kernel/process.c:270:6: warning: symbol 'show_fpregs' was not declared. Should it be static?

This function isn't used, so can be removed.

arch/arm/kernel/setup.c:532:9: warning: symbol 'len' shadows an earlier one
arch/arm/kernel/setup.c:524:6: originally declared here

A function containing two 'len's.

arch/arm/mm/fault-armv.c:188:13: warning: symbol 'check_writebuffer_bugs' was not declared. Should it be static?
arch/arm/mm/mmap.c:122:5: warning: symbol 'valid_phys_addr_range' was not declared. Should it be static?
arch/arm/mm/mmap.c:137:5: warning: symbol 'valid_mmap_phys_addr_range' was not declared. Should it be static?

Missing includes.

arch/arm/kernel/traps.c:71:77: warning: Using plain integer as NULL pointer
arch/arm/mm/ioremap.c:355:46: error: incompatible types in comparison expression (different address spaces)

Sillies.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# a09e64fb 05-Aug-2008 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] Move include/asm-arm/arch-* to arch/arm/*/include/mach

This just leaves include/asm-arm/plat-* to deal with.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# b8f8c3cf 18-Jul-2008 Thomas Gleixner <tglx@linutronix.de>

nohz: prevent tick stop outside of the idle loop

Jack Ren and Eric Miao tracked down the following long standing
problem in the NOHZ code:

scheduler switch to idle task
enable interrupts

Window starts here

----> interrupt happens (does not set NEED_RESCHED)
irq_exit() stops the tick

----> interrupt happens (does set NEED_RESCHED)

return from schedule()

cpu_idle(): preempt_disable();

Window ends here

The interrupts can happen at any point inside the race window. The
first interrupt stops the tick, the second one causes the scheduler to
rerun and switch away from idle again and we end up with the tick
disabled.

The fact that it needs two interrupts, where the first one does not set
NEED_RESCHED and the second one does, made the bug obscure and extremely
hard to reproduce and analyse. Kudos to Jack and Eric.

Solution: Limit the NOHZ functionality to the idle loop to make sure
that we can not run into such a situation ever again.

    cpu_idle()
    {
            preempt_disable();

            while (1) {
                    tick_nohz_stop_sched_tick(1);   <- tell NOHZ code that we
                                                       are in the idle loop

                    while (!need_resched())
                            halt();

                    tick_nohz_restart_sched_tick(); <- disables NOHZ mode
                    preempt_enable_no_resched();
                    schedule();
                    preempt_disable();
            }
    }

In hindsight we should have done this forever, but ...

/me grabs a large brown paperbag.

Debugged-by: Jack Ren <jack.ren@marvell.com>,
Debugged-by: eric miao <eric.y.miao@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>


# 205bee6a 20-Apr-2008 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] dyntick: Remove obsolete and unused ARM dyntick support

dyntick is superseded by the clocksource/clockevent infrastructure,
using the NO_HZ configuration option. No one implements dyntick on
ARM anymore, so it's pointless keeping it around. Remove dyntick
support.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 1eb11411 08-Feb-2008 David Howells <dhowells@redhat.com>

aout: remove unnecessary inclusions of {asm, linux}/a.out.h

Remove now unnecessary inclusions of {asm,linux}/a.out.h.

[akpm@linux-foundation.org: fix alpha build]
Signed-off-by: David Howells <dhowells@redhat.com>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 7fa30315 08-Feb-2008 David Howells <dhowells@redhat.com>

aout: suppress A.OUT library support if !CONFIG_ARCH_SUPPORTS_AOUT

Suppress A.OUT library support if CONFIG_ARCH_SUPPORTS_AOUT is not set.

Not all architectures support the A.OUT binfmt, so the ELF binfmt should not
be permitted to go looking for A.OUT libraries to load in such a case. Not
only that, but under such conditions A.OUT core dumps are not produced either.

To make this work, this patch also does the following:

(1) Makes the existence of the contents of linux/a.out.h contingent on
CONFIG_ARCH_SUPPORTS_AOUT.

(2) Renames dump_thread() to aout_dump_thread() as it's only called by A.OUT
core dumping code.

(3) Moves aout_dump_thread() into asm/a.out-core.h and makes it inline. This
is then included only where needed. This means that this bit of arch
code will be stored in the appropriate A.OUT binfmt module rather than
the core kernel.

(4) Drops A.OUT support for Blackfin (according to Mike Frysinger it's not
needed) and FRV.

This patch depends on the previous patch to move STACK_TOP[_MAX] out of
asm/a.out.h and into asm/processor.h as they're required whether or not A.OUT
format is available.

[jdike@addtoit.com: uml: re-remove accidentally restored code]
Signed-off-by: David Howells <dhowells@redhat.com>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 19c5870c 19-Oct-2007 Alexey Dobriyan <adobriyan@openvz.org>

Use helpers to obtain task pid in printks (arch code)

One of the easiest things to isolate is the pid printed in the kernel log.
There was a patch that did this for arch-independent code; this one does
the same for arch/xxx files.

It took some time to cross-compile it, but hopefully these are all the
printks in arch code.
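
A hedged example of the kind of change meant here:

    /* before: open-coded pid access */
    printk(KERN_INFO "%s: pid %d\n", current->comm, current->pid);

    /* after: go through the helper so pid namespaces are handled in
     * one place */
    printk(KERN_INFO "%s: pid %d\n", current->comm, task_pid_nr(current));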

Signed-off-by: Alexey Dobriyan <adobriyan@openvz.org>
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 909d6c6c 25-Jun-2007 George G. Davis <gdavis@mvista.com>

[ARM] 4453/1: Fully Decode ARM instruction set state in show_regs() tombstone

The ARM show_regs() tombstone only partially decodes which ARM ISA was
executing at the time a fault occurred, displaying either "(T)" for the
Thumb case or nothing at all for other cases. This patch therefore
explicitly identifies which state the processor is in at the time of
a fault: ARM, Thumb, Jazelle or JazelleEE.

Signed-off-by: George G. Davis <gdavis@mvista.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 154c772e 18-Jun-2007 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] Update show_regs/oops register format

Add the kernel release and version information to the output of
show_regs/oops. Add the CPU PSR register. Avoid using printk
to output partial lines; always output a complete line.

Re-combine the "Control" and "Table + DAC" lines after nommu
separated them; we don't want to waste vertical screen space
needlessly.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 9e4559dd 14-Mar-2007 Kevin Hilman <khilman@mvista.com>

[ARM] 4258/2: Support for dynticks in idle loop

Also, wrap timer_tick() and the sysdev suspend/resume in
!GENERIC_CLOCKEVENTS, since the clockevent layer takes care
of these.

Signed-off-by: Kevin Hilman <khilman@mvista.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 0f0a00be 03-Mar-2007 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] Remove needless linux/ptrace.h includes

Lots of places in arch/arm were needlessly including linux/ptrace.h,
presumably because we used to pass a struct pt_regs to interrupt
handlers. Now that we don't, all these ptrace.h includes are
redundant.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# ae0a846e 08-Jan-2007 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] Move processor_modes[] to .../process.c

bad_mode() currently prints the mode which caused the exception, and
then causes an oops dump to be printed which again displays this
information (since the CPSR in the struct pt_regs is correct.) This
leads to processor_modes[] being shared between traps.c and process.c
with a local declaration of it.

We can clean this up by moving processor_modes[] to process.c and
removing the duplication, resulting in processor_modes[] becoming
static.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 12221442 02-Nov-2006 Paul Gortmaker <paul.gortmaker@gmail.com>

[ARM] 3911/2: Simplify alloc_thread_info on ARM

Remove ARM local cache of 4 struct thread_info.
Can cause oops under certain circumstances.

Russell indicated the original optimization was
required on older kernels to avoid thread starvation
on memory fragmentation, but may no longer be
required. I've updated the patch to 19rc4 and
ensured no <config.h> dain-bramage slipped in this
time (sorry about that).

Original description follows:

I was given some test results which pointed to an
Oops in alloc_thread_info (happened 2x), and after
looking at the code, I see that ARM has its own
local cache of 4 struct thread_info. There wasn't
any clear (to me) synchronization between the
alloc_thread_info and the free_thread_info.

I looked over the other arches, and they all simply
allocate them on an as-needed basis, so I simplified
the ARM code to do the same, based on the other arches
(e.g. PPC), and the folks doing the testing have
indicated that this fixed the oops.

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# f12d0d7c 26-Sep-2006 Hyok S. Choi <hyok.choi@samsung.com>

[ARM] nommu: manage the CP15 things

All the current CP15 access codes in ARM arch can be categorized and
conditioned by the defines as follows:

    Related operation       Safe condition
    a. any CP15 access      !CPU_CP15
    b. alignment trap       CPU_CP15_MMU
    c. D-cache(C-bit)       CPU_CP15
    d. I-cache              CPU_CP15 && !( CPU_ARM610 || CPU_ARM710 ||
                                           CPU_ARM720 || CPU_ARM740 ||
                                           CPU_XSCALE || CPU_XSC3 )
    e. alternate vector     CPU_CP15 && !CPU_ARM740
    f. TTB                  CPU_CP15_MMU
    g. Domain               CPU_CP15_MMU
    h. FSR/FAR              CPU_CP15_MMU

For example, alternate vector is supported if and only if
"CPU_CP15 && !CPU_ARM740" is satisfied.

Signed-off-by: Hyok S. Choi <hyok.choi@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# ae95bfbb 01-Jul-2006 Lennert Buytenhek <buytenh@wantstofly.org>

[ARM] 3707/1: iwmmxt: use the generic thread notifier infrastructure

Patch from Lennert Buytenhek

This patch makes the iWMMXt context switch hook use the generic
thread notifier infrastructure that was recently merged in commit
d6551e884cf66de072b81f8b6d23259462c40baf.

Signed-off-by: Lennert Buytenhek <buytenh@wantstofly.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 6ab3d562 30-Jun-2006 Jörn Engel <joern@wohnheim.fh-wedel.de>

Remove obsolete #include <linux/config.h>

Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
Signed-off-by: Adrian Bunk <bunk@stusta.de>


# d6551e88 21-Jun-2006 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] Add thread_notify infrastructure

Some machine classes need to allow VFP support to be built into the
kernel, but still allow the kernel to run even though VFP isn't
present. Unfortunately, the kernel hard-codes VFP instructions
into the thread switch, which prevents this being run-time selectable.

Solve this by introducing a notifier which things such as VFP can
hook into to be informed of events which affect the VFP subsystem
(eg, creation and destruction of threads, switches between threads.)
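
Consumers such as VFP then hook in with an ordinary notifier block; a
minimal sketch (my_save_state() is a hypothetical helper, and the event
handled here is just one of the notifications):

    static int my_thread_notify(struct notifier_block *self,
                                unsigned long cmd, void *t)
    {
            struct thread_info *thread = t;

            if (cmd == THREAD_NOTIFY_SWITCH)
                    my_save_state(thread);  /* hypothetical per-thread work */

            return NOTIFY_DONE;
    }

    static struct notifier_block my_notifier_block = {
            .notifier_call = my_thread_notify,
    };

    /* at init time */
    thread_register_notifier(&my_notifier_block);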

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 74617fb6 19-Jun-2006 Richard Purdie <rpurdie@rpsys.net>

[ARM] 3593/1: Add reboot and shutdown handlers for Zaurus handhelds

Patch from Richard Purdie

Add functionality to allow machine specific reboot handlers on ARM.
Add machine specific reboot and poweroff handlers for all PXA Zaurus
models.

Signed-off-by: Richard Purdie <rpurdie@rpsys.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 9d494ccb 16-May-2006 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] arch/arm/kernel/process.c: Fix warning

arch/arm/kernel/process.c:314: warning: assignment makes integer from pointer without a cast

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 1929ab8c 09-May-2006 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] Fix thread struct allocator for SMP case

The ARM thread struct allocator is racy on SMP systems. Fix it by
turning it into a per-cpu based allocator. This also keeps
the cache warm for thread structs and kernel stacks.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 0cb3463f 31-Mar-2006 Adrian Bunk <bunk@stusta.de>

[PATCH] unexport get_wchan

The only user of get_wchan is the proc fs - and proc can't be built modular.

Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>


# 84dff1a7 15-Mar-2006 Ben Dooks <ben-linux@fluff.org>

[ARM] 3363/1: [cleanup] process.c - fix warnings

Patch from Ben Dooks

Fix the following warnings from sparse:

arch/arm/kernel/process.c:86:6: warning: symbol 'default_idle' was not declared. Should it be static?
arch/arm/kernel/process.c:378:5: warning: symbol 'dump_fpu' was not declared. Should it be static?

Include <linux/elfcore.h> for the dump_fpu() declaration, and
make default_idle() static as it is not used outside the file.

Signed-off-by: Ben Dooks <ben-linux@fluff.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 32d39a93 12-Jan-2006 Al Viro <viro@ftp.linux.org.uk>

[PATCH] arm: task_stack_page()

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>


# 55205823 12-Jan-2006 Al Viro <viro@ftp.linux.org.uk>

[PATCH] arm: end_of_stack()

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>


# 815d5ec8 12-Jan-2006 Al Viro <viro@ftp.linux.org.uk>

[PATCH] arm: task_pt_regs()

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>


# e7c1b32f 12-Jan-2006 Al Viro <viro@ftp.linux.org.uk>

[PATCH] arm: task_thread_info()

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>


# 78ff18a4 03-Jan-2006 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] Cleanup ARM includes

arch/arm/kernel/entry-armv.S has contained a comment suggesting
that asm/hardware.h and asm/arch/irqs.h should be moved into the
asm/arch/entry-macro.S include. So move the includes to these
two files as required.

Add missing includes (asm/hardware.h, asm/io.h) to asm/arch/system.h
includes which use those facilities, and remove asm/io.h from
kernel/process.c.

Remove other unnecessary includes from arch/arm/kernel, arch/arm/mm
and arch/arm/mach-footbridge.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 64c7c8f8 08-Nov-2005 Nick Piggin <nickpiggin@yahoo.com.au>

[PATCH] sched: resched and cpu_idle rework

Make some changes to the NEED_RESCHED and POLLING_NRFLAG to reduce
confusion, and make their semantics rigid. Improves efficiency of
resched_task and some cpu_idle routines.

* In resched_task:
- TIF_NEED_RESCHED is only cleared with the task's runqueue lock held,
and as we hold it during resched_task, then there is no need for an
atomic test and set there. The only other time this should be set is
when the task's quantum expires, in the timer interrupt - this is
protected against because the rq lock is irq-safe.

- If TIF_NEED_RESCHED is set, then we don't need to do anything. It
won't get unset until the task gets schedule()d off.

- If we are running on the same CPU as the task we resched, then set
TIF_NEED_RESCHED and no further action is required.

- If we are running on another CPU, and TIF_POLLING_NRFLAG is *not* set
after TIF_NEED_RESCHED has been set, then we need to send an IPI.

Using these rules, we are able to remove the test and set operation in
resched_task, and make clear the previously vague semantics of
POLLING_NRFLAG.

* In idle routines:
- Enter cpu_idle with preempt disabled. When the need_resched() condition
becomes true, explicitly call schedule(). This makes things a bit clearer
(IMO), but I haven't updated all architectures yet.

- Many do a test and clear of TIF_NEED_RESCHED for some reason. According
to the resched_task rules, this isn't needed (and actually breaks the
assumption that TIF_NEED_RESCHED is only cleared with the runqueue lock
held). So remove that. Generally one less locked memory op when switching
to the idle thread.

- Many idle routines clear TIF_POLLING_NRFLAG, and only set it in the
innermost polling idle loops. The above resched_task semantics allow it to be
set until before the last time need_resched() is checked before going into
a halt requiring interrupt wakeup.

Many idle routines simply never enter such a halt, and so POLLING_NRFLAG
can always be left set, completely eliminating resched IPIs when rescheduling
the idle task.

POLLING_NRFLAG width can be increased, to reduce the chance of resched IPIs.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Con Kolivas <kernel@kolivas.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>


# 5bfb5d69 08-Nov-2005 Nick Piggin <nickpiggin@yahoo.com.au>

[PATCH] sched: disable preempt in idle tasks

Run idle threads with preempt disabled.

Also corrected a bug in arm26's cpu_idle (make it actually call schedule()).
How did it ever work before?

Might fix the CPU hotplugging hang which Nigel Cunningham noted.

We think the bug hits if the idle thread is preempted after checking
need_resched() and before going to sleep, and the CPU is then offlined.

After calling stop_machine_run, the CPU eventually returns from preemption
into the idle thread and goes to sleep. The CPU will continue executing the
previous idle loop and have no chance to call play_dead.

By disabling preemption until we are ready to explicitly schedule, this bug is
fixed and the idle threads generally become more robust.

From: alexs <ashepard@u.washington.edu>

PPC build fix

From: Yoichi Yuasa <yuasa@hh.iij4u.or.jp>

MIPS build fix

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Yoichi Yuasa <yuasa@hh.iij4u.or.jp>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>


# c906107b 09-Nov-2005 Nicolas Pitre <nico@cam.org>

[ARM] 3100/1: simplify a pointer computation

Patch from Nicolas Pitre

Looks clearer this way.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# a054a811 02-Nov-2005 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM SMP] Add hotplug CPU infrastructure

This patch adds the infrastructure to support hotplug CPU on ARM
platforms.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 59586e5a 26-Jul-2005 Eric W. Biederman <ebiederm@xmission.com>

[PATCH] Don't export machine_restart, machine_halt, or machine_power_off.

machine_restart, machine_halt and machine_power_off are machine
specific hooks deep into the reboot logic, that modules
have no business messing with. Usually code should be calling
kernel_restart, kernel_halt, kernel_power_off, or
emergency_restart. So don't export machine_restart,
machine_halt, and machine_power_off so we can catch buggy users.
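
For code that legitimately needs to reboot or power down, the supported
entry points remain exported; e.g. (sketch):

    /* runs the reboot notifiers and shuts devices down before calling
     * into the machine-specific hook */
    kernel_restart(NULL);           /* or kernel_halt() / kernel_power_off() */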

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>


# 2ea83398 27-Jun-2005 Russell King <rmk@dyn-67.arm.linux.org.uk>

[PATCH] ARM: Add VST idle loop call

This call allows the dynamic tick support to reprogram the timer
immediately before the CPU idles.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 4f7a1812 05-May-2005 Russell King <rmk@dyn-67.arm.linux.org.uk>

[PATCH] ARM: Fix kernel stack offset calculations

Various places in the ARM kernel implicitly assumed that kernel
stacks are always 8K due to hard coded constants. Replace these
constants with definitions.

Correct the allowable range of kernel stack pointer values within
the allocation. Arrange for the entire kernel stack to be zeroed,
not just the upper 4K if CONFIG_DEBUG_STACK_USAGE is set.
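
The hard-coded 8K assumptions become symbolic constants; a sketch of the
definitions meant here (values as they stood for ARM at the time, shown
for illustration):

    /* the kernel stack / thread_info allocation is THREAD_SIZE bytes,
     * and the initial stack pointer sits just below its top */
    #define THREAD_SIZE             8192
    #define THREAD_START_SP         (THREAD_SIZE - 8)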

Signed-off-by: Russell King <rmk@arm.linux.org.uk>


# 652a12ef 17-Apr-2005 Russell King <rmk@dyn-67.arm.linux.org.uk>

[PATCH] ARM: showregs

Fix show_regs() to provide a backtrace. Provide a new __show_regs()
function which implements the common subset of show_regs() and die().
Add prototypes to asm-arm/system.h

Signed-off-by: Russell King <rmk@arm.linux.org.uk>


# 1da177e4 16-Apr-2005 Linus Torvalds <torvalds@ppc970.osdl.org>

Linux-2.6.12-rc2

Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.

Let it rip!