History log of /linux-master/arch/arm/include/asm/thread_info.h
# 303d6da1 19-Mar-2023 Ard Biesheuvel <ardb@kernel.org>

ARM: iwmmxt: Use undef hook to enable coprocessor for task

Define an undef hook to deal with undef exceptions triggered by iwmmxt
instructions that were issued with the coprocessor disabled. This
removes the dependency on the coprocessor dispatch code in entry-armv.S,
which will be made NWFPE-only in a subsequent patch.
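
For illustration, registering such a hook generally follows the pattern below. This is only a sketch: the instruction mask/value, the handler body and the init wiring are placeholders, not the values used by the actual patch.

static int iwmmxt_undef_handler(struct pt_regs *regs, unsigned int instr)
{
        /* enable the coprocessor for the current task here, then return 0
           so the trapped instruction is retried */
        return 0;
}

static struct undef_hook iwmmxt_undef_hook = {
        .instr_mask     = 0x0c000e00,   /* placeholder encoding mask */
        .instr_val      = 0x0c000000,   /* placeholder encoding value */
        .cpsr_mask      = MODE_MASK,
        .cpsr_val       = USR_MODE,     /* only catch user-mode traps */
        .fn             = iwmmxt_undef_handler,
};

static int __init iwmmxt_hook_init(void)
{
        register_undef_hook(&iwmmxt_undef_hook);
        return 0;
}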

Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>


# 6ee1e677 19-Mar-2023 Ard Biesheuvel <ardb@kernel.org>

ARM: kernel: Get rid of thread_info::used_cp[] array

We keep track of which coprocessor triggered a fault in the used_cp[]
array in thread_info, but this data is never used anywhere. So let's
remove it.

Linus did some digging and found out that the last user of this field
was removed in commit bb1a773d5b6b ("kill unused dump_fpu() instances").

Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>


# 191f8453 04-Jan-2023 Jens Axboe <axboe@kernel.dk>

ARM: renumber bits related to _TIF_WORK_MASK

We want to ensure that the mask related to calling do_work_pending()
is within the first 16 bits. Move bits unrelated to that outside of
that range, to avoid spuriously calling do_work_pending() when we don't
need to.
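
For reference, the mask being kept within the low 16 bits is along these lines (a sketch of its shape, not the exact header contents):

#define _TIF_WORK_MASK  (_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
                         _TIF_NOTIFY_RESUME | _TIF_UPROBE | \
                         _TIF_NOTIFY_SIGNAL)

Keeping every bit of this mask below 16 lets the return-to-user assembly test it with a single TST against an immediate.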

Cc: stable@vger.kernel.org
Fixes: 32d59773da38 ("arm: add support for TIF_NOTIFY_SIGNAL")
Reported-and-tested-by: Hui Tang <tanghui20@huawei.com>
Suggested-by: Russell King (Oracle) <linux@armlinux.org.uk>
Link: https://lore.kernel.org/lkml/7ecb8f3c-2aeb-a905-0d4a-aa768b9649b5@huawei.com/
Signed-off-by: Jens Axboe <axboe@kernel.dk>


# 9c46929e 24-Nov-2021 Ard Biesheuvel <ardb@kernel.org>

ARM: implement THREAD_INFO_IN_TASK for uniprocessor systems

On UP systems, only a single task can be 'current' at the same time,
which means we can use a global variable to track it. This means we can
also enable THREAD_INFO_IN_TASK for those systems, as in that case,
thread_info is accessed via current rather than the other way around,
removing the need to store thread_info at the base of the task stack.
This, in turn, permits us to enable IRQ stacks and vmap'ed stacks on UP
systems as well.

To partially mitigate the performance overhead of this arrangement, use
an ADD/ADD/LDR sequence with the appropriate PC-relative group
relocations to load the value of current when needed. This means that
accessing current will still only require a single load as before,
avoiding the need for a literal to carry the address of the global
variable in each function. However, accessing thread_info will now
require this load as well.
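
In C terms, the UP arrangement boils down to something like the sketch below; the variable name __current is an assumption of this sketch, and the interesting part of the patch is the relocation trickery used to load it cheaply:

/* written on every context switch, read by get_current() */
extern struct task_struct *__current;

static __always_inline struct task_struct *get_current(void)
{
        return __current;
}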

Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Marc Zyngier <maz@kernel.org>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M


# a1c510d0 23-Sep-2021 Ard Biesheuvel <ardb@kernel.org>

ARM: implement support for vmap'ed stacks

Wire up the generic support for managing task stack allocations via vmalloc,
and implement the entry code that detects whether we faulted because of a
stack overrun (or future stack overrun caused by pushing the pt_regs array).

While this adds a fair amount of tricky entry asm code, it should be
noted that it only adds a TST + branch to the svc_entry path. The code
implementing the non-trivial handling of the overflow stack is emitted
out-of-line into the .text section.

Since on ARM, we rely on do_translation_fault() to keep PMD level page
table entries that cover the vmalloc region up to date, we need to
ensure that we don't hit such a stale PMD entry when accessing the
stack. So we do a dummy read from the new stack while still running from
the old one on the context switch path, and bump the vmalloc_seq counter
when PMD level entries in the vmalloc range are modified, so that the MM
switch fetches the latest version of the entries.

Note that we need to increase the per-mode stack by 1 word, to gain some
space to stash a GPR until we know it is safe to touch the stack.
However, due to the cacheline alignment of the struct, this does not
actually increase the memory footprint of the struct stack array at all.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Keith Packard <keithpac@amazon.com>
Tested-by: Marc Zyngier <maz@kernel.org>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M


# fa191b71 29-Oct-2021 Ard Biesheuvel <ardb@kernel.org>

ARM: 9150/1: Fix PID_IN_CONTEXTIDR regression when THREAD_INFO_IN_TASK=y

The code that implements the rarely used PID_IN_CONTEXTIDR feature
dereferences the 'task' field of struct thread_info directly, and this
is no longer possible when THREAD_INFO_IN_TASK=y, as the 'task' field is
omitted from the struct definition in that case. Instead, we should just
cast the thread_info pointer to a task_struct pointer, given that the
former is now the first member of the latter.

So use a helper that abstracts this, and provide implementations for
both cases.
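
A minimal sketch of such a helper, assuming THREAD_INFO_IN_TASK places thread_info at offset 0 of task_struct:

static inline struct task_struct *thread_task(struct thread_info *ti)
{
#ifdef CONFIG_THREAD_INFO_IN_TASK
        /* thread_info is the first member of task_struct, so a cast suffices */
        return (struct task_struct *)ti;
#else
        return ti->task;
#endif
}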

Reported-by: Arnd Bergmann <arnd@arndb.de>

Fixes: 18ed1c01a7dd ("ARM: smp: Enable THREAD_INFO_IN_TASK")
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>


# 18ed1c01 18-Sep-2021 Ard Biesheuvel <ardb@kernel.org>

ARM: smp: Enable THREAD_INFO_IN_TASK

Now that we no longer rely on thread_info living at the base of the task
stack to be able to access the 'current' pointer, we can wire up the
generic support for moving thread_info into the task struct itself.

Note that this requires us to update the cpu field in thread_info
explicitly, now that the core code no longer does so. Ideally, we would
switch the percpu code to access the cpu field in task_struct instead,
but this unleashes #include circular dependency hell.

Co-developed-by: Keith Packard <keithpac@amazon.com>
Signed-off-by: Keith Packard <keithpac@amazon.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>


# 50596b75 18-Sep-2021 Ard Biesheuvel <ardb@kernel.org>

ARM: smp: Store current pointer in TPIDRURO register if available

Now that the user space TLS register is assigned on every return to user
space, we can use it to keep the 'current' pointer while running in the
kernel. This removes the need to access it via thread_info, which is
located at the base of the stack, but will be moved out of there in a
subsequent patch.

Use the __builtin_thread_pointer() helper when available - this will
help GCC understand that reloading the value within the same function is
not necessary, even when using the per-task stack protector (which also
generates accesses via the TLS register). For example, the generated
code below loads TPIDRURO only once, and uses it to access both the
stack canary and the preempt_count fields.

<do_one_initcall>:
e92d 41f0 stmdb sp!, {r4, r5, r6, r7, r8, lr}
ee1d 4f70 mrc 15, 0, r4, cr13, cr0, {3}
4606 mov r6, r0
b094 sub sp, #80 ; 0x50
f8d4 34e8 ldr.w r3, [r4, #1256] ; 0x4e8 <- stack canary
9313 str r3, [sp, #76] ; 0x4c
f8d4 8004 ldr.w r8, [r4, #4] <- preempt count
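
A rough C-level picture of the resulting accessor (a sketch, not the literal header contents):

static __always_inline struct task_struct *get_current(void)
{
        struct task_struct *cur;

#if __has_builtin(__builtin_thread_pointer)
        /* lets the compiler reuse a single TLS-register read per function */
        cur = (struct task_struct *)__builtin_thread_pointer();
#else
        /* read TPIDRURO (CP15 c13, c0, 3) directly */
        asm("mrc p15, 0, %0, c13, c0, 3" : "=r" (cur));
#endif
        return cur;
}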

Co-developed-by: Keith Packard <keithpac@amazon.com>
Signed-off-by: Keith Packard <keithpac@amazon.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>


# dfbdcda2 18-Sep-2021 Ard Biesheuvel <ardb@kernel.org>

gcc-plugins: arm-ssp: Prepare for THREAD_INFO_IN_TASK support

We will be enabling THREAD_INFO_IN_TASK support for ARM, which means
that we can no longer load the stack canary value by masking the stack
pointer and taking the copy that lives in thread_info. Instead, we will
be able to load it from the task_struct directly, by using the TPIDRURO
register which will hold the current task pointer when
THREAD_INFO_IN_TASK is in effect. This is much more straightforward,
and allows us to declutter this code a bit while we're at it.

Note that this means that ARMv6 (non-v6K) SMP systems can no longer use
this feature, but those are quite rare to begin with, so this is a
reasonable trade off.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>


# 8ac6f5d7 11-Aug-2021 Arnd Bergmann <arnd@arndb.de>

ARM: 9113/1: uaccess: remove set_fs() implementation

There are no remaining callers of set_fs(), so just remove it
along with all associated code that operates on
thread_info->addr_limit.

There are still further optimizations that can be done:

- In get_user(), the address check could be moved entirely
into the out of line code, rather than passing a constant
as an argument,

- I assume the DACR handling can be simplified as we now
only change it during user access when CONFIG_CPU_SW_DOMAIN_PAN
is set, but not during set_fs().

Acked-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>


# 4e57a4dd 11-Aug-2021 Arnd Bergmann <arnd@arndb.de>

ARM: 9107/1: syscall: always store thread_info->abi_syscall

The system call number is used in a couple of places, in particular
ptrace, seccomp and /proc/<pid>/syscall.

The last one apparently never worked reliably on ARM for tasks that are
not currently getting traced.

Storing the syscall number in the normal entry path makes it work,
as well as allowing us to see if the current system call is for OABI
compat mode, which is the next thing I want to hook into.

Since the thread_info->syscall field is not just the number any more, it
is now renamed to abi_syscall. In kernels that enable both OABI and EABI,
the upper bits of this field encode 0x900000 (__NR_OABI_SYSCALL_BASE)
for OABI tasks, while normal EABI tasks do not set the upper bits. This
makes it possible to implement the in_oabi_syscall() helper later.

All other users of thread_info->syscall go through the syscall_get_nr()
helper, which in turn filters out the ABI bits.
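
A sketch of those two helpers, with the masking spelled out for illustration (the exact mask macro in the tree may differ):

#define __NR_OABI_SYSCALL_BASE  0x900000

static inline bool in_oabi_syscall(void)
{
        return IS_ENABLED(CONFIG_OABI_COMPAT) &&
               (current_thread_info()->abi_syscall & __NR_OABI_SYSCALL_BASE);
}

static inline int syscall_get_nr(struct task_struct *task, struct pt_regs *regs)
{
        /* strip the OABI marker so callers only ever see the plain number */
        return task_thread_info(task)->abi_syscall & ~__NR_OABI_SYSCALL_BASE;
}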

Note that the ABI information is lost with PTRACE_SET_SYSCALL, so one
cannot set the internal number to a particular version, but this was
already the case. We could change it to let gdb encode the ABI type along
with the syscall in a CONFIG_OABI_COMPAT-enabled kernel, but that itself
would be a (backwards-compatible) ABI change, so I don't do it here.

Acked-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>


# 12c3dca2 27-Feb-2021 Arnd Bergmann <arnd@arndb.de>

ARM: ep93xx: remove MaverickCrunch support

The MaverickCrunch support for ep93xx never made it into glibc and
was removed from gcc in its 4.8 release in 2012. It is now one of
the last parts of arch/arm/ that fails to build with the clang
integrated assembler, which is unlikely to ever want to support it.

The two alternatives are to force the use of binutils/gas when
building the crunch support, or to remove it entirely.

According to Hartley Sweeten:

"Martin Guy did a lot of work trying to get the maverick crunch working
but I was never able to successfully use it for anything. It "kind"
of works but depending on the EP93xx silicon revision there are still
a number of hardware bugs that either give imprecise or garbage results.

I have no problem with removing the kernel support for the maverick
crunch."

Unless someone else comes up with a good reason to keep it around,
remove it now. This touches mostly the ep93xx platform, but removes
a bit of code from ARM common ptrace and signal frame handling as well.

If there are remaining users of MaverickCrunch, they can use LTS
kernels for at least another five years before kernel support ends.

Link: https://lore.kernel.org/linux-arm-kernel/20210802141245.1146772-1-arnd@kernel.org/
Link: https://lore.kernel.org/linux-arm-kernel/20210226164345.3889993-1-arnd@kernel.org/
Link: https://github.com/ClangBuiltLinux/linux/issues/1272
Link: https://gcc.gnu.org/legacy-ml/gcc/2008-03/msg01063.html
Cc: "Martin Guy" <martinwguy@martinwguy@gmail.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>


# 32d59773 09-Oct-2020 Jens Axboe <axboe@kernel.dk>

arm: add support for TIF_NOTIFY_SIGNAL

Wire up TIF_NOTIFY_SIGNAL handling for arm.

Cc: linux-arm-kernel@lists.infradead.org
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Jens Axboe <axboe@kernel.dk>


# 5615f69b 25-Oct-2020 Linus Walleij <linus.walleij@linaro.org>

ARM: 9016/2: Initialize the mapping of KASan shadow memory

This patch initializes the KASan shadow region's page table and memory.
There are two stages to KASan initialization:

1. At the early boot stage the whole shadow region is mapped to just
one physical page (kasan_zero_page). This is done by the function
kasan_early_init, which is called by __mmap_switched (arch/arm/kernel/
head-common.S).

2. After paging_init has been called, we use kasan_zero_page as the zero
shadow for memory that KASan does not need to track, and we allocate
new shadow space for the memory that KASan does need to track. This is
done by the function kasan_init, which is called by setup_arch.

When using KASan we also need to increase the THREAD_SIZE_ORDER
from 1 to 2 as the extra calls for shadow memory use quite a bit
of stack.

As we need to make a temporary copy of the PGD when setting up
shadow memory we create a helpful PGD_SIZE definition for both
LPAE and non-LPAE setups.

The KASan core code unconditionally calls pud_populate() so this
needs to be changed from BUG() to do {} while (0) when building
with KASan enabled.

After the initial development by Andrey Ryabinin, several modifications
have been made to this code:

Abbott Liu <liuwenliang@huawei.com>
- Add support for ARM LPAE: if LPAE is enabled, the KASan shadow region's
mapping table needs to be copied in the pgd_alloc() function.
- Change kasan_pte_populate, kasan_pmd_populate, kasan_pud_populate and
kasan_pgd_populate from the .meminit.text section to the .init.text section.
Reported by Florian Fainelli <f.fainelli@gmail.com>

Linus Walleij <linus.walleij@linaro.org>:
- Drop the custom manipulation of TTBR0 and just use
cpu_switch_mm() to switch the pgd table.
- Adapt to handle 4th level page table folding.
- Rewrite the entire page directory and page entry initialization
sequence to be recursive, based on arm64's kasan_init.c.

Ard Biesheuvel <ardb@kernel.org>:
- Necessary underlying fixes.
- Crucial bug fixes to the memory set-up code.

Co-developed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Co-developed-by: Abbott Liu <liuwenliang@huawei.com>
Co-developed-by: Ard Biesheuvel <ardb@kernel.org>

Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: kasan-dev@googlegroups.com
Cc: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G
Tested-by: Florian Fainelli <f.fainelli@gmail.com> # Brahma SoCs
Tested-by: Ahmad Fatoum <a.fatoum@pengutronix.de> # i.MX6Q
Reported-by: Russell King - ARM Linux <rmk+kernel@armlinux.org.uk>
Reported-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# a6342915 22-Jun-2020 Peter Zijlstra <peterz@infradead.org>

arm: Break cyclic percpu include

In order to use <asm/percpu.h> in irqflags.h, we need to make sure
asm/percpu.h does not itself depend on irqflags.h.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lkml.kernel.org/r/20200623083721.454517573@infradead.org


# 1acb2249 28-Jan-2020 Frederic Weisbecker <frederic@kernel.org>

arm: Remove TIF_NOHZ

The ARM entry code calls context tracking from the fast path. TIF_NOHZ is unused
and can be safely removed.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@armlinux.org.uk>


# d2912cb1 04-Jun-2019 Thomas Gleixner <tglx@linutronix.de>

treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500

Based on 2 normalized pattern(s):

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation #

extracted by the scancode license scanner the SPDX license identifier

GPL-2.0-only

has been chosen to replace the boilerplate/reference in 4122 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Enrico Weigelt <info@metux.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>


# 189af465 06-Dec-2018 Ard Biesheuvel <ardb@kernel.org>

ARM: smp: add support for per-task stack canaries

On ARM, we currently only change the value of the stack canary when
switching tasks if the kernel was built for UP. On SMP kernels, this
is impossible since the stack canary value is obtained via a global
symbol reference, which means
a) all running tasks on all CPUs must use the same value
b) we can only modify the value when no kernel stack frames are live
on any CPU, which is effectively never.

So instead, use a GCC plugin to add an RTL pass that replaces each
reference to the address of the __stack_chk_guard symbol with an
expression that produces the address of the 'stack_canary' field
that is added to struct thread_info. This way, each task will use
its own randomized value.

Cc: Russell King <linux@armlinux.org.uk>
Cc: Kees Cook <keescook@chromium.org>
Cc: Emese Revfy <re.emese@gmail.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Laura Abbott <labbott@redhat.com>
Cc: kernel-hardening@lists.openwall.com
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Kees Cook <keescook@chromium.org>


# 3aa2df6e 11-Sep-2018 Julien Thierry <julien.thierry.kdev@gmail.com>

ARM: 8791/1: vfp: use __copy_to_user() when saving VFP state

Use __copy_to_user() rather than __put_user_error() for individual
members when saving VFP state.
This has the benefit of disabling/enabling PAN once per copied struct
instead of once per write.

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# 42019fc5 09-Jul-2018 Russell King <rmk+kernel@armlinux.org.uk>

ARM: vfp: use __copy_from_user() when restoring VFP state

__get_user_error() is used as a fast accessor to make copying structure
members in the signal handling path as efficient as possible. However,
with software PAN and the recent Spectre variant 1, the efficiency is
reduced as these are no longer fast accessors.

In the case of software PAN, it has to switch the domain register around
each access, and with Spectre variant 1, it would have to repeat the
access_ok() check for each access.

Use __copy_from_user() rather than __get_user_error() for individual
members when restoring VFP state.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>


# 0500871f 02-Jan-2018 David Howells <dhowells@redhat.com>

Construct init thread stack in the linker script rather than by union

Construct the init thread stack in the linker script rather than doing it
by means of a union so that ia64's init_task.c can be got rid of.

The following symbols are then made available from INIT_TASK_DATA() linker
script macro:

init_thread_union
init_stack

INIT_TASK_DATA() also expands the region to THREAD_SIZE to accommodate the
size of the init stack. init_thread_union is given its own section so that
it can be placed into the stack space in the right order. I'm assuming
that the ia64 ordering is correct and that the task_struct is first and the
thread_info second.

Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Tony Luck <tony.luck@intel.com>
Tested-by: Will Deacon <will.deacon@arm.com> (arm64)
Tested-by: Palmer Dabbelt <palmer@sifive.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>


# 2404269b 07-Sep-2017 Thomas Garnier <thgarnie@google.com>

Revert "arm/syscalls: Check address limit on user-mode return"

This reverts commit 73ac5d6a2b6ac3ae8d1e1818f3e9946f97489bc9.

The work pending loop can call set_fs after addr_limit_user_check
removed the _TIF_FSCHECK flag. This may happen at any time based on how
ARM handles alignment exceptions. It leads to an infinite loop condition.

After discussion, it has been agreed that the generic approach is not
tailored to the ARM architecture and any fix might not be complete. This
patch will be replaced by an architecture specific implementation. The
work flag approach will be kept for other architectures.

Reported-by: Leonard Crestez <leonard.crestez@nxp.com>
Signed-off-by: Thomas Garnier <thgarnie@google.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Pratyush Anand <panand@redhat.com>
Cc: Dave Martin <Dave.Martin@arm.com>
Cc: Will Drewry <wad@chromium.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: David Howells <dhowells@redhat.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: linux-api@vger.kernel.org
Cc: Yonghong Song <yhs@fb.com>
Cc: linux-arm-kernel@lists.infradead.org
Link: http://lkml.kernel.org/r/1504798247-48833-3-git-send-email-keescook@chromium.org


# 73ac5d6a 14-Jun-2017 Thomas Garnier <thgarnie@google.com>

arm/syscalls: Check address limit on user-mode return

Ensure the address limit is a user-mode segment before returning to
user-mode. Otherwise a process can corrupt kernel-mode memory and
elevate privileges [1].

The set_fs function sets the TIF_SETFS flag to force a slow path on
return. In the slow path, the address limit is checked to be USER_DS if
needed.

The TIF_SETFS flag is added to _TIF_WORK_MASK, shifting _TIF_SYSCALL_WORK
for ARM instruction immediate support. The global work mask is too big
to be used in a single instruction, so adapt ret_fast_syscall.

[1] https://bugs.chromium.org/p/project-zero/issues/detail?id=990

Signed-off-by: Thomas Garnier <thgarnie@google.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: kernel-hardening@lists.openwall.com
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Miroslav Benes <mbenes@suse.cz>
Cc: Chris Metcalf <cmetcalf@mellanox.com>
Cc: Pratyush Anand <panand@redhat.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: Will Drewry <wad@chromium.org>
Cc: linux-api@vger.kernel.org
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Link: http://lkml.kernel.org/r/20170615011203.144108-2-thgarnie@google.com


# 716ff192 11-Sep-2015 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: domains: thread_info.h no longer needs asm/domains.h

As of 1eef5d2f1b46 ("ARM: domains: switch to keeping domain value in
register") we no longer need to include asm/domains.h into
asm/thread_info.h. Remove it.

Tested-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 3302cadd 20-Aug-2015 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: entry: efficiency cleanups

Make the "fast" syscall return path fast again. The addition of IRQ
tracing and context tracking has made this path grossly inefficient.
When these options are enabled, we can do much better by saving the
syscall return code on the stack - we then don't need to save a bunch
of registers around every single callout to C code.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 1eef5d2f 19-Aug-2015 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: domains: switch to keeping domain value in register

Rather than modifying both the domain access control register and our
per-thread copy, modify only the domain access control register, and
use the per-thread copy to save and restore the register over context
switches. We can also avoid the explicit initialisation of the
init thread_info structure.

This allows us to avoid needing to gain access to the thread information
at the uaccess control sites.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# a4980448 13-Jul-2014 Richard Weinberger <richard@nod.at>

arm: Remove signal translation and exec_domain

As execution domain support is gone we can remove
signal translation from the signal code and remove
exec_domain from thread_info.

Signed-off-by: Richard Weinberger <richard@nod.at>


# f56141e3 12-Feb-2015 Andy Lutomirski <luto@amacapital.net>

all arches, signal: move restart_block to struct task_struct

If an attacker can cause a controlled kernel stack overflow, overwriting
the restart block is a very juicy exploit target. This is because the
restart_block is held in the same memory allocation as the kernel stack.

Moving the restart block to struct task_struct prevents this exploit by
making the restart_block harder to locate.

Note that there are other fields in thread_info that are also easy
targets, at least on some architectures.

It's also a decent simplification, since the restart code is more or less
identical on all architectures.

[james.hogan@imgtec.com: metag: align thread_info::supervisor_stack]
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: David Miller <davem@davemloft.net>
Acked-by: Richard Weinberger <richard@nod.at>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
Cc: Steven Miao <realmz6@gmail.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
Cc: Mikael Starvik <starvik@axis.com>
Cc: Jesper Nilsson <jesper.nilsson@axis.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Helge Deller <deller@gmx.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
Tested-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Chen Liqin <liqin.linux@gmail.com>
Cc: Lennox Wu <lennox.wu@gmail.com>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Chris Zankel <chris@zankel.net>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 3f4aa45c 27-Nov-2014 Vladimir Murzin <vladimir.murzin@arm.com>

ARM: 8226/1: cacheflush: get rid of restarting block

We cannot restart cacheflush safely if a process provides a user-defined
signal handler and a signal is pending. In this case -EINTR is returned
and it is expected that the process re-invokes the syscall. However, there are
a few problems with that:
* it looks like nobody bothers checking the return value from cacheflush
* but even if they did, we don't provide the restart address for that, so the
process has to use the same range again
* ...and again, which might lead to looping forever

So, remove cacheflush restarting code and terminate cache flushing
as early as fatal signal is pending.

Cc: stable@vger.kernel.org # 3.12+
Reported-by: Chanho Min <chanho.min@lge.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# f6c9cbf0 26-Sep-2014 Behan Webster <behanw@converseincode.com>

ARM: 8173/1: Calculate current_thread_info from current_stack_pointer

Use the global current_stack_pointer to get the value of the stack pointer.
This change supports being able to compile the kernel with both gcc and clang.
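
The resulting helper is essentially the following (a sketch of the pre-THREAD_INFO_IN_TASK definition):

static inline struct thread_info *current_thread_info(void)
{
        return (struct thread_info *)
                (current_stack_pointer & ~(THREAD_SIZE - 1));
}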

Signed-off-by: Behan Webster <behanw@converseincode.com>
Reviewed-by: Mark Charlebois <charlebm@gmail.com>
Reviewed-by: Jan-Simon Möller <dl9pf@gmx.de>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 0abc08ba 26-Sep-2014 Behan Webster <behanw@converseincode.com>

ARM: 8170/1: Add global named register current_stack_pointer for ARM

Define a global named register for current_stack_pointer. The use of this new
variable guarantees that both gcc and clang can access this register in C code.
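
The definition amounts to a single line (give or take the exact spelling in the header):

register unsigned long current_stack_pointer asm ("sp");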

Signed-off-by: Behan Webster <behanw@converseincode.com>
Reviewed-by: Jan-Simon Möller <dl9pf@gmx.de>
Reviewed-by: Mark Charlebois <charlebm@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 9a2b51b6 18-Jun-2014 Andrey Ryabinin <ryabinin.a.a@gmail.com>

ARM: 8078/1: get rid of hardcoded assumptions about kernel stack size

Changing kernel stack size on arm is not as simple as it should be:
1) THREAD_SIZE macro doesn't respect PAGE_SIZE and THREAD_SIZE_ORDER
2) stack size is hardcoded in get_thread_info macro

This patch fixes it by calculating THREAD_SIZE and thread_info address
taking into account PAGE_SIZE and THREAD_SIZE_ORDER.

Now changing stack size becomes simply changing THREAD_SIZE_ORDER.
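
A sketch of the resulting definitions, assuming the default order of 1:

#define THREAD_SIZE_ORDER       1
#define THREAD_SIZE             (PAGE_SIZE << THREAD_SIZE_ORDER)
#define THREAD_START_SP         (THREAD_SIZE - 8)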

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 870cbe8c 03-Jun-2014 Nikolay Borisov <Nikolay.Borisov@arm.com>

ARM: 8069/1: Make thread_save_fp macro aware of THUMB2 mode

The thread_save_fp macro has been defined so that it always reads the fp member
of the cpu_context_save struct. However, in the case of THUMB2 the fp is saved
not in the fp (r11) member but rather in r7.

This patch changes the way the macro is defined such that FP is read from the
correct place depending on whether we are a THUMB2 kernel or not. This enables
backtraces in situations such as "echo t > /proc/sysrq-trigger", or showing the
function in which a process is sleeping when "ps -Al" is invoked.
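
A sketch of the corrected macro; the field names follow struct cpu_context_save, and the macro in the tree is spelled thread_saved_fp():

#ifdef CONFIG_THUMB2_KERNEL
#define thread_saved_fp(tsk)    \
        ((unsigned long)(task_thread_info(tsk)->cpu_context.r7))
#else
#define thread_saved_fp(tsk)    \
        ((unsigned long)(task_thread_info(tsk)->cpu_context.fp))
#endif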

Signed-off-by: Nikolay Borisov <Nikolay.Borisov@arm.com>
Reviewed-by: Anurag Aggarwal <anurag19aggarwal@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# c7edc9e3 07-Mar-2014 David A. Long <dave.long@linaro.org>

ARM: add uprobes support

Using Rabin Vincent's ARM uprobes patches as a base, enable uprobes
support on ARM.

Caveats:

- Thumb is not supported

Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: David A. Long <dave.long@linaro.org>


# 00d1a39e 17-Sep-2013 Thomas Gleixner <tglx@linutronix.de>

preempt: Make PREEMPT_ACTIVE generic

No point in having this bit defined by architecture.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20130917183629.090698799@linutronix.de


# 28256d61 13-May-2013 Will Deacon <will@kernel.org>

ARM: cacheflush: split user cache-flushing into interruptible chunks

Flushing a large, non-faulting VMA from userspace can potentially result
in a long time spent flushing the cache line-by-line without preemption
occurring (in the case of CONFIG_PREEMPT=n).

Whilst this doesn't affect the stability of the system, it can certainly
affect the responsiveness and CPU availability for other tasks.

This patch splits up the user cacheflush code so that it flushes in
chunks of a page. After each chunk has been flushed, we may reschedule
if appropriate and, before processing the next chunk, we allow any
pending signals to be handled before resuming from where we left off.
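
In outline, the chunked loop looks something like this (a sketch; the actual flush and the signal handling between chunks are elided):

do {
        unsigned long chunk = min(end - start, PAGE_SIZE);

        /* ... flush [start, start + chunk) line by line here ... */

        cond_resched();         /* let other tasks run between chunks */
        start += chunk;
} while (start < end);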

Signed-off-by: Will Deacon <will.deacon@arm.com>


# bdae73cd 23-Jul-2013 Catalin Marinas <catalin.marinas@arm.com>

ARM: 7790/1: Fix deferred mm switch on VIVT processors

As of commit b9d4d42ad9 (ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on
pre-ARMv6 CPUs), the mm switching on VIVT processors is done in the
finish_arch_post_lock_switch() function to avoid whole cache flushing
with interrupts disabled. The need for deferred mm switch is stored as a
thread flag (TIF_SWITCH_MM). However, with preemption enabled, we can
have another thread switch before finish_arch_post_lock_switch(). If the
new thread has the same mm as the previous 'next' thread, the scheduler
will not call switch_mm() and the TIF_SWITCH_MM flag won't be set for
the new thread.

This patch moves the switch pending flag to the mm_context_t structure
since this is specific to the mm rather than thread.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Marc Kleine-Budde <mkl@pengutronix.de>
Tested-by: Marc Kleine-Budde <mkl@pengutronix.de>
Cc: <stable@vger.kernel.org> # 3.5+
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# a4780ade 18-Jun-2013 André Hentschel <nerv@dawncrow.de>

ARM: 7735/2: Preserve the user r/w register TPIDRURW on context switch and fork

Since commit 6a1c53124aa1 the user writeable TLS register was zeroed to
prevent it from being used as a covert channel between two tasks.

There are more and more applications coming to Windows RT,
Wine could support them, but mostly they expect to have
the thread environment block (TEB) in TPIDRURW.

This patch preserves that register per thread instead of clearing it.
Unlike the TPIDRURO, which is already switched, the TPIDRURW
can be updated from userspace so needs careful treatment in the case that we
modify TPIDRURW and call fork(). To avoid this we must always read
TPIDRURW in copy_thread.
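
Reading the live value boils down to an mrc from TPIDRURO's writeable sibling, TPIDRURW (CP15 c13, c0, 2); a sketch of such an accessor, with an illustrative name:

static inline unsigned long read_tpidrurw(void)
{
        unsigned long val;

        asm("mrc p15, 0, %0, c13, c0, 2" : "=r" (val));  /* TPIDRURW */
        return val;
}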

Signed-off-by: André Hentschel <nerv@dawncrow.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# b0088480 28-Mar-2013 Kevin Hilman <khilman@deeprootsystems.com>

ARM: 7688/1: add support for context tracking subsystem

commit 91d1aa43 (context_tracking: New context tracking susbsystem)
generalized parts of the RCU userspace extended quiescent state into
the context tracking subsystem. Context tracking is then used
to implement adaptive tickless (a.k.a. extended nohz).

To support the new context tracking subsystem on ARM, the user/kernel
boundary transitions need to be instrumented.

For exceptions and IRQs in usermode, the existing usr_entry macro is
used to instrument the user->kernel transition. For the return to
usermode path, the ret_to_user* path is instrumented. Using the
usr_entry macro, this covers interrupts in userspace, data abort and
prefetch abort exceptions in userspace as well as undefined exceptions
in userspace (which is where FP emulation and VFP are handled.)

For syscalls, the slow return path is covered by instrumenting the
ret_to_user path. In addition, the syscall entry point is
instrumented which covers the user->kernel transition for both fast
and slow syscalls, and an additional instrumentation point is added
for the fast syscall return path (ret_fast_syscall).

Cc: Mats Liljegren <mats.liljegren@enea.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 9b790d71 15-Nov-2012 Kees Cook <keescook@chromium.org>

ARM: 7578/1: arch/move secure_computing into trace

There is very little difference in the TIF_SECCOMP and TIF_SYSCALL_WORK
path in entry-common.S, so merge TIF_SECCOMP into TIF_SYSCALL_WORK and
move seccomp into the syscall_trace_enter() handler.

Expanded some of the tracehook logic into the callers to make this code
more readable. Since tracehook needs to change registers, this portion
is best left in its own function instead of being copy/pasted into the callers.

Additionally, the return value for secure_computing() is now checked
and a -1 value will result in the system call being skipped.

Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Will Drewry <wad@chromium.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 16a80163 01-Jun-2012 Al Viro <viro@zeniv.linux.org.uk>

sanitize tsk_is_polling()

Make default just return 0. The current default (checking
TIF_POLLING_NRFLAG) is taken to architectures that need it;
ones that don't do polling in their idle threads don't need
to define TIF_POLLING_NRFLAG at all.

ia64 defined both TS_POLLING (used by its tsk_is_polling())
and TIF_POLLING_NRFLAG (not used at all). Killed the latter...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>


# 1f66e06f 07-Sep-2012 Wade Farnsworth <wade_farnsworth@mentor.com>

ARM: 7524/1: support syscall tracing

As specified by ftrace-design.txt, TIF_SYSCALL_TRACEPOINT was
added, as well as NR_syscalls in asm/unistd.h. Additionally,
__sys_trace was modified to call trace_sys_enter and
trace_sys_exit when appropriate.

Tests #2 - #4 of "perf test" now complete successfully.

Signed-off-by: Steven Walter <stevenrwalter@gmail.com>
Signed-off-by: Wade Farnsworth <wade_farnsworth@mentor.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 9fc31ddc 29-Aug-2012 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: Don't unconditionally bloat thread_info

There is no point reserving space at the bottom of the kernel stack for
per-thread crunch state and per-thread VFP state if these are not
supported by the kernel being built. Remove these members from the
thread union when these features are disabled.
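
Sketch of the resulting conditional members in struct thread_info (field and type names as in the existing struct):

struct thread_info {
        /* ... */
#ifdef CONFIG_CRUNCH
        struct crunch_state     crunchstate;
#endif
#ifdef CONFIG_VFP
        union vfp_state         vfpstate;
#endif
        /* ... */
};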

Reported-by: Tim Bird <tim.bird@am.sony.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 66285217 19-Jul-2012 Al Viro <viro@zeniv.linux.org.uk>

ARM: 7474/1: get rid of TIF_SYSCALL_RESTARTSYS

just let do_work_pending() return 1 on normal local restarts and
-1 on those that had been caused by ERESTART_RESTARTBLOCK (and 0
is still "all done, sod off to userland now"). And let the asm
glue flip scno to restart_syscall(2) one if it got negative from
us...

[will: resolved conflicts with audit fixes]

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# ad82cc08 19-Jul-2012 Will Deacon <will@kernel.org>

ARM: 7470/1: Revert "7443/1: Revert "new way of handling ERESTART_RESTARTBLOCK""

This reverts commit 433e2f307beff8adba241646ce9108544e0c5a03.

Conflicts:

arch/arm/kernel/ptrace.c

Reintroduce the new syscall restart handling in preparation for further
patches from Al Viro.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 433e2f30 04-Jul-2012 Will Deacon <will@kernel.org>

ARM: 7443/1: Revert "new way of handling ERESTART_RESTARTBLOCK"

This reverts commit 6b5c8045ecc7e726cdaa2a9d9c8e5008050e1252.

Conflicts:

arch/arm/kernel/ptrace.c

The new syscall restarting code can lead to problems if we take an
interrupt in userspace just before restarting the svc instruction. If
a signal is delivered when returning from the interrupt, the
TIF_SYSCALL_RESTARTSYS will remain set and cause any syscalls executed
from the signal handler to be treated as a restart of the previously
interrupted system call. This includes the final sigreturn call, meaning
that we may fail to exit from the signal context. Furthermore, if a
system call made from the signal handler requires a restart via the
restart_block, it is possible to clear the thread flag and fail to
restart the originally interrupted system call.

The right solution to this problem is to perform the restarting in the
kernel, avoiding the possibility of handling a further signal before the
restart is complete. Since we're almost at -rc6, let's revert the new
method for now and aim for in-kernel restarting at a later date.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 7d181b96 06-May-2012 Al Viro <viro@zeniv.linux.org.uk>

arm: bury unused _TIF_RESTORE_SIGMASK

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>


# 6b5c8045 02-May-2012 Al Viro <viro@zeniv.linux.org.uk>

arm: new way of handling ERESTART_RESTARTBLOCK

new "syscall start" flag; handled in syscall_trace() by switching
syscall number to that of restart_syscall(2). Restarts of that
kind (ERESTART_RESTARTBLOCK) are handled by setting that bit;
syscall number is not modified until the actual call.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>


# 84849b3e 24-Apr-2012 Al Viro <viro@zeniv.linux.org.uk>

arm: trim _TIF_WORK_MASK, get rid of useless test and branch...

Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>


# 2498814f 23-Apr-2012 Will Deacon <will@kernel.org>

ARM: 7399/1: vfp: move user vfp state save/restore code out of signal.c

The user VFP state must be preserved (subject to ucontext modifications)
across invocation of a signal handler and this is currently handled by
vfp_{preserve,restore}_context in signal.c

Since this code requires intimate low-level knowledge of the VFP state,
this patch moves it into vfpmodule.c.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 7fec1b57 28-Nov-2011 Catalin Marinas <catalin.marinas@arm.com>

ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on ASID-capable CPUs

Since the ASIDs must be unique to an mm across all the CPUs in a system,
the __new_context() function needs to broadcast a context reset event to
all the CPUs during ASID allocation if a roll-over occurred. Such IPIs
cannot be issued with interrupts disabled and ARM had to define
__ARCH_WANT_INTERRUPTS_ON_CTXSW.

This patch changes the check_context() function to
check_and_switch_context() called from switch_mm(). In case of
ASID-capable CPUs (ARMv6 onwards), if a new ASID is needed and the
interrupts are disabled, it defers the __new_context() and
cpu_switch_mm() calls to the post-lock switch hook where the interrupts
are enabled. Setting the reserved TTBR0 was also moved to
check_and_switch_context() from cpu_v7_switch_mm().

Reviewed-by: Will Deacon <will.deacon@arm.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Frank Rowand <frank.rowand@am.sony.com>
Tested-by: Marc Zyngier <Marc.Zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>


# 29ef73b7 03-Jan-2012 Nathaniel Husted <nhusted@gmail.com>

Kernel: Audit Support For The ARM Platform

This patch provides functionality to audit system call events on the
ARM platform. The implementation was based off the structure of the
MIPS platform and information in this
(http://lists.fedoraproject.org/pipermail/arm/2009-October/000382.html)
mailing list thread. The required audit_syscall_exit and
audit_syscall_entry checks were added to ptrace using the standard
registers for system call values (r0 through r3). A thread information
flag was added for auditing (TIF_SYSCALL_AUDIT) and a meta-flag was
added (_TIF_SYSCALL_WORK) to simplify modifications to the syscall
entry/exit. Now, if either the TRACE flag is set or the AUDIT flag is
set, the syscall_trace function will be executed. The proper changes
were made to Kconfig to allow CONFIG_AUDITSYSCALL to be enabled.

Due to platform availability limitations, this patch was only tested
on the Android platform running the modified "android-goldfish-2.6.29"
kernel. A test compile was performed using Code Sourcery's
cross-compilation toolset and the current linux-3.0 stable kernel. The
changes compile without error. I'm hoping, due to the simple modifications,
the patch is "obviously correct".

Signed-off-by: Nathaniel Husted <nhusted@gmail.com>
Signed-off-by: Eric Paris <eparis@redhat.com>


# d88e4cb6 21-Nov-2011 Tejun Heo <tj@kernel.org>

freezer: remove now unused TIF_FREEZE

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: linux-arch@vger.kernel.org


# 70c70d97 26-Aug-2010 Nicolas Pitre <nico@fluxnic.net>

ARM: SECCOMP support

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>


# 0ddc9324 14-May-2010 Andreas Dilger <adilger@dilger.ca>

add descriptive comment for TIF_MEMDIE task flag declaration.

Signed-off-by: Andreas Dilger <adilger@dilger.ca>
Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>


# ad187f95 06-Feb-2010 Russell King <rmk+kernel@arm.linux.org.uk>

ARM: vfp ptrace: no point flushing hw context for PTRACE_GETVFPREGS

If we're only reading the VFP context via the ptrace call, there's
no need to invalidate the hardware context - we only need to do that
on PTRACE_SETVFPREGS. This allows more efficient monitoring of a
traced task.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# d0420c83 02-Sep-2009 David Howells <dhowells@redhat.com>

KEYS: Extend TIF_NOTIFY_RESUME to (almost) all architectures [try #6]

Implement TIF_NOTIFY_RESUME for most of those architectures in which it isn't yet
available, and, whilst we're at it, have it call the appropriate tracehook.

After this patch, blackfin, m68k* and xtensa still lack support and need
alteration of assembly code to make it work.

Resume notification can then be used (by a later patch) to install a new
session keyring on the parent of a process.

Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>

cc: linux-arch@vger.kernel.org
Signed-off-by: James Morris <jmorris@namei.org>


# 36984265 14-Aug-2009 Mikael Pettersson <mikpe@it.uu.se>

ARM: 5677/1: ARM support for TIF_RESTORE_SIGMASK/pselect6/ppoll/epoll_pwait

This patch adds support for TIF_RESTORE_SIGMASK to ARM's
signal handling, which allows hooking up the pselect6, ppoll,
and epoll_pwait syscalls on ARM.

Tested here with eabi userspace and a test program with a
deliberate race between a child's exit and the parent's
sigprocmask/select sequence. Using sys_pselect6() instead
of sigprocmask/select reliably prevents the race.

The other arch's support for TIF_RESTORE_SIGMASK has evolved
over time:

In 2.6.16:
- add TIF_RESTORE_SIGMASK which parallels TIF_SIGPENDING
- test both when checking for pending signal [changed later]
- reimplement sys_sigsuspend() to use current->saved_sigmask,
TIF_RESTORE_SIGMASK [changed later], and -ERESTARTNOHAND;
ditto for sys_rt_sigsuspend(), but drop private code and
use common code via __ARCH_WANT_SYS_RT_SIGSUSPEND;
- there are now no "extra" calls to do_signal() so its oldset
parameter is always &current->blocked so need not be passed,
also its return value is changed to void
- change handle_signal() to return 0/-errno
- change do_signal() to honor TIF_RESTORE_SIGMASK:
+ get oldset from current->saved_sigmask if TIF_RESTORE_SIGMASK
is set
+ if handle_signal() was successful then clear TIF_RESTORE_SIGMASK
+ if no signal was delivered and TIF_RESTORE_SIGMASK is set then
clear it and restore the sigmask
- hook up sys_pselect6() and sys_ppoll()

In 2.6.19:
- hook up sys_epoll_pwait()

In 2.6.26:
- allow archs to override how TIF_RESTORE_SIGMASK is implemented;
default set_restore_sigmask() sets both TIF_RESTORE_SIGMASK and
TIF_SIGPENDING; archs need now just test TIF_SIGPENDING again
when checking for pending signal work; some archs now implement
TIF_RESTORE_SIGMASK as a secondary/non-atomic thread flag bit
- call set_restore_sigmask() in sys_sigsuspend() instead of setting
TIF_RESTORE_SIGMASK

In 2.6.29-rc:
- kill sys_pselect7() which no arch wanted

So for 2.6.31-rc6/ARM this patch does the following:
- Add TIF_RESTORE_SIGMASK. Use the generic set_restore_sigmask()
which sets both TIF_SIGPENDING and TIF_RESTORE_SIGMASK, so
TIF_RESTORE_SIGMASK need not claim one of the scarce low thread
flags, and existing TIF_SIGPENDING and _TIF_WORK_MASK tests need
not be extended for TIF_RESTORE_SIGMASK.
- sys_sigsuspend() is reimplemented to use current->saved_sigmask
and set_restore_sigmask(), making it identical to most other archs
- The private code for sys_rt_sigsuspend() is removed, instead
generic code supplies it via __ARCH_WANT_SYS_RT_SIGSUSPEND.
- sys_sigsuspend() and sys_rt_sigsuspend() no longer need a pt_regs
parameter, so their assembly code wrappers are removed.
- handle_signal() is changed to return 0 on success or -errno.
- The oldset parameter to do_signal() is now redundant and removed,
and the return value is now also redundant and changed to void.
- do_signal() is changed to honor TIF_RESTORE_SIGMASK:
+ get oldset from current->saved_sigmask if TIF_RESTORE_SIGMASK
is set
+ if handle_signal() was successful then clear TIF_RESTORE_SIGMASK
+ if no signal was delivered and TIF_RESTORE_SIGMASK is set then
clear it and restore the sigmask
- Hook up sys_pselect6, sys_ppoll, and sys_epoll_pwait.

Signed-off-by: Mikael Pettersson <mikpe@it.uu.se>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# c99e6efe 10-Jul-2009 Peter Zijlstra <a.p.zijlstra@chello.nl>

sched: INIT_PREEMPT_COUNT

Pull the initial preempt_count value into a single
definition site.

Maintainers for: alpha, ia64 and m68k, please have a look,
your arch code is funny.

The header magic is a bit odd, but similar to the KERNEL_DS
one: CPP waits to expand these macros until the
INIT_THREAD_INFO macro itself is expanded, which is in
arch/*/kernel/init_task.c where we've already included
sched.h so we're good.

Cc: tony.luck@intel.com
Cc: rth@twiddle.net
Cc: geert@linux-m68k.org
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Matt Mackall <mpm@selenic.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 2d7c11bf 11-Feb-2009 Catalin Marinas <catalin.marinas@arm.com>

[ARM] 5382/1: unwind: Reorganise the stacktrace support

This patch changes the walk_stacktrace and its callers for easier
integration of stack unwinding. The arch/arm/kernel/stacktrace.h file is
also moved to arch/arm/include/asm/stacktrace.h.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 3d1228ea 11-Feb-2009 Catalin Marinas <catalin.marinas@arm.com>

[ARM] 5387/1: Add ptrace VFP support on ARM

This patch adds ptrace support for setting and getting the VFP registers
using PTRACE_SETVFPREGS and PTRACE_GETVFPREGS. The user_vfp structure
defined in asm/user.h contains 32 double registers (to cover VFPv3 and
Neon hardware) and the FPSCR register.
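
For reference, the exported structure is essentially:

struct user_vfp {
        unsigned long long fpregs[32];  /* d0-d31, covers VFPv3/NEON */
        unsigned long fpscr;
};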

Cc: Paul Brook <paul@codesourcery.com>
Cc: Daniel Jacobowitz <dan@codesourcery.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 1de765c1 06-Sep-2008 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] remove pc_pointer()

pc_pointer() was a function to mask the PC for 26-bit ARMs, which
we no longer support. Remove it.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>


# 4baa9922 02-Aug-2008 Russell King <rmk@dyn-67.arm.linux.org.uk>

[ARM] move include/asm-arm to arch/arm/include/asm

Move platform independent header files to arch/arm/include/asm, leaving
those in asm/arch* and asm/plat* alone.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>