History log of /linux-master/arch/mips/kernel/process.c
Revision Date Author Comments
# a58a1734 30-Nov-2023 Thomas Bogendoerfer <tsbogend@alpha.franken.de>

MIPS: kernel: Clear FPU states when setting up kernel threads

io_uring sets up the io worker kernel thread via a syscall out of a
user space process. This process might have used the FPU, and since
copy_thread() didn't clear FPU state for kernel threads, a BUG()
is triggered for using the FPU inside the kernel. Move code around
to always clear FPU state for user and kernel threads.

Cc: stable@vger.kernel.org
Reported-by: Aurelien Jarno <aurel32@debian.org>
Closes: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1055021
Suggested-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>


# 8d539b84 04-Aug-2023 Douglas Anderson <dianders@chromium.org>

nmi_backtrace: allow excluding an arbitrary CPU

The APIs that allow backtracing across CPUs have always had a way to
exclude the current CPU. This convenience means callers didn't need to
find a place to allocate a CPU mask just to handle the common case.

Let's extend the API to take a CPU ID to exclude instead of just a
boolean. This isn't any more complex for the API to handle and allows the
hardlockup detector to exclude a different CPU (the one it already did a
trace for) without needing to find space for a CPU mask.

Arguably, this new API also encourages safer behavior. Specifically if
the caller wants to avoid tracing the current CPU (maybe because they
already traced the current CPU) this makes it more obvious to the caller
that they need to make sure that the current CPU ID can't change.

[akpm@linux-foundation.org: fix trigger_allbutcpu_cpu_backtrace() stub]
Link: https://lkml.kernel.org/r/20230804065935.v4.1.Ia35521b91fc781368945161d7b28538f9996c182@changeid
Signed-off-by: Douglas Anderson <dianders@chromium.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: kernel test robot <lkp@intel.com>
Cc: Lecopzer Chen <lecopzer.chen@mediatek.com>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Pingfan Liu <kernelfans@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>


# 071c44e4 14-Feb-2023 Josh Poimboeuf <jpoimboe@kernel.org>

sched/idle: Mark arch_cpu_idle_dead() __noreturn

Before commit 076cbf5d2163 ("x86/xen: don't let xen_pv_play_dead()
return"), in Xen, when a previously offlined CPU was brought back
online, it unexpectedly resumed execution where it left off in the
middle of the idle loop.

There were some hacks to make that work, but the behavior was surprising
as do_idle() doesn't expect an offlined CPU to return from the dead (in
arch_cpu_idle_dead()).

Now that Xen has been fixed, and the arch-specific implementations of
arch_cpu_idle_dead() also don't return, give it a __noreturn attribute.

This will cause the compiler to complain if an arch-specific
implementation might return. It also improves code generation for both
caller and callee.

Also fixes the following warning:

vmlinux.o: warning: objtool: do_idle+0x25f: unreachable instruction

Reported-by: Paul E. McKenney <paulmck@kernel.org>
Tested-by: Paul E. McKenney <paulmck@kernel.org>
Link: https://lore.kernel.org/r/60d527353da8c99d4cf13b6473131d46719ed16d.1676358308.git.jpoimboe@kernel.org
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>


# 8032bf12 09-Oct-2022 Jason A. Donenfeld <Jason@zx2c4.com>

treewide: use get_random_u32_below() instead of deprecated function

This is a simple mechanical transformation done by:

@@
expression E;
@@
- prandom_u32_max
+ get_random_u32_below
(E)
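
In C terms this is a pure rename at each call site; a minimal before/after
sketch (the variable and the bound 10 are just examples, not taken from any
real call site):

- idx = prandom_u32_max(10);       /* deprecated name */
+ idx = get_random_u32_below(10);  /* same semantics: returns 0..9 */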

Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Acked-by: Darrick J. Wong <djwong@kernel.org> # for xfs
Reviewed-by: SeongJae Park <sj@kernel.org> # for damon
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> # for infiniband
Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> # for arm
Acked-by: Ulf Hansson <ulf.hansson@linaro.org> # for mmc
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>


# 81895a65 05-Oct-2022 Jason A. Donenfeld <Jason@zx2c4.com>

treewide: use prandom_u32_max() when possible, part 1

Rather than incurring a division or requesting too many random bytes for
the given range, use the prandom_u32_max() function, which only takes
the minimum required bytes from the RNG and avoids divisions. This was
done mechanically with this coccinelle script:

@basic@
expression E;
type T;
identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32";
typedef u64;
@@
(
- ((T)get_random_u32() % (E))
+ prandom_u32_max(E)
|
- ((T)get_random_u32() & ((E) - 1))
+ prandom_u32_max(E * XXX_MAKE_SURE_E_IS_POW2)
|
- ((u64)(E) * get_random_u32() >> 32)
+ prandom_u32_max(E)
|
- ((T)get_random_u32() & ~PAGE_MASK)
+ prandom_u32_max(PAGE_SIZE)
)

@multi_line@
identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32";
identifier RAND;
expression E;
@@

- RAND = get_random_u32();
... when != RAND
- RAND %= (E);
+ RAND = prandom_u32_max(E);

// Find a potential literal
@literal_mask@
expression LITERAL;
type T;
identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32";
position p;
@@

((T)get_random_u32()@p & (LITERAL))

// Add one to the literal.
@script:python add_one@
literal << literal_mask.LITERAL;
RESULT;
@@

value = None
if literal.startswith('0x'):
    value = int(literal, 16)
elif literal[0] in '123456789':
    value = int(literal, 10)
if value is None:
    print("I don't know how to handle %s" % (literal))
    cocci.include_match(False)
elif value == 2**32 - 1 or value == 2**31 - 1 or value == 2**24 - 1 or value == 2**16 - 1 or value == 2**8 - 1:
    print("Skipping 0x%x for cleanup elsewhere" % (value))
    cocci.include_match(False)
elif value & (value + 1) != 0:
    print("Skipping 0x%x because it's not a power of two minus one" % (value))
    cocci.include_match(False)
elif literal.startswith('0x'):
    coccinelle.RESULT = cocci.make_expr("0x%x" % (value + 1))
else:
    coccinelle.RESULT = cocci.make_expr("%d" % (value + 1))

// Replace the literal mask with the calculated result.
@plus_one@
expression literal_mask.LITERAL;
position literal_mask.p;
expression add_one.RESULT;
identifier FUNC;
@@

- (FUNC()@p & (LITERAL))
+ prandom_u32_max(RESULT)

@collapse_ret@
type T;
identifier VAR;
expression E;
@@

{
- T VAR;
- VAR = (E);
- return VAR;
+ return E;
}

@drop_var@
type T;
identifier VAR;
@@

{
- T VAR;
... when != VAR
}
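
For reference, the division-free bound that prandom_u32_max() provides is the
multiply-shift trick; a small stand-alone C sketch of the idea (illustrative
only, not copied from the kernel):

#include <stdio.h>
#include <stdint.h>

/* Map a full-range 32-bit random value r into [0, ceil) without a division:
 * take the high 32 bits of the 64-bit product r * ceil. */
static uint32_t bounded_below(uint32_t r, uint32_t ceil)
{
	return (uint32_t)(((uint64_t)r * ceil) >> 32);
}

int main(void)
{
	/* prints "0 9": the extremes of the 32-bit range map to 0 and ceil-1 */
	printf("%u %u\n", bounded_below(0x00000000u, 10),
			  bounded_below(0xffffffffu, 10));
	return 0;
}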

Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Yury Norov <yury.norov@gmail.com>
Reviewed-by: KP Singh <kpsingh@kernel.org>
Reviewed-by: Jan Kara <jack@suse.cz> # for ext4 and sbitmap
Reviewed-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com> # for drbd
Acked-by: Jakub Kicinski <kuba@kernel.org>
Acked-by: Heiko Carstens <hca@linux.ibm.com> # for s390
Acked-by: Ulf Hansson <ulf.hansson@linaro.org> # for mmc
Acked-by: Darrick J. Wong <djwong@kernel.org> # for xfs
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>


# 5bd2e97c 12-Apr-2022 Eric W. Biederman <ebiederm@xmission.com>

fork: Generalize PF_IO_WORKER handling

Add fn and fn_arg members into struct kernel_clone_args and test for
them in copy_thread (instead of testing for PF_KTHREAD | PF_IO_WORKER).
This allows any task that wants to be a user space task that only runs
in kernel mode to use this functionality.

The code on x86 is an exception and still retains a PF_KTHREAD test
because x86, unlike everything else, handles kthreads slightly
differently than user space tasks that start with a function.

The functions that created tasks that start with a function
have been updated to set ".fn" and ".fn_arg" instead of
".stack" and ".stack_size". These functions are fork_idle(),
create_io_thread(), kernel_thread(), and user_mode_thread().
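
A minimal user-space model of the dispatch described above (the field names
fn, fn_arg and stack follow the commit text; the struct and helper are
illustrative, not the kernel's real kernel_clone_args or copy_thread()):

#include <stdio.h>

struct clone_args_model {
	int (*fn)(void *);	/* set for kthread/io-worker style tasks */
	void *fn_arg;
	unsigned long stack;	/* user stack pointer for ordinary clones */
};

static int io_worker(void *arg)
{
	printf("starting at fn(%s)\n", (const char *)arg);
	return 0;
}

static void copy_thread_model(const struct clone_args_model *args)
{
	if (args->fn)		/* replaces the old PF_KTHREAD | PF_IO_WORKER test */
		args->fn(args->fn_arg);
	else
		printf("user task, usp=%#lx\n", args->stack);
}

int main(void)
{
	struct clone_args_model kernelish = { .fn = io_worker, .fn_arg = "io" };
	struct clone_args_model userish = { .stack = 0x7fff0000UL };

	copy_thread_model(&kernelish);
	copy_thread_model(&userish);
	return 0;
}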

Link: https://lkml.kernel.org/r/20220506141512.516114-4-ebiederm@xmission.com
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>


# c5febea0 08-Apr-2022 Eric W. Biederman <ebiederm@xmission.com>

fork: Pass struct kernel_clone_args into copy_thread

With io_uring we have started supporting tasks that are for most
purposes user space tasks that exclusively run code in kernel mode.

The kernel task that exec's init and tasks that exec user mode
helpers are also user mode tasks that just run kernel code
until they call kernel execve.

Pass kernel_clone_args into copy_thread so these oddball
tasks can be supported more cleanly and easily.
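
A hedged sketch of the resulting prototype change (parameter lists are from
memory of that kernel era and may differ in detail):

struct task_struct;
struct kernel_clone_args;

/* before: one loose argument per clone parameter, roughly
 *   int copy_thread(unsigned long clone_flags, unsigned long usp,
 *                   unsigned long kthread_arg, struct task_struct *p,
 *                   unsigned long tls);
 */

/* after: every clone parameter travels in one struct */
int copy_thread(struct task_struct *p, const struct kernel_clone_args *args);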

v2: Fix spelling of kenrel_clone_args on h8300
Link: https://lkml.kernel.org/r/20220506141512.516114-2-ebiederm@xmission.com
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>


# 455481fc 22-Feb-2022 Thomas Bogendoerfer <tsbogend@alpha.franken.de>

MIPS: Remove TX39XX support

No (active) developer owns this hardware, so let's remove Linux support.

Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Acked-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
Tested-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>


# 42a20f86 29-Sep-2021 Kees Cook <keescook@chromium.org>

sched: Add wrapper for get_wchan() to keep task blocked

Having a stable wchan means the process must be blocked and must stay
that way while stack unwinding is performed.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> [arm]
Tested-by: Mark Rutland <mark.rutland@arm.com> [arm64]
Link: https://lkml.kernel.org/r/20211008111626.332092234@infradead.org


# 730d070a 03-Aug-2021 Sebastian Andrzej Siewior <bigeasy@linutronix.de>

MIPS: Replace deprecated CPU-hotplug functions.

The functions get_online_cpus() and put_online_cpus() have been
deprecated during the CPU hotplug rework. They map directly to
cpus_read_lock() and cpus_read_unlock().

Replace deprecated CPU-hotplug functions with the official version.
The behavior remains unchanged.

Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: linux-mips@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>


# b03fbd4f 11-Jun-2021 Peter Zijlstra <peterz@infradead.org>

sched: Introduce task_is_running()

Replace a bunch of 'p->state == TASK_RUNNING' with a new helper:
task_is_running(p).
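
A hedged sketch of the helper being introduced (the exact field name, state
vs. __state, depends on the kernel version):

#define task_is_running(task)	(READ_ONCE((task)->state) == TASK_RUNNING)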

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Davidlohr Bueso <dave@stgolabs.net>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210611082838.222401495@infradead.org


# 04324f44 01-Apr-2021 Thomas Bogendoerfer <tsbogend@alpha.franken.de>

MIPS: Remove get_fs/set_fs

All get_fs/set_fs calls in MIPS code are gone, so remove the implementation.
With the clear separation of user/kernel space access we no
longer need the EVA special handling, so get rid of that, too.

Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>


# 4727dc20 17-Feb-2021 Jens Axboe <axboe@kernel.dk>

arch: setup PF_IO_WORKER threads like PF_KTHREAD

PF_IO_WORKER threads are kernel threads too, but they aren't PF_KTHREAD in the
sense that we don't assign ->set_child_tid with our own structure. Just
ensure that every arch sets up the PF_IO_WORKER threads like kthreads
in the arch implementation of copy_thread().

Signed-off-by: Jens Axboe <axboe@kernel.dk>


# ea4a1ea4 09-Feb-2021 Thomas Bogendoerfer <tsbogend@alpha.franken.de>

Revert "MIPS: microMIPS: Fix the judgment of mm_jr16_op and mm_jalr_op"

This reverts commit 9308579fef3ddde19da9d45e23bf36d41932417f.

Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>


# 9f0781ba 07-Feb-2021 Jinyang He <hejinyang@loongson.cn>

MIPS: process: Fix no previous prototype warning

unwind_stack_by_address and unwind_stack need <asm/stacktrace.h>.
arch_align_stack needs <asm/exec.h>.

Link: https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org/thread/ZPL2RRA6RZKRQZI5IGOVLFXN2GVZBN3L/
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Jinyang He <hejinyang@loongson.cn>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>


# 50886234 20-Jan-2021 Jinyang He <hejinyang@loongson.cn>

MIPS: Add is_jr_ra_ins() to end the loop early

Leaf functions are likely to have no stack operations. Add
is_jr_ra_ins() to determine whether a jr ra instruction has been reached
before the frame_size is found. Without this patch, the frame_size search
may run out of range and pick up the frame_size of the next nested function.

There is no POOL32A format in uapi/asm/inst.h, so some bits here use the
format of r_format instead.
e.g.
---------------------------------------------------------------------
| format | 31:26 | 25:21 | 20:16 | 15:6 | 5:0 |
-----------------+---------+-------+-------+------------+------------
| pool32a_format | pool32a | rt | rs | jalrc | pool32axf |
-----------------+---------+-------+-------+------------+------------
| r_format | opcode | rs | rt | rd:5, re:5 | func |
---------------------------------------------------------------------
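
For the classic (non-microMIPS) encoding the check is a simple field match;
a small stand-alone sketch of the idea (the in-kernel helper decodes through
union mips_instruction and also covers the microMIPS forms in the table
above):

#include <stdio.h>
#include <stdint.h>

/* "jr ra": opcode SPECIAL (0), rs = $31, rt = rd = 0, funct = 0x08.
 * The hint field (bits 10:6) is ignored by the mask. */
static int is_jr_ra_word(uint32_t insn)
{
	return (insn & 0xfffff83fu) == 0x03e00008u;
}

int main(void)
{
	printf("%d %d\n", is_jr_ra_word(0x03e00008u),	/* jr ra -> 1 */
			  is_jr_ra_word(0x03200008u));	/* jr t9 -> 0 */
	return 0;
}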

Signed-off-by: Jinyang He <hejinyang@loongson.cn>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>


# 2d62f64b 20-Jan-2021 Jinyang He <hejinyang@loongson.cn>

MIPS: Fix get_frame_info() handling of function size

[1]: Commit b6c7a324df37b ("MIPS: Fix get_frame_info() handling of
microMIPS function size")
[2]: Commit 2b424cfc69728 ("MIPS: Remove function size check in
get_frame_info()")

The first patch added a constant to check the number of iterations against.
The second patch fixed the situation where info->func_size is zero.

However, the func_size member became useless after the second commit. Without
ip_end, the frame_size search may run out of range even though KALLSYMS is
enabled. Thus, check func_size first. Then make ip_end the sum of ip
and a constant (512) if func_size is equal to 0. Otherwise make ip_end
the sum of ip and func_size.
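
In pseudo-C, the bound described above amounts to the following (the helper
name is mine and 512 is the fallback constant mentioned in the text):

static unsigned long frame_scan_end(unsigned long ip, unsigned long func_size)
{
	return ip + (func_size ? func_size : 512);
}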

Signed-off-by: Jinyang He <hejinyang@loongson.cn>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>


# 9308579f 20-Jan-2021 Jinyang He <hejinyang@loongson.cn>

MIPS: microMIPS: Fix the judgment of mm_jr16_op and mm_jalr_op

mm16_r5_format.rt is 5 bits wide, so compare its value directly.
mm_jalr_op is taken from bits 7 to 16. The 10 bits produced by shifting
u_format.uimmediate right by 6 may be affected by sign extension, so
mask out just those 10 bits for the comparison.

Without this patch, errors may occur, for example when these bits are all
ones.
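
A small stand-alone illustration of the masking described above (bit
positions follow the commit text; the kernel code works on the instruction
union rather than a raw word):

#include <stdio.h>
#include <stdint.h>

/* Extract bits 6..15 (the "7th to 16th bits") without dragging in sign
 * bits: shift an unsigned copy, then mask down to 10 bits. */
static unsigned int bits_6_to_15(int32_t immediate)
{
	return ((uint32_t)immediate >> 6) & 0x3ffu;
}

int main(void)
{
	/* a negative container would sign-extend under a plain signed shift;
	 * the masked result stays within 10 bits either way (prints 0x3ff) */
	printf("%#x\n", bits_6_to_15((int32_t)0xffffffc0));
	return 0;
}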

Signed-off-by: Jinyang He <hejinyang@loongson.cn>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>


# fa85d6ac 20-Jan-2021 Jinyang He <hejinyang@loongson.cn>

MIPS: process: Remove unnecessary headers inclusion

Some headers are not necessary, remove them and sort includes.

Signed-off-by: Jinyang He <hejinyang@loongson.cn>
Reviewed-by: Huacai Chen <chenhuacai@kernel.org>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>


# 545b8c8d 15-Jun-2020 Peter Zijlstra <peterz@infradead.org>

smp: Cleanup smp_call_function*()

Get rid of the __call_single_node union and clean up the API a little
to avoid external code relying on the structure layout as much.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>


# 047248ca 29-Sep-2020 Pujin Shi <shipujin.t@gmail.com>

MIPS: process: include exec.h header in process.c

arch/mips/kernel/process.c:696:15: error: no previous prototype for 'arch_align_stack' [-Werror=missing-prototypes]

Signed-off-by: Pujin Shi <shipujin.t@gmail.com>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>


# bc1c969f 21-Sep-2020 Huacai Chen <chenhuacai@kernel.org>

MIPS: Loongson-3: Calculate ra properly when unwinding the stack

Loongson-3 has 16-byte load/store instructions: gslq and gssq. This
patch calculates ra properly when unwinding the stack if ra is saved
by gssq and restored by gslq.

Signed-off-by: Huacai Chen <chenhc@lemote.com>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>


# 195615ec 21-Sep-2020 Huacai Chen <chenhuacai@kernel.org>

MIPS: Loongson-3: Enable COP2 usage in kernel

Loongson-3's COP2 is a multimedia coprocessor, and it is disabled in kernel
mode by default. However, gslq/gssq (16-byte load/store instructions)
override the instruction format of lwc2/swc2. If we want to use gslq/
gssq for optimization in the kernel, we should enable COP2 usage in kernel
mode.

Please note that in this patch we only enable COP2 in the kernel, which
means ST0_CU2 is lost when a process goes to user space (trying to use
COP2 in user space triggers an exception which then grabs COP2, similar
to the FPU). As a result, we need to modify the context switching code,
because the newly scheduled process probably doesn't have ST0_CU2 set in
its THREAD_STATUS.

For zboot, we disable generation of gslq/gssq by the toolchain.

Signed-off-by: Huacai Chen <chenhc@lemote.com>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>


# 714acdbd 11-Jun-2020 Christian Brauner <christian.brauner@ubuntu.com>

arch: rename copy_thread_tls() back to copy_thread()

Now that HAVE_COPY_THREAD_TLS has been removed, rename copy_thread_tls()
back to simply copy_thread(). It's a simpler name, and doesn't imply that only
tls is copied here. This finishes an outstanding chunk of internal process
creation work since we've added clone3().

Cc: linux-arch@vger.kernel.org
Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Acked-by: Stafford Horne <shorne@gmail.com>
Acked-by: Greentime Hu <green.hu@gmail.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>


# e31cf2f4 08-Jun-2020 Mike Rapoport <rppt@kernel.org>

mm: don't include asm/pgtable.h if linux/mm.h is already included

Patch series "mm: consolidate definitions of page table accessors", v2.

The low level page table accessors (pXY_index(), pXY_offset()) are
duplicated across all architectures and sometimes more than once. For
instance, we have 31 definition of pgd_offset() for 25 supported
architectures.

Most of these definitions are actually identical and typically it boils
down to, e.g.

static inline unsigned long pmd_index(unsigned long address)
{
	return (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
}

static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
{
	return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(address);
}

These definitions can be shared among 90% of the arches provided
XYZ_SHIFT, PTRS_PER_XYZ and xyz_page_vaddr() are defined.

For architectures that really need a custom version there is always
possibility to override the generic version with the usual ifdefs magic.

These patches introduce include/linux/pgtable.h that replaces
include/asm-generic/pgtable.h and add the definitions of the page table
accessors to the new header.

This patch (of 12):

The linux/mm.h header includes <asm/pgtable.h> to allow inlining of the
functions involving page table manipulations, e.g. pte_alloc() and
pmd_alloc(). So, there is no point to explicitly include <asm/pgtable.h>
in the files that include <linux/mm.h>.

The include statements in such cases are removed with a simple loop:

for f in $(git grep -l "include <linux/mm.h>") ; do
	sed -i -e '/include <asm\/pgtable.h>/ d' $f
done

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Cain <bcain@codeaurora.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Greg Ungerer <gerg@linux-m68k.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Guo Ren <guoren@kernel.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Ley Foon Tan <ley.foon.tan@intel.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Nick Hu <nickhu@andestech.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vincent Chen <deanbo422@gmail.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: http://lkml.kernel.org/r/20200514170327.31389-1-rppt@kernel.org
Link: http://lkml.kernel.org/r/20200514170327.31389-2-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# aebdc6ff 24-Mar-2020 Yousong Zhou <yszhou4tech@gmail.com>

MIPS: Exclude more dsemul code when CONFIG_MIPS_FP_SUPPORT=n

This furthers what commit 42b10815d559 ("MIPS: Don't compile math-emu
when CONFIG_MIPS_FP_SUPPORT=n") has done.

Signed-off-by: Yousong Zhou <yszhou4tech@gmail.com>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>


# 2b424cfc 28-Jan-2019 Jun-Ru Chang <jrjang@realtek.com>

MIPS: Remove function size check in get_frame_info()

Patch b6c7a324df37b ("MIPS: Fix get_frame_info() handling of
microMIPS function size.") introduces an additional function size
check for microMIPS by only checking insns between ip and ip + func_size.
However, func_size in get_frame_info() is always 0 if KALLSYMS is not
enabled. This causes get_frame_info() to return immediately without
calculating the correct frame_size, which in turn causes "Can't analyze
schedule() prologue" warning messages at boot time.

This patch removes the func_size check and lets the frame_size check run
over up to 128 insns for both MIPS and microMIPS.

Signed-off-by: Jun-Ru Chang <jrjang@realtek.com>
Signed-off-by: Tony Wu <tonywu@realtek.com>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Fixes: b6c7a324df37b ("MIPS: Fix get_frame_info() handling of microMIPS function size.")
Cc: <ralf@linux-mips.org>
Cc: <jhogan@kernel.org>
Cc: <macro@mips.com>
Cc: <yamada.masahiro@socionext.com>
Cc: <peterz@infradead.org>
Cc: <mingo@kernel.org>
Cc: <linux-mips@vger.kernel.org>
Cc: <linux-kernel@vger.kernel.org>


# 41e486f4 17-Dec-2018 Paul Burton <paulburton@kernel.org>

MIPS: Remove struct mm_context_t fp_mode_switching field

The fp_mode_switching field in struct mm_context_t was left unused by
commit 8c8d953c2800 ("MIPS: Schedule on CPUs we need to lose FPU for a
mode switch") in v4.19, with nothing modifying its value & nothing
waiting on it having any particular value after that commit. Remove the
unused field & the one remaining reference to it.

Signed-off-by: Paul Burton <paul.burton@mips.com>


# ea7e0480 25-Sep-2018 Paul Burton <paulburton@kernel.org>

MIPS: VDSO: Always map near top of user memory

When using the legacy mmap layout, for example triggered using ulimit -s
unlimited, get_unmapped_area() fills memory from bottom to top starting
from a fairly low address near TASK_UNMAPPED_BASE.

This placement is suboptimal if the user application wishes to allocate
large amounts of heap memory using the brk syscall. With the VDSO being
located low in the user's virtual address space, the amount of space
available for access using brk is limited much more than it was prior to
the introduction of the VDSO.

For example:

# ulimit -s unlimited; cat /proc/self/maps
00400000-004ec000 r-xp 00000000 08:00 71436 /usr/bin/coreutils
004fc000-004fd000 rwxp 000ec000 08:00 71436 /usr/bin/coreutils
004fd000-0050f000 rwxp 00000000 00:00 0
00cc3000-00ce4000 rwxp 00000000 00:00 0 [heap]
2ab96000-2ab98000 r--p 00000000 00:00 0 [vvar]
2ab98000-2ab99000 r-xp 00000000 00:00 0 [vdso]
2ab99000-2ab9d000 rwxp 00000000 00:00 0
...

Resolve this by adjusting STACK_TOP to reserve space for the VDSO &
providing an address hint to get_unmapped_area() causing it to use this
space even when using the legacy mmap layout.

We reserve enough space for the VDSO, plus 1MB or 256MB for 32 bit & 64
bit systems respectively within which we randomize the VDSO base
address. Previously this randomization was taken care of by the mmap
base address randomization performed by arch_mmap_rnd(). The 1MB & 256MB
sizes are somewhat arbitrary but chosen such that we have some
randomization without taking up too much of the user's virtual address
space, which is often in short supply for 32 bit systems.

With this the VDSO is always mapped at a high address, leaving lots of
space for statically linked programs to make use of brk:

# ulimit -s unlimited; cat /proc/self/maps
00400000-004ec000 r-xp 00000000 08:00 71436 /usr/bin/coreutils
004fc000-004fd000 rwxp 000ec000 08:00 71436 /usr/bin/coreutils
004fd000-0050f000 rwxp 00000000 00:00 0
00c28000-00c49000 rwxp 00000000 00:00 0 [heap]
...
7f67c000-7f69d000 rwxp 00000000 00:00 0 [stack]
7f7fc000-7f7fd000 rwxp 00000000 00:00 0
7fcf1000-7fcf3000 r--p 00000000 00:00 0 [vvar]
7fcf3000-7fcf4000 r-xp 00000000 00:00 0 [vdso]

Signed-off-by: Paul Burton <paul.burton@mips.com>
Reported-by: Huacai Chen <chenhc@lemote.com>
Fixes: ebb5e78cc634 ("MIPS: Initial implementation of a VDSO")
Cc: Huacai Chen <chenhc@lemote.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # v4.4+


# b63e132b 22-Jun-2018 Paul Burton <paulburton@kernel.org>

MIPS: Use async IPIs for arch_trigger_cpumask_backtrace()

The current MIPS implementation of arch_trigger_cpumask_backtrace() is
broken because it attempts to use synchronous IPIs despite the fact that
it may be run with interrupts disabled.

This means that when arch_trigger_cpumask_backtrace() is invoked, for
example by the RCU CPU stall watchdog, we may:

- Deadlock due to use of synchronous IPIs with interrupts disabled,
causing the CPU that's attempting to generate the backtrace output
to hang itself.

- Not succeed in generating the desired output from remote CPUs.

- Produce warnings about this from smp_call_function_many(), for
example:

[42760.526910] INFO: rcu_sched detected stalls on CPUs/tasks:
[42760.535755] 0-...!: (1 GPs behind) idle=ade/140000000000000/0 softirq=526944/526945 fqs=0
[42760.547874] 1-...!: (0 ticks this GP) idle=e4a/140000000000000/0 softirq=547885/547885 fqs=0
[42760.559869] (detected by 2, t=2162 jiffies, g=266689, c=266688, q=33)
[42760.568927] ------------[ cut here ]------------
[42760.576146] WARNING: CPU: 2 PID: 1216 at kernel/smp.c:416 smp_call_function_many+0x88/0x20c
[42760.587839] Modules linked in:
[42760.593152] CPU: 2 PID: 1216 Comm: sh Not tainted 4.15.4-00373-gee058bb4d0c2 #2
[42760.603767] Stack : 8e09bd20 8e09bd20 8e09bd20 fffffff0 00000007 00000006 00000000 8e09bca8
[42760.616937] 95b2b379 95b2b379 807a0080 00000007 81944518 0000018a 00000032 00000000
[42760.630095] 00000000 00000030 80000000 00000000 806eca74 00000009 8017e2b8 000001a0
[42760.643169] 00000000 00000002 00000000 8e09baa4 00000008 808b8008 86d69080 8e09bca0
[42760.656282] 8e09ad50 805e20aa 00000000 00000000 00000000 8017e2b8 00000009 801070ca
[42760.669424] ...
[42760.673919] Call Trace:
[42760.678672] [<27fde568>] show_stack+0x70/0xf0
[42760.685417] [<84751641>] dump_stack+0xaa/0xd0
[42760.692188] [<699d671c>] __warn+0x80/0x92
[42760.698549] [<68915d41>] warn_slowpath_null+0x28/0x36
[42760.705912] [<f7c76c1c>] smp_call_function_many+0x88/0x20c
[42760.713696] [<6bbdfc2a>] arch_trigger_cpumask_backtrace+0x30/0x4a
[42760.722216] [<f845bd33>] rcu_dump_cpu_stacks+0x6a/0x98
[42760.729580] [<796e7629>] rcu_check_callbacks+0x672/0x6ac
[42760.737476] [<059b3b43>] update_process_times+0x18/0x34
[42760.744981] [<6eb94941>] tick_sched_handle.isra.5+0x26/0x38
[42760.752793] [<478d3d70>] tick_sched_timer+0x1c/0x50
[42760.759882] [<e56ea39f>] __hrtimer_run_queues+0xc6/0x226
[42760.767418] [<e88bbcae>] hrtimer_interrupt+0x88/0x19a
[42760.775031] [<6765a19e>] gic_compare_interrupt+0x2e/0x3a
[42760.782761] [<0558bf5f>] handle_percpu_devid_irq+0x78/0x168
[42760.790795] [<90c11ba2>] generic_handle_irq+0x1e/0x2c
[42760.798117] [<1b6d462c>] gic_handle_local_int+0x38/0x86
[42760.805545] [<b2ada1c7>] gic_irq_dispatch+0xa/0x14
[42760.812534] [<90c11ba2>] generic_handle_irq+0x1e/0x2c
[42760.820086] [<c7521934>] do_IRQ+0x16/0x20
[42760.826274] [<9aef3ce6>] plat_irq_dispatch+0x62/0x94
[42760.833458] [<6a94b53c>] except_vec_vi_end+0x70/0x78
[42760.840655] [<22284043>] smp_call_function_many+0x1ba/0x20c
[42760.848501] [<54022b58>] smp_call_function+0x1e/0x2c
[42760.855693] [<ab9fc705>] flush_tlb_mm+0x2a/0x98
[42760.862730] [<0844cdd0>] tlb_flush_mmu+0x1c/0x44
[42760.869628] [<cb259b74>] arch_tlb_finish_mmu+0x26/0x3e
[42760.877021] [<1aeaaf74>] tlb_finish_mmu+0x18/0x66
[42760.883907] [<b3fce717>] exit_mmap+0x76/0xea
[42760.890428] [<c4c8a2f6>] mmput+0x80/0x11a
[42760.896632] [<a41a08f4>] do_exit+0x1f4/0x80c
[42760.903158] [<ee01cef6>] do_group_exit+0x20/0x7e
[42760.909990] [<13fa8d54>] __wake_up_parent+0x0/0x1e
[42760.917045] [<46cf89d0>] smp_call_function_many+0x1a2/0x20c
[42760.924893] [<8c21a93b>] syscall_common+0x14/0x1c
[42760.931765] ---[ end trace 02aa09da9dc52a60 ]---
[42760.938342] ------------[ cut here ]------------
[42760.945311] WARNING: CPU: 2 PID: 1216 at kernel/smp.c:291 smp_call_function_single+0xee/0xf8
...

This patch switches MIPS' arch_trigger_cpumask_backtrace() to use async
IPIs & smp_call_function_single_async() in order to resolve this
problem. We ensure use of the pre-allocated call_single_data_t
structures is serialized by maintaining a cpumask indicating that
they're busy, and refusing to attempt to send an IPI when a CPU's bit is
set in this mask. This should only happen if a CPU hasn't responded to a
previous backtrace IPI - ie. if it's hung - and we print a warning to
the console in this case.
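
A hedged sketch of that serialization (the per-CPU csd and mask names here
are assumptions of mine, not necessarily the identifiers used in the patch;
the calls are regular kernel APIs):

	/* Skip, and warn about, any CPU whose previous backtrace IPI was
	 * never acknowledged. */
	if (cpumask_test_and_set_cpu(cpu, &backtrace_csd_busy)) {
		pr_warn("Unable to send backtrace IPI to CPU%d - perhaps it hung?\n",
			cpu);
		continue;
	}
	smp_call_function_single_async(cpu, &per_cpu(backtrace_csd, cpu));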

I've marked this for stable branches as far back as v4.9, to which it
applies cleanly. Strictly speaking the faulty MIPS implementation can be
traced further back to commit 856839b76836 ("MIPS: Add
arch_trigger_all_cpu_backtrace() function") in v3.19, but kernel
versions v3.19 through v4.8 will require further work to backport due to
the rework performed in commit 9a01c3ed5cdb ("nmi_backtrace: add more
trigger_*_cpu_backtrace() methods").

Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/19597/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # v4.9+
Fixes: 856839b76836 ("MIPS: Add arch_trigger_all_cpu_backtrace() function")
Fixes: 9a01c3ed5cdb ("nmi_backtrace: add more trigger_*_cpu_backtrace() methods")


# 5a267832 22-Jun-2018 Paul Burton <paulburton@kernel.org>

MIPS: Call dump_stack() from show_regs()

The generic nmi_cpu_backtrace() function calls show_regs() when a struct
pt_regs is available, and dump_stack() otherwise. If we were to make use
of the generic nmi_cpu_backtrace() with MIPS' current implementation of
show_regs() this would mean that we see only register data with no
accompanying stack information, in contrast with our current
implementation which calls dump_stack() regardless of whether register
state is available.

In preparation for making use of the generic nmi_cpu_backtrace() to
implement arch_trigger_cpumask_backtrace(), have our implementation of
show_regs() call dump_stack() and drop the explicit dump_stack() call in
arch_dump_stack() which is invoked by arch_trigger_cpumask_backtrace().

This will allow the output we produce to remain the same after a later
patch switches to using nmi_cpu_backtrace(). It may mean that we produce
extra stack output in other uses of show_regs(), but this:

1) Seems harmless.
2) Is good for consistency between arch_trigger_cpumask_backtrace()
and other users of show_regs().
3) Matches the behaviour of the ARM & PowerPC architectures.

Marked for stable back to v4.9 as a prerequisite of the following patch
"MIPS: Call dump_stack() from show_regs()".

Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/19596/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # v4.9+


# 8c8d953c 19-Dec-2017 Paul Burton <paulburton@kernel.org>

MIPS: Schedule on CPUs we need to lose FPU for a mode switch

Commit 6b8322576e9d ("MIPS: Force CPUs to lose FP context during mode
switches") ensures that we react to PR_SET_FP_MODE prctl syscalls
quickly by broadcasting an IPI in order to cause CPUs to lose FPU access
when necessary. Whilst it achieves that, unfortunately it causes all
sorts of strange race conditions because:

1) The IPI may arrive at a point where the FPU is in the process of
being enabled, but that process is not yet complete leading to a
state we aren't prepared to handle. For example:

[ 370.215903] do_cpu invoked from kernel context![#1]:
[ 370.221064] CPU: 0 PID: 963 Comm: fp-prctl Not tainted 4.9.0-rc5-00323-g210db32-dirty #226
[ 370.229420] task: a8000000fd672e00 task.stack: a8000000fd630000
[ 370.235399] $ 0 : 0000000000000000 0000000000000001 0000000000000001 a8000000fd630000
[ 370.243882] $ 4 : a8000000fd672e00 0000000000000000 0000000000000453 0000000000000000
[ 370.252317] $ 8 : 0000000000000000 a8000000fd637c28 1000000000000000 0000000000000010
[ 370.260753] $12 : 00000000140084e0 ffffffff80109c00 0000000000000000 0000000000000002
[ 370.269179] $16 : ffffffff8092f080 a8000000fd672e00 ffffffff80107fe8 a8000000fd485000
[ 370.277612] $20 : ffffffff8084d328 ffffffff80940000 0000000000000009 ffffffff80930000
[ 370.286038] $24 : 0000000000000000 900000001612048c
[ 370.294476] $28 : a8000000fd630000 a8000000fd637ac0 ffffffff80937300 ffffffff8010807c
[ 370.302909] Hi : 0000000000000000
[ 370.306595] Lo : 0000000000000200
[ 370.310376] epc : ffffffff80115d38 _save_fp+0x10/0xa0
[ 370.315784] ra : ffffffff8010807c prepare_for_fp_mode_switch+0x94/0x1b0
[ 370.322707] Status: 140084e2 KX SX UX KERNEL EXL
[ 370.327980] Cause : 1080002c (ExcCode 0b)
[ 370.332091] PrId : 0001a428 (MIPS P6600)
[ 370.336179] Modules linked in:
[ 370.339486] Process fp-prctl (pid: 963, threadinfo=a8000000fd630000, task=a8000000fd672e00, tls=00000000756e67d0)
[ 370.349724] Stack : 0000000000000000 a8000000fd557dc0 0000000000000000 ffffffff801ca8e0
[ 370.358161] 0000000000000000 a8000000fd637b9c 0000000000000009 ffffffff80923780
[ 370.366575] ffffffff80850000 ffffffff8011610c 00000000000000b8 ffffffff801a5084
[ 370.374989] ffffffff8084a370 ffffffff8084a388 ffffffff80923780 ffffffff80923828
[ 370.383395] 0000000000010000 ffffffff809237a8 0000000000020000 ffffffff80a40000
[ 370.391817] 000000000000007c 00000000004a0000 00000000756dedd0 ffffffff801a5188
[ 370.400230] a800000002014900 0000000000000001 ffffffff80923780 0000000080923828
[ 370.408644] ffffffff80923780 ffffffff80923780 ffffffff80923828 ffffffff801a521c
[ 370.417066] ffffffff80923780 ffffffff80923828 0000000000010000 ffffffff801a8f84
[ 370.425472] ffffffff80a40000 a8000000fd637c20 ffffffff80a39240 0000000000000001
[ 370.433885] ...
[ 370.436562] Call Trace:
[ 370.439222] [<ffffffff80115d38>] _save_fp+0x10/0xa0
[ 370.444305] [<ffffffff8010807c>] prepare_for_fp_mode_switch+0x94/0x1b0
[ 370.451035] [<ffffffff801ca8e0>] flush_smp_call_function_queue+0xf8/0x230
[ 370.457991] [<ffffffff8011610c>] ipi_call_interrupt+0xc/0x20
[ 370.463814] [<ffffffff801a5084>] __handle_irq_event_percpu+0xc4/0x1a8
[ 370.470404] [<ffffffff801a5188>] handle_irq_event_percpu+0x20/0x68
[ 370.476734] [<ffffffff801a521c>] handle_irq_event+0x4c/0x88
[ 370.482486] [<ffffffff801a8f84>] handle_edge_irq+0x12c/0x210
[ 370.488316] [<ffffffff801a47a0>] generic_handle_irq+0x38/0x48
[ 370.494280] [<ffffffff804a2dbc>] gic_handle_shared_int+0x194/0x268
[ 370.500616] [<ffffffff801a47a0>] generic_handle_irq+0x38/0x48
[ 370.506529] [<ffffffff80107e60>] do_IRQ+0x18/0x28
[ 370.511445] [<ffffffff804a1524>] plat_irq_dispatch+0xc4/0x140
[ 370.517339] [<ffffffff80106230>] ret_from_irq+0x0/0x4
[ 370.522583] [<ffffffff8010fad4>] do_ri+0x4fc/0x7e8
[ 370.527546] [<ffffffff80106220>] ret_from_exception+0x0/0x10

2) The IPI may arrive during kernel use of the FPU, since we generally
only disable preemption around use of the FPU & leave interrupts
enabled. This can lead to us unexpectedly losing access to the FPU
in places where it previously had not been possible. For example:

do_cpu invoked from kernel context![#2]:
CPU: 2 PID: 7338 Comm: fp-prctl Tainted: G D 4.7.0-00424-g49b0c82
#2
task: 838e4000 ti: 88d38000 task.ti: 88d38000
$ 0 : 00000000 00000001 ffffffff 88d3fef8
$ 4 : 838e4000 88d38004 00000000 00000001
$ 8 : 3400fc01 801f8020 808e9100 24000000
$12 : dbffffff 807b69d8 807b0000 00000000
$16 : 00000000 80786150 00400fc4 809c0398
$20 : 809c0338 0040273c 88d3ff28 808e9d30
$24 : 808e9d30 00400fb4
$28 : 88d38000 88d3fe88 00000000 8011a2ac
Hi : 0040273c
Lo : 88d3ff28
epc : 80114178 _restore_fp+0x10/0xa0
ra : 8011a2ac mipsr2_decoder+0xd5c/0x1660
Status: 1400fc03 KERNEL EXL IE
Cause : 1080002c (ExcCode 0b)
PrId : 0001a920 (MIPS I6400)
Modules linked in:
Process fp-prctl (pid: 7338, threadinfo=88d38000, task=838e4000, tls=766527d0)
Stack : 00000000 00000000 00000000 88d3fe98 00000000 00000000 809c0398 809c0338
808e9100 00000000 88d3ff28 00400fc4 00400fc4 0040273c 7fb69e18 004a0000
004a0000 004a0000 7664add0 8010de18 00000000 00000000 88d3fef8 88d3ff28
808e9100 00000000 766527d0 8010e534 000c0000 85755000 8181d580 00000000
00000000 00000000 004a0000 00000000 766527d0 7fb69e18 004a0000 80105c20
...
Call Trace:
[<80114178>] _restore_fp+0x10/0xa0
[<8011a2ac>] mipsr2_decoder+0xd5c/0x1660
[<8010de18>] do_ri+0x90/0x6b8
[<80105c20>] ret_from_exception+0x0/0x10

At first glance a simple fix may seem to be to disable interrupts around
kernel use of the FPU rather than merely preemption, however this would
introduce further overhead outside of the mode switch path & doesn't
solve the third problem:

3) The IPI may arrive whilst the kernel is running code that will lead
to a preempt_disable() call & FPU usage soon. If this happens then
the IPI will be serviced & we'll proceed to enable an FPU whilst the
mode switch is in progress, leading to strange & inconsistent
behaviour.

Further to all of this is a separate but related problem:

4) There are various paths through which we may enable the FPU without
the user having triggered a coprocessor 1 disabled exception. These
paths are those in which we emulate instructions & then enable the
FPU with the expectation that the user might execute an FP
instruction shortly afterwards. However these paths have not
previously checked whether an FP mode switch is underway for the
task, and therefore could enable the FPU whilst such a mode switch
is in progress leading to strange & inconsistent behaviour for user
code.

This patch fixes all of the above by taking a step back & re-examining
our approach to FP mode switches. Up until now we have taken these basic
steps:

a) Prevent any threads that are part of the affected process from being
able to obtain ownership of the FPU.

b) Cause any threads that are part of the affected process and already
have ownership of an FPU to lose it.

c) Set the thread flags for each thread that is part of the affected
process to reflect the new FP mode.

d) Allow threads to obtain ownership of the FPU again.

This approach is however more complex than necessary. All that we really
require is that the mode switch has occurred for all threads that are
part of the affected process before mips_set_process_fp_mode(), and thus
the PR_SET_FP_MODE prctl() syscall, returns. This doesn't require that
we stop threads from owning or using an FPU whilst a mode switch occurs,
only that we force them to relinquish it after the mode switch has
occurred such that they next own an FPU with the correct mode
configured. Our basic steps therefore simplify to:

A) Set the thread flags for each thread that is part of the affected
process to reflect the new FP mode.

B) Cause any threads that are part of the affected process and already
have ownership of an FPU to lose it.

We implement B) by forcing each CPU which might be running a thread
which is part of the affected process to schedule a no-op function,
which causes the affected thread to lose its FPU ownership when it is
descheduled.

The end result is simpler FP mode switching with less overhead in the
FPU enable path (ie. enable_restore_fp_context()) and fewer moving
parts.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Fixes: 9791554b45a2 ("MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options for MIPS")
Fixes: 6b8322576e9d ("MIPS: Force CPUs to lose FP context during mode switches")
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: stable <stable@vger.kernel.org> # v4.0+


# 050e9baa 13-Jun-2018 Linus Torvalds <torvalds@linux-foundation.org>

Kbuild: rename CC_STACKPROTECTOR[_STRONG] config variables

The changes to automatically test for working stack protector compiler
support in the Kconfig files removed the special STACKPROTECTOR_AUTO
option that picked the strongest stack protector that the compiler
supported.

That was all a nice cleanup - it makes no sense to have the AUTO case
now that the Kconfig phase can just determine the compiler support
directly.

HOWEVER.

It also meant that doing "make oldconfig" would now _disable_ the strong
stackprotector if you had AUTO enabled, because in a legacy config file,
the sane stack protector configuration would look like

CONFIG_HAVE_CC_STACKPROTECTOR=y
# CONFIG_CC_STACKPROTECTOR_NONE is not set
# CONFIG_CC_STACKPROTECTOR_REGULAR is not set
# CONFIG_CC_STACKPROTECTOR_STRONG is not set
CONFIG_CC_STACKPROTECTOR_AUTO=y

and when you ran this through "make oldconfig" with the Kbuild changes,
it would ask you about the regular CONFIG_CC_STACKPROTECTOR (that had
been renamed from CONFIG_CC_STACKPROTECTOR_REGULAR to just
CONFIG_CC_STACKPROTECTOR), but it would think that the STRONG version
used to be disabled (because it was really enabled by AUTO), and would
disable it in the new config, resulting in:

CONFIG_HAVE_CC_STACKPROTECTOR=y
CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
CONFIG_CC_STACKPROTECTOR=y
# CONFIG_CC_STACKPROTECTOR_STRONG is not set
CONFIG_CC_HAS_SANE_STACKPROTECTOR=y

That's dangerously subtle - people could suddenly find themselves with
the weaker stack protector setup without even realizing.

The solution here is to rename not just the old REGULAR stack
protector option, but also the strong one. This does that by just
removing the CC_ prefix entirely for the user choices, because it really
is not about the compiler support (the compiler support now instead
automatically impacts _visibility_ of the options to users).

This results in "make oldconfig" actually asking the user for their
choice, so that we don't have any silent subtle security model changes.
The end result would generally look like this:

CONFIG_HAVE_CC_STACKPROTECTOR=y
CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
CONFIG_STACKPROTECTOR=y
CONFIG_STACKPROTECTOR_STRONG=y
CONFIG_CC_HAS_SANE_STACKPROTECTOR=y

where the "CC_" versions really are about internal compiler
infrastructure, not the user selections.

Acked-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 28e4213d 15-May-2018 Maciej W. Rozycki <macro@mips.com>

MIPS: prctl: Disallow FRE without FR with PR_SET_FP_MODE requests

Having PR_FP_MODE_FRE (i.e. Config5.FRE) set without PR_FP_MODE_FR (i.e.
Status.FR) is not supported as the lone purpose of Config5.FRE is to
emulate Status.FR=0 handling on FPU hardware that has Status.FR=1
hardwired[1][2]. Also we do not handle this case elsewhere, and assume
throughout our code that TIF_HYBRID_FPREGS and TIF_32BIT_FPREGS cannot
be set both at once for a task, leading to inconsistent behaviour if
this does happen.

Return unsuccessfully then from prctl(2) PR_SET_FP_MODE calls requesting
PR_FP_MODE_FRE to be set with PR_FP_MODE_FR clear. This corresponds to
modes allowed by `mips_set_personality_fp'.

References:

[1] "MIPS Architecture For Programmers, Vol. III: MIPS32 / microMIPS32
Privileged Resource Architecture", Imagination Technologies,
Document Number: MD00090, Revision 6.02, July 10, 2015, Table 9.69
"Config5 Register Field Descriptions", p. 262

[2] "MIPS Architecture For Programmers, Volume III: MIPS64 / microMIPS64
Privileged Resource Architecture", Imagination Technologies,
Document Number: MD00091, Revision 6.03, December 22, 2015, Table
9.72 "Config5 Register Field Descriptions", p. 288

Fixes: 9791554b45a2 ("MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options for MIPS")
Signed-off-by: Maciej W. Rozycki <macro@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 4.0+
Patchwork: https://patchwork.linux-mips.org/patch/19327/
Signed-off-by: James Hogan <jhogan@kernel.org>


# 6887a56b 15-Mar-2018 Peter Zijlstra <peterz@infradead.org>

sched/wait, arch/mips: Fix and convert wait_on_atomic_t() usage to the new wait_var_event() API

The old wait_on_atomic_t() is going to get removed, use the more
flexible wait_var_event() API instead.

And while there, fix a bug and add the missing wakeup...
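
A hedged sketch of the generic shape of such a conversion (not the exact
MIPS call site):

	/* before: wait for an atomic_t to reach zero */
	wait_on_atomic_t(&count, atomic_t_wait, TASK_UNINTERRUPTIBLE);

	/* after: the variable-based API, waiting on an arbitrary condition */
	wait_var_event(&count, !atomic_read(&count));

	/* ... plus the wakeup the commit notes was missing */
	if (atomic_dec_and_test(&count))
		wake_up_var(&count);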

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: James Hogan <jhogan@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>


# b67336ee 27-Nov-2017 Maciej W. Rozycki <macro@mips.com>

MIPS: Validate PR_SET_FP_MODE prctl(2) requests against the ABI of the task

Fix an API loophole introduced with commit 9791554b45a2 ("MIPS,prctl:
add PR_[GS]ET_FP_MODE prctl options for MIPS"), where the caller of
prctl(2) is incorrectly allowed to make a change to CP0.Status.FR or
CP0.Config5.FRE register bits even if CONFIG_MIPS_O32_FP64_SUPPORT has
not been enabled, despite that an executable requesting the mode
requested via ELF file annotation would not be allowed to run in the
first place, or for n32 and n64 ABI tasks which do not have non-default
modes defined at all. Add suitable checks to `mips_set_process_fp_mode'
and bail out if an invalid mode change has been requested for the ABI in
effect, even if the FPU hardware or emulation would otherwise allow it.

Always succeed however without taking any further action if the mode
requested is the same as one already in effect, regardless of whether
any mode change, should it be requested, would actually be allowed for
the task concerned.

Signed-off-by: Maciej W. Rozycki <macro@mips.com>
Fixes: 9791554b45a2 ("MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options for MIPS")
Reviewed-by: Paul Burton <paul.burton@mips.com>
Cc: James Hogan <james.hogan@mips.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Cc: stable@vger.kernel.org # 4.0+
Patchwork: https://patchwork.linux-mips.org/patch/17800/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 508c5757 18-Sep-2017 Tobias Klauser <tklauser@distanz.ch>

MIPS: make thread_saved_pc static

The only user of thread_saved_pc() in non-arch-specific code was removed
in commit 8243d5597793 ("sched/core: Remove pointless printout in
sched_show_task()"), so it no longer needs to be globally defined for
MIPS and can be made static.

Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/17303/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 56dfb700 08-Aug-2017 Matt Redfearn <matt.redfearn@mips.com>

MIPS: Refactor handling of stack pointer in get_frame_info

Commit 34c2f668d0f6 ("MIPS: microMIPS: Add unaligned access support.")
added handling of microMIPS instructions to manipulate the stack
pointer. The code that was added violates code style rules with long
lines caused by lots of nested conditionals.

The added code interprets (inline) any known stack pointer manipulation
instruction to find the stack frame size. Handling the microMIPS cases
added quite a bit of complication to this function.

Refactor is_sp_move_ins to perform the interpretation of the immediate
as the instruction manipulating the stack pointer is found. This reduces
the amount of indentation required in get_frame_info, and more closely
matches the operation of is_ra_save_ins.

Suggested-by: Maciej W. Rozycki <macro@imgtec.com>
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/16958/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 41885b02 08-Aug-2017 Matt Redfearn <matt.redfearn@mips.com>

MIPS: Stacktrace: Fix microMIPS stack unwinding on big endian systems

The stack unwinding code uses the mips_instruction union to decode the
instructions it finds. That union uses the __BITFIELD_FIELD macro to
reorder depending on endianness. The stack unwinding code always places
16bit instructions in halfword 1 of the union. This makes the union
accesses correct for little endian systems. Similarly, 32bit
instructions are reordered such that they are correct for little endian
systems. This handling leaves unwinding the stack on big endian systems
broken, as the mips_instruction union will then look for the fields in
the wrong halfword.

To fix this, use a logical shift to place the 16bit instruction into the
correct position in the word field of the union. Use the same shifting
to order the 2 halfwords of 32bit instructions. Then replace accesses to
the halfword with accesses to the shifted word.
In the case of the ADDIUS5 instruction, switch to using the
mm16_r5_format union member to avoid the need for a 16bit shift.
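
A small stand-alone illustration of the shift-based composition described
above (the function name is mine; the kernel operates on the fields of
union mips_instruction):

#include <stdio.h>
#include <stdint.h>

/* Place the first 16-bit half in the upper 16 bits of the word, giving the
 * same 32-bit value regardless of the host's byte order. */
static uint32_t mm_insn_word(uint16_t first_half, uint16_t second_half)
{
	return ((uint32_t)first_half << 16) | second_half;
}

int main(void)
{
	/* e.g. the 16-bit "sw ra,68(sp)" (0xcbf1) quoted in the next entry */
	printf("%#x\n", mm_insn_word(0xcbf1, 0x0000));
	return 0;
}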

Fixes: 34c2f668d0f6 ("MIPS: microMIPS: Add unaligned access support.")
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/16956/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# cea8cd49 08-Aug-2017 Matt Redfearn <matt.redfearn@mips.com>

MIPS: microMIPS: Fix decoding of swsp16 instruction

When the immediate encoded in the instruction is accessed, it is sign
extended due to being a signed value being assigned to a signed integer.
The ISA specifies that this operation is an unsigned operation.
The sign extension leads us to incorrectly decode:

801e9c8e: cbf1 sw ra,68(sp)

as having an immediate of 1073741809.

Since the instruction format does not specify signed/unsigned, and this
is currently the only location to use this instruction format, change it
to an unsigned immediate.

Fixes: bb9bc4689b9c ("MIPS: Calculate microMIPS ra properly when unwinding the stack")
Suggested-by: Paul Burton <paul.burton@imgtec.com>
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: Miodrag Dinic <miodrag.dinic@imgtec.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/16957/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# a0ae2b08 08-Aug-2017 Matt Redfearn <matt.redfearn@mips.com>

MIPS: microMIPS: Fix decoding of addiusp instruction

Commit 34c2f668d0f6 ("MIPS: microMIPS: Add unaligned access support.")
added handling of microMIPS instructions to manipulate the stack
pointer. Unfortunately the decoding of the addiusp instruction was
incorrect, and performed a left shift by 2 bits to the raw immediate,
rather than decoding the immediate and then performing the shift, as
documented in the ISA.

This led to incomplete stack traces, due to incorrect frame sizes being
calculated. For example the instruction:
801faee0 <do_sys_poll>:
801faee0: 4e25 addiu sp,sp,-952

As decoded by objdump, would be interpreted by the existing code as
having manipulated the stack pointer by +1096.

Fix this by changing the order of decoding the immediate and applying
the left shift. Also change to accessing the instruction through the
union to avoid the endianness problem of accessing halfword[0], which
will fail on big endian systems.

Cope with the special behaviour of immediates 0x0, 0x1, 0x1fe and 0x1ff
by XORing with 0x100 again if mod(immediate) < 4. This logic was tested
with the following test code:

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	unsigned int enc;
	int imm;

	for (enc = 0; enc < 512; ++enc) {
		unsigned short tmp = enc;

		/* shift-first variant kept from the original test program:
		 *   int t = enc << 2;
		 *   imm = -(signed short)(t | ((t & 0x100) ? 0xfe00 : 0));
		 */

		/* new decoding: adjust the raw immediate, handle the special
		 * encodings 0x0/0x1/0x1fe/0x1ff, then apply the << 2 */
		tmp = (tmp ^ 0x100) - 0x100;
		if ((unsigned short)(tmp + 2) < 4)
			tmp ^= 0x100;
		imm = -(signed short)(tmp << 2);
		printf("%#x\t%d\t->\t(%#x\t%d)\t%#x\t%d\n",
		       enc, enc,
		       (short)tmp, (short)tmp,
		       imm, imm);
	}
	return EXIT_SUCCESS;
}

Which generates the table:

input encoding -> tmp (matching manual) frame size
-----------------------------------------------------------------------
0 0 -> (0x100 256) 0xfffffc00 -1024
0x1 1 -> (0x101 257) 0xfffffbfc -1028
0x2 2 -> (0x2 2) 0xfffffff8 -8
0x3 3 -> (0x3 3) 0xfffffff4 -12
...
0xfe 254 -> (0xfe 254) 0xfffffc08 -1016
0xff 255 -> (0xff 255) 0xfffffc04 -1020
0x100 256 -> (0xffffff00 -256) 0x400 1024
0x101 257 -> (0xffffff01 -255) 0x3fc 1020
...
0x1fc 508 -> (0xfffffffc -4) 0x10 16
0x1fd 509 -> (0xfffffffd -3) 0xc 12
0x1fe 510 -> (0xfffffefe -258) 0x408 1032
0x1ff 511 -> (0xfffffeff -257) 0x404 1028

Thanks to James Hogan for the test code & verifying the logic.

Fixes: 34c2f668d0f6 ("MIPS: microMIPS: Add unaligned access support.")
Suggested-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/16955/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# b332fec0 08-Aug-2017 Matt Redfearn <matt.redfearn@mips.com>

MIPS: microMIPS: Fix detection of addiusp instruction

The addiusp instruction uses the pool16d opcode, with bit 0 of the
immediate set. The test for the addiusp opcode erroneously did a logical
and of the immediate with mm_addiusp_func, which has value 1, so this
test always passes when the immediate is non-zero.

Fix the test by replacing the logical and with a bitwise and.
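
The difference is the classic logical-vs-bitwise AND pitfall; a one-line
stand-alone illustration (2 stands in for any pool16d immediate with bit 0
clear):

#include <stdio.h>

int main(void)
{
	unsigned int imm = 2;	/* bit 0 clear: not an addiusp encoding */

	/* prints "1 0": the logical AND wrongly accepts it, the bitwise
	 * AND does not */
	printf("%d %d\n", imm && 1, (int)(imm & 1));
	return 0;
}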

Fixes: 34c2f668d0f6 ("MIPS: microMIPS: Add unaligned access support.")
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/16954/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 11887ed1 08-Aug-2017 Matt Redfearn <matt.redfearn@mips.com>

MIPS: Handle non word sized instructions when examining frame

Commit 34c2f668d0f6b ("MIPS: microMIPS: Add unaligned access support.")
added fairly broken support for handling 16bit microMIPS instructions in
get_frame_info(). It adjusts the instruction pointer by 16bits in the
case of a 16bit sp move instruction, but not any other 16bit
instruction.

Commit b6c7a324df37 ("MIPS: Fix get_frame_info() handling of microMIPS
function size") goes some way to fixing get_frame_info() to iterate over
microMIPS instructions, but the instruction pointer is still manipulated
using a postincrement, and is of union mips_instruction type. Since the
union is sized to the largest member (a word), but microMIPS
instructions are a mix of halfword and word sizes, the function does not
always iterate correctly, ending up misaligned with the instruction
stream and interpreting it incorrectly.

Since the instruction modifying the stack pointer is usually the first
in the function, that one is usually handled correctly. But the
instruction which saves the return address to the stack is some variable
number of instructions into the frame and is frequently missed due to
not being on a word boundary, leading to incomplete walking of the
stack.

Fix this by incrementing the instruction pointer based on the size of
the previously decoded instruction (& remove the hack introduced by
commit 34c2f668d0f6b ("MIPS: microMIPS: Add unaligned access support.")
which adjusts the instruction pointer in the case of a 16bit sp move
instruction, but not any other).
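
The iteration scheme can be sketched as follows (illustrative only;
is_mm_16bit_insn() and decode_insn() are hypothetical helpers standing in
for the real decoder logic):

	/* Advance by the size of the instruction just decoded instead of
	 * a fixed union-sized step. */
	const u16 *ip = (const u16 *)func_start;
	const u16 *ip_end = (const u16 *)func_end;

	while (ip < ip_end) {
		unsigned int halfwords = is_mm_16bit_insn(*ip) ? 1 : 2;

		decode_insn(ip, halfwords * 2);	/* hypothetical */
		ip += halfwords;		/* 2 or 4 bytes forward */
	}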

Fixes: 34c2f668d0f6b ("MIPS: microMIPS: Add unaligned access support.")
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/16953/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# aee16625 10-Aug-2017 Corey Minyard <cminyard@mvista.com>

MIPS: Fix issues in backtraces

I saw two problems when doing backtraces:

The compiler was putting a "fast return" at the top of some
functions, before it set up the frame. The backtrace code
would stop when it saw a jump instruction, so it would never
get to the stack frame setup and would thus misinterpret it.
To fix this, don't look for jump instructions until the
frame setup has been seen.

The assembly code here is:

ffffffff80b885a0 <serial8250_handle_irq>:
ffffffff80b885a0: c8a00003 bbit0 a1,0x0,ffffffff80b885b0 <serial8250_handle_irq+0x10>
ffffffff80b885a4: 0000102d move v0,zero
ffffffff80b885a8: 03e00008 jr ra
ffffffff80b885ac: 00000000 nop
ffffffff80b885b0: 67bdffd0 daddiu sp,sp,-48
ffffffff80b885b4: ffb00008 sd s0,8(sp)

The second problem was the compiler was putting the last
instruction of the frame save in the delay slot of the
jump instruction. If it saved the RA in there, the
backtrace code would miss it and misinterpret the frame.
To fix this, make sure to process the instruction after
the first jump seen.

The assembly code for this is:

ffffffff80806fd0 <plat_irq_dispatch>:
ffffffff80806fd0: 67bdffd0 daddiu sp,sp,-48
ffffffff80806fd4: ffb30020 sd s3,32(sp)
ffffffff80806fd8: 24130018 li s3,24
ffffffff80806fdc: ffb20018 sd s2,24(sp)
ffffffff80806fe0: 3c12811c lui s2,0x811c
ffffffff80806fe4: ffb10010 sd s1,16(sp)
ffffffff80806fe8: 3c11811c lui s1,0x811c
ffffffff80806fec: ffb00008 sd s0,8(sp)
ffffffff80806ff0: 3c10811c lui s0,0x811c
ffffffff80806ff4: 08201c03 j ffffffff8080700c <plat_irq_dispatch+0x3c>
ffffffff80806ff8: ffbf0028 sd ra,40(sp)
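
Putting both rules together, the scan loop behaves roughly as below
(fetch_insn() and record_ra_offset() are hypothetical helpers for this
sketch; the real get_frame_info() differs in detail):

	/* Stop at a jump only once the frame setup has been seen, and
	 * still decode the instruction sitting in the jump's delay slot. */
	bool saw_frame_setup = false;
	unsigned int i;

	for (i = 0; i < max_insns; i++) {
		insn = fetch_insn(func, i);		/* hypothetical */

		if (is_sp_move_ins(&insn))
			saw_frame_setup = true;
		else if (is_ra_save_ins(&insn))
			record_ra_offset(&insn);	/* hypothetical */

		if (saw_frame_setup && is_jump_ins(&insn)) {
			/* one more: the delay slot may hold "sd ra" */
			insn = fetch_insn(func, i + 1);
			if (is_ra_save_ins(&insn))
				record_ra_offset(&insn);
			break;
		}
	}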

Signed-off-by: Corey Minyard <cminyard@mvista.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/16992/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# b0f5a8f3 29-May-2017 Vegard Nossum <vegard.nossum@oracle.com>

kthread: fix boot hang (regression) on MIPS/OpenRISC

This fixes a regression in commit 4d6501dce079 where I didn't notice
that MIPS and OpenRISC were reinitialising p->{set,clear}_child_tid to
NULL after our initialisation in copy_process().

We can simply get rid of the arch-specific initialisation here since it
is now always done in copy_process() before hitting copy_thread{,_tls}().

Review notes:

- As far as I can tell, copy_process() is the only user of
copy_thread_tls(), which is the only caller of copy_thread() for
architectures that don't implement copy_thread_tls().

- After this patch, there is no arch-specific code touching
p->set_child_tid or p->clear_child_tid whatsoever.

- It may look like MIPS/OpenRISC wanted to always have these fields be
NULL, but that's not true, as copy_process() would unconditionally
set them again _after_ calling copy_thread_tls() before commit
4d6501dce079.

Fixes: 4d6501dce079c1eb6bf0b1d8f528a5e81770109e ("kthread: Fix use-after-free if kthread fork fails")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Guenter Roeck <linux@roeck-us.net> # MIPS only
Acked-by: Stafford Horne <shorne@gmail.com>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
Cc: openrisc@lists.librecores.org
Cc: Jamie Iles <jamie.iles@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# f9c4e3a6 31-Mar-2017 James Cowgill <James.Cowgill@imgtec.com>

MIPS: Opt into HAVE_COPY_THREAD_TLS

This is the mips version of commit c1bd55f922a2d ("x86: opt into
HAVE_COPY_THREAD_TLS, for both 32-bit and 64-bit").

Simply use the tls system call argument instead of extracting the tls
argument by magic from the pt_regs structure.

See commit 3033f14ab78c3 ("clone: support passing tls argument via C
rather than pt_regs magic") for more background.

Signed-off-by: James Cowgill <James.Cowgill@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/15855/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# db8466c5 21-Mar-2017 Matt Redfearn <matt.redfearn@mips.com>

MIPS: IRQ Stack: Unwind IRQ stack onto task stack

When the separate IRQ stack was introduced, stack unwinding only
proceeded as far as the top of the IRQ stack, leading to kernel
backtraces being less useful, lacking the trace of what was interrupted.

Fix this by providing a means for the kernel to unwind the IRQ stack
onto the interrupted task stack. The processor state is saved to the
kernel task stack on interrupt. The IRQ_STACK_START macro reserves an
unsigned long at the top of the IRQ stack where the interrupted task
stack pointer can be saved. After the active stack is switched to the
IRQ stack, save the interrupted task's stack pointer to the reserved
location.

Fix the stack unwinding code to look for the frame being the top of the
IRQ stack and if so get the next frame from the saved location. The
existing test does not work with the separate stack since the ra is no
longer pointed at ret_from_{irq,exception}.

The test to stop unwinding the stack 32 bytes from the top of a stack
must be modified to allow unwinding to continue up to the location of
the saved task stack pointer when on the IRQ stack. The low / high marks
of the stack are set depending on whether the sp is on an irq stack or
not.
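
A condensed sketch of the mechanism described above (variable names are
illustrative; the real assembly and unwinder code differ in detail):

	/* IRQ_STACK_START reserves one unsigned long at the top of the
	 * IRQ stack for the interrupted task's stack pointer. */
	unsigned long irq_stack_base = (unsigned long)irq_stack[cpu];
	unsigned long irq_stack_top  = irq_stack_base + IRQ_STACK_START;

	/* Interrupt entry: after switching to the IRQ stack, remember
	 * where the interrupted task's stack was. */
	*(unsigned long *)irq_stack_top = task_sp;

	/* Stack unwinding: when the walked sp reaches the reserved slot,
	 * carry on from the saved task stack pointer instead of stopping. */
	if (sp == irq_stack_top)
		sp = *(unsigned long *)irq_stack_top;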

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: Masanari Iida <standby24x7@gmail.com>
Cc: Chris Metcalf <cmetcalf@mellanox.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jason A. Donenfeld <jason@zx2c4.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/15788/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 68db0cf1 08-Feb-2017 Ingo Molnar <mingo@kernel.org>

sched/headers: Prepare for new header dependencies before moving code to <linux/sched/task_stack.h>

We are going to split <linux/sched/task_stack.h> out of <linux/sched.h>, which
will have to be picked up from other headers and a couple of .c files.

Create a trivial placeholder <linux/sched/task_stack.h> file that just
maps to <linux/sched.h> to make this patch obviously correct and
bisectable.

Include the new header in the files that are going to need it.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>


# 29930025 08-Feb-2017 Ingo Molnar <mingo@kernel.org>

sched/headers: Prepare for new header dependencies before moving code to <linux/sched/task.h>

We are going to split <linux/sched/task.h> out of <linux/sched.h>, which
will have to be picked up from other headers and a couple of .c files.

Create a trivial placeholder <linux/sched/task.h> file that just
maps to <linux/sched.h> to make this patch obviously correct and
bisectable.

Include the new header in the files that are going to need it.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>


# b17b0153 08-Feb-2017 Ingo Molnar <mingo@kernel.org>

sched/headers: Prepare for new header dependencies before moving code to <linux/sched/debug.h>

We are going to split <linux/sched/debug.h> out of <linux/sched.h>, which
will have to be picked up from other headers and a couple of .c files.

Create a trivial placeholder <linux/sched/debug.h> file that just
maps to <linux/sched.h> to make this patch obviously correct and
bisectable.

Include the new header in the files that are going to need it.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>


# 08c941bf 21-Nov-2016 Marcin Nowakowski <marcin.nowakowski@mips.com>

MIPS: Move register dump routines out of ptrace code

Current register dump methods for MIPS are implemented inside ptrace
methods, but there will be other uses in the kernel for them, so keep
them separately in process.c and use those definitions for ptrace
instead.

Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14587/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# a00eeede 04-Nov-2016 Matt Redfearn <matt.redfearn@mips.com>

MIPS: SMP: Use a completion event to signal CPU up

If a secondary CPU failed to start, for any reason, the CPU requesting
the secondary to start would get stuck in the loop waiting for the
secondary to be present in the cpu_callin_map.

Rather than that, use a completion event to signal that the secondary
CPU has started and is waiting to synchronise counters.

Since the CPU presence will no longer be marked in cpu_callin_map,
remove the redundant test from arch_cpu_idle_dead().
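
A condensed sketch of the handshake (symbol names are illustrative, not
necessarily those used in the MIPS SMP code):

#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/jiffies.h>

static DECLARE_COMPLETION(cpu_running);		/* illustrative name */

/* Secondary CPU: signal that it has started and is ready to sync. */
static void secondary_started(void)
{
	complete(&cpu_running);
}

/* Requesting CPU: wait with a timeout instead of spinning on a callin
 * map, so a secondary that never starts cannot hang the caller. */
static int wait_for_secondary(void)
{
	if (!wait_for_completion_timeout(&cpu_running,
					 msecs_to_jiffies(1000)))
		return -EIO;
	return 0;
}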

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Maciej W. Rozycki <macro@imgtec.com>
Cc: Jiri Slaby <jslaby@suse.cz>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Chris Metcalf <cmetcalf@mellanox.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Qais Yousef <qsyousef@gmail.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14502/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 096a0de4 07-Nov-2016 Paul Burton <paulburton@kernel.org>

MIPS: Handle microMIPS jumps in the same way as MIPS32/MIPS64 jumps

is_jump_ins() checks for plain jump ("j") instructions since commit
e7438c4b893e ("MIPS: Fix sibling call handling in get_frame_info") but
that commit didn't make the same change to the microMIPS code, leaving
it inconsistent with the MIPS32/MIPS64 code. Handle the microMIPS
encoding of the jump instruction too such that it behaves consistently.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Fixes: e7438c4b893e ("MIPS: Fix sibling call handling in get_frame_info")
Cc: Tony Wu <tung7970@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # v3.10+
Patchwork: https://patchwork.linux-mips.org/patch/14533/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# bb9bc468 07-Nov-2016 Paul Burton <paulburton@kernel.org>

MIPS: Calculate microMIPS ra properly when unwinding the stack

get_frame_info() calculates the offset of the return address within a
stack frame simply by dividing the bottom 16 bits of the instruction,
treated as a signed integer, by the size of a long. Whilst this works
for MIPS32 & MIPS64 ISAs where the sw or sd instructions are used, it's
incorrect for microMIPS where encodings differ. The result is that we
typically completely fail to unwind the stack on microMIPS.

Fix this by adjusting is_ra_save_ins() to calculate the return address
offset, and take into account the various different encodings there in
the same place as we consider whether an instruction is storing the
ra/$31 register.

With this we are now able to unwind the stack for kernels targeting the
microMIPS ISA, for example we can produce:

Call Trace:
[<80109e1f>] show_stack+0x63/0x7c
[<8011ea17>] __warn+0x9b/0xac
[<8011ea45>] warn_slowpath_fmt+0x1d/0x20
[<8013fe53>] register_console+0x43/0x314
[<8067c58d>] of_setup_earlycon+0x1dd/0x1ec
[<8067f63f>] early_init_dt_scan_chosen_stdout+0xe7/0xf8
[<8066c115>] do_early_param+0x75/0xac
[<801302f9>] parse_args+0x1dd/0x308
[<8066c459>] parse_early_options+0x25/0x28
[<8066c48b>] parse_early_param+0x2f/0x38
[<8066e8cf>] setup_arch+0x113/0x488
[<8066c4f3>] start_kernel+0x57/0x328
---[ end trace 0000000000000000 ]---

Whereas previously we only produced:

Call Trace:
[<80109e1f>] show_stack+0x63/0x7c
---[ end trace 0000000000000000 ]---

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Fixes: 34c2f668d0f6 ("MIPS: microMIPS: Add unaligned access support.")
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # v3.10+
Patchwork: https://patchwork.linux-mips.org/patch/14532/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 67c75057 07-Nov-2016 Paul Burton <paulburton@kernel.org>

MIPS: Fix is_jump_ins() handling of 16b microMIPS instructions

is_jump_ins() checks 16b instruction fields without verifying that the
instruction is indeed 16b, as is done by is_ra_save_ins() &
is_sp_move_ins(). Add the appropriate check.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Fixes: 34c2f668d0f6 ("MIPS: microMIPS: Add unaligned access support.")
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # v3.10+
Patchwork: https://patchwork.linux-mips.org/patch/14531/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# b6c7a324 07-Nov-2016 Paul Burton <paulburton@kernel.org>

MIPS: Fix get_frame_info() handling of microMIPS function size

get_frame_info() is meant to iterate over up to the first 128
instructions within a function, but for microMIPS kernels it will not
reach that many instructions unless the function is 512 bytes long since
we calculate the maximum number of instructions to check by dividing the
function length by the 4 byte size of a union mips_instruction. In
microMIPS kernels this won't do since instructions are variable length.

Fix this by instead checking whether the pointer to the current
instruction has reached the end of the function, and use max_insns as a
simple constant to check the number of iterations against.
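
The loop bound then looks roughly like this (a sketch; decode_one_insn()
is a hypothetical helper and the identifiers are illustrative rather than
the exact get_frame_info() code):

	/* Stop at the end of the function or after max_insns decoded
	 * instructions, whichever comes first. */
	const unsigned int max_insns = 128;
	const u8 *ip = (const u8 *)func_start;
	const u8 *ip_end = ip + func_size;
	unsigned int i;

	for (i = 0; i < max_insns && ip < ip_end; i++) {
		/* decode_one_insn() returns 2 or 4 depending on whether a
		 * 16-bit or 32-bit microMIPS instruction was found. */
		ip += decode_one_insn(ip);
	}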

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Fixes: 34c2f668d0f6 ("MIPS: microMIPS: Add unaligned access support.")
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # v3.10+
Patchwork: https://patchwork.linux-mips.org/patch/14530/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# a3552dac 07-Nov-2016 Paul Burton <paulburton@kernel.org>

MIPS: Prevent unaligned accesses during stack unwinding

During stack unwinding we call a number of functions to determine what
type of instruction we're looking at. The union mips_instruction pointer
provided to them may be pointing at a 2 byte, but not 4 byte, aligned
address & we thus cannot directly access the 4 byte wide members of the
union mips_instruction. To avoid this is_ra_save_ins() copies the
required half-words of the microMIPS instruction to a correctly aligned
union mips_instruction on the stack, which it can then access safely.
The is_jump_ins() & is_sp_move_ins() functions do not correctly perform
this temporary copy, and instead attempt to directly dereference 4 byte
fields which may be misaligned and lead to an address exception.

Fix this by copying the instruction halfwords to a temporary union
mips_instruction in get_frame_info() such that we can provide a 4 byte
aligned union mips_instruction to the is_*_ins() functions and they do
not need to deal with misalignment themselves.
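
The aligned-copy step can be pictured as follows (a sketch; it follows the
kernel sources only approximately):

	union mips_instruction insn;

	/* ip may only be 2-byte aligned in microMIPS code, so assemble a
	 * 4-byte aligned copy before any word-sized field is read; memcpy
	 * never issues a misaligned 32-bit load. */
	memcpy(&insn, ip, 2 * sizeof(u16));

	/* The is_*_ins() helpers can now safely read word-sized fields. */
	if (is_jump_ins(&insn))
		break;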

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Fixes: 34c2f668d0f6 ("MIPS: microMIPS: Add unaligned access support.")
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # v3.10+
Patchwork: https://patchwork.linux-mips.org/patch/14529/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# ccaf7caf 07-Nov-2016 Paul Burton <paulburton@kernel.org>

MIPS: Clear ISA bit correctly in get_frame_info()

get_frame_info() can be called in microMIPS kernels with the ISA bit
already clear. For example this happens when unwind_stack_by_address()
is called because we begin with a PC that has the ISA bit set & subtract
the (odd) offset from the preceding symbol (which does not have the ISA
bit set). Since get_frame_info() unconditionally subtracts 1 from the PC
in microMIPS kernels it incorrectly misaligns the address it then
attempts to access code at, leading to an address error exception.

Fix this by using msk_isa16_mode() to clear the ISA bit, which allows
get_frame_info() to function regardless of whether it is provided with a
PC that has the ISA bit set or not.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Fixes: 34c2f668d0f6 ("MIPS: microMIPS: Add unaligned access support.")
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # v3.10+
Patchwork: https://patchwork.linux-mips.org/patch/14528/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# d42d8d10 19-Dec-2016 Matt Redfearn <matt.redfearn@mips.com>

MIPS: Stack unwinding while on IRQ stack

Within unwind stack, check if the stack pointer being unwound is within
the CPU's irq_stack and if so use that page rather than the task's stack
page.
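
The check amounts to a simple range test, roughly as below (assuming the
per-CPU irq_stack[] pointer array and an IRQ_STACK_SIZE constant from the
IRQ stack support; the real helper may differ):

	static inline bool on_irq_stack(int cpu, unsigned long sp)
	{
		unsigned long low = (unsigned long)irq_stack[cpu];
		unsigned long high = low + IRQ_STACK_SIZE;

		return sp >= low && sp < high;
	}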

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Acked-by: Jason A. Donenfeld <jason@zx2c4.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
Cc: Maciej W. Rozycki <macro@imgtec.com>
Cc: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: Chris Metcalf <cmetcalf@mellanox.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Jiri Slaby <jslaby@suse.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14741/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 7c0f6ba6 24-Dec-2016 Linus Torvalds <torvalds@linux-foundation.org>

Replace <asm/uaccess.h> with <linux/uaccess.h> globally

This was entirely automated, using the script by Al:

PATT='^[[:blank:]]*#[[:blank:]]*include[[:blank:]]*<asm/uaccess.h>'
sed -i -e "s!$PATT!#include <linux/uaccess.h>!" \
$(git grep -l "$PATT"|grep -v ^include/linux/uaccess.h)

to do the replacement at the end of the merge window.

Requested-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 9a01c3ed 07-Oct-2016 Chris Metcalf <cmetcalf@mellanox.com>

nmi_backtrace: add more trigger_*_cpu_backtrace() methods

Patch series "improvements to the nmi_backtrace code" v9.

This patch series modifies the trigger_xxx_backtrace() NMI-based remote
backtracing code to make it more flexible, and makes a few small
improvements along the way.

The motivation comes from the task isolation code, where there are
scenarios where we want to be able to diagnose a case where some cpu is
about to interrupt a task-isolated cpu. It can be helpful to see both
where the interrupting cpu is, and also an approximation of where the
cpu that is being interrupted is. The nmi_backtrace framework allows us
to discover the stack of the interrupted cpu.

I've tested that the change works as desired on tile, and build-tested
x86, arm, mips, and sparc64. For x86 I confirmed that the generic
cpuidle stuff as well as the architecture-specific routines are in the
new cpuidle section. For arm, mips, and sparc I just build-tested it
and made sure the generic cpuidle routines were in the new cpuidle
section, but I didn't attempt to figure out which the platform-specific
idle routines might be. That might be more usefully done by someone
with platform experience in follow-up patches.

This patch (of 4):

Currently you can only request a backtrace of either all cpus, or all
cpus but yourself. It can also be helpful to request a remote backtrace
of a single cpu, and since we want that, the logical extension is to
support a cpumask as the underlying primitive.

This change modifies the existing lib/nmi_backtrace.c code to take a
cpumask as its basic primitive, and modifies the linux/nmi.h code to use
the new "cpumask" method instead.

The existing clients of nmi_backtrace (arm and x86) are converted to
using the new cpumask approach in this change.

The other users of the backtracing API (sparc64 and mips) are converted
to use the cpumask approach rather than the all/allbutself approach.
The mips code ignored the "include_self" boolean but with this change it
will now also dump a local backtrace if requested.
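
With the cpumask primitive, requesting a backtrace of one specific remote
CPU becomes straightforward; a sketch assuming the trigger_cpumask_backtrace()
helper in <linux/nmi.h> that this series builds towards:

#include <linux/cpumask.h>
#include <linux/gfp.h>
#include <linux/nmi.h>

static void backtrace_one_cpu(int cpu)
{
	cpumask_var_t mask;

	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
		return;

	cpumask_clear(mask);
	cpumask_set_cpu(cpu, mask);
	trigger_cpumask_backtrace(mask);	/* backtrace just that CPU */

	free_cpumask_var(mask);
}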

Link: http://lkml.kernel.org/r/1472487169-14923-2-git-send-email-cmetcalf@mellanox.com
Signed-off-by: Chris Metcalf <cmetcalf@mellanox.com>
Tested-by: Daniel Thompson <daniel.thompson@linaro.org> [arm]
Reviewed-by: Aaron Tomlin <atomlin@redhat.com>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Miller <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# b244614a 30-Aug-2016 Marcin Nowakowski <marcin.nowakowski@mips.com>

MIPS: Avoid a BUG warning during prctl(PR_SET_FP_MODE, ...)

cpu_has_fpu macro uses smp_processor_id() and is currently executed
with preemption enabled, that triggers the warning at runtime.

It is assumed throughout the kernel that if any CPU has an FPU, then all
CPUs would have an FPU as well, so it is safe to perform the check with
preemption enabled - change the code to use raw_ variant of the check to
avoid the warning.

Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # 4.0+
Patchwork: https://patchwork.linux-mips.org/patch/14125/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 432c6bac 08-Jul-2016 Paul Burton <paulburton@kernel.org>

MIPS: Use per-mm page to execute branch delay slot instructions

In some cases the kernel needs to execute an instruction from the delay
slot of an emulated branch instruction. These cases include:

- Emulated floating point branch instructions (bc1[ft]l?) for systems
which don't include an FPU, or upon which the kernel is run with the
"nofpu" parameter.

- MIPSr6 systems running binaries targeting older revisions of the
architecture, which may include branch instructions whose encodings
are no longer valid in MIPSr6.

Executing instructions from such delay slots is done by writing the
instruction to memory followed by a trap, as part of an "emuframe", and
executing it. This avoids the requirement of an emulator for the entire
MIPS instruction set. Prior to this patch such emuframes are written to
the user stack and executed from there.

This patch moves FP branch delay emuframes off of the user stack and
into a per-mm page. Allocating a page per-mm leaves userland with access
to only what it had access to previously, and compared to other
solutions is relatively simple.

When a thread requires a delay slot emulation, it is allocated a frame.
A thread may only have one frame allocated at any one time, since it may
only ever be executing one instruction at any one time. In order to
ensure that we can free up allocated frame later, its index is recorded
in struct thread_struct. In the typical case, after executing the delay
slot instruction we'll execute a break instruction with the BRK_MEMU
code. This traps back to the kernel & leads to a call to do_dsemulret
which frees the allocated frame & moves the user PC back to the
instruction that would have executed following the emulated branch.
In some cases the delay slot instruction may be invalid, such as a
branch, or may trigger an exception. In these cases the BRK_MEMU break
instruction will not be hit. In order to ensure that frames are freed
this patch introduces dsemul_thread_cleanup() and calls it to free any
allocated frame upon thread exit. If the instruction generated an
exception & leads to a signal being delivered to the thread, or indeed
if a signal simply happens to be delivered to the thread whilst it is
executing from the struct emuframe, then we need to take care to exit
the frame appropriately. This is done by either rolling back the user PC
to the branch or advancing it to the continuation PC prior to signal
delivery, using dsemul_thread_rollback(). If this were not done then a
sigreturn would return to the struct emuframe, and if that frame had
meanwhile been used in response to an emulated branch instruction within
the signal handler then we would execute the wrong user code.

Whilst a user could theoretically place something like a compact branch
to self in a delay slot and cause their thread to become stuck in an
infinite loop with the frame never being deallocated, this would:

- Only affect the user's single process.

- Be architecturally invalid since there would be a branch in the
delay slot, which is forbidden.

- Be extremely unlikely to happen by mistake, and provide a program
with no more ability to harm the system than a simple infinite loop
would.

If a thread requires a delay slot emulation & no frame is available to
it (ie. the process has enough other threads that all frames are
currently in use) then the thread joins a waitqueue. It will sleep until
a frame is freed by another thread in the process.

Since we now know whether a thread has an allocated frame due to our
tracking of its index, the cookie field of struct emuframe is removed as
we can be more certain whether we have a valid frame. Since a thread may
only ever have a single frame at any given time, the epc field of struct
emuframe is also removed & the PC to continue from is instead stored in
struct thread_struct. Together these changes simplify & shrink struct
emuframe somewhat, allowing twice as many frames to fit into the page
allocated for them.

The primary benefit of this patch is that we are now free to mark the
user stack non-executable where that is possible.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: Maciej Rozycki <maciej.rozycki@imgtec.com>
Cc: Faraz Shahbazker <faraz.shahbazker@imgtec.com>
Cc: Raghu Gandham <raghu.gandham@imgtec.com>
Cc: Matthew Fortune <matthew.fortune@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13764/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# a90c59e6 21-May-2016 Andrea Gelmini <andrea.gelmini@gelma.net>

MIPS: kernel: Fix typo

Signed-off-by: Andrea Gelmini <andrea.gelmini@gelma.net>
Cc: paul.burton@imgtec.com
Cc: macro@imgtec.com
Cc: james.hogan@imgtec.com
Cc: jslaby@suse.cz
Cc: adam.buchbinder@gmail.com
Cc: trivial@kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13330/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 5f56a5df 20-May-2016 Jiri Slaby <jirislaby@kernel.org>

exit_thread: remove empty bodies

Define HAVE_EXIT_THREAD for archs which want to do something in
exit_thread. For others, let's define exit_thread as an empty inline.

This is a cleanup before we change the prototype of exit_thread to
accept a task parameter.

[akpm@linux-foundation.org: fix mips]
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chen Liqin <liqin.linux@gmail.com>
Cc: Chris Metcalf <cmetcalf@mellanox.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: David Howells <dhowells@redhat.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Jesper Nilsson <jesper.nilsson@axis.com>
Cc: Jiri Slaby <jslaby@suse.cz>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
Cc: Lennox Wu <lennox.wu@gmail.com>
Cc: Ley Foon Tan <lftan@altera.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Mikael Starvik <starvik@axis.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Rich Felker <dalias@libc.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: Richard Weinberger <richard@nod.at>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Steven Miao <realmz6@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 6b832257 20-Apr-2016 Paul Burton <paulburton@kernel.org>

MIPS: Force CPUs to lose FP context during mode switches

Commit 9791554b45a2 ("MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options
for MIPS") added support for the PR_SET_FP_MODE prctl, which allows a
userland program to modify its FP mode at runtime. This is most notably
required if dynamic linking leads to the FP mode requirement changing at
runtime from that indicated in the initial executable's ELF header. In
order to avoid overhead in the general FP context restore code, it aimed
to have threads in the process become unable to enable the FPU during a
mode switch & have the thread calling the prctl syscall wait for all
other threads in the process to be context switched at least once. Once
that happens we can know that no thread in the process whose mode will
be switched has live FP context, and it's safe to perform the mode
switch. However in the (rare) case of modeswitches occurring in
multithreaded programs this can lead to indeterminate delays for the
thread invoking the prctl syscall, and the code monitoring for those
context switches was woefully inadequate for all but the simplest cases.

Fix this by broadcasting an IPI if other CPUs may have live FP context
for an affected thread, with a handler causing those CPUs to relinquish
their FPU ownership. Threads will then be allowed to continue running
but will stall on the wait_on_atomic_t in enable_restore_fp_context if
they attempt to use FP again whilst the mode switch is still in
progress. The end result is less fragile poking at scheduler context
switch counts & a more expedient completion of the mode switch.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Fixes: 9791554b45a2 ("MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options for MIPS")
Reviewed-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: stable <stable@vger.kernel.org> # v4.0+
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13145/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# bd239f1e 20-Apr-2016 Paul Burton <paulburton@kernel.org>

MIPS: Disable preemption during prctl(PR_SET_FP_MODE, ...)

Whilst a PR_SET_FP_MODE prctl is performed there are decisions made
based upon whether the task is executing on the current CPU. This may
change if we're preempted, so disable preemption to avoid such changes
for the lifetime of the mode switch.
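
In outline the change is just a preemption guard around the mode-switch
logic (a sketch, not the full PR_SET_FP_MODE handler):

	int mode_switch_sketch(struct task_struct *task, unsigned int value)
	{
		int ret = 0;

		/* Keep this task on the current CPU for the whole mode
		 * switch so that "is the affected task running here?"
		 * decisions stay valid. */
		preempt_disable();

		/* ... validate 'value' and update the process' FP mode ... */

		preempt_enable();
		return ret;
	}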

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Fixes: 9791554b45a2 ("MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options for MIPS")
Reviewed-by: Maciej W. Rozycki <macro@imgtec.com>
Tested-by: Aurelien Jarno <aurelien@aurel32.net>
Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: stable <stable@vger.kernel.org> # v4.0+
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13144/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 04cc89d1 26-Mar-2016 Ralf Baechle <ralf@linux-mips.org>

MIPS: Make flush_thread

Avoids function calls to an empty function.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# a816b306 04-Dec-2015 James Hogan <jhogan@kernel.org>

MIPS: Don't unwind to user mode with EVA

When unwinding through IRQs and exceptions, the unwinding only continues
if the PC is a kernel text address, however since EVA it is possible for
user and kernel address ranges to overlap, potentially allowing
unwinding to continue to user mode if the user PC happens to be in the
kernel text address range.

Adjust the check to also ensure that the register state from before the
exception is actually running in kernel mode, i.e. !user_mode(regs).

I don't believe any harm can come of this problem, since the PC is only
output, the stack pointer is checked to ensure it resides within the
task's stack page before it is dereferenced in search of the return
address, and the return address register is similarly only output (if
the PC is in a leaf function or the beginning of a non-leaf function).

However unwind_stack() is only meant for unwinding kernel code, so to be
correct the unwind should stop there.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.15+
Patchwork: https://patchwork.linux-mips.org/patch/11700/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 92a76f6d 25-Feb-2016 Adam Buchbinder <adam.buchbinder@gmail.com>

MIPS: Fix misspellings in comments.

Signed-off-by: Adam Buchbinder <adam.buchbinder@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: trivial@kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/12617/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 76e5846d 01-Feb-2016 James Hogan <jhogan@kernel.org>

MIPS: Properly disable FPU in start_thread()

start_thread() (called for execve(2)) clears the TIF_USEDFPU flag
without atomically disabling the FPU. With a preemptive kernel, an
unfortunately timed preemption after this could result in another
task (or KVM guest) being scheduled in with the FPU still enabled, since
lose_fpu_inatomic() only turns it off if TIF_USEDFPU is set.

Use lose_fpu(0) instead of the separate FPU / MSA management, which
should do the right thing (drop FPU properly and atomically without
saving state) and will be more future proof.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/12302/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# e2c5aaa5 13-Mar-2015 Alex Dowad <alexinbeijing@gmail.com>

mips: copy_thread(): rename 'arg' argument to 'kthread_arg'

The 'arg' argument to copy_thread() is only ever used when forking a new
kernel thread. Hence, rename it to 'kthread_arg' for clarity (and consistency
with do_fork() and other arch-specific implementations of copy_thread()).

Signed-off-by: Alex Dowad <alexinbeijing@gmail.com>
Cc: linux-kernel@vger.kernel.org
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Alex Smith <alex@alex-smith.me.uk>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Eunbong Song <eunb.song@samsung.com>
Cc: linux-mips@linux-mips.org (open list:MIPS)
Patchwork: https://patchwork.linux-mips.org/patch/9546/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 8dd92891 04-Mar-2015 Rusty Russell <rusty@rustcorp.com.au>

mips: fix up obsolete cpu function usage.

Thanks to spatch, plus manual removal of "&*". Then a sweep for
for_each_cpu_mask => for_each_cpu.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Kevin Cernekee <cernekee@gmail.com>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: linux-mips@linux-mips.org


# 13e45f09 13-Jan-2015 Markos Chandras <markos.chandras@imgtec.com>

MIPS: kernel: process: Do not allow FR=0 on MIPS R6

A prctl() call to set FR=0 for MIPS R6 should not be allowed
since FR=1 is the only option for R6 cores.

Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Matthew Fortune <matthew.fortune@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>


# 9791554b 07-Jan-2015 Paul Burton <paulburton@kernel.org>

MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options for MIPS

Userland code may be built using an ABI which permits linking to objects
that have more restrictive floating point requirements. For example,
userland code may be built to target the O32 FPXX ABI. Such code may be
linked with other FPXX code, or code built for either one of the more
restrictive FP32 or FP64. When linking with more restrictive code, the
overall requirement of the process becomes that of the more restrictive
code. The kernel has no way to know in advance which mode the process
will need to be executed in, and indeed it may need to change during
execution. The dynamic loader is the only code which will know the
overall required mode, and so it needs to have a means to instruct the
kernel to switch the FP mode of the process.

This patch introduces 2 new options to the prctl syscall which provide
such a capability. The FP mode of the process is represented as a
simple bitmask combining a number of mode bits mirroring those present
in the hardware. Userland can either retrieve the current FP mode of
the process:

mode = prctl(PR_GET_FP_MODE);

or modify the current FP mode of the process:

err = prctl(PR_SET_FP_MODE, new_mode);
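
A minimal user-space example combining the two calls (it assumes
<sys/prctl.h> picks up the new PR_* constants from the patched uapi
headers; PR_FP_MODE_FR is the FR mode bit):

#include <stdio.h>
#include <sys/prctl.h>

int main(void)
{
	int mode = prctl(PR_GET_FP_MODE, 0, 0, 0, 0);

	if (mode < 0) {
		perror("PR_GET_FP_MODE");
		return 1;
	}

	/* Request FR=1 operation while keeping any other mode bits. */
	if (prctl(PR_SET_FP_MODE, mode | PR_FP_MODE_FR, 0, 0, 0) < 0) {
		perror("PR_SET_FP_MODE");
		return 1;
	}
	return 0;
}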

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Matthew Fortune <matthew.fortune@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/8899/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 39148e94 19-Jan-2015 James Hogan <jhogan@kernel.org>

MIPS: fork: Fix MSA/FPU/DSP context duplication race

There is a race in the MIPS fork code which allows the child to get a
stale copy of parent MSA/FPU/DSP state that is active in hardware
registers when the fork() is called. This is because copy_thread() saves
the live register state into the child context only if the hardware is
currently in use, apparently on the assumption that the hardware state
cannot have been saved and disabled since the initial duplication of the
task_struct. However preemption is certainly possible during this
window.

An example sequence of events is as follows:

1) The parent userland process puts important data into saved floating
point registers ($f20-$f31), which are then dirty compared to the
process' stored context.

2) The parent process calls fork() which does a clone system call.

3) In the kernel, do_fork() -> copy_process() -> dup_task_struct() ->
arch_dup_task_struct() (which uses the weakly defined default
implementation). This duplicates the parent process' task context,
which includes a stale version of its FP context from when it was
last saved, probably some time before (1).

4) At some point before copy_process() calls copy_thread(), such as when
duplicating the memory map, the process is descheduled. Perhaps it is
preempted asynchronously, or perhaps it sleeps while blocked on a
mutex. The dirty FP state in the FP registers is saved to the parent
process' context and the FPU is disabled.

5) When the process is rescheduled again it continues copying state
until it gets to copy_thread(), which checks whether the FPU is in
use, so that it can copy that dirty state to the child process' task
context. Because of the deschedule however the FPU is not in use, so
the child process' context is left with stale FP context from the
last time the parent saved it (some time before (1)).

6) When the new child process is scheduled it reads the important data
from the saved floating point register, and ends up doing a NULL
pointer dereference as a result of the stale data.

This use of saved floating point registers across function calls can be
triggered fairly easily by explicitly using inline asm with a current
(MIPS R2) compiler, but is far more likely to happen unintentionally
with a MIPS R6 compiler where the FP registers are more likely to get
used as scratch registers for storing non-fp data.

It is easily fixed, in the same way that other architectures do it, by
overriding the implementation of arch_dup_task_struct() to sync the
dirty hardware state to the parent process' task context *prior* to
duplicating it, rather than copying straight to the child process' task
context in copy_thread(). Note, the FPU hardware is not disabled so the
parent process may continue executing with the live register context,
but now the child process is guaranteed to have an identical copy of it
at that point.
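
The shape of the fix is roughly the following sketch (the is_*()/save_*()
helpers stand in for the MIPS context-saving routines; this is not a
verbatim copy of the committed code):

int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
{
	/* Flush any live FPU/MSA/DSP state into the parent's saved context
	 * first, so the child cannot be left with a stale copy even if the
	 * parent is preempted before copy_thread() runs. */
	preempt_disable();

	if (is_msa_enabled())
		save_msa(current);
	else if (is_fpu_owner())
		_save_fp(current);

	save_dsp(current);

	preempt_enable();

	/* Now duplicate the up-to-date parent context into the child. */
	*dst = *src;
	return 0;
}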

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reported-by: Matthew Fortune <matthew.fortune@imgtec.com>
Tested-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/9075/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 856839b7 22-Oct-2014 Eunbong Song <eunb.song@samsung.com>

MIPS: Add arch_trigger_all_cpu_backtrace() function

Currently, arch_trigger_all_cpu_backtrace() is defined in only x86 and
sparc which have an NMI. But in case of softlockup, it could be possible
to dump backtrace of all cpus. and this could be helpful for debugging.

for example, if the system has 2 CPUs:

    CPU 0                          CPU 1
    acquire read_lock()
                                   try to do write_lock()
    ...
    missing read_unlock()

In this case, a softlockup will occur because CPU 0 does not call
read_unlock(), and dump_stack() prints only the backtrace for CPU 0. If
CPU 1's backtrace is printed as well, that is very helpful.

[ralf@linux-mips.org: Fixed whitespace and formatting issues.]

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/8200/


# 635c9907 21-Oct-2014 Ralf Baechle <ralf@linux-mips.org>

MIPS: Remove useless parentheses

Based on the spatch

@@
expression e;
@@
- return (e);
+ return e;

with heavy hand-editing because some of the changes are either whitespace
or indentation only, or result in excessively long lines.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 7daef8f2 11-Jul-2014 Paul Burton <paulburton@kernel.org>

MIPS: consistently clear MSA flags when starting & copying threads

The TIF_MSA_CTX_LIVE flag (indicating that a task has MSA context which
needs to be preserved) was being cleared in start_thread, but the
TIF_USEDMSA flag (indicating that a task has used MSA in this timeslice)
was not. In copy_thread neither flag was cleared, but both need to be.
Without clearing these flags the kernel will proceed to attempt to save
MSA context when the task is context switched out, and if the task had
not used MSA in the meantime then it will fail because MSA or the FPU
are disabled. The end result is typically:

do_cpu invoked from kernel context![#1]:
CPU: 0 PID: 99 Comm: sh Not tainted 3.16.0-rc4-00025-g6dc9476-dirty #88
task: 8f23dc60 ti: 8f1d8000 task.ti: 8f1d8000
...
Call Trace:
[<8010edbc>] resume+0x5c/0x280
[<80481e0c>] __schedule+0x370/0x800
[<80104838>] work_resched+0x8/0x2c

Fix by consistently clearing both flags in both functions.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7309/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 60be939c 23-Jul-2014 Alex Smith <alex@alex-smith.me.uk>

MIPS: Remove asm/user.h

The struct user definition in this file is not used anywhere (the ELF
core dumper does not use that format). Therefore, remove the header and
instead enable the asm-generic user.h which is an empty header to
satisfy a few generic headers which still try to include user.h.

Signed-off-by: Alex Smith <alex@alex-smith.me.uk>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7459/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 30852ad0 23-Jul-2014 Alex Smith <alex@alex-smith.me.uk>

MIPS: Remove old core dump functions

Since the core dumper now uses regsets, the old core dump functions are
now unused. Remove them.

Signed-off-by: Alex Smith <alex@alex-smith.me.uk>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7456/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# b633648c 23-May-2014 Ralf Baechle <ralf@linux-mips.org>

MIPS: MT: Remove SMTC support

Nobody is maintaining SMTC anymore and there also seems to be no userbase.
Which is a pity - the SMTC technology primarily developed by Kevin D.
Kissell <kevink@paralogos.com> is an ingenious demonstration for the MT
ASE's power and elegance.

Based on Markos Chandras <Markos.Chandras@imgtec.com> patch
https://patchwork.linux-mips.org/patch/6719/ which while very similar did
no longer apply cleanly when I tried to merge it plus some additional
post-SMTC cleanup - SMTC was a feature as tricky to remove as it was to
merge once upon a time.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 1db1af84 27-Jan-2014 Paul Burton <paulburton@kernel.org>

MIPS: Basic MSA context switching support

This patch adds support for context switching the MSA vector registers.
These 128 bit vector registers are aliased with the FP registers - an
FP register accesses the least significant bits of the vector register
with which it is aliased (ie. the register with the same index). Due to
both this & the requirement that the scalar FPU must be 64-bit (FR=1) if
enabled at the same time as MSA the kernel will enable MSA & scalar FP
at the same time for tasks which use MSA. If we restore the MSA vector
context then we might as well enable the scalar FPU since the reason it
was left disabled was to allow for lazy FP context restoring - but we
just restored the FP context as it's a subset of the vector context. If
we restore the FP context and have previously used MSA then we have to
restore the whole vector context anyway (see comment in
enable_restore_fp_context for details) so similarly we might as well
enable MSA.

Thus if a task does not use MSA then it will continue to behave as
without this patch - the scalar FP context will be saved & restored as
usual. But if a task executes an MSA instruction then it will save &
restore the vector context forever more.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6431/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 6cec7c4a 27-Jan-2014 Paul Burton <paulburton@kernel.org>

MIPS: Don't assume 64-bit FP registers for dump_{,task_}fpu

This code assumed that saved FP registers are 64 bits wide, an
assumption which will no longer be true once MSA is introduced. This
patch modifies the code to copy the lower 64 bits of each register in
turn, which is safe for any FP register width >= 64 bits.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6425/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# a3056b1c 19-Nov-2013 Paul Burton <paulburton@kernel.org>

MIPS: replace open-coded init_dsp

There is already an init_dsp function which checks cpu_has_dsp & calls
__init_dsp if it does. Make use of it instead of duplicating the same
code.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: Qais Yousef <qais.yousef@imgtec.com>
Signed-off-by: John Crispin <blogic@openwrt.org>
Patchwork: http://patchwork.linux-mips.org/patch/6148/


# 597ce172 22-Nov-2013 Paul Burton <paulburton@kernel.org>

MIPS: Support for 64-bit FP with O32 binaries

CPUs implementing MIPS32 R2 may include a 64-bit FPU, just as MIPS64 CPUs
do. In order to preserve backwards compatibility a 64-bit FPU will act
like a 32-bit FPU (by accessing doubles from the least significant 32
bits of an even-odd pair of FP registers) when the Status.FR bit is
zero, again just like a mips64 CPU. The standard O32 ABI is defined
expecting a 32-bit FPU, however recent toolchains support use of a
64-bit FPU from an O32 MIPS32 executable. When an ELF executable is
built to use a 64-bit FPU a new flag (EF_MIPS_FP64) is set in the ELF
header.

With this patch the kernel will check the EF_MIPS_FP64 flag when
executing an O32 binary, and set Status.FR accordingly. The addition
of O32 64-bit FP support lessens the opportunity for optimisation in
the FPU emulator, so a CONFIG_MIPS_O32_FP64_SUPPORT Kconfig option is
introduced to allow this support to be disabled for those that don't
require it.

Inspired by an earlier patch by Leonid Yegoshin, but implemented more
cleanly & correctly.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Paul Burton <paul.burton@imgtec.com>
Patchwork: https://patchwork.linux-mips.org/patch/6154/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 42a11179 21-Jun-2013 Tony Wu <tung7970@gmail.com>

MIPS: Fix typos and cleanup comment

Signed-off-by: Tony Wu <tung7970@gmail.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/5535/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 36ecafc5 12-Jun-2013 Gregory Fong <gregory.0xf0@gmail.com>

MIPS: initial stack protector support

Implements basic stack protector support based on ARM version in
c743f38013aeff58ef6252601e397b5ba281c633 , with Kconfig option,
constant canary value set at boot time, and script to check if
compiler actually supports stack protector.

Tested by creating a kernel module that writes past end of char[].

Signed-off-by: Gregory Fong <gregory.0xf0@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: Filippo Arcidiacono <filippo.arcidiacono@st.com>
Cc: Carmelo Amoroso <carmelo.amoroso@st.com>
Patchwork: https://patchwork.linux-mips.org/patch/5448/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 49f2ec91 21-May-2013 Ralf Baechle <ralf@linux-mips.org>

MIPS: Consolidate idle loop / WAIT instruction support in a single file.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 5000653e 12-May-2013 Tony Wu <tung7970@gmail.com>

MIPS: Extract schedule_mfi info from __schedule

schedule_mfi is supposed to be extracted from schedule(), and
is used in thread_saved_pc and get_wchan.

But, after optimization, schedule() is reduced to a sibling
call to __schedule(), and no real frame info can be extracted.

One solution is to compile schedule() with -fno-omit-frame-pointer
and -fno-optimize-sibling-calls, but that will incur performance
degradation.

Another solution is to extract info from the real scheduler,
__schedule, and this is the approach adopted here.

This patch reads the __schedule address by either following
the 'j' call in schedule if KALLSYMS is disabled or by using
kallsyms_lookup_name to lookup __schedule if KALLSYMS is
available, then, extracts schedule_mfi from __schedule frame info.

This patch also fixes the "Can't analyze schedule() prologue"
warning at boot time.
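
For the KALLSYMS case the lookup itself is a one-liner; a simplified
sketch (the wrapper name is illustrative):

#include <linux/kallsyms.h>

/* Resolve the real scheduler so that its prologue, rather than the
 * schedule() sibling-call stub, is analysed for frame information. */
static unsigned long resolve_real_scheduler(void)
{
	return kallsyms_lookup_name("__schedule");
}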

Signed-off-by: Tony Wu <tung7970@gmail.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/5237/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# e7438c4b 12-May-2013 Tony Wu <tung7970@gmail.com>

MIPS: Fix sibling call handling in get_frame_info

Given a function, get_frame_info() analyzes its instructions
to figure out frame size and return address. get_frame_info()
works as follows:

1. analyze up to 128 instructions if the function size is unknown
2. search for 'addiu/daddiu sp,sp,-immed' for frame size
3. search for 'sw ra,offset(sp)' for return address
4. end search when it sees jr/jal/jalr

This leads to an issue when the given function is a sibling
call, example shown as follows.

801ca110 <schedule>:
801ca110: 8f820000 lw v0,0(gp)
801ca114: 8c420000 lw v0,0(v0)
801ca118: 080726f0 j 801c9bc0 <__schedule>
801ca11c: 00000000 nop

801ca120 <io_schedule>:
801ca120: 27bdffe8 addiu sp,sp,-24
801ca124: 3c028022 lui v0,0x8022
801ca128: afbf0014 sw ra,20(sp)

In this case, get_frame_info() cannot properly detect schedule's
frame info, and eventually returns io_schedule's instead.

This patch adds 'j' to the end search condition to workaround
sibling call cases.

Signed-off-by: Tony Wu <tung7970@gmail.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/5236/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 34c2f668 25-Mar-2013 Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>

MIPS: microMIPS: Add unaligned access support.

Add logic needed to handle unaligned accesses in microMIPS mode.

Signed-off-by: Steven J. Hill <Steven.Hill@imgtec.com>
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>


# cdbedc61 21-Mar-2013 Thomas Gleixner <tglx@linutronix.de>

mips: Use generic idle loop

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Cc: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Cc: Magnus Damm <magnus.damm@gmail.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Link: http://lkml.kernel.org/r/20130321215234.754954871@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>


# 64b3122d 27-Dec-2012 Al Viro <viro@zeniv.linux.org.uk>

mips: take the "zero newsp means inherit the parent's one" to copy_thread()

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>


# 70342287 21-Jan-2013 Ralf Baechle <ralf@linux-mips.org>

MIPS: Whitespace cleanup.

Having received another series of whitespace patches I decided to do this
once and for all rather than dealing with this kind of patches trickling
in forever.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 8add1ecb 13-Aug-2012 Huacai Chen <chenhuacai@kernel.org>

MIPS: Fix poweroff failure when HOTPLUG_CPU configured.

When poweroff machine, kernel_power_off() call disable_nonboot_cpus().
And if we have HOTPLUG_CPU configured, disable_nonboot_cpus() is not an
empty function but attempt to actually disable the nonboot cpus. Since
system state is SYSTEM_POWER_OFF, play_dead() won't be called and thus
disable_nonboot_cpus() hangs. Therefore, we make this patch to avoid
poweroff failure.

Signed-off-by: Huacai Chen <chenhc@lemote.com>
Signed-off-by: Hongliang Tao <taohl@lemote.com>
Signed-off-by: Hua Yan <yanh@lemote.com>
Cc: Yong Zhang <yong.zhang@windriver.com>
Cc: stable@vger.kernel.org
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Patchwork: https://patchwork.linux-mips.org/patch/4211/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# afa86fc4 22-Oct-2012 Al Viro <viro@zeniv.linux.org.uk>

flagday: don't pass regs to copy_thread()

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>


# 8f54bcac 09-Oct-2012 Al Viro <viro@zeniv.linux.org.uk>

mips: switch to generic kernel_thread()

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>


# b81947c6 28-Mar-2012 David Howells <dhowells@redhat.com>

Disintegrate asm/system.h for MIPS

Disintegrate asm/system.h for MIPS.

Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
cc: linux-mips@linux-mips.org


# bd2f5536 20-Mar-2011 Thomas Gleixner <tglx@linutronix.de>

sched/rt: Use schedule_preempt_disabled()

Coccinelle based conversion.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-24swm5zut3h9c4a6s46x8rws@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>


# 1268fbc7 17-Nov-2011 Frederic Weisbecker <fweisbec@gmail.com>

nohz: Remove tick_nohz_idle_enter_norcu() / tick_nohz_idle_exit_norcu()

Those two APIs were provided to optimize the calls of
tick_nohz_idle_enter() and rcu_idle_enter() into a single
irq disabled section. This way no interrupt happening in-between would
needlessly process any RCU job.

Now we are talking about an optimization whose benefits have yet to
be measured. Let's start simple and completely decouple the idle RCU
and dyntick idle logics.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>


# 2bbb6817 08-Oct-2011 Frederic Weisbecker <fweisbec@gmail.com>

nohz: Allow rcu extended quiescent state handling separately from tick stop

It is assumed that RCU won't be used once we switch to tickless
mode and until we restart the tick. However, this is not always
true; on x86-64, for example, we dereference the idle notifiers
after the tick is stopped.

To prepare for fixing this, add two new APIs:
tick_nohz_idle_enter_norcu() and tick_nohz_idle_exit_norcu().

If no use of RCU is made in the idle loop between the
tick_nohz_idle_enter() and tick_nohz_idle_exit() calls, the arch
must instead call the new *_norcu() versions so that it doesn't
need to call rcu_idle_enter() and rcu_idle_exit().

Otherwise the arch must call tick_nohz_idle_enter() and
tick_nohz_idle_exit() and also explicitly call (as sketched below):

- rcu_idle_enter() after its last use of RCU before the CPU is put
to sleep.
- rcu_idle_exit() before the first use of RCU after the CPU is woken
up.
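
A minimal sketch of that second calling pattern, assuming the usual
shape of an arch idle loop; the extern prototypes below are
simplified stand-ins for the kernel APIs named above:

    extern void tick_nohz_idle_enter(void);
    extern void tick_nohz_idle_exit(void);
    extern void rcu_idle_enter(void);
    extern void rcu_idle_exit(void);
    extern void schedule(void);
    extern int need_resched(void);          /* a macro in the kernel */
    extern void wait_for_interrupt(void);   /* placeholder for the arch's wait/hlt */

    static void cpu_idle_sketch(void)
    {
            for (;;) {
                    tick_nohz_idle_enter();
                    /* ... last use of RCU (e.g. idle notifiers) ... */
                    rcu_idle_enter();
                    while (!need_resched())
                            wait_for_interrupt();
                    rcu_idle_exit();
                    /* ... RCU may be used again from here on ... */
                    tick_nohz_idle_exit();
                    schedule();
            }
    }

    /* If the loop makes no RCU use at all between the tick calls, the
     * combined tick_nohz_idle_enter_norcu()/_exit_norcu() helpers can be
     * used instead and the two rcu_idle_*() calls dropped. */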

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Frysinger <vapier@gentoo.org>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: David Miller <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>


# 280f0677 07-Oct-2011 Frederic Weisbecker <fweisbec@gmail.com>

nohz: Separate out irq exit and idle loop dyntick logic

The tick_nohz_stop_sched_tick() function, which tries to delay
the next timer tick as long as possible, can be called from two
places:

- From the idle loop to start the dyntick idle mode
- From interrupt exit if we have interrupted the dyntick
idle mode, so that we reprogram the next tick event in
case the irq changed some internal state that requires this
action.

There are only a few minor differences between the two cases, handled
by that function and driven by the ts->inidle per-cpu variable and
the inidle parameter. Together these guarantee that we only update
the dyntick mode on irq exit if we actually interrupted the dyntick
idle mode, and that we enter the RCU extended quiescent state only
from idle loop entry.

Split this function into:

- tick_nohz_idle_enter(), which sets ts->inidle to 1, enters
dynticks idle mode unconditionally if it can, and enters into RCU
extended quiescent state.

- tick_nohz_irq_exit() which only updates the dynticks idle mode
when ts->inidle is set (ie: if tick_nohz_idle_enter() has been called).

To maintain symmetry, tick_nohz_restart_sched_tick() has been renamed
into tick_nohz_idle_exit().

This simplifies the code and micro-optimizes the irq exit path (no need
for local_irq_save there). This also prepares for the split between
dynticks and rcu extended quiescent state logics. We'll need this split to
further fix illegal uses of RCU in extended quiescent states in the idle
loop.
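
A rough sketch of the guard this split introduces; names carry a
_sketch suffix since this is only an illustration, and the real state
lives in the kernel's per-cpu tick_sched structure:

    #include <stdbool.h>

    struct tick_sched_sketch { bool inidle; };

    static void tick_nohz_idle_enter_sketch(struct tick_sched_sketch *ts)
    {
            ts->inidle = true;
            /* ... unconditionally try to stop the tick and enter the RCU
             *     extended quiescent state ... */
    }

    static void tick_nohz_irq_exit_sketch(struct tick_sched_sketch *ts)
    {
            if (!ts->inidle)
                    return;         /* we did not interrupt the idle loop */
            /* ... re-evaluate and reprogram the next tick event ... */
    }

    static void tick_nohz_idle_exit_sketch(struct tick_sched_sketch *ts)
    {
            ts->inidle = false;
            /* ... restart the periodic tick (was
             *     tick_nohz_restart_sched_tick()) ... */
    }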

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Frysinger <vapier@gentoo.org>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: David Miller <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>


# cae39d13 28-Jul-2011 Paul Gortmaker <paul.gortmaker@windriver.com>

mips: add export.h to files using EXPORT_SYMBOL/THIS_MODULE

Or else we get lots of variations on this:

arch/mips/pci/pci.c:330: warning: type defaults to 'int' in declaration of 'EXPORT_SYMBOL'

scattered throughout the build.

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>


# 848484e2 23-Jul-2011 Paul Gortmaker <paul.gortmaker@windriver.com>

mips: remove needless include of module.h from core kernel files.

None of these files are using modular infrastructure, and build
tests reveal that none of these files are really relying on any
implicit inclusions via module.h either. So delete them.

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>


# 3a713660 10-Jun-2011 Mathias Krause <minipli@googlemail.com>

MIPS: Remove redundant addr_limit assignment on exec.

The address limit is already set in flush_old_exec() via set_fs(USER_DS)
so this assignment is redundant.

[ralf@linux-mips.org: also see dac853ae89043f1b7752875300faf614de43c74b for
further explanation.]

Signed-off-by: Mathias Krause <minipli@googlemail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/2466/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 94ea09c6 13-May-2011 Daniel Kalmar <kalmard@homejinni.com>

MIPS: Add new unwind_stack variant

The unwind_stack_by_address variant supports unwinding based
on any kernel code address.
This symbol is also exported so it can be called from modules.

Signed-off-by: Daniel Kalmar <kalmard@homejinni.com>
Signed-off-by: Gergely Kis <gergely@homejinni.com>
Signed-off-by: Robert Richter <robert.richter@amd.com>


# 25985edc 30-Mar-2011 Lucas De Marchi <lucas.demarchi@profusion.mobi>

Fix common misspellings

Fixes generated by 'codespell' and manually reviewed.

Signed-off-by: Lucas De Marchi <lucas.demarchi@profusion.mobi>


# a989ff89 04-Nov-2010 Al Viro <viro@ZenIV.linux.org.uk>

MIPS: Don't stomp on caller's ->regs[2] in copy_thread()

We never needed that (->regs[2] is overwritten on return from syscall
paths with the return value of the syscall, so storing it there early
made no sense), and with the new restart logic since
d27240bf7e61d2656de18e158ec910a902030847 it has become really bad - we
lose the original syscall number before the place where we decide that
we might need a syscall restart.

Note that for child we do need the assignment to regs[2] - it won't go
through the normal return from syscall path.

[Ralf: Issue found and reported by Lluís; initial investigations by me;
bug finally found and patch by Al; testing by me and Lluís.]

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Tested-by: Lluís Batlle i Rossell <viriketo@gmail.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 7a7ac952 09-Mar-2010 Wu Zhangjin <wuzhangjin@gmail.com>

MIPS: Trace: Don't trace irqsoff for the idle process

Like x86 did in arch/x86/kernel/{process_32.c,process_64.c}, also don't
trace irqsoff for idle.

If there's no useful work to be done, we don't care about the irqsoff
duration. If we trace the idle process, the max duration of irqsoff will
be the idle time and make the irqsoff tracer useless.

Signed-off-by: Wu Zhangjin <wuzhangjin@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Patchwork: http://patchwork.linux-mips.org/patch/1044/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 5a0e3ad6 24-Mar-2010 Tejun Heo <tj@kernel.org>

include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h

percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.

percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities to include
those headers directly instead of assuming availability. As this
conversion needs to touch a large number of source files, the
following script is used as the basis of the conversion.

http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following.

* Scan files for gfp and slab usages and update includes such that
only the necessary includes are there, i.e. if only gfp is used,
gfp.h; if slab is used, slab.h.

* When the script inserts a new include, it looks at the include
blocks and tries to put the new include such that its order conforms
to its surroundings. It's put in the include block which contains
core kernel includes, in the same order that the rest are ordered -
alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
doesn't seem to be any matching order.

* If the script can't find a place to put a new include (mostly
because the file doesn't have a fitting include block), it prints out
an error message indicating which .h file needs to be added to the
file.

The conversion was done in the following steps.

1. The initial automatic conversion of all .c files updated slightly
over 4000 files, deleting around 700 includes and adding ~480 gfp.h
and ~3000 slab.h inclusions. The script emitted errors for ~400
files.

2. Each error was manually checked. Some didn't need the inclusion,
some needed manual addition while adding it to implementation .h or
embedding .c file was more appropriate for others. This step added
inclusions to around 150 files.

3. The script was run again and the output was compared to the edits
from #2 to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed.
e.g. lib/decompress_*.c used malloc/free() wrappers around slab
APIs requiring slab.h to be added manually.

5. The script was run on all .h files but without automatically
editing them as sprinkling gfp.h and slab.h inclusions around .h
files could easily lead to inclusion dependency hell. Most gfp.h
inclusion directives were ignored as stuff from gfp.h was usually
wildly available and often used in preprocessor macros. Each
slab.h inclusion directive was examined and added manually as
necessary.

6. percpu.h was updated not to include slab.h.

7. Build tests were done on the following configurations and failures
were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
distributed build env didn't work with gcov compiles) and a few
more options had to be turned off depending on archs to make things
build (like ipr on powerpc/64 which failed due to missing writeq).

* x86 and x86_64 UP and SMP allmodconfig and a custom test config.
* powerpc and powerpc64 SMP allmodconfig
* sparc and sparc64 SMP allmodconfig
* ia64 SMP allmodconfig
* s390 SMP allmodconfig
* alpha SMP allmodconfig
* um on x86_64 SMP allmodconfig

8. percpu.h modifications were reverted so that it could be applied as
a separate patch and serve as bisection point.

Given the fact that I had only a couple of failures from tests on step
6, I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers which should be easily discoverable on most builds of the
specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>


# 484889fc 08-Jul-2009 David Daney <ddaney@caviumnetworks.com>

MIPS: Avoid clobbering struct pt_regs in kthreads

The resume() implementation in octeon_switch.S examines the saved cp0_status
register. We were clobbering the entire pt_regs structure in kernel
threads leading to random crashes.

When switching away from a kernel thread, the saved cp0_status is examined
and if bit 30 is set it is cleared and the CP2 state saved into the pt_regs
structure. Since the kernel thread stack overlaid the pt_regs structure
this resulted in a corrupt stack. When the kthread with the corrupt stack
was resumed, it could crash if it used any of the data in the stack that was
clobbered.

We fix it by moving the kernel thread stack down so it doesn't overlay
pt_regs.
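
A simplified sketch of the layout involved; THREAD_SIZE, the pt_regs
size and the small pad are illustrative assumptions, not the exact
MIPS values:

    #include <stdio.h>

    #define THREAD_SIZE     0x2000UL  /* assumed stack allocation size */
    #define PT_REGS_SIZE    0x150UL   /* stand-in for sizeof(struct pt_regs) */
    #define TOP_PAD         32UL      /* assumed pad above pt_regs */

    int main(void)
    {
            unsigned long stack_page = 0x80100000UL; /* pretend task_stack_page(p) */

            /* pt_regs sit near the top of the stack page */
            unsigned long regs = stack_page + THREAD_SIZE - TOP_PAD - PT_REGS_SIZE;

            /* Before: the kernel thread's SP started above pt_regs, so its
             * first stack frames grew down over them (and over the saved
             * cp0_status that octeon's resume() inspects). */
            unsigned long old_sp = stack_page + THREAD_SIZE - TOP_PAD;

            /* After (sketch): start the kernel thread's stack below pt_regs. */
            unsigned long new_sp = regs;

            printf("pt_regs %#lx, old sp %#lx, new sp %#lx\n",
                   regs, old_sp, new_sp);
            return 0;
    }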

Signed-off-by: David Daney <ddaney@caviumnetworks.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 1b2bc75c 23-Jun-2009 Ralf Baechle <ralf@linux-mips.org>

MIPS: Add arch generic CPU hotplug

Each platform has to add support for CPU hotplugging itself by providing
suitable definitions for the cpu_disable and cpu_die of the smp_ops
methods and setting SYS_SUPPORTS_HOTPLUG_CPU. A platform should only set
SYS_SUPPORTS_HOTPLUG_CPU once all its smp_ops definitions have the
necessary changes. This patch contains the changes to the dummy smp_ops
definition for uni-processor systems.

Parts of the code contributed by Cavium Inc.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 6f2c55b8 02-Apr-2009 Alexey Dobriyan <adobriyan@gmail.com>

Simplify copy_thread()

First argument unused since 2.3.11.

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 9cc12363 09-Sep-2008 Kevin D. Kissell <kevink@paralogos.com>

[MIPS] SMTC: Fix holes in SMTC and FPU affinity support.

Signed-off-by: Kevin D. Kissell <kevink@paralogos.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 6657fe0a 09-Sep-2008 Ralf Baechle <ralf@linux-mips.org>

[MIPS] SMTC: Clear TIF_FPUBOUND on clone / fork.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 9d5a9e74 27-Jun-2008 Adrian Bunk <bunk@kernel.org>

Remove asm/a.out.h files for all architectures without a.out support.

This patch also includes the required removal of the (unused)
inclusions of <asm/a.out.h> and <linux/a.out.h> in the arch/ code for
these architectures.

[dwmw2: updated for 2.6.27-rc]
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>


# b8f8c3cf 18-Jul-2008 Thomas Gleixner <tglx@linutronix.de>

nohz: prevent tick stop outside of the idle loop

Jack Ren and Eric Miao tracked down the following long standing
problem in the NOHZ code:

scheduler switch to idle task
enable interrupts

Window starts here

----> interrupt happens (does not set NEED_RESCHED)
irq_exit() stops the tick

----> interrupt happens (does set NEED_RESCHED)

return from schedule()

cpu_idle(): preempt_disable();

Window ends here

The interrupts can happen at any point inside the race window. The
first interrupt stops the tick, the second one causes the scheduler to
rerun and switch away from idle again and we end up with the tick
disabled.

The fact that it needs two interrupts where the first one does not set
NEED_RESCHED and the second one does made the bug obscure and extremely
hard to reproduce and analyse. Kudos to Jack and Eric.

Solution: Limit the NOHZ functionality to the idle loop to make sure
that we can not run into such a situation ever again.

cpu_idle()
{
        preempt_disable();

        while (1) {
                tick_nohz_stop_sched_tick(1);  <- tell NOHZ code that we
                                                  are in the idle loop

                while (!need_resched())
                        halt();

                tick_nohz_restart_sched_tick();  <- disables NOHZ mode
                preempt_enable_no_resched();
                schedule();
                preempt_disable();
        }
}

In hindsight we should have done this forever, but ...

/me grabs a large brown paperbag.

Debugged-by: Jack Ren <jack.ren@marvell.com>,
Debugged-by: eric miao <eric.y.miao@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>


# 2957c9e6 15-Jul-2008 Ralf Baechle <ralf@linux-mips.org>

[MIPS] IRIX: Goodbye and thanks for all the fish

Never terribly functional or popular, and plagued by hard-to-fix bugs,
the time to say goodbye has more than arrived.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# bbaf238b 13-Dec-2007 Chris Dearman <chris@mips.com>

[MIPS] Ensure that ST0_FR is never set on a 32 bit kernel

Signed-off-by: Chris Dearman <chris@mips.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 49a89efb 11-Oct-2007 Ralf Baechle <ralf@linux-mips.org>

[MIPS] Fix "no space between function name and open parenthesis" warnings.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 7bcf7717 11-Oct-2007 Ralf Baechle <ralf@linux-mips.org>

[MIPS] Implement clockevents for R4000-style cp0 count/compare interrupt

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# e5d77754 18-Sep-2007 Maciej W. Rozycki <macro@linux-mips.org>

[MIPS] R3000 setup for kernel_thread()

Match the R4000 semantics for the initial state of interrupt/kernel
status register flags for the R3000 in kernel_thread().

Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 293c5bd1 25-Jul-2007 Ralf Baechle <ralf@linux-mips.org>

[MIPS] Fixup secure computing stuff.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 94109102 19-Jul-2007 Franck Bui-Huu <fbuihuu@gmail.com>

[MIPS] User stack pointer randomisation

Signed-off-by: Franck Bui-Huu <fbuihuu@gmail.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# b3f6df9f 25-May-2007 Robert P. J. Day <rpjday@mindspring.com>

[MIPS] Transform old-style macros to newer "__noreturn"

Convert old/obsolete NORET_TYPE and ATTRIB_NORET macros to use the
newer standard of "__noreturn" as defined in compiler-gcc.h.

Signed-off-by: Robert P. J. Day <rpjday@mindspring.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# c68644d3 26-Feb-2007 Ralf Baechle <ralf@linux-mips.org>

[MIPS] Make SMTC_IDLE_HOOK_DEBUG a proper option in Kconfig.debug.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# db0b937d 18-Feb-2007 Atsushi Nemoto <anemo@mba.ocn.ne.jp>

[MIPS] Make kernel_thread_helper() static

Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 151fd6ac 15-Feb-2007 Ralf Baechle <ralf@linux-mips.org>

[MIPS] signals: Share even more code.

native and compat do_signal and handle_signal are identical and can easily
be unified.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 447deafb 04-Feb-2007 Ralf Baechle <ralf@linux-mips.org>

[MIPS] SMTC: Cleanup idle hook invocation.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 55b74283 13-Oct-2006 Franck Bui-Huu <fbuihuu@gmail.com>

[MIPS] Use kallsyms_lookup_size_offset() instead of kallsyms_lookup()

This new routine doesn't look up symbol names, so we needn't pass any
char buffers or pointers since we don't care about the names.

Signed-off-by: Franck Bui-Huu <fbuihuu@gmail.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# e04582b7 08-Oct-2006 Atsushi Nemoto <anemo@mba.ocn.ne.jp>

[MIPS] Make sure cpu_has_fpu is used only in atomic context

Make sure cpu_has_fpu (which uses smp_processor_id()) is used only in
atomic context.

Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 1924600c 29-Sep-2006 Atsushi Nemoto <anemo@mba.ocn.ne.jp>

[MIPS] Make unwind_stack() able to dig into the interrupted context

If the PC was ret_from_irq or ret_from_exception, there will be no
more normal stackframe. Instead of stopping the unwinding, use PC and
RA saved by an exception handler to continue unwinding into the
interrupted context. This also simplifies the CONFIG_STACKTRACE code.
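
Roughly, the idea looks like the sketch below; the struct is a
simplified stand-in for the pt_regs the exception handler saved on
the stack ($29 is sp, $31 is ra on MIPS):

    struct pt_regs_sketch {
            unsigned long regs[32];   /* general registers saved at exception time */
            unsigned long cp0_epc;    /* PC of the interrupted context */
    };

    /* Called when the unwinder's PC matches ret_from_irq/ret_from_exception:
     * instead of giving up, pull SP/RA/PC of the interrupted context out of
     * the saved register frame and keep unwinding from there. */
    static unsigned long unwind_through_exception(unsigned long *sp,
                                                  unsigned long *ra)
    {
            struct pt_regs_sketch *regs = (struct pt_regs_sketch *)*sp;

            *sp = regs->regs[29];     /* interrupted context's stack pointer */
            *ra = regs->regs[31];     /* and its return address */
            return regs->cp0_epc;     /* continue from the interrupted PC */
    }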

Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 1df0f0ff 26-Sep-2006 Atsushi Nemoto <anemo@mba.ocn.ne.jp>

[MIPS] lockdep: Add STACKTRACE_SUPPORT and enable LOCKDEP_SUPPORT

Implement stacktrace interface by using unwind_stack() and enable lockdep
support in Kconfig.

Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# b5943182 18-Aug-2006 Franck Bui-Huu <vagabon.xyz@gmail.com>

[MIPS] get_wchan(): remove uses of mfinfo[64]

This array was used to 'cache' some frame info about scheduler
functions to speed up get_wchan(). The array was 1KB in size and
was only used when CONFIG_KALLSYMS was set, but it was declared for
all configs.

Rather than make the array's declaration conditional, this patch
removes the array and its uses. Indeed, the common case doesn't
seem to use this array, and get_wchan() is not a critical path
anyway.

It results in a smaller bss and smaller, cleaner code:

   text    data     bss      dec     hex  filename
2543808  254148  139296  2937252  2cd1a4  vmlinux-new-get-wchan
2544080  254148  143392  2941620  2ce2b4  vmlinux~old

Signed-off-by: Franck Bui-Huu <vagabon.xyz@gmail.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 29b376ff 18-Aug-2006 Franck Bui-Huu <vagabon.xyz@gmail.com>

[MIPS] get_frame_info(): null function size means size is unknown

This patch adds 2 sanity checks.

The first one tests that the start address of the function to analyze
has been set by the caller. If not, return an error, since nothing
useful can be done without it.

The second one checks that the function's size has been set. A null
size can happen if CONFIG_KALLSYMS is not set, and it means that we
don't know the size of the function to analyze. In this case, we make
it equal to 128 instructions by default.
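
In other words, something along these lines (a sketch with
hypothetical names, not the actual kernel structure):

    struct frame_info_sketch {
            unsigned long func;       /* start address of the function to analyze */
            unsigned int func_insns;  /* size in instructions, 0 = unknown */
    };

    static int check_frame_info(struct frame_info_sketch *info)
    {
            if (!info->func)          /* start address not set by the caller */
                    return -1;
            if (!info->func_insns)    /* unknown size, e.g. !CONFIG_KALLSYMS */
                    info->func_insns = 128;  /* bound the prologue scan anyway */
            return 0;
    }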

Signed-off-by: Franck Bui-Huu <vagabon.xyz@gmail.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 1fd69098 18-Aug-2006 Franck Bui-Huu <vagabon.xyz@gmail.com>

[MIPS] unwind_stack(): return ra if an exception occurred at the first instruction

Signed-off-by: Franck Bui-Huu <vagabon.xyz@gmail.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 4d157d5e 03-Aug-2006 Franck Bui-Huu <vagabon.xyz@gmail.com>

[MIPS] Improve unwind_stack()

This patch allows unwind_stack() to return ra for leaf functions,
but it tries to detect cases where get_frame_info() wrongly
considers a nested function to be a leaf one.

It also passes 'unsigned long *sp' instead of 'unsigned long **sp'
as the second parameter. The code looks cleaner.

Signed-off-by: Franck Bui-Huu <vagabon.xyz@gmail.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 0cceb4aa 03-Aug-2006 Franck Bui-Huu <vagabon.xyz@gmail.com>

[MIPS] Make get_frame_info() more robust

Now get_frame_info() wants to detect the 'move sp' instruction first.
It assumes that the instruction saving ra to the stack can't happen
before the frame-size space has been allocated on the stack.

Signed-off-by: Franck Bui-Huu <vagabon.xyz@gmail.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 6057a798 03-Aug-2006 Franck Bui-Huu <vagabon.xyz@gmail.com>

[MIPS] Make frame_info_init() more readable.

Signed-off-by: Franck Bui-Huu <vagabon.xyz@gmail.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# c0efbb6d 03-Aug-2006 Franck Bui-Huu <vagabon.xyz@gmail.com>

[MIPS] Make get_frame_info() more readable.

Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Franck Bui-Huu <vagabon.xyz@gmail.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# f66686f7 29-Jul-2006 Atsushi Nemoto <anemo@mba.ocn.ne.jp>

[MIPS] dump_stack() based on prologue code analysis

Instead of dumping all possible addresses on the stack, unwind the
stack frame based on prologue code analysis, as get_wchan() does.
Since the code analysis might fail for some reason, there is a new
kernel option "raw_show_trace" to disable this feature.

Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 6ab3d562 30-Jun-2006 Jörn Engel <joern@wohnheim.fh-wedel.de>

Remove obsolete #include <linux/config.h>

Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
Signed-off-by: Adrian Bunk <bunk@stusta.de>


# f088fc84 05-Apr-2006 Ralf Baechle <ralf@linux-mips.org>

[MIPS] FPU affinity for MT ASE.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 41c594ab 05-Apr-2006 Ralf Baechle <ralf@linux-mips.org>

[MIPS] MT: Improved multithreading support.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 0cb3463f 31-Mar-2006 Adrian Bunk <bunk@stusta.de>

[PATCH] unexport get_wchan

The only user of get_wchan is the proc fs - and proc can't be built modular.

Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>


# 9c6031cc 19-Feb-2006 Atsushi Nemoto <anemo@mba.ocn.ne.jp>

[MIPS] Signal cleanup

Move function prototypes to asm/signal.h to detect trivial errors and
add some __user tags to get rid of sparse warnings. Generated code
should not be changed.

Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 63077519 07-Feb-2006 Atsushi Nemoto <anemo@mba.ocn.ne.jp>

[MIPS] Rewrite get_wchan and its helper functions using kallsyms_lookup.

Implement get_wchan() and frame_info_init() using kallsyms_lookup().
This fixes a problem with static sched/lock functions and the mfinfo[]
maintenance issue. If CONFIG_KALLSYMS is disabled, get_wchan() just
returns the thread_saved_pc() value.

Also unwind the stack frame based on "addiu sp,-imm" analysis instead
of the frame pointer. This fixes a problem with functions compiled
without -fomit-frame-pointer.
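
For illustration, a standalone sketch of the immediate decoding this
analysis relies on, assuming classic MIPS I-type encodings (the helper
is hypothetical):

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * I-type encoding: opcode(6) rs(5) rt(5) imm(16).
     * addiu = opcode 0x09, daddiu = opcode 0x19, register 29 = sp.
     * A prologue's "addiu sp,sp,-frame_size" has rs == rt == 29 and a
     * negative immediate whose magnitude is the frame size.
     */
    static bool frame_size_from_insn(uint32_t insn, unsigned int *frame_size)
    {
            uint32_t opcode = insn >> 26;
            uint32_t rs = (insn >> 21) & 0x1f;
            uint32_t rt = (insn >> 16) & 0x1f;
            int16_t imm = (int16_t)(insn & 0xffff);

            if ((opcode != 0x09 && opcode != 0x19) ||
                rs != 29 || rt != 29 || imm >= 0)
                    return false;

            *frame_size = (unsigned int)-imm;
            return true;    /* e.g. 0x27bdffe8 ("addiu sp,sp,-24") -> 24 */
    }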

Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 40ac5d47 08-Feb-2006 Ralf Baechle <ralf@linux-mips.org>

[MIPS] Make do_signal return void.

Its return value is ignored everywhere.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 7b3e2fc8 07-Feb-2006 Ralf Baechle <ralf@linux-mips.org>

[MIPS] Add support for TIF_RESTORE_SIGMASK.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 75bb07e7 12-Jan-2006 Al Viro <viro@ftp.linux.org.uk>

[PATCH] mips: task_stack_page()

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>


# 40bc9c67 12-Jan-2006 Al Viro <viro@ftp.linux.org.uk>

[PATCH] mips: task_pt_regs()

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>


# d56efda4 16-Dec-2005 Al Viro <viro@ftp.linux.org.uk>

MIPS: Namespace pollution: dump_regs() -> elf_dump_regs()

dump_regs() is used by a bunch of drivers for their internal stuff;
renamed the MIPS instance (the one that is seen in system-wide
headers) to elf_dump_regs().

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 5bfb5d69 08-Nov-2005 Nick Piggin <nickpiggin@yahoo.com.au>

[PATCH] sched: disable preempt in idle tasks

Run idle threads with preempt disabled.

Also corrected a bug in arm26's cpu_idle (made it actually call
schedule()). How did it ever work before?

Might fix the CPU hotplugging hang which Nigel Cunningham noted.

We think the bug hits if the idle thread is preempted after checking
need_resched() and before going to sleep, and the CPU is then offlined.

After calling stop_machine_run, the CPU eventually returns from
preemption into the idle thread and goes to sleep. The CPU will
continue executing the previous idle loop and has no chance to call
play_dead.

By disabling preemption until we are ready to explicitly schedule, this bug is
fixed and the idle threads generally become more robust.

From: alexs <ashepard@u.washington.edu>

PPC build fix

From: Yoichi Yuasa <yuasa@hh.iij4u.or.jp>

MIPS build fix

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Yoichi Yuasa <yuasa@hh.iij4u.or.jp>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>


# 129bc8f7 11-Jul-2005 Ralf Baechle <ralf@linux-mips.org>

Setup_frame is now returning a success value.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# e50c0a8f 31-May-2005 Ralf Baechle <ralf@linux-mips.org>

Support the MIPS32 / MIPS64 DSP ASE.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 3c37026d 13-Apr-2005 Ralf Baechle <ralf@linux-mips.org>

NPTL, round one.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 71e0e556 14-Mar-2005 Ralf Baechle <ralf@linux-mips.org>

Multithreaded core dumps.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# fe00f943 01-Mar-2005 Ralf Baechle <ralf@linux-mips.org>

Sparseify MIPS.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# dc953df1 21-Feb-2005 Thiemo Seufer <ths@networkno.de>

Fix wchan implementation, based on earlier work from Atsushi Nemoto.

Signed-off-by: Thiemo Seufer <ths@networkno.de>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 875d43e7 03-Sep-2005 Ralf Baechle <ralf@linux-mips.org>

[PATCH] mips: clean up 32/64-bit configuration

Start cleaning 32-bit vs. 64-bit configuration.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>


# 1da177e4 16-Apr-2005 Linus Torvalds <torvalds@ppc970.osdl.org>

Linux-2.6.12-rc2

Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.

Let it rip!