History log of /linux-master/arch/mips/kernel/smp.c
Revision Date Author Comments
# d1f4b2b8 03-Dec-2023 Arnd Bergmann <arnd@arndb.de>

mips: smp: fix setup_profiling_timer() prototype

The function is unconditionally defined in smp.c but is conditionally
declared in a header that is not included here.

arch/mips/kernel/smp.c:473:5: error: no previous prototype for 'setup_profiling_timer' [-Werror=missing-prototypes]

Add the missing #include and #ifdef to match the declaration.

Link: https://lkml.kernel.org/r/20231204115710.2247097-20-arnd@kernel.org
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Stephen Rothwell <sfr@rothwell.id.au>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
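
A minimal user-space sketch of the pattern the fix applies: the definition is compiled under the same #ifdef as its declaration, and the declaring header is actually included so -Wmissing-prototypes can match the two. CONFIG_EXAMPLE_FEATURE and do_feature() are hypothetical names, not the kernel's.

    #include <stdio.h>

    #define CONFIG_EXAMPLE_FEATURE 1        /* hypothetical config option */

    /* What the header would declare, conditionally. */
    #ifdef CONFIG_EXAMPLE_FEATURE
    int do_feature(unsigned int arg);
    #endif

    /* The definition sits under the same guard, so prototype and body
     * always appear (or disappear) together. */
    #ifdef CONFIG_EXAMPLE_FEATURE
    int do_feature(unsigned int arg)
    {
            return arg ? 0 : -1;
    }
    #endif

    int main(void)
    {
    #ifdef CONFIG_EXAMPLE_FEATURE
            printf("do_feature(1) = %d\n", do_feature(1));
    #endif
            return 0;
    }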


# 55702ec9 06-Nov-2023 Stefan Wiehler <stefan.wiehler@nokia.com>

mips/smp: Call rcutree_report_cpu_starting() earlier

rcutree_report_cpu_starting() must be called before
clockevents_register_device() to avoid the following lockdep splat triggered by
calling list_add() when CONFIG_PROVE_RCU_LIST=y:

WARNING: suspicious RCU usage
...
-----------------------------
kernel/locking/lockdep.c:3680 RCU-list traversed in non-reader section!!

other info that might help us debug this:

RCU used illegally from offline CPU!
rcu_scheduler_active = 1, debug_locks = 1
no locks held by swapper/1/0.
...
Call Trace:
[<ffffffff8012a434>] show_stack+0x64/0x158
[<ffffffff80a93d98>] dump_stack_lvl+0x90/0xc4
[<ffffffff801c9e9c>] __lock_acquire+0x1404/0x2940
[<ffffffff801cbf3c>] lock_acquire+0x14c/0x448
[<ffffffff80aa4260>] _raw_spin_lock_irqsave+0x50/0x88
[<ffffffff8021e0c8>] clockevents_register_device+0x60/0x1e8
[<ffffffff80130ff0>] r4k_clockevent_init+0x220/0x3a0
[<ffffffff801339d0>] start_secondary+0x50/0x3b8

raw_smp_processor_id() is required in order to avoid calling into lockdep
before RCU has declared the CPU to be watched for readers.

See also commit 29368e093921 ("x86/smpboot: Move rcu_cpu_starting() earlier"),
commit de5d9dae150c ("s390/smp: move rcu_cpu_starting() earlier") and commit
99f070b62322 ("powerpc/smp: Call rcu_cpu_starting() earlier").

Signed-off-by: Stefan Wiehler <stefan.wiehler@nokia.com>
Reviewed-by: Huacai Chen <chenhuacai@loongson.cn>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>


# 96cb8ae2 21-May-2023 Jiaxun Yang <jiaxun.yang@flygoat.com>

MIPS: Rework smt cmdline parameters

Provide a generic SMT parameters interface, aligned with s390, to allow
users to limit SMT usage and threads per core.

It replaces the previous undocumented "nothreads" parameter for
smp-cps, which was ambiguous and did not cover smp-mt.

Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>


# c8d2bcc4 12-May-2023 Thomas Gleixner <tglx@linutronix.de>

MIPS: SMP_CPS: Switch to hotplug core state synchronization

Switch to the CPU hotplug core state tracking and synchronization
mechanism. This unfortunately requires adding dead reporting to the non-CPS
platforms as CPS is the only user, but it allows an overall consolidation
of this functionality.

No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Michael Kelley <mikelley@microsoft.com>
Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Tested-by: Helge Deller <deller@gmx.de> # parisc
Tested-by: Guilherme G. Piccoli <gpiccoli@igalia.com> # Steam Deck
Link: https://lore.kernel.org/r/20230512205256.803238859@linutronix.de


# 84595f45 10-May-2022 Mao Bibo <maobibo@loongson.cn>

MIPS: smp: optimization for flush_tlb_mm when exiting

When a process exits or executes a new binary, it calls exit_mmap()
with the old mm; the call trace is:
exit_mmap(struct mm_struct *mm)
--> tlb_finish_mmu(&tlb, 0, -1)
--> arch_tlb_finish_mmu(tlb, start, end, force)
--> tlb_flush_mmu(tlb);
--> tlb_flush(struct mmu_gather *tlb)
--> flush_tlb_mm(tlb->mm)

It is not necessary to flush the TLB since the old mm is not used anymore
by the process. There are similar optimizations on IA64/ARM64 etc.;
this patch adds the same optimization on MIPS.

Signed-off-by: Mao Bibo <maobibo@loongson.cn>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>


# f2703def 12-Feb-2022 Alexander Lobakin <alobakin@pm.me>

MIPS: smp: fill in sibling and core maps earlier

After enabling CONFIG_SCHED_CORE (landed during 5.14 cycle),
2-core 2-thread-per-core interAptiv (CPS-driven) started emitting
the following:

[ 0.025698] CPU1 revision is: 0001a120 (MIPS interAptiv (multi))
[ 0.048183] ------------[ cut here ]------------
[ 0.048187] WARNING: CPU: 1 PID: 0 at kernel/sched/core.c:6025 sched_core_cpu_starting+0x198/0x240
[ 0.048220] Modules linked in:
[ 0.048233] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.17.0-rc3+ #35 b7b319f24073fd9a3c2aa7ad15fb7993eec0b26f
[ 0.048247] Stack : 817f0000 00000004 327804c8 810eb050 00000000 00000004 00000000 c314fdd1
[ 0.048278] 830cbd64 819c0000 81800000 817f0000 83070bf4 00000001 830cbd08 00000000
[ 0.048307] 00000000 00000000 815fcbc4 00000000 00000000 00000000 00000000 00000000
[ 0.048334] 00000000 00000000 00000000 00000000 817f0000 00000000 00000000 817f6f34
[ 0.048361] 817f0000 818a3c00 817f0000 00000004 00000000 00000000 4dc33260 0018c933
[ 0.048389] ...
[ 0.048396] Call Trace:
[ 0.048399] [<8105a7bc>] show_stack+0x3c/0x140
[ 0.048424] [<8131c2a0>] dump_stack_lvl+0x60/0x80
[ 0.048440] [<8108b5c0>] __warn+0xc0/0xf4
[ 0.048454] [<8108b658>] warn_slowpath_fmt+0x64/0x10c
[ 0.048467] [<810bd418>] sched_core_cpu_starting+0x198/0x240
[ 0.048483] [<810c6514>] sched_cpu_starting+0x14/0x80
[ 0.048497] [<8108c0f8>] cpuhp_invoke_callback_range+0x78/0x140
[ 0.048510] [<8108d914>] notify_cpu_starting+0x94/0x140
[ 0.048523] [<8106593c>] start_secondary+0xbc/0x280
[ 0.048539]
[ 0.048543] ---[ end trace 0000000000000000 ]---
[ 0.048636] Synchronize counters for CPU 1: done.

...for each but CPU 0/boot.
Basic debug printks right before the mentioned line say:

[ 0.048170] CPU: 1, smt_mask:

So smt_mask, which is obviously the sibling mask, is empty when entering
the function.
This is critical, as sched_core_cpu_starting() calculates
core-scheduling parameters only once per CPU start, and it's crucial
to have all the parameters filled in at that moment (at least it
uses cpu_smt_mask() which in fact is `&cpu_sibling_map[cpu]` on
MIPS).

A bit of debugging showed that set_cpu_sibling_map(), which performs
the actual map calculation, was being invoked after
notify_cpu_starting(), and it is the latter function that starts the CPU HP
callback round (sched_core_cpu_starting() is basically a CPU HP
callback).
While the flow is the same on ARM64 (maps after the notifier, although
before calling set_cpu_online()), x86 started calculating sibling
maps earlier than starting the CPU HP callbacks in Linux 4.14 (see
[0] for the reference). Neither I nor my brief tests could find
any potential caveats in calculating the maps right after performing
delay calibration, and the WARN splat is now gone.
The very same debug prints now yield exactly what I expected from
them:

[ 0.048433] CPU: 1, smt_mask: 0-1

[0] https://git.kernel.org/pub/scm/linux/kernel/git/mips/linux.git/commit/?id=76ce7cfe35ef

Signed-off-by: Alexander Lobakin <alobakin@pm.me>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>


# f1a0a376 12-May-2021 Valentin Schneider <valentin.schneider@arm.com>

sched/core: Initialize the idle task with preemption disabled

As pointed out by commit

de9b8f5dcbd9 ("sched: Fix crash trying to dequeue/enqueue the idle thread")

init_idle() can and will be invoked more than once on the same idle
task. At boot time, it is invoked for the boot CPU thread by
sched_init(). Then smp_init() creates the threads for all the secondary
CPUs and invokes init_idle() on them.

As the hotplug machinery brings the secondaries to life, it will issue
calls to idle_thread_get(), which itself invokes init_idle() yet again.
In this case it's invoked twice more per secondary: at _cpu_up(), and at
bringup_cpu().

Given smp_init() already initializes the idle tasks for all *possible*
CPUs, no further initialization should be required. Now, removing
init_idle() from idle_thread_get() exposes some interesting expectations
with regards to the idle task's preempt_count: the secondary startup always
issues a preempt_disable(), requiring some reset of the preempt count to 0
between hot-unplug and hotplug, which is currently served by
idle_thread_get() -> idle_init().

Given the idle task is supposed to have preemption disabled once and never
see it re-enabled, it seems that what we actually want is to initialize its
preempt_count to PREEMPT_DISABLED and leave it there. Do that, and remove
init_idle() from idle_thread_get().

Secondary startups were patched via coccinelle:

@begone@
@@

-preempt_disable();
...
cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);

Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210512094636.2958515-1-valentin.schneider@arm.com


# a056aacd 02-Feb-2021 Bhaskar Chowdhury <unixbhaskar@gmail.com>

arch: mips: kernel: Fix two spelling mistakes in smp.c

s/logcal/logical/
s/intercpu/inter-CPU/

Signed-off-by: Bhaskar Chowdhury <unixbhaskar@gmail.com>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>


# 545b8c8d 15-Jun-2020 Peter Zijlstra <peterz@infradead.org>

smp: Cleanup smp_call_function*()

Get rid of the __call_single_node union and cleanup the API a little
to avoid external code relying on the structure layout as much.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>


# e188f0a5 16-Dec-2019 Peter Xu <peterx@redhat.com>

MIPS: smp: Remove tick_broadcast_count

Now that smp_call_function_single_async() provides the protection of
returning -EBUSY if the csd object is still pending, we no longer need
the tick_broadcast_count counter.

Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20191216213125.9536-3-peterx@redhat.com
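
A user-space sketch (not the kernel code) of why the counter became redundant: once the async-call primitive itself refuses to re-queue a still-pending request with -EBUSY, callers no longer need their own per-CPU counter to avoid double-arming. struct csd_sketch and try_queue() are illustrative names.

    #include <stdio.h>
    #include <stdatomic.h>
    #include <errno.h>

    struct csd_sketch {
            atomic_flag pending;            /* set while a request is in flight */
    };

    /* Analogue of smp_call_function_single_async(): refuse to queue twice. */
    static int try_queue(struct csd_sketch *csd)
    {
            if (atomic_flag_test_and_set(&csd->pending))
                    return -EBUSY;          /* still pending, caller must not re-arm */
            /* ...queue the IPI here... */
            return 0;
    }

    static void csd_complete(struct csd_sketch *csd)
    {
            atomic_flag_clear(&csd->pending);       /* handler ran, allow re-use */
    }

    int main(void)
    {
            struct csd_sketch csd = { ATOMIC_FLAG_INIT };

            printf("first  queue: %d\n", try_queue(&csd));  /* 0 */
            printf("second queue: %d\n", try_queue(&csd));  /* -EBUSY */
            csd_complete(&csd);
            printf("third  queue: %d\n", try_queue(&csd));  /* 0 again */
            return 0;
    }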


# ac8fd122 05-Mar-2020 afzal mohammed <afzal.mohd.ma@gmail.com>

MIPS: Replace setup_irq() by request_irq()

request_irq() is preferred over setup_irq(). Invocations of setup_irq()
occur after memory allocators are ready.

Per tglx[1], setup_irq() existed in olden days when allocators were not
ready by the time early interrupts were initialized.

Hence replace setup_irq() by request_irq().

remove_irq() has been replaced by free_irq() as well.

There were build errors in the previous version, a couple of which were
reported by the kbuild test robot <lkp@intel.com>, one of them also
by Thomas Bogendoerfer <tsbogend@alpha.franken.de>. A few more issues,
including build errors, have been fixed as well.

[1] https://lkml.kernel.org/r/alpine.DEB.2.20.1710191609480.1971@nanos

Signed-off-by: afzal mohammed <afzal.mohd.ma@gmail.com>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>


# 1a59d1b8 27-May-2019 Thomas Gleixner <tglx@linutronix.de>

treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 156

Based on 1 normalized pattern(s):

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version this program is distributed in the
hope that it will be useful but without any warranty without even
the implied warranty of merchantability or fitness for a particular
purpose see the gnu general public license for more details you
should have received a copy of the gnu general public license along
with this program if not write to the free software foundation inc
59 temple place suite 330 boston ma 02111 1307 usa

extracted by the scancode license scanner the SPDX license identifier

GPL-2.0-or-later

has been chosen to replace the boilerplate/reference in 1334 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Richard Fontana <rfontana@redhat.com>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190527070033.113240726@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>


# 2c865620 19-Feb-2019 Thomas Bogendoerfer <tbogendoerfer@suse.de>

MIPS: SGI-IP27: do boot CPU init later

To make use of per_cpu variables in interrupt code per_cpu_init() must
be done after setup_per_cpu_areas(). This is achieved by calling it
in smp_prepare_boot_cpu() via a new smp_ops method.

Signed-off-by: Thomas Bogendoerfer <tbogendoerfer@suse.de>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: James Hogan <jhogan@kernel.org>
Cc: linux-mips@vger.kernel.org
Cc: linux-kernel@vger.kernel.org


# c8790d65 01-Feb-2019 Paul Burton <paulburton@kernel.org>

MIPS: MemoryMapID (MMID) Support

Introduce support for using MemoryMapIDs (MMIDs) as an alternative to
Address Space IDs (ASIDs). The major difference between the two is that
MMIDs are global - ie. an MMID uniquely identifies an address space
across all coherent CPUs. In contrast ASIDs are non-global per-CPU IDs,
wherein each address space is allocated a separate ASID for each CPU
upon which it is used. This global namespace allows a new GINVT
instruction to be used to globally invalidate TLB entries associated with a
particular MMID across all coherent CPUs in the system, removing the
need for IPIs to invalidate entries with separate ASIDs on each CPU.

The allocation scheme used here is largely borrowed from arm64 (see
arch/arm64/mm/context.c). In essence we maintain a bitmap to track
available MMIDs, and MMIDs in active use at the time of a rollover to a
new MMID version are preserved in the new version. The allocation scheme
requires efficient 64 bit atomics in order to perform reasonably, so
this support depends upon CONFIG_GENERIC_ATOMIC64=n (ie. currently it
will only be included in MIPS64 kernels).

The first, and currently only, available CPU with support for MMIDs is
the MIPS I6500. This CPU supports 16 bit MMIDs, and so for now we cap
our MMIDs to 16 bits wide in order to prevent the bitmap growing to
absurd sizes if any future CPU does implement 32 bit MMIDs as the
architecture manuals suggest is recommended.

When MMIDs are in use we also make use of GINVT instruction which is
available due to the global nature of MMIDs. By executing a sequence of
GINVT & SYNC 0x14 instructions we can avoid the overhead of an IPI to
each remote CPU in many cases. One complication is that GINVT will
invalidate wired entries (in all cases apart from type 0, which targets
the entire TLB). In order to avoid GINVT invalidating any wired TLB
entries we set up, we make sure to create those entries using a reserved
MMID (0) that we never associate with any address space.

Also of note is that KVM will require further work in order to support
MMIDs & GINVT, since KVM is involved in allocating IDs for guests & in
configuring the MMU. That work is not part of this patch, so for now
when MMIDs are in use KVM is disabled.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
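
A heavily simplified, user-space sketch of the kind of versioned-bitmap allocator described above (16-bit IDs with a generation in the upper bits). The real MIPS/arm64 code also preserves IDs that are live on other CPUs across a rollover and handles concurrency; both are omitted here, and all names are illustrative.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    #define ID_BITS         16
    #define NUM_IDS         (1u << ID_BITS)

    static uint64_t version = 1;                     /* generation, stored above ID_BITS */
    static unsigned char used[NUM_IDS / 8];          /* bitmap of allocated IDs */

    static uint64_t alloc_id(void)
    {
            unsigned int id;

            for (id = 1; id < NUM_IDS; id++)         /* ID 0 stays reserved (wired entries) */
                    if (!(used[id / 8] & (1u << (id % 8))))
                            break;

            if (id == NUM_IDS) {                     /* bitmap exhausted: roll over */
                    version++;                       /* a new generation invalidates old IDs */
                    memset(used, 0, sizeof(used));
                    id = 1;                          /* the real code re-reserves live IDs here */
            }
            used[id / 8] |= 1u << (id % 8);
            return (version << ID_BITS) | id;        /* full value: generation | ID */
    }

    int main(void)
    {
            used[0] |= 1;                            /* reserve ID 0 */
            printf("first  = %#llx\n", (unsigned long long)alloc_id());
            printf("second = %#llx\n", (unsigned long long)alloc_id());
            return 0;
    }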


# 0b317c38 01-Feb-2019 Paul Burton <paulburton@kernel.org>

MIPS: mm: Add set_cpu_context() for ASID assignments

When we gain MMID support we'll be storing MMIDs as atomic64_t values
and accessing them via atomic64_* functions. This necessitates that we
don't use cpu_context() as the left hand side of an assignment, ie. as a
modifiable lvalue. In preparation for this introduce a new
set_cpu_context() function & replace all assignments with cpu_context()
on their left hand side with an equivalent call to set_cpu_context().

To enforce that cpu_context() should not be used for assignments, we
rewrite it as a static inline function.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
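
A small user-space sketch of the accessor pattern this change introduces: once callers go through a getter and a setter instead of assigning to an lvalue, the backing storage can later be switched to an atomic 64-bit type without touching any caller. The struct layout and names below are illustrative, not the kernel's.

    #include <stdio.h>
    #include <stdatomic.h>
    #include <stdint.h>

    #define NR_CPUS_SKETCH 4

    struct mm_sketch {
            _Atomic uint64_t context[NR_CPUS_SKETCH];   /* per-CPU context value */
    };

    static inline uint64_t cpu_context(unsigned int cpu, struct mm_sketch *mm)
    {
            return atomic_load(&mm->context[cpu]);
    }

    static inline void set_cpu_context(unsigned int cpu, struct mm_sketch *mm,
                                       uint64_t ctx)
    {
            atomic_store(&mm->context[cpu], ctx);
    }

    int main(void)
    {
            struct mm_sketch mm = { 0 };

            /* instead of: cpu_context(1, &mm) = 0x10002; */
            set_cpu_context(1, &mm, 0x10002);
            printf("cpu 1 context = %#llx\n",
                   (unsigned long long)cpu_context(1, &mm));
            return 0;
    }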


# 558ec8ad 01-Feb-2019 Paul Burton <paulburton@kernel.org>

MIPS: mm: Remove local_flush_tlb_mm()

All 3 variants of local_flush_tlb_mm() are now effectively simple calls
to drop_mmu_context(). Remove them and use drop_mmu_context() directly.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org


# 7820b84b 27-Sep-2017 David Daney <david.daney@cavium.com>

MIPS: Allow __cpu_number_map to be larger than NR_CPUS

In systems where the CPU id space is sparse, this allows a smaller
NR_CPUS to be chosen, thus keeping internal data structures smaller.

Signed-off-by: David Daney <david.daney@cavium.com>
Signed-off-by: Carlos Munoz <cmunoz@caviumnetworks.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/17388/
[jhogan@kernel.org: Add depends on SMP to fix
"warning: symbol value '' invalid for MIPS_NR_CPU_NR_MAP"]
Signed-off-by: James Hogan <jhogan@kernel.org>
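
A user-space sketch of the idea: the forward map is sized by the (possibly sparse) hardware CPU id space, while per-CPU structures are sized by the small logical CPU count. The sizes and hardware ids below are made up for illustration.

    #include <stdio.h>

    #define HW_ID_SPACE   64          /* analogue of the (larger) __cpu_number_map size */
    #define NR_CPUS_SMALL  4          /* analogue of a small NR_CPUS */

    static int cpu_number_map[HW_ID_SPACE];          /* hw id -> logical cpu */
    static int cpu_logical_map[NR_CPUS_SMALL];       /* logical cpu -> hw id */

    int main(void)
    {
            const int hw_ids[NR_CPUS_SMALL] = { 0, 17, 34, 51 };   /* sparse ids */

            for (int i = 0; i < HW_ID_SPACE; i++)
                    cpu_number_map[i] = -1;                        /* "no such CPU" */
            for (int cpu = 0; cpu < NR_CPUS_SMALL; cpu++) {
                    cpu_logical_map[cpu] = hw_ids[cpu];
                    cpu_number_map[hw_ids[cpu]] = cpu;
            }
            printf("hw id 34 is logical cpu %d\n", cpu_number_map[34]);
            printf("logical cpu 3 is hw id %d\n", cpu_logical_map[3]);
            return 0;
    }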


# 9e8c399a 27-Sep-2017 Matt Redfearn <matt.redfearn@mips.com>

MIPS: SMP: Fix deadlock & online race

Commit 6f542ebeaee0 ("MIPS: Fix race on setting and getting
cpu_online_mask") effectively reverted commit 8f46cca1e6c06 ("MIPS: SMP:
Fix possibility of deadlock when bringing CPUs online") and thus has
reinstated the possibility of deadlock.

The commit was based on testing of kernel v4.4, where the CPU hotplug
core code issued a BUG() if the starting CPU is not marked online when
the boot CPU returns from __cpu_up. The commit fixes this race (in
v4.4), but re-introduces the deadlock situation.

As noted in the commit message, upstream differs in this area. Commit
8df3e07e7f21f ("cpu/hotplug: Let upcoming cpu bring itself fully up")
adds a completion event in the CPU hotplug core code, making this race
impossible. However, people were unhappy with relying on the core code
to do the right thing.

To address the issues both commits were trying to fix, add a second
completion event in the MIPS smp hotplug path. It removes the
possibility of a race, since the MIPS smp hotplug code now synchronises
both the boot and secondary CPUs before they return to the hotplug core
code. It also addresses the deadlock by ensuring that the secondary CPU
is not marked online before its counters are synchronised.

This fix should also be backported to fix the race condition introduced
by the backport of commit 8f46cca1e6c06 ("MIPS: SMP: Fix possibility of
deadlock when bringing CPUs online"), through really that race only
existed before commit 8df3e07e7f21f ("cpu/hotplug: Let upcoming cpu
bring itself fully up").

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Fixes: 6f542ebeaee0 ("MIPS: Fix race on setting and getting cpu_online_mask")
CC: Matija Glavinic Pecotic <matija.glavinic-pecotic.ext@nokia.com>
Cc: <stable@vger.kernel.org> # v4.1+: 8f46cca1e6c0: "MIPS: SMP: Fix possibility of deadlock when bringing CPUs online"
Cc: <stable@vger.kernel.org> # v4.1+: a00eeede507c: "MIPS: SMP: Use a completion event to signal CPU up"
Cc: <stable@vger.kernel.org> # v4.1+: 6f542ebeaee0: "MIPS: Fix race on setting and getting cpu_online_mask"
Cc: <stable@vger.kernel.org> # v4.1+
Patchwork: https://patchwork.linux-mips.org/patch/17376/
Signed-off-by: James Hogan <jhogan@kernel.org>


# 7f005f11 16-Oct-2017 Matt Redfearn <matt.redfearn@mips.com>

MIPS: generic: Fix compilation error from include asm/mips-cpc.h

Commit e83f7e02af50c ("MIPS: CPS: Have asm/mips-cps.h include CM & CPC
headers") adds a #error to arch/mips/include/asm/mips-cpc.h if it is
included directly. While this commit replaced almost all direct includes
of mips-cm.h and mips-cpc.h, 2 remain.

With some defconfigs, mips-cps.h is indirectly included before
mips-cpc.h, but in others this results in compilation errors:

In file included from arch/mips/generic/init.c:23:0:
./arch/mips/include/asm/mips-cpc.h:12:3: error: #error Please include
asm/mips-cps.h rather than asm/mips-cpc.h
# error Please include asm/mips-cps.h rather than asm/mips-cpc.h

In file included from arch/mips/kernel/smp.c:23:0:
./arch/mips/include/asm/mips-cpc.h:12:3: error: #error Please include
asm/mips-cps.h rather than asm/mips-cpc.h
# error Please include asm/mips-cps.h rather than asm/mips-cpc.h

In both cases, fix this by including mips-cps.h instead.

Fixes: e83f7e02af50c ("MIPS: CPS: Have asm/mips-cps.h include CM & CPC headers")
Signed-off-by: Matt Redfearn <matt.redfearn@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/17492/
Signed-off-by: James Hogan <jhogan@kernel.org>


# d595d423 12-Aug-2017 Paul Burton <paulburton@kernel.org>

MIPS: SMP: Allow boot_secondary SMP op to return errors

Allow the boot_secondary SMP op to return an error to __cpu_up(), which
will in turn return it to its caller.

This will allow SMP implementations to return errors quickly in cases
they know have failed, rather than relying upon __cpu_up()
eventually timing out waiting for the cpu_running completion.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/17014/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 68923cdc 12-Aug-2017 Paul Burton <paulburton@kernel.org>

MIPS: CM: Add cluster & block args to mips_cm_lock_other()

With CM >= 3.5 we have the notion of multiple clusters & can access
their CM, CPC & GIC registers via the appropriate redirect/other
register blocks. In order to allow for this introduce cluster & block
arguments to mips_cm_lock_other() which configures the redirect/other
region to point at the appropriate cluster, core, VP & register block.

Since we now have 4 arguments to mips_cm_lock_other() & a common use is
likely to be to target the cluster, core & VP corresponding to a
particular Linux CPU number we also add a new mips_cm_lock_other_cpu()
helper function which handles that without the caller needing to
manually pull out the cluster, core & VP numbers.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/17013/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# fe7a38c6 12-Aug-2017 Paul Burton <paulburton@kernel.org>

MIPS: Unify checks for sibling CPUs

Up until now we have open-coded checks for whether CPUs are siblings,
with slight variations on whether we consider the package ID or not.

This will only get more complex when we introduce cluster support, so in
preparation for that this patch introduces a cpus_are_siblings()
function which can be used to check whether or not 2 CPUs are siblings
in a consistent manner.

By checking globalnumber with the VP ID masked out this also has the
neat side effect of being ready for multi-cluster systems already.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Acked-by: Rafael J. Wysocki <rjw@rjwysocki.net>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/17011/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
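
A user-space sketch of the "mask out the VP ID and compare" check described above. The VP field width is made up; the real globalnumber layout comes from the MIPS CM.

    #include <stdio.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define VP_BITS     3
    #define VP_MASK     ((1u << VP_BITS) - 1)

    static inline bool cpus_are_siblings_sketch(uint32_t globalnumber_a,
                                                uint32_t globalnumber_b)
    {
            /* Same cluster & core iff everything above the VP field matches. */
            return (globalnumber_a & ~VP_MASK) == (globalnumber_b & ~VP_MASK);
    }

    int main(void)
    {
            uint32_t core1_vp0 = (1u << VP_BITS) | 0;
            uint32_t core1_vp1 = (1u << VP_BITS) | 1;
            uint32_t core2_vp0 = (2u << VP_BITS) | 0;

            printf("core1/vp0 vs core1/vp1: %d\n",
                   cpus_are_siblings_sketch(core1_vp0, core1_vp1));   /* 1 */
            printf("core1/vp0 vs core2/vp0: %d\n",
                   cpus_are_siblings_sketch(core1_vp0, core2_vp0));   /* 0 */
            return 0;
    }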


# f875a832 12-Aug-2017 Paul Burton <paulburton@kernel.org>

MIPS: Abstract CPU core & VP(E) ID access through accessor functions

We currently have fields in struct cpuinfo_mips for the core & VP(E) ID
of a particular CPU, and various pieces of code directly access those
fields. This patch abstracts such access by introducing accessor
functions cpu_core(), cpu_set_core(), cpu_vpe_id() & cpu_set_vpe_id()
and having code that needs to access these values call those functions
rather than directly accessing the struct cpuinfo_mips fields. This
prepares us for changes to the way in which those values are stored in
later patches.

The cpu_vpe_id() function is introduced even though we already had a
cpu_vpe_id() macro for a couple of reasons:

1) It's more consistent with the core, and future cluster, accessors.

2) It ensures a sensible return type without explicit casts.

3) It's generally preferable to use functions rather than macros.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/17009/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# ff2c8252 19-Jul-2017 Matt Redfearn <matt.redfearn@mips.com>

MIPS: SMP: Constify smp ops

smp_ops providers do not modify their ops structures, so they should be
made const for robustness. Since currently the MIPS kernel is not mapped
with memory protection, this does not in itself provide any security
benefit, but it still makes sense to make this change.

There are also slight code size efficiencies from the structure being
made read-only, saving 128 bytes of kernel text on a
pistachio_defconfig.
Before:
text data bss dec hex filename
7187239 1772752 470224 9430215 8fe4c7 vmlinux
After:
text data bss dec hex filename
7187111 1772752 470224 9430087 8fe447 vmlinux

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Kevin Cernekee <cernekee@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Doug Ledford <dledford@redhat.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Joe Perches <joe@perches.com>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Steven J. Hill <steven.hill@cavium.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/16784/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
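
A small sketch of the constified ops-table pattern: the provider defines its operations once, marks the table const so it can live in a read-only section, and the core side only keeps a pointer to it. The structure and function names are illustrative.

    #include <stdio.h>

    struct smp_ops_sketch {
            void (*send_ipi_single)(int cpu, unsigned int action);
            void (*smp_setup)(void);
    };

    static void demo_send_ipi_single(int cpu, unsigned int action)
    {
            printf("IPI %u -> cpu %d\n", action, cpu);
    }

    static void demo_smp_setup(void)
    {
            printf("setup\n");
    }

    /* const: the table itself is never modified after build time. */
    static const struct smp_ops_sketch demo_smp_ops = {
            .send_ipi_single = demo_send_ipi_single,
            .smp_setup       = demo_smp_setup,
    };

    static const struct smp_ops_sketch *mp_ops;      /* core-side pointer */

    int main(void)
    {
            mp_ops = &demo_smp_ops;                  /* registration analogue */
            mp_ops->smp_setup();
            mp_ops->send_ipi_single(1, 2);
            return 0;
    }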


# 966a9671 07-Aug-2017 Ying Huang <ying.huang@intel.com>

smp: Avoid using two cache lines for struct call_single_data

struct call_single_data is used in IPIs to transfer information between
CPUs. Its size is bigger than sizeof(unsigned long) and less than
cache line size. Currently it is not allocated with any explicit alignment
requirements. This makes it possible for an allocated call_single_data to
span two cache lines, which doubles the number of cache lines that
need to be transferred among CPUs.

This can be fixed by requiring call_single_data to be aligned with the
size of call_single_data. Currently the size of call_single_data is a
power of 2. If we add new fields to call_single_data, we may need to
add padding to make sure the size of the new definition is a power of 2
as well.

Fortunately, this is enforced by GCC, which will report bad sizes.

To set alignment requirements of call_single_data to the size of
call_single_data, a struct definition and a typedef is used.

To test the effect of the patch, I used the vm-scalability multiple
thread swap test case (swap-w-seq-mt). The test will create multiple
threads and each thread will eat memory until all RAM and part of swap
is used, so that huge number of IPIs are triggered when unmapping
memory. In the test, the throughput of memory writing improves ~5%
compared with misaligned call_single_data, because of faster IPIs.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Huang, Ying <ying.huang@intel.com>
[ Add call_single_data_t and align with size of call_single_data. ]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Aaron Lu <aaron.lu@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/87bmnqd6lz.fsf@yhuang-mobile.sh.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
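
A user-space sketch of the alignment trick: aligning the structure to its own power-of-2 size guarantees an instance never straddles a 64-byte cache line, so an IPI payload costs one line instead of sometimes two. The struct below is illustrative, not the kernel's call_single_data layout.

    #include <stdio.h>
    #include <stdint.h>

    struct csd_payload {
            struct csd_payload *next;        /* llist-style link */
            void (*func)(void *info);
            void *info;
            unsigned int flags;
    };                                       /* padded to 32 bytes on LP64 */

    /* Align the typedef to the (power-of-2) size of the struct. */
    typedef struct csd_payload csd_payload_t
            __attribute__((aligned(sizeof(struct csd_payload))));

    int main(void)
    {
            static csd_payload_t per_cpu_csd[4];

            printf("size = %zu, alignment = %zu\n",
                   sizeof(csd_payload_t), _Alignof(csd_payload_t));
            for (int i = 0; i < 4; i++)
                    printf("csd[%d] starts at byte %lu of its 64-byte line\n",
                           i, (unsigned long)((uintptr_t)&per_cpu_csd[i] % 64));
            return 0;
    }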


# 6f542ebe 03-Aug-2017 Matija Glavinic Pecotic <matija.glavinic-pecotic.ext@nokia.com>

MIPS: Fix race on setting and getting cpu_online_mask

While testing CPU hotplug (cpu down and up in loops) on kernel 4.4, it was
observed that occasionally the check for cpu online will fail in kernel/cpu.c,
_cpu_up:

https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/tree/kernel/cpu.c?h=v4.4.79#n485
518 /* Arch-specific enabling code. */
519 ret = __cpu_up(cpu, idle);
520
521 if (ret != 0)
522 goto out_notify;
523 BUG_ON(!cpu_online(cpu));

The reason is a race between start_secondary and _cpu_up: cpu_callin_map is
set before cpu_online_mask. In __cpu_up, cpu_callin_map is waited for, but
the cpu online mask is not, so the secondary processor may have started and
set cpu_callin_map without having set the online mask yet, causing the above
BUG to be hit.

Upstream differs in this area: the cpu_online check is in bringup_wait_for_ap,
which runs after the cpu has reached AP_ONLINE_IDLE, i.e. after the secondary
has passed its start function. Nonetheless, this fix makes start_secondary
safe and not dependent on other locks throughout the code. It also protects
against cpu_online checks that might be added in between in the future.

Fix this by moving the completion after all flags are set.

Signed-off-by: Matija Glavinic Pecotic <matija.glavinic-pecotic.ext@nokia.com>
Cc: Alexander Sverdlin <alexander.sverdlin@nokia.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16925/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 9b03d8ab 02-Jun-2017 Paul Burton <paulburton@kernel.org>

MIPS: Skip IPI setup if we only have 1 CPU

If we're running on a system with only 1 possible CPU then it makes no
sense to reserve or initialise IPIs since we'll never use them. Avoid
doing so.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16192/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# e6488982 30-Mar-2017 Paul Burton <paulburton@kernel.org>

MIPS: Strengthen IPI IRQ domain sanity check

Commit fbde2d7d8290 ("MIPS: Add generic SMP IPI support") introduced a
sanity check that an IPI IRQ domain can be found during boot, in order
to ensure that IPIs are able to be set up in systems using such domains.
However it was added at a point where systems may have used an IPI IRQ
domain in some situations but not others, and we could not know which
were the case until runtime, so commit 578bffc82ec5 ("MIPS: Don't BUG_ON
when no IPI domain is found") made that check simply skip IPI init if no
domain was found in order to fix the boot for systems such as QEMU
Malta.

We now use IPI IRQ domains for the MIPS CPU interrupt controller, which
means systems which make use of IPI IRQ domains will always do so when
running on multiple CPUs. As a result we now strengthen the sanity check
to ensure that an IPI IRQ domain is found when multiple CPUs are present
in the system.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/15838/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 589ee628 03-Feb-2017 Ingo Molnar <mingo@kernel.org>

sched/headers: Prepare to remove the <linux/mm_types.h> dependency from <linux/sched.h>

Update code that relied on sched.h including various MM types for them.

This will allow us to remove the <linux/mm_types.h> include from <linux/sched.h>.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>


# c83c2eed 02-Dec-2016 Marcin Nowakowski <marcin.nowakowski@mips.com>

MIPS: kexec: remove SMP_DUMP

SMP_DUMP was added as a new IPI signal when kexec support was added
for Cavium Octeon CPUs (commit 7aa1c8f47e7e ("MIPS: kdump: Add support")).
However, the new signal doesn't appear to ever have had a proper handler
added (the octeon_message_functions[] array has an empty handler for it),
and generic IPI handlers now trigger a BUG() on an unhandled signal.

As the method is unused, remove it completely and replace its only
invocation with a smp_call_function().

[ralf@linux-mips.org: Renumber SMP_ASK_C0COUNT to avoid numbering gaps.]

Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14630/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 5892d6a6 04-Nov-2016 Matt Redfearn <matt.redfearn@mips.com>

MIPS: SMP: Remove cpu_callin_map

The previous commit made cpu_callin_map redundant, since it is no longer
used to signal secondary CPUs starting, or going offline. Remove it now.

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Qais Yousef <qsyousef@gmail.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Kevin Cernekee <cernekee@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: Anna-Maria Gleixner <anna-maria@linutronix.de>
Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
Cc: Yang Shi <yang.shi@windriver.com>
Cc: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14503/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# a00eeede 04-Nov-2016 Matt Redfearn <matt.redfearn@mips.com>

MIPS: SMP: Use a completion event to signal CPU up

If a secondary CPU failed to start, for any reason, the CPU requesting
the secondary to start would get stuck in the loop waiting for the
secondary to be present in the cpu_callin_map.

Rather than that, use a completion event to signal that the secondary
CPU has started and is waiting to synchronise counters.

Since the CPU presence will no longer be marked in cpu_callin_map,
remove the redundant test from arch_cpu_idle_dead().

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Maciej W. Rozycki <macro@imgtec.com>
Cc: Jiri Slaby <jslaby@suse.cz>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Chris Metcalf <cmetcalf@mellanox.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Qais Yousef <qsyousef@gmail.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14502/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
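
A user-space analogue (pthreads, not kernel code) of the change: the bringup side blocks on a completion-style event that the secondary signals once it is up, instead of spinning on a shared map. Build with -pthread; all names are illustrative.

    #include <stdio.h>
    #include <pthread.h>

    struct completion_sketch {
            pthread_mutex_t lock;
            pthread_cond_t  cond;
            int             done;
    };

    static struct completion_sketch cpu_running = {
            PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0
    };

    static void complete(struct completion_sketch *c)
    {
            pthread_mutex_lock(&c->lock);
            c->done = 1;
            pthread_cond_signal(&c->cond);
            pthread_mutex_unlock(&c->lock);
    }

    static void wait_for_completion(struct completion_sketch *c)
    {
            pthread_mutex_lock(&c->lock);
            while (!c->done)
                    pthread_cond_wait(&c->cond, &c->lock);
            pthread_mutex_unlock(&c->lock);
    }

    static void *secondary(void *arg)
    {
            (void)arg;
            /* ...per-CPU init would happen here... */
            complete(&cpu_running);          /* tell the boot side we are up */
            return NULL;
    }

    int main(void)
    {
            pthread_t t;

            pthread_create(&t, NULL, secondary, NULL);
            wait_for_completion(&cpu_running);       /* instead of spinning on a map */
            printf("secondary reported in\n");
            pthread_join(t, NULL);
            return 0;
    }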


# d9d54177 21-Aug-2016 Paul Gortmaker <paul.gortmaker@windriver.com>

MIPS: kernel: Audit and remove any unnecessary uses of module.h

Historically a lot of these existed because we did not have
a distinction between what was modular code and what was providing
support to modules via EXPORT_SYMBOL and friends. That changed
when we forked out support for the latter into the export.h file.

This means we should be able to reduce the usage of module.h
in code that is obj-y Makefile or bool Kconfig. The advantage
in doing so is that module.h itself sources about 15 other headers;
adding significantly to what we feed cpp, and it can obscure what
headers we are effectively using.

Since module.h was the source for init.h (for __init) and for
export.h (for EXPORT_SYMBOL) we consider each obj-y/bool instance
for the presence of either and replace as needed.

In the case of the n32/o32 files, we have to get rid of a couple
no-op MODULE_ tags to facilitate the module.h removal. They piggy
back off the fs/ elf binary support, which is also a bool Kconfig.

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14032/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 7688c539 20-Sep-2016 Matt Redfearn <matt.redfearn@mips.com>

MIPS: smp.c: Introduce mechanism for freeing and allocating IPIs

For the MIPS remote processor implementation, we need additional IPIs to
talk to the remote processor. Since MIPS GIC reserves exactly the right
number of IPI IRQs required by Linux for the number of VPs in the
system, this is not possible without releasing some resources.

This commit introduces mips_smp_ipi_allocate() which allocates IPIs to a
given cpumask. It is called as normal with the cpu_possible_mask at
bootup to initialise IPIs to all CPUs. mips_smp_ipi_free() may then be
used to free IPIs to a subset of those CPUs so that their hardware
resources can be reused.

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Bjorn Andersson <bjorn.andersson@linaro.org>
Cc: Ohad Ben-Cohen <ohad@wizery.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Lisa Parratt <Lisa.Parratt@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Qais Yousef <qsyousef@gmail.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-remoteproc@vger.kernel.org
Cc: lisa.parratt@imgtec.com
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14285/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 4b640136 07-Sep-2016 Matt Redfearn <matt.redfearn@mips.com>

MIPS: SMP: Wrap call to mips_cpc_lock_other in mips_cm_lock_other

All calls to mips_cpc_lock_other should be wrapped in
mips_cm_lock_other. This only matters if the system has CM3 and is using
cpu idle, since otherwise a) the CPC lock is sufficient for CM < 3 and b)
any systems with CM > 3 have not been able to use cpu idle until now.

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Qais Yousef <qsyousef@gmail.com>
Patchwork: https://patchwork.linux-mips.org/patch/14227/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 8f46cca1 22-Sep-2016 Matt Redfearn <matt.redfearn@mips.com>

MIPS: SMP: Fix possibility of deadlock when bringing CPUs online

This patch fixes the possibility of a deadlock when bringing up
secondary CPUs.
The deadlock occurs because the set_cpu_online() is called before
synchronise_count_slave(). This can cause a deadlock if the boot CPU,
having scheduled another thread, attempts to send an IPI to the
secondary CPU, which it sees has been marked online. The secondary is
blocked in synchronise_count_slave() waiting for the boot CPU to enter
synchronise_count_master(), but the boot cpu is blocked in
smp_call_function_many() waiting for the secondary to respond to its
IPI request.

Fix this by marking the CPU online in cpu_callin_map and synchronising
counters before declaring the CPU online and calculating the maps for
IPIs.

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Reported-by: Justin Chen <justinpopo6@gmail.com>
Tested-by: Justin Chen <justinpopo6@gmail.com>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: stable@vger.kernel.org # v4.1+
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14302/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 640511ae 13-Jul-2016 James Hogan <jhogan@kernel.org>

MIPS: c-r4k: Exclude sibling CPUs in SMP calls

When performing SMP calls to foreign cores, exclude sibling CPUs from
the provided map, as we already handle the local core on the current
CPU. This prevents an SMP call from for example core 0, VPE 1 to VPE 0
on the same core.

In the process the cpu_foreign_map cpumask is turned into an array of
cpumasks, so that each CPU has its own version of it which excludes
sibling CPUs. r4k_op_needs_ipi() is also updated to reflect that cache
management SMP calls are not needed when all CPUs are siblings (i.e.
there are no foreign CPUs according to the new cpu_foreign_map[]
semantics which exclude siblings).

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: Felix Fietkau <nbd@nbd.name>
Cc: Jayachandran C. <jchandra@broadcom.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13801/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 92696316 13-Jul-2016 James Hogan <jhogan@kernel.org>

MIPS: SMP: Drop stop_this_cpu() cpu_foreign_map hack

Commit cccf34e9411c ("MIPS: c-r4k: Fix cache flushing for MT cores")
added the cpu_foreign_map cpumask containing a single VPE from each
online core, and recalculated it when secondary CPUs are brought up.

stop_this_cpu() was also updated to recalculate cpu_foreign_map, but
with an additional hack before marking the CPU as offline to copy
cpu_online_mask into cpu_foreign_map and perform an SMP memory barrier.

This appears to have been intended to prevent cache management IPIs
being missed when the VPE representing the core in cpu_foreign_map is
taken offline while other VPEs remain online. Unfortunately there is
nothing in this hack to prevent r4k_on_each_cpu() from reading the old
cpu_foreign_map, and smp_call_function_many() from reading that new
cpu_online_mask with the core's representative VPE marked offline. It
then wouldn't send an IPI to any online VPEs of that core.

stop_this_cpu() is only actually called in panic and system shutdown /
halt / reboot situations, in which case all CPUs are going down and we
don't really need to care about cache management, so drop this hack.

Note that the __cpu_disable() case for CPU hotplug is handled in the
previous commit, and no synchronisation is needed there due to the use
of stop_machine() which prevents hotplug from taking place while any CPU
has disabled preemption (as r4k_on_each_cpu() does).

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13796/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 826e99be 13-Jul-2016 James Hogan <jhogan@kernel.org>

MIPS: SMP: Update cpu_foreign_map on CPU disable

When a CPU is disabled via CPU hotplug, cpu_foreign_map is not updated.
This could result in cache management SMP calls being sent to offline
CPUs instead of online siblings in the same core.

Add a call to calculate_cpu_foreign_map() in the various MIPS cpu
disable callbacks after set_cpu_online(). All cases are updated for
consistency and to keep cpu_foreign_map strictly up to date, not just
those which may support hardware multithreading.

Fixes: cccf34e9411c ("MIPS: c-r4k: Fix cache flushing for MT cores")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: David Daney <david.daney@cavium.com>
Cc: Kevin Cernekee <cernekee@gmail.com>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Hongliang Tao <taohl@lemote.com>
Cc: Hua Yan <yanh@lemote.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13799/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# a05c3920 13-Jul-2016 James Hogan <jhogan@kernel.org>

MIPS: SMP: Clear ASID without confusing has_valid_asid()

The SMP flush_tlb_*() functions may clear the memory map's ASIDs for
other CPUs if the mm has only a single user (the current CPU) in order
to avoid SMP calls. However this makes it appear to has_valid_asid(),
which is used by various cache flush functions, as if the CPUs have
never run in the mm, and therefore can't have cached any of its memory.

For flush_tlb_mm() this doesn't sound unreasonable.

flush_tlb_range() corresponds to flush_cache_range() which does do full
indexed cache flushes, but only on the icache if the specified mapping
is executable, otherwise it doesn't guarantee that there are no cache
contents left for the mm.

flush_tlb_page() corresponds to flush_cache_page(), which will perform
address based cache ops on the specified page only, and also only
touches the icache if the page is executable. It does not guarantee that
there are no cache contents left for the mm.

For example, this affects flush_cache_range() which uses the
has_valid_asid() optimisation. It is required to flush the icache when
mappings are made executable (e.g. using mprotect) so they are
immediately usable. If some code is changed to non executable in order
to be modified then it will not be flushed from the icache during that
time, but the ASID on other CPUs may still be cleared for TLB flushing.
When the code is changed back to executable, flush_cache_range() will
assume the code hasn't run on those other CPUs due to the zero ASID, and
won't invalidate the icache on them.

This is fixed by clearing the other CPUs' ASIDs to 1 instead of 0 for the
above two flush_tlb_*() functions when the corresponding cache flushes
are likely to be incomplete (non executable range flush, or any page
flush). This ASID appears valid to has_valid_asid(), but still triggers
ASID regeneration due to the upper ASID version bits being 0, which is
less than the minimum ASID version of 1 and so always treated as stale.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13795/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
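
A small user-space sketch of the "non-zero but stale" trick described above: the low bits hold the hardware ASID, the upper bits hold a generation ("version") that starts at 1. A stored value of 0 means "never ran here"; a stored value of 1 is non-zero (so the cache code still considers the mm to have run on that CPU) but its version field is 0, so the TLB code still regenerates the ASID before use. Field widths are illustrative.

    #include <stdio.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define ASID_BITS   8

    static inline bool has_valid_asid_sketch(uint64_t ctx)
    {
            return ctx != 0;                 /* has this mm ever run on the CPU? */
    }

    static inline bool asid_is_current_sketch(uint64_t ctx, uint64_t current_version)
    {
            return (ctx >> ASID_BITS) == current_version;   /* else: regenerate */
    }

    int main(void)
    {
            uint64_t current_version = 3;
            uint64_t cleared_old = 0;        /* old scheme: looks like "never ran" */
            uint64_t cleared_new = 1;        /* new scheme: valid for caches, stale for TLB */

            printf("0: valid=%d current=%d\n",
                   has_valid_asid_sketch(cleared_old),
                   asid_is_current_sketch(cleared_old, current_version));
            printf("1: valid=%d current=%d\n",
                   has_valid_asid_sketch(cleared_new),
                   asid_is_current_sketch(cleared_new, current_version));
            return 0;
    }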


# 578bffc8 04-Apr-2016 Paul Burton <paulburton@kernel.org>

MIPS: Don't BUG_ON when no IPI domain is found

Commit fbde2d7d8290 ("MIPS: Add generic SMP IPI support") introduced
code that BUG_ON's in the case of a kernel that supports IPI domains but
does not have one at runtime. This case is possible on Malta where for
IPIs we may use either the GIC (which has an IPI IRQ domain
implementation) or core-local software interrupts between VPEs (which do
not currently have an IPI IRQ domain implementation). We can not know
which will be used until runtime when we know whether a GIC is actually
present, and if we run on a system with multiple VPEs and no GIC then
the BUG_ON is hit.

Commit 19fb5818ed60 ("IPS: Fix broken malta qemu") worked around this
for the single-core single-VPE case typically seen using QEMU, but does
not catch the multi-VPE case. This patch removes the insufficient CPU
presence check that was added and works around the bug differently,
effectively reverting that commit.

A simple way to reproduce this bug is by using QEMU, which partially
implements the MT ASE but does not implement the GIC as of version 2.5.
Using "-cpu 34Kf -smp 2" will present a system with 2 VPEs in one core &
no GIC, hitting the BUG_ON.

Given that we're post-merge-window on the way to v4.6, avoid this by
just returning from mips_smp_ipi_init when no IPI IRQ domain is found.
Ideally at some point all IPI implementations would be converted to the
same IPI IRQ domain interface & we'd be able to restore the check.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Qais Yousef <qsyousef@gmail.com>
Fixes: fbde2d7d8290 ("MIPS: Add generic SMP IPI support")
Fixes: 19fb5818ed60 ("IPS: Fix broken malta qemu")
Reverts: 19fb5818ed60 ("IPS: Fix broken malta qemu")
Cc: Qais Yousef <qsyousef@gmail.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Alex Smith <alex.smith@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13007/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 19fb5818 17-Mar-2016 Qais Yousef <qsyousef@gmail.com>

MIPS: Fix broken malta qemu

The Malta defconfig compiles with GIC on, hence when compiling for SMP the
new IPI code is activated. But on QEMU Malta there is no GIC, causing a
BUG_ON(!ipidomain) to be hit in mips_smp_ipi_init().

Since in that configuration one can only run single-core SMP (!), skip IPI
initialisation if we detect that this is the case. This is sensible
behaviour to introduce and keeps such a configuration running rather than
dying hard unnecessarily.

Signed-off-by: Qais Yousef <qsyousef@gmail.com>
Reported-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/12892/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# d825c06b 04-Mar-2016 James Hogan <jhogan@kernel.org>

MIPS: smp.c: Fix uninitialised temp_foreign_map

When calculate_cpu_foreign_map() recalculates the cpu_foreign_map
cpumask, it uses the local variable temp_foreign_map without initialising
it to zero. Since the calculation only ever sets bits in this cpumask,
any existing bits at that memory location will remain set and find their
way into cpu_foreign_map too. This could potentially lead to cache
operations suboptimally doing smp calls to multiple VPEs in the same
core, even though the VPEs share primary caches.

Therefore initialise temp_foreign_map using cpumask_clear() before use.

Fixes: cccf34e9411c ("MIPS: c-r4k: Fix cache flushing for MT cores")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/12759/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
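
A tiny sketch of the bug class being fixed: a local mask that is only ever OR-ed into must be zeroed first, otherwise whatever was previously in that memory leaks into the result. An 8-bit mask stands in for the cpumask, and the "stale" argument stands in for uninitialised stack contents.

    #include <stdio.h>

    static unsigned char build_mask(unsigned char stale, const int *cpus, int n,
                                    int clear_first)
    {
            unsigned char mask = stale;      /* stands in for uninitialised memory */

            if (clear_first)
                    mask = 0;                /* the cpumask_clear() call the fix adds */
            for (int i = 0; i < n; i++)
                    mask |= 1u << cpus[i];   /* the calculation only ever sets bits */
            return mask;
    }

    int main(void)
    {
            const int cpus[] = { 0, 2 };     /* intended members of the mask */

            printf("without clear: %#x\n", build_mask(0xf0, cpus, 2, 0)); /* stale bits leak */
            printf("with clear:    %#x\n", build_mask(0xf0, cpus, 2, 1)); /* 0x5 */
            return 0;
    }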


# fc6d73d6 26-Feb-2016 Thomas Gleixner <tglx@linutronix.de>

arch/hotplug: Call into idle with a proper state

Let the non boot cpus call into idle with the corresponding hotplug state, so
the hotplug core can handle the further bringup. That's a first step to
convert the boot side of the hotplugged cpus to do all the synchronization
with the other side through the state machine. For now it'll only start the
hotplug thread and kick the full bringup of the cpu.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arch@vger.kernel.org
Cc: Rik van Riel <riel@redhat.com>
Cc: Rafael Wysocki <rafael.j.wysocki@intel.com>
Cc: "Srivatsa S. Bhat" <srivatsa@mit.edu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Sebastian Siewior <bigeasy@linutronix.de>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul Turner <pjt@google.com>
Link: http://lkml.kernel.org/r/20160226182341.614102639@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>


# fbde2d7d 08-Dec-2015 Qais Yousef <qsyousef@gmail.com>

MIPS: Add generic SMP IPI support

Use the new generic IPI layer to provide generic SMP IPI support if the irqchip
supports it.

Signed-off-by: Qais Yousef <qais.yousef@imgtec.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: <jason@lakedaemon.net>
Cc: <marc.zyngier@arm.com>
Cc: <jiang.liu@linux.intel.com>
Cc: <linux-mips@linux-mips.org>
Cc: <lisa.parratt@imgtec.com>
Cc: Qais Yousef <qsyousef@gmail.com>
Link: http://lkml.kernel.org/r/1449580830-23652-17-git-send-email-qais.yousef@imgtec.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>


# e060f6ed 25-Sep-2015 Paul Burton <paulburton@kernel.org>

MIPS: Initialise MAARs on secondary CPUs

MAARs should be initialised on each CPU (or rather, core) in the system
in order to achieve consistent behaviour & performance. Previously they
have only been initialised on the boot CPU which leads to performance
problems if tasks are later scheduled on a secondary CPU, particularly
if those tasks make use of unaligned vector accesses where some CPUs
don't handle any cases in hardware for non-speculative memory regions.
Fix this by recording the MAAR configuration from the boot CPU and
applying it to secondary CPUs as part of their bringup.

Reported-by: Doug Gilmore <doug.gilmore@imgtec.com>
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: Andrew Bresticker <abrestic@chromium.org>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
Cc: linux-kernel@vger.kernel.org
Cc: Aaro Koskinen <aaro.koskinen@iki.fi>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Hemmo Nieminen <hemmo.nieminen@iki.fi>
Cc: Alex Smith <alex.smith@imgtec.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Patchwork: https://patchwork.linux-mips.org/patch/11239/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 4ace6139 24-Jul-2015 Alex Smith <alex.smith@imgtec.com>

MIPS: SMP: Don't increment irq_count multiple times for call function IPIs

The majority of SMP platforms handle their IPIs through do_IRQ()
which calls irq_{enter/exit}(). When a call function IPI is received,
smp_call_function_interrupt() is called which also calls
irq_{enter,exit}(), meaning irq_count is raised twice.

When tick broadcasting is used (which is implemented via a call
function IPI), this incorrectly causes all CPU idle time on the core
receiving broadcast ticks to be accounted as time spent servicing
IRQs, as account_process_tick() will account as such if irq_count is
greater than 1. This results in 100% CPU usage being reported on a
core which receives its ticks via broadcast.

This patch removes the SMP smp_call_function_interrupt() wrapper which
calls irq_{enter,exit}(). Platforms which handle their IPIs through
do_IRQ() now call generic_smp_call_function_interrupt() directly to
avoid incrementing irq_count a second time. Platforms which don't
(loongson, sgi-ip27, sibyte) call generic_smp_call_function_interrupt()
wrapped in irq_{enter,exit}().

Signed-off-by: Alex Smith <alex.smith@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10770/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# cccf34e9 10-Jul-2015 Markos Chandras <markos.chandras@imgtec.com>

MIPS: c-r4k: Fix cache flushing for MT cores

MT_SMP is not the only SMP option for MT cores. The MT_SMP option
allows more than one VPE per core to appear as a secondary CPU in the
system. Because of how CM works, it propagates the address-based
cache ops to the secondary cores but not the index-based ones.
Because of that, the code does not use IPIs to flush the L1 caches on
secondary cores because the CM would have done that already. However,
the CM functionality is independent of the type of SMP kernel, so even in
non-MT kernels IPIs are not necessary. As a result, we change
the conditional to depend on the CM presence. Moreover, since VPEs on
the same core share the same L1 caches, there is no need to send an
IPI on all of them so we calculate a suitable cpumask with only one
VPE per core.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: <stable@vger.kernel.org> # 3.15+
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10654/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
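
A user-space sketch of the "one VPE per core" mask calculation described above: a CPU is added to the mask only if no CPU from the same core is already present. The core numbering is made up.

    #include <stdio.h>

    #define NCPUS 8

    int main(void)
    {
            const int core_of[NCPUS] = { 0, 0, 1, 1, 2, 2, 3, 3 };  /* 2 VPEs per core */
            unsigned int mask = 0;

            for (int cpu = 0; cpu < NCPUS; cpu++) {
                    int represented = 0;

                    for (int other = 0; other < NCPUS; other++)
                            if ((mask & (1u << other)) &&
                                core_of[other] == core_of[cpu])
                                    represented = 1;    /* this core already has a VPE in the mask */
                    if (!represented)
                            mask |= 1u << cpu;
            }
            printf("one VPE per core: mask = %#x\n", mask);   /* 0x55 */
            return 0;
    }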


# cafb45b2 11-May-2015 Ralf Baechle <ralf@linux-mips.org>

MIPS: SMP: Fix build error.

CC arch/mips/kernel/smp.o
arch/mips/kernel/smp.c: In function ‘start_secondary’:
arch/mips/kernel/smp.c:149:2: error: passing argument 2 of ‘cpumask_set_cpu’ discards ‘volatile’ qualifier from pointer target type [-Werror]
cpumask_set_cpu(cpu, &cpu_callin_map);
^
In file included from ./arch/mips/include/asm/processor.h:14:0,
from ./arch/mips/include/asm/thread_info.h:15,
from include/linux/thread_info.h:54,
from include/asm-generic/preempt.h:4,
from arch/mips/include/generated/asm/preempt.h:1,
from include/linux/preempt.h:18,
from include/linux/interrupt.h:8,
from arch/mips/kernel/smp.c:24:
include/linux/cpumask.h:272:91: note: expected ‘struct cpumask *’ but argument is of type ‘volatile struct cpumask_t *’
static inline void cpumask_set_cpu(unsigned int cpu, struct cpumask *dstp)
^
arch/mips/kernel/smp.c: In function ‘smp_prepare_boot_cpu’:
arch/mips/kernel/smp.c:211:2: error: passing argument 2 of ‘cpumask_set_cpu’ discards ‘volatile’ qualifier from pointer target type [-Werror]
cpumask_set_cpu(0, &cpu_callin_map);
^
In file included from ./arch/mips/include/asm/processor.h:14:0,
from ./arch/mips/include/asm/thread_info.h:15,
from include/linux/thread_info.h:54,
from include/asm-generic/preempt.h:4,
from arch/mips/include/generated/asm/preempt.h:1,
from include/linux/preempt.h:18,
from include/linux/interrupt.h:8,
from arch/mips/kernel/smp.c:24:
include/linux/cpumask.h:272:91: note: expected ‘struct cpumask *’ but argument is of type ‘volatile struct cpumask_t *’
static inline void cpumask_set_cpu(unsigned int cpu, struct cpumask *dstp)
^
arch/mips/kernel/smp.c: In function ‘__cpu_up’:
arch/mips/kernel/smp.c:221:10: error: passing argument 2 of ‘cpumask_test_cpu’ discards ‘volatile’ qualifier from pointer target type [-Werror]
while (!cpumask_test_cpu(cpu, &cpu_callin_map))
^
In file included from ./arch/mips/include/asm/processor.h:14:0,
from ./arch/mips/include/asm/thread_info.h:15,
from include/linux/thread_info.h:54,
from include/asm-generic/preempt.h:4,
from arch/mips/include/generated/asm/preempt.h:1,
from include/linux/preempt.h:18,
from include/linux/interrupt.h:8,
from arch/mips/kernel/smp.c:24:
include/linux/cpumask.h:294:90: note: expected ‘const struct cpumask *’ but argument is of type ‘volatile struct cpumask_t *’
static inline int cpumask_test_cpu(int cpu, const struct cpumask *cpumask)
^
cc1: all warnings being treated as errors
make[2]: *** [arch/mips/kernel/smp.o] Error 1
make[1]: *** [arch/mips/kernel] Error 2
make: *** [arch/mips] Error 2

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# ea925a72 25-Mar-2015 Andrew Bresticker <abrestic@chromium.org>

MIPS: smp: Make stop_this_cpu() actually stop the CPU

Since cpu_wait() enables interrupts upon return, CPUs which have
entered stop_this_cpu() may still end up handling interrupts.
This can lead to the softlockup detector firing on a panic or
restart/poweroff/halt. Just disable interrupts and spin to ensure
nothing else runs on the CPU once it has entered stop_this_cpu().
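
In code, the change amounts to roughly the following (a sketch of the idea,
not the exact diff; includes omitted):

static void stop_this_cpu(void *dummy)
{
	/* Remove this CPU from the online mask ... */
	set_cpu_online(smp_processor_id(), false);

	/*
	 * ... then keep interrupts off and spin.  Previously the function
	 * looped on cpu_wait(), which re-enables interrupts on return and
	 * so let stray interrupts (and the softlockup detector) run.
	 */
	local_irq_disable();
	while (1)
		;
}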

Signed-off-by: Andrew Bresticker <abrestic@chromium.org>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/9601/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 8dd92891 04-Mar-2015 Rusty Russell <rusty@rustcorp.com.au>

mips: fix up obsolete cpu function usage.

Thanks to spatch, plus manual removal of "&*". Then a sweep for
for_each_cpu_mask => for_each_cpu.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Kevin Cernekee <cernekee@gmail.com>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: linux-mips@linux-mips.org


# c7754e75 15-Jan-2015 Hemmo Nieminen <hemmo.nieminen@iki.fi>

MIPS: Fix kernel lockup or crash after CPU offline/online

As printk() invocation can cause e.g. a TLB miss, printk() cannot be
called before the exception handlers have been properly initialized.
This can happen e.g. when netconsole has been loaded as a kernel module
and the TLB table has been cleared when a CPU was offline.

Call cpu_report() in start_secondary() only after the exception handlers
have been initialized to fix this.

Without the patch the kernel will randomly either lockup or crash
after a CPU is onlined and the console driver is a module.
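
A sketch of the resulting ordering in start_secondary() (heavily abbreviated;
only the calls relevant to this fix are shown, and the per_cpu_trap_init()
argument reflects the non-boot-CPU case):

void start_secondary(void)
{
	cpu_probe();
	per_cpu_trap_init(false);	/* install this CPU's exception handlers */

	/*
	 * Only now is it safe to printk(): a TLB miss taken inside printk()
	 * can be handled, so cpu_report() runs after the handlers are set up.
	 */
	cpu_report();

	/* ... rest of the secondary CPU bring-up ... */
}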

Signed-off-by: Hemmo Nieminen <hemmo.nieminen@iki.fi>
Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Cc: stable@vger.kernel.org
Cc: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/8953/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# bda4584c 25-Jun-2014 Huacai Chen <chenhuacai@kernel.org>

MIPS: Support CPU topology files in sysfs

This patch prepares for Loongson's NUMA support; it offers meaningful
sysfs files such as physical_package_id, core_id, core_siblings and
thread_siblings in /sys/devices/system/cpu/cpu?/topology.

Signed-off-by: Huacai Chen <chenhc@lemote.com>
Reviewed-by: Andreas Herrmann <andreas.herrmann@caviumnetworks.com>
Cc: John Crispin <john@phrozen.org>
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: linux-mips@linux-mips.org
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Patchwork: https://patchwork.linux-mips.org/patch/7184/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 1461df59 27-May-2014 Ralf Baechle <ralf@linux-mips.org>

MIPS: SMP: Remove plat_smp_ops cpus_done method.

Nothing was using the method and there isn't any need for this hook. This
leaves smp_cpus_done() empty for the moment.

As suggested by Paul Bolle <pebolle@tiscali.nl>.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# b633648c 23-May-2014 Ralf Baechle <ralf@linux-mips.org>

MIPS: MT: Remove SMTC support

Nobody is maintaining SMTC anymore and there also seems to be no userbase.
That is a pity - the SMTC technology, primarily developed by Kevin D.
Kissell <kevink@paralogos.com>, is an ingenious demonstration of the MT
ASE's power and elegance.

Based on Markos Chandras' <Markos.Chandras@imgtec.com> patch
https://patchwork.linux-mips.org/patch/6719/ which, while very similar, no
longer applied cleanly when I tried to merge it, plus some additional
post-SMTC cleanup - SMTC was a feature as tricky to remove as it was to
merge once upon a time.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 76306f42 14-Feb-2014 Paul Burton <paulburton@kernel.org>

MIPS: introduce cpu_coherent_mask

Add a mask of CPUs which are currently known to be operating coherently.
This is initially set up to contain all present CPUs; in a subsequent
patch, CPUs in a MIPS Coherent Processing System will be cleared from this
mask as they enter non-coherent idle states. This will be used in order
to determine when a CPU within a CPS system may need to be powered back
up, but may also be used in future to optimise away wakeups for cache
operations or TLB invalidations.
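
Conceptually the mask is just a cpumask manipulated with the standard API; a
minimal sketch follows (the idle-transition hook names are placeholders, not
the real CPS functions):

#include <linux/cpumask.h>

cpumask_t cpu_coherent_mask;

/* At SMP bring-up every present CPU starts out coherent. */
static void init_coherent_mask(void)
{
	cpumask_copy(&cpu_coherent_mask, cpu_present_mask);
}

/* Placeholder hooks for a CPU entering/leaving a non-coherent idle state. */
static void cps_enter_noncoherent_idle(unsigned int cpu)
{
	cpumask_clear_cpu(cpu, &cpu_coherent_mask);
}

static void cps_leave_noncoherent_idle(unsigned int cpu)
{
	cpumask_set_cpu(cpu, &cpu_coherent_mask);
}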

Signed-off-by: Paul Burton <paul.burton@imgtec.com>


# cc7964af 14-Feb-2014 Paul Burton <paulburton@kernel.org>

MIPS: support for generic clockevents broadcast

This patch adds support for generic clockevents broadcast using a
dummy clockevent device and the tick_broadcast function introduced by
commit 12ad10004645 "clockevents: Add generic timer broadcast function".

Signed-off-by: Paul Burton <paul.burton@imgtec.com>


# dbee7169 11-Sep-2013 Jiang Liu <jiang.liu@huawei.com>

MIPS: SMP: kill redundant call of generic_smp_call_function_single_interrupt()

Since commit 9a46ad6d6df3b54 "smp: make smp_call_function_many() use
logic similar to smp_call_function_single()",
generic_smp_call_function_single_interrupt() is an alias of
generic_smp_call_function_interrupt(), so kill the redundant call.

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Jiang Liu <liuj97@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Shaohua Li <shli@kernel.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Jiri Kosina <trivial@kernel.org>
Cc: Wang YanQing <udknight@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-arch@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/5820/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 078a55fc 18-Jun-2013 Paul Gortmaker <paul.gortmaker@windriver.com>

MIPS: Delete __cpuinit/__CPUINIT usage from MIPS code

commit 3747069b25e419f6b51395f48127e9812abc3596 upstream.

The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. For example, the fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.

After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.

Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
and are flagged as __cpuinit -- so if we remove the __cpuinit from
the arch specific callers, we will also get section mismatch warnings.
As an intermediate step, we intend to turn the linux/init.h cpuinit
related content into no-ops as early as possible, since that will get
rid of these warnings. In any case, they are temporary and harmless.

Here, we remove all the MIPS __cpuinit from C code and __CPUINIT
from asm files. MIPS is interesting in this respect, because there
are also uasm users hiding behind their own renamed versions of the
__cpuinit macros.

[1] https://lkml.org/lkml/2013/5/20/589

[ralf@linux-mips.org: Folded in Paul's followup fix.]

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/5494/
Patchwork: https://patchwork.linux-mips.org/patch/5495/
Patchwork: https://patchwork.linux-mips.org/patch/5509/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# bdc92d74 21-May-2013 Ralf Baechle <ralf@linux-mips.org>

MIPS: Idle: Consolidate all declarations in <asm/idle.h>.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 82d45de6 21-Nov-2012 Sanjay Lal <sanjayl@kymasys.com>

MIPS: Export symbols used by KVM/MIPS module

Signed-off-by: Sanjay Lal <sanjayl@kymasys.com>
Cc: kvm@vger.kernel.org
Cc: linux-mips@linux-mips.org
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# cdbedc61 21-Mar-2013 Thomas Gleixner <tglx@linutronix.de>

mips: Use generic idle loop

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Cc: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Cc: Magnus Damm <magnus.damm@gmail.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Link: http://lkml.kernel.org/r/20130321215234.754954871@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>


# 28eb0e46 21-Dec-2012 Greg Kroah-Hartman <gregkh@linuxfoundation.org>

MIPS: drivers: remove __dev* attributes.

CONFIG_HOTPLUG is going away as an option. As a result, the __dev*
markings need to be removed.

This change removes the use of __devinit, __devexit_p, __devinitdata,
and __devexit from these drivers.

Based on patches originally written by Bill Pemberton, but redone by me
in order to handle some of the coding style issues better, by hand.

Cc: Bill Pemberton <wfp5p@virginia.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>


# 7aa1c8f4 11-Oct-2012 Ralf Baechle <ralf@linux-mips.org>

MIPS: kdump: Add support

[ralf@linux-mips.org: Original patch by Maxim Uvarov <muvarov@gmail.com>
with plenty of further shining, polishing, debugging and testing by me.]

Signed-off-by: Maxim Uvarov <muvarov@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: kexec@lists.infradead.org
Cc: horms@verge.net.au
Patchwork: https://patchwork.linux-mips.org/patch/1025/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# cf9bfe55 14-Aug-2012 Jayachandran C <c.jayachandran@gmail.com>

MIPS: Synchronize MIPS count one CPU at a time

The current implementation of synchronise_count_{master,slave} blocks
slave CPUs in early boot until all of them come up. This no longer
works because blocking a CPU with interrupts off after notifying the
CPU to be online causes problems with the current kernel.

Specifically, after the workqueue changes
(commit a08489c569dc1 "Pull workqueue changes from Tejun Heo")
the CPU_ONLINE notification callback workqueue_cpu_up_callback()
will hang on wait_for_completion(&idle_rebind.done), if the slave
CPUs are blocked for synchronize_count_slave().

The changes are to update synchronize_count_{master,slave}() to handle
one CPU at a time and to call synchronise_count_master() in __cpu_up()
so that the CPU_ONLINE notification goes out only after the COP0 COUNT
register is synchronized.
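
Sketched, the resulting __cpu_up() ordering looks roughly as follows
(abbreviated: includes, error handling and the exact wait loop of the real
code are omitted):

int __cpu_up(unsigned int cpu, struct task_struct *tidle)
{
	mp_ops->boot_secondary(cpu, tidle);

	/* Wait for the secondary CPU to check in ... */
	while (!cpumask_test_cpu(cpu, &cpu_callin_map))
		udelay(100);

	/*
	 * ... then synchronise its COP0 COUNT register, so that the
	 * CPU_ONLINE notification only goes out afterwards.
	 */
	synchronise_count_master(cpu);

	return 0;
}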

[ralf@linux-mips.org: This matter only to those few platforms which are
using the cp0 counter as their clocksource which are XLP, XLR and MIPS'
CMP solution.]

Signed-off-by: Jayachandran C <jchandra@broadcom.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/4216/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 889a4c7b 09-Apr-2012 Steven J. Hill <sjhill@mips.com>

MIPS: SMTC: Support for Multi-threaded FPUs

Signed-off-by: Steven J. Hill <sjhill@mips.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/3603/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# b789ad63 19-Jul-2012 Yong Zhang <yong.zhang@windriver.com>

MIPS: smp: Warn on too early irq enable

Just to catch a potential issue.

Signed-off-by: Yong Zhang <yong.zhang0@gmail.com>
Cc: Sergei Shtylyov <sshtylyov@mvista.com>
Cc: David Daney <david.daney@cavium.com>
Acked-by: David Daney <david.daney@cavium.com>
Patchwork: https://patchwork.linux-mips.org/patch/3852/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# b9a09a06 19-Jul-2012 Yong Zhang <yong.zhang@windriver.com>

MIPS: call set_cpu_online() on cpu being brought up with irq disabled

To prevent the kind of problem that commit 5fbd036b [sched: Cleanup cpu_active
madness] and commit 2baab4e9 [sched: Fix select_fallback_rq() vs
cpu_active/cpu_online] try to resolve, call set_cpu_online() on the CPU being
brought up, with irqs disabled.

Signed-off-by: Yong Zhang <yong.zhang0@gmail.com>
Cc: Sergei Shtylyov <sshtylyov@mvista.com>
Cc: David Daney <david.daney@cavium.com>
Acked-by: David Daney <david.daney@cavium.com>
Patchwork: https://patchwork.linux-mips.org/patch/3851/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 5309bdac 19-Jul-2012 Yong Zhang <yong.zhang@windriver.com>

MIPS: call ->smp_finish() a little late

We have moved irq enabling to ->smp_finish(). Call ->smp_finish() a little
later to prepare for moving set_cpu_online() into start_secondary().
It is also not necessary to call cpu_set(cpu, cpu_callin_map) and
synchronise_count_slave() with irqs enabled.

Signed-off-by: Yong Zhang <yong.zhang0@gmail.com>
Cc: Sergei Shtylyov <sshtylyov@mvista.com>
Cc: David Daney <david.daney@cavium.com>
Acked-by: David Daney <david.daney@cavium.com>
Patchwork: https://patchwork.linux-mips.org/patch/3850/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 6650df3c 15-May-2012 David Daney <david.daney@cavium.com>

MIPS: Move cache setup to setup_arch().

commit 97ce2c88f9ad42e3c60a9beb9fca87abf3639faa (jump-label: initialize
jump-label subsystem much earlier) breaks MIPS. The jump_label_init()
call was moved before trap_init() which is where we initialize
flush_icache_range().

In order to be good citizens, we move cache initialization earlier so
that we don't jump through a null flush_icache_range function pointer
when doing the jump label initialization.

Signed-off-by: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/3822/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 360014a3 20-Apr-2012 Thomas Gleixner <tglx@linutronix.de>

mips: Use generic idle thread allocation

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Link: http://lkml.kernel.org/r/20120420124557.512158271@linutronix.de


# 8239c25f 20-Apr-2012 Thomas Gleixner <tglx@linutronix.de>

smp: Add task_struct argument to __cpu_up()

Preparatory patch to make the idle thread allocation for secondary
cpus generic.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Mike Frysinger <vapier@gentoo.org>
Cc: Jesper Nilsson <jesper.nilsson@axis.com>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Hirokazu Takata <takata@linux-m32r.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Howells <dhowells@redhat.com>
Cc: James E.J. Bottomley <jejb@parisc-linux.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: x86@kernel.org
Link: http://lkml.kernel.org/r/20120420124556.964170564@linutronix.de


# 0b5f9c00 28-Mar-2012 Rusty Russell <rusty@rustcorp.com.au>

remove references to cpu_*_map in arch/

This has been obsolescent for a while; time for the final push.

In adjacent context, replaced old cpus_* with cpumask_*.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: David S. Miller <davem@davemloft.net> (arch/sparc)
Acked-by: Chris Metcalf <cmetcalf@tilera.com> (arch/tile)
Cc: user-mode-linux-devel@lists.sourceforge.net
Cc: Russell King <linux@arm.linux.org.uk>
Cc: linux-arm-kernel@lists.infradead.org
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: linux-hexagon@vger.kernel.org
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Helge Deller <deller@gmx.de>
Cc: sparclinux@vger.kernel.org


# b81947c6 28-Mar-2012 David Howells <dhowells@redhat.com>

Disintegrate asm/system.h for MIPS

Disintegrate asm/system.h for MIPS.

Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
cc: linux-mips@linux-mips.org


# 60063497 26-Jul-2011 Arun Sharma <asharma@fb.com>

atomic: use <linux/atomic.h>

This allows us to move duplicated code in <asm/atomic.h>
(atomic_inc_not_zero() for now) to <linux/atomic.h>

Signed-off-by: Arun Sharma <asharma@fb.com>
Reviewed-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: David Miller <davem@davemloft.net>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 6667deb6 12-Feb-2011 Maksim Rayskiy <mrayskiy@broadcom.com>

MIPS: Move idle task creation to work queue

To avoid forking a usermode thread when creating an idle task, move
fork_idle to a work queue.

If the kernel starts with a maxcpus= option which does not bring all
available cpus online at boot time, idle tasks for the offline cpus are not
created. If those offline cpus are later hotplugged through sysfs, __cpu_up
is called in the context of the user task, and fork_idle copies its non-zero
mm pointer. This causes a BUG() in per_cpu_trap_init.

This also avoids issues with resource limits of the CPU writing to sysfs,
containers, maybe others.

Signed-off-by: Maksim Rayskiy <mrayskiy@broadcom.com>
To: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/2070/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 2dc2ae34 23-Jul-2010 David Daney <ddaney@caviumnetworks.com>

MIPS: Export __cpu_number_map and __cpu_logical_map.

The forthcoming Octeon watchdog driver will use them.
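
The export itself is the usual one line per symbol, e.g.:

EXPORT_SYMBOL(__cpu_number_map);
EXPORT_SYMBOL(__cpu_logical_map);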

Signed-off-by: David Daney <ddaney@caviumnetworks.com>
To: linux-mips@linux-mips.org
To: wim@iguana.be
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/1499/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 8f99a162 20-Nov-2009 Wu Zhangjin <wuzhangjin@gmail.com>

MIPS: Tracing: Add IRQENTRY_EXIT section for MIPS

This patch adds a new section for MIPS to record the blocks of hardirq
handling for the function graph tracer (print_graph_irq) by adding the
__irq_entry annotation to the entry points of the hardirqs (the blocks
bracketed by irq_enter()...irq_exit()).

Thanks go to Steven & Frederic Weisbecker for their feedback.
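
For illustration, an entry point is placed in the new section simply by
annotating its definition with __irq_entry (the function name and body below
are illustrative only, and the header providing __irq_entry has moved around
over time):

#include <linux/ftrace.h>	/* __irq_entry (later moved to <linux/interrupt.h>) */
#include <linux/hardirq.h>

void __irq_entry platform_irq_dispatch(void)
{
	irq_enter();
	/* ... decode and handle the pending hardirq ... */
	irq_exit();
}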

Signed-off-by: Wu Zhangjin <wuzhangjin@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Nicholas Mc Guire <der.herr@hofr.at>
Cc: zhangfx@lemote.com
Cc: Wu Zhangjin <wuzhangjin@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org
Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com>
Patchwork: http://patchwork.linux-mips.org/patch/676/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# eef34ec5 25-Sep-2009 Ralf Baechle <ralf@linux-mips.org>

MIPS: SMP: Inline arch_send_call_function_{single_ipi,ipi_mask}

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# b533e652 25-Sep-2009 Ralf Baechle <ralf@linux-mips.org>

MIPS: SMP: Fix build.

commit 48a048fed82a8e5fdd8618574f6d3de1a0d67a50
Author: Rusty Russell <rusty@rustcorp.com.au>
Date: Thu Sep 24 09:34:44 2009 -0600

apparently only passed the "looks good" level of QA ;-)

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 742db5d1 18-Sep-2009 Huang Weiyi <weiyi.huang@gmail.com>

MIPS: Remove duplicated #include

Remove duplicated #include in arch/mips/kernel/smp.c.

Signed-off-by: Huang Weiyi <weiyi.huang@gmail.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 4037ac6e 24-Sep-2009 Rusty Russell <rusty@rustcorp.com.au>

cpumask: Use accessors for cpu_*_mask: mips

Use the accessors rather than frobbing bits directly (the new versions
are const).

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Mike Travis <travis@sgi.com>


# 48a048fe 24-Sep-2009 Rusty Russell <rusty@rustcorp.com.au>

cpumask: arch_send_call_function_ipi_mask: mips

We're weaning the core code off handing cpumasks around on-stack.
This introduces arch_send_call_function_ipi_mask(), and by defining
it, the old arch_send_call_function_ipi is defined by the core code.

We also take the chance to wean the implementations off the
obsolescent for_each_cpu_mask(): making send_ipi_mask take the pointer
seemed the most natural way to ensure all implementations used
for_each_cpu.
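
On MIPS the new hook ends up as a thin wrapper around the platform's IPI send
method; roughly (a sketch based on the mp_ops structure used by the MIPS SMP
code, includes omitted):

void arch_send_call_function_ipi_mask(const struct cpumask *mask)
{
	mp_ops->send_ipi_mask(mask, SMP_CALL_FUNCTION);
}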

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>


# 7e17615c 15-Sep-2009 Ralf Baechle <ralf@linux-mips.org>

MIPS: Get rid of duplicate cpu_idle() prototype.

Since 2.6.11-rc1 there is a prototype in <linux/smp.h>.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 1b2bc75c 23-Jun-2009 Ralf Baechle <ralf@linux-mips.org>

MIPS: Add arch generic CPU hotplug

Each platform has to add support for CPU hotplugging itself by providing
suitable definitions for the cpu_disable and cpu_die smp_ops
methods and setting SYS_SUPPORTS_HOTPLUG_CPU. A platform should only set
SYS_SUPPORTS_HOTPLUG_CPU once all its smp_ops definitions have the
necessary changes. This patch contains the changes to the dummy smp_ops
definition for uni-processor systems.

Parts of the code contributed by Cavium Inc.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 631330f5 19-Jun-2009 Ralf Baechle <ralf@linux-mips.org>

MIPS: Build fix - include <linux/smp.h> into all smp_processor_id() users.

Some of them were relying on smp.h being dragged in by another header,
which of course is fragile. <asm/cpu-info.h> uses smp_processor_id()
only in macros, and including smp.h there leads to an include loop, so
don't change cpu-info.h.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 76544504 22-Mar-2009 Dmitri Vorobiev <dmitri.vorobiev@movial.com>

MIPS: Make a needlessly global symbol static in arch/mips/kernel/smp.c

The variable cpu_callin_map is needlessly defined global, so let's
make it static now.

Build-tested using malta_defconfig.

Signed-off-by: Dmitri Vorobiev <dmitri.vorobiev@movial.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 98a79d6a 13-Dec-2008 Rusty Russell <rusty@rustcorp.com.au>

cpumask: centralize cpu_online_map and cpu_possible_map

Impact: cleanup

Each SMP arch defines these themselves. Move them to a central
location.

Twists:
1) Some archs (m32, parisc, s390) set possible_map to all 1, so we add a
CONFIG_INIT_ALL_POSSIBLE for this rather than break them.

2) mips and sparc32 '#define cpu_possible_map phys_cpu_present_map'.
Those archs simply have phys_cpu_present_map replaced everywhere.

3) Alpha defined cpu_possible_map to cpu_present_map; this is tricky
so I just manipulate them both in sync.

4) IA64, cris and m32r have gratuitous 'extern cpumask_t cpu_possible_map'
declarations.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Reviewed-by: Grant Grundler <grundler@parisc-linux.org>
Tested-by: Tony Luck <tony.luck@intel.com>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: Mike Travis <travis@sgi.com>
Cc: ink@jurassic.park.msu.ru
Cc: rmk@arm.linux.org.uk
Cc: starvik@axis.com
Cc: tony.luck@intel.com
Cc: takata@linux-m32r.org
Cc: ralf@linux-mips.org
Cc: grundler@parisc-linux.org
Cc: paulus@samba.org
Cc: schwidefsky@de.ibm.com
Cc: lethal@linux-sh.org
Cc: wli@holomorphy.com
Cc: davem@davemloft.net
Cc: jdike@addtoit.com
Cc: mingo@redhat.com


# 076c6e4f 28-Oct-2008 Ralf Baechle <ralf@linux-mips.org>

MIPS: SMP: Do not initialize __cpu_number_map/__cpu_logical_map for CPU 0.

A system isn't necessarily booted on physical processor 0 as this code
assumes. Also the array happens to be allocated in .bss so it's zero
initialized anyway. Systems which need to override this can do so in
their mp_ops->smp_setup() method.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 7920c4d6 18-Oct-2008 Ralf Baechle <ralf@linux-mips.org>

MIPS: SMP: Don't reenable interrupts in stop_this_cpu; use WAIT instruction.

Noticed by Anirban Sinha <ASinha@zeugmasystems.com>; patch by me.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# e545a614 07-Sep-2008 Manfred Spraul <manfred@colorfullife.com>

kernel/cpu.c: create a CPU_STARTING cpu_chain notifier

Right now, there is no notifier that is called on a new cpu, before the new
cpu begins processing interrupts/softirqs.
Various kernel functions need that notification; e.g. kvm works around its
absence by calling smp_call_function_single(), and rcu polls cpu_online_map.

The patch adds a CPU_STARTING notification. It also adds a helper function
that sends the message to all cpu_chain handlers.

Tested on x86-64.
All other archs are untested. Especially on sparc, I'm not sure if I got
it right.
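
The intended call site is each architecture's secondary bring-up path; a
generic sketch (the function name is illustrative, not taken from any
particular arch):

static void secondary_cpu_starting(unsigned int cpu)
{
	/*
	 * Tell the cpu_chain subscribers that this CPU is starting,
	 * before it is marked online and before it begins processing
	 * interrupts/softirqs.
	 */
	notify_cpu_starting(cpu);

	set_cpu_online(cpu, true);
	/* ... enable interrupts and enter the idle loop ... */
}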

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>


# 15c8b6c1 09-May-2008 Jens Axboe <jens.axboe@oracle.com>

on_each_cpu(): kill unused 'retry' parameter

It's not even passed on to smp_call_function() anymore, since that
was removed. So kill it.

Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>


# 8691e5a8 06-Jun-2008 Jens Axboe <jens.axboe@oracle.com>

smp_call_function: get rid of the unused nonatomic/retry argument

It's never used and the comments refer to nonatomic and retry
interchangeably. So get rid of it.

Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>


# 2f304c0a 17-Jun-2008 Jens Axboe <jens.axboe@oracle.com>

mips: convert to generic helpers for IPI function calls

This converts mips to use the new helpers for smp_call_function() and
friends, and adds support for smp_call_function_single(). Not tested,
but it compiles.

mips shares the same IPI for smp_call_function() and
smp_call_function_single(), since not all mips platforms have enough
available IPIs to support separate setups.

Cc: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>


# a9ad02bd 16-May-2008 Zenon Fortuna <zenon@mips.com>

[MIPS] Export smp_call_function and smp_call_function_single.

Signed-off-by: Chris Dearman <chris@mips.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 83738e30 06-May-2008 Thiemo Seufer <ths@networkno.de>

[MIPS] fix warning message on SMP kernels

This patch fixes a (harmless) warning message.

Signed-off-by: Thiemo Seufer <ths@networkno.de>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 39b8d525 28-Apr-2008 Ralf Baechle <ralf@linux-mips.org>

[MIPS] Add support for MIPS CMP platform.

Signed-off-by: Chris Dearman <chris@mips.com>
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 6c81c32f 06-Feb-2008 Adrian Bunk <bunk@kernel.org>

calibrate_delay() must be __cpuinit

calibrate_delay() must be __cpuinit, not __{dev,}init.

I've verified that this is correct for all users.

While doing the latter, I also did the following cleanups:
- remove pointless additional prototypes in C files
- ensure all users #include <linux/delay.h>

This fixes the following section mismatches with CONFIG_HOTPLUG=n,
CONFIG_HOTPLUG_CPU=y:

WARNING: vmlinux.o(.text+0x1128d): Section mismatch: reference to .init.text.1:calibrate_delay (between 'check_cx686_slop' and 'set_cx86_reorder')
WARNING: vmlinux.o(.text+0x25102): Section mismatch: reference to .init.text.1:calibrate_delay (between 'smp_callin' and 'cpu_coregroup_map')

Signed-off-by: Adrian Bunk <bunk@kernel.org>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Richard Henderson <rth@twiddle.net>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Christian Zankel <chris@zankel.net>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 87353d8a 18-Nov-2007 Ralf Baechle <ralf@linux-mips.org>

[MIPS] SMP: Call platform methods via ops structure.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 0ab7aefc 02-Mar-2007 Ralf Baechle <ralf@linux-mips.org>

[MIPS] MT: Scheduler support for SMT

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# ece8a9e4 12-Oct-2007 Ralf Baechle <ralf@linux-mips.org>

[MIPS] SMP: Fix use of cpumasks.

Noticed by Nick Piggin <nickpiggin@yahoo.com.au>.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 89a8a5a6 04-Oct-2007 Ralf Baechle <ralf@linux-mips.org>

[MIPS] SMP: Use ISO C struct initializer for local structs.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# c50cade9 04-Oct-2007 Ralf Baechle <ralf@linux-mips.org>

[MIPS] SMP: Kill useless casts.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# b5eb5511 03-Oct-2007 Ralf Baechle <ralf@linux-mips.org>

[MIPS] Kill num_online_cpus() loops.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# bd6aeeff 02-Oct-2007 Ralf Baechle <ralf@linux-mips.org>

[MIPS] SMP: Implement smp_call_function_mask().

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 7bcf7717 11-Oct-2007 Ralf Baechle <ralf@linux-mips.org>

[MIPS] Implement clockevents for R4000-style cp0 count/compare interrupt

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# b4b2917c 30-Jul-2007 Peter Watkins <pwatkins@sicortex.com>

[MIPS] Add smp_call_function_single()

In the other archs, there is more factoring of smp call code, and more care
in the use of get_cpu(). That can be a follow-up MIPS patch.

Signed-off-by: Peter Watkins <pwatkins@sicortex.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 4e950f6f 29-Jul-2007 Alexey Dobriyan <adobriyan@gmail.com>

Remove fs.h from mm.h

Remove fs.h from mm.h. For this,
1) Uninline vma_wants_writenotify(). It's pretty huge anyway.
2) Add back fs.h or less bloated headers (err.h) to files that need it.

As a result, on x86_64 allyesconfig, fs.h dependencies are cut down from 3929
files rebuilt to 3444 (-12.3%).

Cross-compile tested without regressions on my two usual configs and (sigh):

alpha arm-mx1ads mips-bigsur powerpc-ebony
alpha-allnoconfig arm-neponset mips-capcella powerpc-g5
alpha-defconfig arm-netwinder mips-cobalt powerpc-holly
alpha-up arm-netx mips-db1000 powerpc-iseries
arm arm-ns9xxx mips-db1100 powerpc-linkstation
arm-assabet arm-omap_h2_1610 mips-db1200 powerpc-lite5200
arm-at91rm9200dk arm-onearm mips-db1500 powerpc-maple
arm-at91rm9200ek arm-picotux200 mips-db1550 powerpc-mpc7448_hpc2
arm-at91sam9260ek arm-pleb mips-ddb5477 powerpc-mpc8272_ads
arm-at91sam9261ek arm-pnx4008 mips-decstation powerpc-mpc8313_rdb
arm-at91sam9263ek arm-pxa255-idp mips-e55 powerpc-mpc832x_mds
arm-at91sam9rlek arm-realview mips-emma2rh powerpc-mpc832x_rdb
arm-ateb9200 arm-realview-smp mips-excite powerpc-mpc834x_itx
arm-badge4 arm-rpc mips-fulong powerpc-mpc834x_itxgp
arm-carmeva arm-s3c2410 mips-ip22 powerpc-mpc834x_mds
arm-cerfcube arm-shannon mips-ip27 powerpc-mpc836x_mds
arm-clps7500 arm-shark mips-ip32 powerpc-mpc8540_ads
arm-collie arm-simpad mips-jazz powerpc-mpc8544_ds
arm-corgi arm-spitz mips-jmr3927 powerpc-mpc8560_ads
arm-csb337 arm-trizeps4 mips-malta powerpc-mpc8568mds
arm-csb637 arm-versatile mips-mipssim powerpc-mpc85xx_cds
arm-ebsa110 i386 mips-mpc30x powerpc-mpc8641_hpcn
arm-edb7211 i386-allnoconfig mips-msp71xx powerpc-mpc866_ads
arm-em_x270 i386-defconfig mips-ocelot powerpc-mpc885_ads
arm-ep93xx i386-up mips-pb1100 powerpc-pasemi
arm-footbridge ia64 mips-pb1500 powerpc-pmac32
arm-fortunet ia64-allnoconfig mips-pb1550 powerpc-ppc64
arm-h3600 ia64-bigsur mips-pnx8550-jbs powerpc-prpmc2800
arm-h7201 ia64-defconfig mips-pnx8550-stb810 powerpc-ps3
arm-h7202 ia64-gensparse mips-qemu powerpc-pseries
arm-hackkit ia64-sim mips-rbhma4200 powerpc-up
arm-integrator ia64-sn2 mips-rbhma4500 s390
arm-iop13xx ia64-tiger mips-rm200 s390-allnoconfig
arm-iop32x ia64-up mips-sb1250-swarm s390-defconfig
arm-iop33x ia64-zx1 mips-sead s390-up
arm-ixp2000 m68k mips-tb0219 sparc
arm-ixp23xx m68k-amiga mips-tb0226 sparc-allnoconfig
arm-ixp4xx m68k-apollo mips-tb0287 sparc-defconfig
arm-jornada720 m68k-atari mips-workpad sparc-up
arm-kafa m68k-bvme6000 mips-wrppmc sparc64
arm-kb9202 m68k-hp300 mips-yosemite sparc64-allnoconfig
arm-ks8695 m68k-mac parisc sparc64-defconfig
arm-lart m68k-mvme147 parisc-allnoconfig sparc64-up
arm-lpd270 m68k-mvme16x parisc-defconfig um-x86_64
arm-lpd7a400 m68k-q40 parisc-up x86_64
arm-lpd7a404 m68k-sun3 powerpc x86_64-allnoconfig
arm-lubbock m68k-sun3x powerpc-cell x86_64-defconfig
arm-lusl7200 mips powerpc-celleb x86_64-up
arm-mainstone mips-atlas powerpc-chrp32

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# b3f6df9f 25-May-2007 Robert P. J. Day <rpjday@mindspring.com>

[MIPS] Transform old-style macros to newer "__noreturn"

Convert old/obsolete NORET_TYPE and ATTRIB_NORET macros to use the
newer standard of "__noreturn" as defined in compiler-gcc.h.

Signed-off-by: Robert P. J. Day <rpjday@mindspring.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 0437e109 09-Jul-2007 Ingo Molnar <mingo@elte.hu>

sched: zap the migration init / cache-hot balancing code

the SMP load-balancer uses the boot-time migration-cost estimation
code to attempt to improve the quality of balancing. The reason for
this code is that the discrete priority queues do not preserve
the order of scheduling accurately, so the load-balancer skips
tasks that were running on a CPU 'recently'.

this code is fundamentally fragile: the boot-time migration cost detector
doesn't really work on systems that have large L3 caches, it caused boot
delays on large systems, and the whole cache-hot concept made the
balancing code pretty nondeterministic as well.

(and hey, i wrote most of it, so i can say it out loud that it sucks ;-)

under CFS the same purpose of cache affinity can be achieved without
any special cache-hot special-case: tasks are sorted in the 'timeline'
tree and the SMP balancer picks tasks from the left side of the
tree, thus the most cache-cold task is balanced automatically.

Signed-off-by: Ingo Molnar <mingo@elte.hu>


# 4ebd5233 31-May-2007 Ralf Baechle <ralf@linux-mips.org>

[MIPS] Fix modpost warnings by making start_secondary __cpuinit

WARNING: arch/mips/kernel/built-in.o(.text+0x9a58): Section mismatch: reference to .init.text:cpu_report (between 'start_secondary' and 'smp_prepare_boot_cpu')
WARNING: arch/mips/kernel/built-in.o(.text+0x9a60): Section mismatch: reference to .init.text:per_cpu_trap_init (between 'start_secondary' and 'smp_prepare_boot_cpu')
WARNING: arch/mips/kernel/built-in.o(.text+0x9adc): Section mismatch: reference to .init.text:cpu_probe (between 'start_secondary' and 'smp_prepare_boot_cpu')
mipsel-linux-objcopy -S -O srec --remove-section=.reginfo --remove-section=.mdebug --remove-section=.comment --remove-section=.note --remove-section=.pdr --remove-section=.options --remove-section=.MIPS.options vmlinux arch/mips/boot/vmlinux.srec

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# de7fa296 20-Feb-2007 Ralf Baechle <ralf@linux-mips.org>

[MIPS] SMP: Get smp_tune_scheduling to do something useful.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# b282b6f8 11-Jan-2007 Gautham R Shenoy <ego@in.ibm.com>

[PATCH] Change cpu_up and co from __devinit to __cpuinit

Compiling the kernel with CONFIG_HOTPLUG = y and CONFIG_HOTPLUG_CPU = n
with CONFIG_RELOCATABLE = y generates the following modpost warnings

WARNING: vmlinux - Section mismatch: reference to .init.data: from
.text between '_cpu_up' (at offset 0xc0141b7d) and 'cpu_up'
WARNING: vmlinux - Section mismatch: reference to .init.data: from
.text between '_cpu_up' (at offset 0xc0141b9c) and 'cpu_up'
WARNING: vmlinux - Section mismatch: reference to .init.text:__cpu_up
from .text between '_cpu_up' (at offset 0xc0141bd8) and 'cpu_up'
WARNING: vmlinux - Section mismatch: reference to .init.data: from
.text between '_cpu_up' (at offset 0xc0141c05) and 'cpu_up'
WARNING: vmlinux - Section mismatch: reference to .init.data: from
.text between '_cpu_up' (at offset 0xc0141c26) and 'cpu_up'
WARNING: vmlinux - Section mismatch: reference to .init.data: from
.text between '_cpu_up' (at offset 0xc0141c37) and 'cpu_up'

This is because cpu_up, _cpu_up and __cpu_up (in some architectures) are
defined as __devinit
AND
__cpu_up calls some __cpuinit functions.

Since __cpuinit would map to __init with this kind of a configuration,
we get a warning about .text referring to .init.data.

This patch solves the problem by converting all of __cpu_up, _cpu_up
and cpu_up from __devinit to __cpuinit. The approach is justified since
the callers of cpu_up are either dependent on CONFIG_HOTPLUG_CPU or
are of __init type.

Thus when CONFIG_HOTPLUG_CPU=y, all these cpu up functions would land up
in .text section, and when CONFIG_HOTPLUG_CPU=n, all these functions would
land up in .init section.

Tested on a i386 SMP machine running linux-2.6.20-rc3-mm1.

Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
Cc: Vivek Goyal <vgoyal@in.ibm.com>
Cc: Mikael Starvik <starvik@axis.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>


# 0004a9df 30-Oct-2006 Ralf Baechle <ralf@linux-mips.org>

[MIPS] Cleanup memory barriers for weakly ordered systems.

Also, the R4000 / R4600 LL/SC instructions imply a sync, so no explicit sync
is needed.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# f5d6c63a 29-Nov-2006 Ralf Baechle <ralf@linux-mips.org>

[MIPS] Do topology_init even on uniprocessor kernels.

Otherwise CPU 0 doesn't show up in sysfs which breaks some software.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 9a244b95 11-Oct-2006 Ralf Baechle <ralf@linux-mips.org>

[MIPS] Pass NULL not 0 for pointer value.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 0e8f8f54 07-Jul-2006 Ralf Baechle <ralf@linux-mips.org>

[MIPS] NUMA: Register all nodes before cpus or sysfs will barf.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 25969354 22-Jun-2006 Ralf Baechle <ralf@linux-mips.org>

[MIPS] Avoid interprocessor function calls.

On the 34K where multiple virtual processors are implemented in a single
core and share a single TLB, interprocessor function calls are not needed
to flush a cache, so avoid them.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 76b67ed9 27-Jun-2006 KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>

[PATCH] node hotplug: register cpu: remove node struct

With Goto-san's patch, we can add a new pgdat/node at runtime. I'm now
considering node hot-add with cpu + memory on ACPI.

I found that the ACPI container, which describes a node, can evaluate a cpu
before memory. This means cpu hot-add can occur before memory hot-add.

For the most part, cpu hot-add doesn't depend on node hot-add. But
register_cpu(), which creates the symbolic link from node to cpu, requires
that the node be onlined before register_cpu(). When a node is onlined, its
pgdat should be there.

This patch set holds off creating the symbolic link from node to cpu
until the node is onlined.

This removes the node argument from register_cpu().

Currently, register_cpu() requires a 'struct node' as its argument. But the
array of struct node is now unified in drivers/base/node.c (by Goto's node
hotplug patch), so we can get the struct node in a generic way and this
argument is no longer necessary.

This patch also guarantees that a cpu is added under a node only when the
node is onlined. That is necessary for the node-hot-add vs. cpu-hot-add
patch following this one.

Moreover, register_cpu() calculates cpu->node_id via cpu_to_node() without
regard to its 'struct node *root' argument, so this patch removes that
argument.

Also modify the callers of register_cpu()/unregister_cpu(), whose arguments
are changed by the register-cpu-remove-node-struct patch.

[Brice.Goglin@ens-lyon.org: fix it]
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Dave Hansen <haveblue@us.ibm.com>
Signed-off-by: Brice Goglin <Brice.Goglin@ens-lyon.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>


# 320e6aba 22-May-2006 Ralf Baechle <ralf@linux-mips.org>

[MIPS] Fix SMP now that fixup_cpu_present_map is gone.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 41c594ab 05-Apr-2006 Ralf Baechle <ralf@linux-mips.org>

[MIPS] MT: Improved multithreading support.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# e49ed7f5 31-Mar-2006 Ralf Baechle <ralf@linux-mips.org>

[MIPS] Sort out duplicate exports.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 394e3902 23-Mar-2006 Andrew Morton <akpm@osdl.org>

[PATCH] more for_each_cpu() conversions

When we stop allocating percpu memory for not-possible CPUs we must not touch
the percpu data for not-possible CPUs at all. The correct way of doing this
is to test cpu_possible() or to use for_each_cpu().

This patch is a kernel-wide sweep of all instances of NR_CPUS. I found very
few instances of this bug, if any. But the patch converts lots of open-coded
tests to use the preferred helper macros.

Cc: Mikael Starvik <starvik@axis.com>
Cc: David Howells <dhowells@redhat.com>
Acked-by: Kyle McMartin <kyle@parisc-linux.org>
Cc: Anton Blanchard <anton@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: William Lee Irwin III <wli@holomorphy.com>
Cc: Andi Kleen <ak@muc.de>
Cc: Christian Zankel <chris@zankel.net>
Cc: Philippe Elie <phil.el@wanadoo.fr>
Cc: Nathan Scott <nathans@sgi.com>
Cc: Jens Axboe <axboe@suse.de>
Cc: Eric Dumazet <dada1@cosmosbay.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>


# 9b6695a8 22-Feb-2006 Ralf Baechle <ralf@linux-mips.org>

[MIPS] SMP: Fix initialization order bug.

A recent change requires cpu_possible_map to be initialized before
smp_sched_init() but most MIPS platforms were initializing their
processors in the prom_prepare_cpus callback of smp_prepare_cpus. The
simple fix of calling prom_prepare_cpus from one of the earlier SMP
initialization hooks doesn't work well either since IPIs may require
init_IRQ() to have completed, so we bit the bullet and split
prom_prepare_cpus into two initialization functions, plat_smp_setup
which is called early from setup_arch and plat_prepare_cpus called where
prom_prepare_cpus used to be called.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 1e35aaba 20-Feb-2006 Rojhalat Ibrahim <imr@rtschenk.de>

[MIPS] Add topology_init.

A recent patch introduced cpu topology in sysfs. When you run a kernel
with SMP and sysfs enabled, you now get an Oops on boot. The following
patch fixes that by adding topology_init to arch/mips/kernel/smp.c. The
code is copied from arch/s390/kernel/smp.c.
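
The MIPS version amounts to the following boilerplate (a sketch written
against the modern two-argument register_cpu(); the s390 original it was
copied from looks much the same):

#include <linux/cpu.h>
#include <linux/init.h>
#include <linux/printk.h>

static struct cpu cpu_devices[NR_CPUS];

static int __init topology_init(void)
{
	int i, ret;

	for_each_present_cpu(i) {
		ret = register_cpu(&cpu_devices[i], i);
		if (ret)
			printk(KERN_WARNING "topology_init: register_cpu %d "
			       "failed (%d)\n", i, ret);
	}

	return 0;
}
subsys_initcall(topology_init);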

Signed-off-by: Rojhalat Ibrahim <imr@rtschenk.de>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 5bfb5d69 08-Nov-2005 Nick Piggin <nickpiggin@yahoo.com.au>

[PATCH] sched: disable preempt in idle tasks

Run idle threads with preempt disabled.

Also corrected a bug in arm26's cpu_idle (make it actually call schedule()).
How did it ever work before?

Might fix the CPU hotplugging hang which Nigel Cunningham noted.

We think the bug hits if the idle thread is preempted after checking
need_resched() and before going to sleep, and the CPU is then offlined.

After calling stop_machine_run, the CPU eventually returns from preemption
into the idle thread and goes to sleep. The CPU will continue executing the
previous idle loop and have no chance to call play_dead.

By disabling preemption until we are ready to explicitly schedule, this bug is
fixed and the idle threads generally become more robust.

From: alexs <ashepard@u.washington.edu>

PPC build fix

From: Yoichi Yuasa <yuasa@hh.iij4u.or.jp>

MIPS build fix

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Yoichi Yuasa <yuasa@hh.iij4u.or.jp>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>


# 3bffe736 16-Aug-2005 Ralf Baechle <ralf@linux-mips.org>

Delete old junk.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# ae1b3d51 15-Jul-2005 Ralf Baechle <ralf@linux-mips.org>

Make sure that the processor is actually online or die spectacularly.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# f03da6e2 13-Apr-2005 Ralf Baechle <ralf@linux-mips.org>

Fix BogoMIPS display on UP and some minor cosmetic things.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# b727a602 22-Feb-2005 Ralf Baechle <ralf@linux-mips.org>

Merge do_boot_cpu() into the new style __cpu_up().

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 57f0060b 09-Feb-2005 Ralf Baechle <ralf@linux-mips.org>

Document why calling smp_call_function will deadlock when called with
interrupts disabled.
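
The gist of the added documentation is the classic cross-IPI deadlock;
paraphrased as a comment, it reads roughly like this:

/*
 * smp_call_function() must not be called with interrupts disabled:
 * if two CPUs do so at the same time, each sends the other an IPI and
 * then spins waiting for its own call to complete.  With interrupts
 * off neither CPU can service the other's IPI, so both spin forever
 * and the system deadlocks.
 */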

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 1da177e4 16-Apr-2005 Linus Torvalds <torvalds@ppc970.osdl.org>

Linux-2.6.12-rc2

Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.

Let it rip!