History log of /linux-master/arch/sh/kernel/idle.c
Revision Date Author Comments
# 01eb454e 13-Jun-2023 Thomas Gleixner <tglx@linutronix.de>

sh/cpu: Switch to arch_cpu_finalize_init()

check_bugs() is about to be phased out. Switch over to the new
arch_cpu_finalize_init() implementation.

No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20230613224545.371697797@linutronix.de


# 071c44e4 14-Feb-2023 Josh Poimboeuf <jpoimboe@kernel.org>

sched/idle: Mark arch_cpu_idle_dead() __noreturn

Before commit 076cbf5d2163 ("x86/xen: don't let xen_pv_play_dead()
return"), in Xen, when a previously offlined CPU was brought back
online, it unexpectedly resumed execution where it left off in the
middle of the idle loop.

There were some hacks to make that work, but the behavior was surprising
as do_idle() doesn't expect an offlined CPU to return from the dead (in
arch_cpu_idle_dead()).

Now that Xen has been fixed, and the arch-specific implementations of
arch_cpu_idle_dead() also don't return, give it a __noreturn attribute.

This will cause the compiler to complain if an arch-specific
implementation might return. It also improves code generation for both
caller and callee.

Also fixes the following warning:

vmlinux.o: warning: objtool: do_idle+0x25f: unreachable instruction

Reported-by: Paul E. McKenney <paulmck@kernel.org>
Tested-by: Paul E. McKenney <paulmck@kernel.org>
Link: https://lore.kernel.org/r/60d527353da8c99d4cf13b6473131d46719ed16d.1676358308.git.jpoimboe@kernel.org
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>


# fd49efb3 14-Feb-2023 Josh Poimboeuf <jpoimboe@kernel.org>

sh/cpu: Expose arch_cpu_idle_dead()'s prototype definition

Include <linux/cpu.h> to make sure arch_cpu_idle_dead() matches its
prototype going forward.

Link: https://lore.kernel.org/r/3d9661e97828fb464a48d4becf18f12604831903.1676358308.git.jpoimboe@kernel.org
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>


# 89b30987 12-Jan-2023 Peter Zijlstra <peterz@infradead.org>

arch/idle: Change arch_cpu_idle() behavior: always exit with IRQs disabled

Currently, arch_cpu_idle() is called with IRQs disabled, but returns
with IRQs enabled.

However, the very first thing the generic code does after calling
arch_cpu_idle() is raw_local_irq_disable(). This means that
architectures that can idle with IRQs disabled end up doing a
pointless 'enable-disable' dance.

Therefore, push this IRQ disabling into the idle function, meaning
that those architectures can avoid the pointless IRQ state flipping.
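
A minimal sketch of the new contract, assuming an architecture whose
wait instruction can wake up with interrupts masked
(arch_wait_for_interrupt() below is a placeholder, not a real kernel API):

    /* Placeholder for the architecture's idle/wait instruction. */
    static void arch_wait_for_interrupt(void);

    void arch_cpu_idle(void)
    {
            /*
             * Entered with IRQs disabled; after this change it must
             * also return with IRQs disabled, so there is no longer an
             * enable/disable pair around the wait.
             */
            arch_wait_for_interrupt();
    }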

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com>
Acked-by: Mark Rutland <mark.rutland@arm.com> [arm64]
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Guo Ren <guoren@kernel.org>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195540.618076436@infradead.org


# 58c644ba 20-Nov-2020 Peter Zijlstra <peterz@infradead.org>

sched/idle: Fix arch_cpu_idle() vs tracing

We call arch_cpu_idle() with RCU disabled, but then use
local_irq_{en,dis}able(), which invokes tracing, which relies on RCU.

Switch all arch_cpu_idle() implementations to use
raw_local_irq_{en,dis}able() and carefully manage the lockdep, RCU and
tracing state like we do in the entry code.

(XXX: we really should change arch_cpu_idle() to not return with
interrupts enabled)
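
A rough illustration of the resulting pattern (simplified, not the exact
SH code; cpu_sleep() stands in for the architecture's low-power wait
instruction):

    void arch_cpu_idle(void)
    {
            /*
             * local_irq_enable() would call into irq-flags tracing,
             * which relies on RCU -- but RCU is already disabled here,
             * so use the raw variant instead.
             */
            raw_local_irq_enable();
            cpu_sleep();
            /* at this point arch_cpu_idle() still returns with IRQs
               enabled, as the note above laments */
    }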

Reported-by: Sven Schnelle <svens@linux.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lkml.kernel.org/r/20201120114925.594122626@infradead.org


# ca15ca40 07-Aug-2020 Mike Rapoport <rppt@kernel.org>

mm: remove unneeded includes of <asm/pgalloc.h>

Patch series "mm: cleanup usage of <asm/pgalloc.h>"

Most architectures have very similar versions of pXd_alloc_one() and
pXd_free_one() for intermediate levels of page table. These patches add
generic versions of these functions in <asm-generic/pgalloc.h> and enable
use of the generic functions where appropriate.

In addition, functions declared and defined in the <asm/pgalloc.h> headers
are used mostly by core mm and by early mm initialization in arch code, and
there is no actual reason to have <asm/pgalloc.h> included all over the
place. The first patch in this series removes the unneeded includes of
<asm/pgalloc.h>.

In the end it didn't work out as neatly as I hoped: moving the
pXd_alloc_track() definitions to <asm-generic/pgalloc.h> would require
unnecessary changes to arches that have custom page table allocation, so
I've decided to move lib/ioremap.c to mm/ and make pgalloc-track.h local
to mm/.

This patch (of 8):

In most cases the <asm/pgalloc.h> header is required only for allocation
of page table memory. Most of the .c files that include that header do not
use symbols declared in <asm/pgalloc.h> and do not require that header.

As for the other header files that used to include <asm/pgalloc.h>, it is
possible to move that include into the .c file that actually uses symbols
from <asm/pgalloc.h> and drop the include from the header file.

The process was somewhat automated using

sed -i -E '/[<"]asm\/pgalloc\.h/d' \
	$(grep -L -w -f /tmp/xx \
		$(git grep -E -l '[<"]asm/pgalloc\.h'))

where /tmp/xx contains all the symbols defined in
arch/*/include/asm/pgalloc.h.

[rppt@linux.ibm.com: fix powerpc warning]

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Pekka Enberg <penberg@kernel.org>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
Cc: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Satheesh Rajendran <sathnaga@linux.vnet.ibm.com>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Joerg Roedel <jroedel@suse.de>
Cc: Matthew Wilcox <willy@infradead.org>
Link: http://lkml.kernel.org/r/20200627143453.31835-1-rppt@kernel.org
Link: http://lkml.kernel.org/r/20200627143453.31835-2-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 5933f6d2 28-Dec-2018 Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>

sh: kernel: convert to SPDX identifiers

Update license to use SPDX-License-Identifier instead of verbose license
text.

Link: http://lkml.kernel.org/r/8736rccswn.wl-kuninori.morimoto.gx@renesas.com
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
Cc: Rich Felker <dalias@libc.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# f0c5cdb5 28-Jan-2014 Nicolas Pitre <nico@fluxnic.net>

sched/idle, SH: Remove redundant cpuidle_idle_call()

The core idle loop now takes care of it.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-sh@vger.kernel.org
Cc: linux-pm@vger.kernel.org
Cc: Russell King <linux@arm.linux.org.uk>
Cc: linaro-kernel@lists.linaro.org
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/n/tip-v2c3oswyd80xk2vh7fwmu6r8@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>


# dc775dd8 21-Mar-2013 Thomas Gleixner <tglx@linutronix.de>

sh: Use generic idle loop

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Cc: Magnus Damm <magnus.damm@gmail.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Link: http://lkml.kernel.org/r/20130321215235.216323644@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>


# 3738fa5b 09-Feb-2013 Len Brown <len.brown@intel.com>

sh idle: rename global pm_idle to static sh_idle

SH idle code could use some simplification.
This patch enables that by guaranteeing
that "sh_idle" is local, and thus architecture specific.

Signed-off-by: Len Brown <len.brown@intel.com>
Cc: linux-sh@vger.kernel.org


# 86627c93 07-May-2012 Thomas Gleixner <tglx@linutronix.de>

sh: Remove cpu_idle_wait()

cpuidle uses generic kick_all_cpus_sync() now. Remove the unused code.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Link: http://lkml.kernel.org/r/20120507175652.461648208@linutronix.de


# f03c4866 30-Mar-2012 Paul Mundt <lethal@linux-sh.org>

sh: fix up fallout from system.h disintegration.

Quite a bit of fallout all over the place, nothing terribly exciting.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>


# e839ca52 28-Mar-2012 David Howells <dhowells@redhat.com>

Disintegrate asm/system.h for SH

Disintegrate asm/system.h for SH.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: linux-sh@vger.kernel.org


# bd2f5536 20-Mar-2011 Thomas Gleixner <tglx@linutronix.de>

sched/rt: Use schedule_preempt_disabled()

Coccinelle based conversion.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-24swm5zut3h9c4a6s46x8rws@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>


# 1268fbc7 17-Nov-2011 Frederic Weisbecker <fweisbec@gmail.com>

nohz: Remove tick_nohz_idle_enter_norcu() / tick_nohz_idle_exit_norcu()

Those two APIs were provided to optimize the calls to
tick_nohz_idle_enter() and rcu_idle_enter() into a single
IRQ-disabled section, so that no interrupt happening in between would
needlessly process any RCU work.

Now we are talking about an optimization whose benefits have yet to be
measured. Let's start simple and completely decouple the idle RCU and
dyntick-idle logic to simplify things.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>


# 2bbb6817 08-Oct-2011 Frederic Weisbecker <fweisbec@gmail.com>

nohz: Allow rcu extended quiescent state handling separately from tick stop

It is assumed that RCU won't be used once we switch to tickless
mode and until we restart the tick. However, this is not always
true: on x86-64, for example, the idle notifiers are dereferenced after
the tick is stopped.

To prepare for fixing this, add two new APIs:
tick_nohz_idle_enter_norcu() and tick_nohz_idle_exit_norcu().

If no use of RCU is made in the idle loop between the
tick_nohz_idle_enter() and tick_nohz_idle_exit() calls, the arch must
instead call the new *_norcu() versions, so that it doesn't need to
call rcu_idle_enter() and rcu_idle_exit() itself.

Otherwise the arch must call tick_nohz_idle_enter() and
tick_nohz_idle_exit() and also explicitly call:

- rcu_idle_enter() after its last use of RCU before the CPU is put
to sleep.
- rcu_idle_exit() before the first use of RCU after the CPU is woken
up.
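
A rough sketch of the two resulting usage patterns (the idle loops are
simplified and cpu_idle_sleep() is a stand-in for the arch's wait
instruction, not a real API):

    /*
     * Pattern 1: no RCU use while tickless -- the *_norcu() helpers
     * enter/exit the RCU extended quiescent state on our behalf.
     */
    static void idle_loop_norcu(void)
    {
            tick_nohz_idle_enter_norcu();
            while (!need_resched())
                    cpu_idle_sleep();
            tick_nohz_idle_exit_norcu();
    }

    /*
     * Pattern 2: RCU is still used after the tick is stopped, so the
     * arch brackets only the RCU-free region explicitly.
     */
    static void idle_loop_with_rcu(void)
    {
            tick_nohz_idle_enter();
            /* ... code that may still use RCU ... */
            rcu_idle_enter();
            while (!need_resched())
                    cpu_idle_sleep();
            rcu_idle_exit();
            /* ... RCU may be used again from here on ... */
            tick_nohz_idle_exit();
    }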

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Frysinger <vapier@gentoo.org>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: David Miller <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>


# 280f0677 07-Oct-2011 Frederic Weisbecker <fweisbec@gmail.com>

nohz: Separate out irq exit and idle loop dyntick logic

The tick_nohz_stop_sched_tick() function, which tries to delay
the next timer tick as long as possible, can be called from two
places:

- From the idle loop to start the dyntick idle mode
- From interrupt exit if we have interrupted the dyntick
idle mode, so that we reprogram the next tick event in
case the irq changed some internal state that requires this
action.

There are only a few minor differences between the two, all handled
inside that function and driven by the per-cpu ts->inidle variable and
the inidle parameter. Together these guarantee that we only update the
dyntick mode on irq exit if we actually interrupted the dyntick idle
mode, and that we enter the RCU extended quiescent state only from idle
loop entry.

Split this function into:

- tick_nohz_idle_enter(), which sets ts->inidle to 1, enters
dynticks idle mode unconditionally if it can, and enters the RCU
extended quiescent state.

- tick_nohz_irq_exit() which only updates the dynticks idle mode
when ts->inidle is set (ie: if tick_nohz_idle_enter() has been called).

To maintain symmetry, tick_nohz_restart_sched_tick() has been renamed
into tick_nohz_idle_exit().

This simplifies the code and micro-optimizes the irq exit path (no need
for local_irq_save() there). It also prepares for the split between the
dynticks and RCU extended quiescent state logic. We'll need this split to
further fix illegal uses of RCU in extended quiescent states in the idle
loop.
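
Sketched out, an arch idle loop after this split looks roughly like the
following (simplified; cpu_sleep() is a placeholder for the arch's idle
instruction):

    static void cpu_idle_loop(void)
    {
            while (1) {
                    /* sets ts->inidle, stops the tick if possible and
                     * enters the RCU extended quiescent state */
                    tick_nohz_idle_enter();
                    while (!need_resched())
                            cpu_sleep();
                    /* formerly tick_nohz_restart_sched_tick() */
                    tick_nohz_idle_exit();
                    schedule();
            }
    }

    /*
     * tick_nohz_irq_exit() is called from the generic irq-exit path and
     * only re-evaluates the tick when ts->inidle is set.
     */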

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Frysinger <vapier@gentoo.org>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: David Miller <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>


# c66d3fcb 08-Aug-2011 Paul Mundt <lethal@linux-sh.org>

sh: Fix up fallout from cpuidle changes.

Fixes up the pm_idle redefinition that was introduced with the earlier
cpuidle changes.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>


# cbc158d6 04-Aug-2011 David Brown <davidb@codeaurora.org>

cpuidle: Consistent spelling of cpuidle_idle_call()

Commit a0bfa1373859e9d11dc92561a8667588803e42d8 misspells
cpuidle_idle_call() in the ARM and SH code. Fix this to be consistent.

Cc: Kevin Hilman <khilman@deeprootsystems.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: x86@kernel.org
Cc: Len Brown <len.brown@intel.com>
Signed-off-by: David Brown <davidb@codeaurora.org>
[ Also done by Mark Brown - the bug has been around forever, and was
noticed in -next, but the idle tree never picked it up. Bad bad bad ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# a0bfa137 01-Apr-2011 Len Brown <len.brown@intel.com>

cpuidle: stop depending on pm_idle

cpuidle users should call cpuidle_call_idle() directly
rather than via the (pm_idle)() function pointer.

An architecture may choose to continue using (pm_idle)(),
but cpuidle need not depend on it:

    my_arch_cpu_idle()
            ...
            if (cpuidle_call_idle())
                    pm_idle();

cc: Kevin Hilman <khilman@deeprootsystems.com>
cc: Paul Mundt <lethal@linux-sh.org>
cc: x86@kernel.org
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>


# 60063497 26-Jul-2011 Arun Sharma <asharma@fb.com>

atomic: use <linux/atomic.h>

This allows us to move duplicated code in <asm/atomic.h>
(atomic_inc_not_zero() for now) to <linux/atomic.h>

Signed-off-by: Arun Sharma <asharma@fb.com>
Reviewed-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: David Miller <davem@davemloft.net>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 763142d1 26-Apr-2010 Paul Mundt <lethal@linux-sh.org>

sh: CPU hotplug support.

This adds preliminary support for CPU hotplug for SH SMP systems.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>


# f0ccf277 26-Apr-2010 Paul Mundt <lethal@linux-sh.org>

sh: convert online CPU map twiddling to cpumask.

This converts from cpu_set() for the online map to set_cpu_online().
The two online map modifiers were the last remaining manual map
manipulation bits; with this in place, everything now goes through
cpumask.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>


# 90851c40 23-Mar-2010 Paul Mundt <lethal@linux-sh.org>

sh: Tidy up a couple of section mismatches.

select_idle_routine() and register_sh_pmu() both needed their annotations
fixed up to silence section mismatch warnings.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>


# fbb82b03 20-Jan-2010 Paul Mundt <lethal@linux-sh.org>

sh: machine_ops based reboot support.

This provides a machine_ops-based reboot interface loosely cloned from
x86, and converts the native sh32 and sh64 cases over to it.

Necessary both for tying in SMP support and also enabling platforms like
SDK7786 to add support for their microcontroller-based power managers.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>


# 73a38b83 17-Dec-2009 Paul Mundt <lethal@linux-sh.org>

sh: Only use bl bit toggling for sleeping idle.

We don't actually require this in the cpu_relax() polling case, so just
cuddle these around the sleeping version.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>


# 3147093e 17-Dec-2009 Paul Mundt <lethal@linux-sh.org>

sh: Restore bl bit toggling in idle loop.

This fixes up some crashes with IRQs racing the need_resched() test under
QEMU.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>


# 9dbe00a5 16-Oct-2009 Paul Mundt <lethal@linux-sh.org>

sh: Fix up IRQ re-enabling for the need_resched() case.

In the case where need_resched() is set between the cpu_idle() and
pm_idle() calls, we were missing an else case that simply re-enables
local IRQs and bails out. This was noticed via the irqs_disabled()
warning, even though IRQs were being re-enabled elsewhere.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>


# 0e6d4986 16-Oct-2009 Paul Mundt <lethal@linux-sh.org>

sh: Make check_pgt_cache() more aggressive while idling.

This follows the x86 change and moves check_pgt_cache() up under the
!need_resched() tight loop, rather than simply calling into it when
exiting idle.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>


# f533c3d3 16-Oct-2009 Paul Mundt <lethal@linux-sh.org>

sh: Idle loop chainsawing for SMP-based light sleep.

This does a bit of chainsawing of the idle loop code to get light sleep
working on SMP. Previously this was forcing secondary CPUs into sleep
mode, and they would not come back if they didn't have their own local
timers. Given that we use clockevents broadcasting by default, the CPU
managing the clockevents can't have IRQs disabled before entering its
sleep state.

This unfortunately leaves us with the age-old need_resched() race in
between local_irq_enable() and cpu_sleep(), but at present this is
unavoidable. After some more experimentation it may be possible to layer
on SR.BL bit manipulation over top of this scheme to inhibit the race
condition, but given the current potential for missing wakeups, this is
left as a future exercise.
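
In outline, the race looks like this (illustrative only, not the exact
file contents; cpu_sleep() wraps SH's "sleep" instruction and
sleeping_idle() is a hypothetical name):

    static void sleeping_idle(void)
    {
            while (!need_resched()) {
                    local_irq_enable();
                    /*
                     * An interrupt taken right here can set
                     * need_resched(), yet we still execute the sleep
                     * below and only wake on the next event -- the
                     * age-old race mentioned above.  Masking with
                     * SR.BL around this window is the possible future
                     * mitigation referred to in the text.
                     */
                    cpu_sleep();
                    local_irq_disable();
            }
    }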

Signed-off-by: Paul Mundt <lethal@linux-sh.org>


# 2e046b94 18-Jun-2009 Paul Mundt <lethal@linux-sh.org>

sh: Provide cpu_idle_wait() to fix up cpuidle/SMP build.

Crib the x86 cpu_idle_wait() implementation and shove it in with the
idle code, subsequently enabling ARCH_HAS_CPU_IDLE_WAIT.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>


# e869a90e 01-Apr-2009 Paul Mundt <lethal@linux-sh.org>

sh: Wire up ARCH_HAS_DEFAULT_IDLE for cpuidle.

cpuidle wants ARCH_HAS_DEFAULT_IDLE defined in order to use the
default idle loop. So, make it accessible and enable it for all
sh machines.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>


# 1da1180c 25-Nov-2008 Paul Mundt <lethal@linux-sh.org>

sh: Split out the idle loop for reuse between _32/_64 variants.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>