History log of /linux-master/arch/mips/include/asm/bitops.h
Revision Date Author Comments
# f12fb73b 04-Oct-2023 Matthew Wilcox (Oracle) <willy@infradead.org>

mm: delete checks for xor_unlock_is_negative_byte()

Architectures which don't define their own use the one in
asm-generic/bitops/lock.h. Get rid of all the ifdefs around "maybe we
don't have it".

Link: https://lkml.kernel.org/r/20231004165317.1061855-15-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Andreas Dilger <adilger.kernel@dilger.ca>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>


# 8da36b26 04-Oct-2023 Matthew Wilcox (Oracle) <willy@infradead.org>

mips: implement xor_unlock_is_negative_byte

Inspired by the mips test_and_change_bit(), this will surely be more
efficient than the generic one defined in filemap.c.
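
As a hedged sketch of the semantics this helper provides - roughly what the
generic fallback in asm-generic/bitops/lock.h does, not the MIPS LL/SC
version added here (signature assumed):

static inline bool xor_unlock_is_negative_byte(unsigned long mask,
					       volatile unsigned long *p)
{
	/* Atomically XOR the mask into the word with release ordering... */
	long old = atomic_long_fetch_xor_release(mask, (atomic_long_t *)p);

	/* ...and report whether bit 7 (the "negative byte" bit) was set. */
	return !!(old & BIT(7));
}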

Link: https://lkml.kernel.org/r/20231004165317.1061855-11-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Andreas Dilger <adilger.kernel@dilger.ca>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>


# 47d8c156 14-Aug-2021 Yury Norov <yury.norov@gmail.com>

include: move find.h from asm_generic to linux

The find_bit API and the bitmap API are closely related, but their inclusion
paths differ - include/asm-generic and include/linux, respectively. In the
past this caused a lot of trouble due to circular dependencies and/or
undefined symbols. Fix this by moving find.h under include/linux.

Signed-off-by: Yury Norov <yury.norov@gmail.com>
Tested-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>


# f0b7ddbd 15-Dec-2021 Huang Pei <huangpei@loongson.cn>

MIPS: retire "asm/llsc.h"

All that "asm/llsc.h" does is help with inline asm, which can already be
stringified from "asm/asm.h".

+. Since "asm/asm.h" has all we need, retire "asm/llsc.h".

+. Remove the now-unused header file.

Inspired-by: Maciej W. Rozycki <macro@orcam.me.uk>
Signed-off-by: Huang Pei <huangpei@loongson.cn>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>


# 97c97c6a 14-Jan-2021 Alexander Lobakin <alobakin@pm.me>

MIPS: bitops: fix -Wshadow in asm/bitops.h

Solves the following repetitive warning when building with -Wshadow:

In file included from ./include/linux/bitops.h:32,
from ./include/linux/kernel.h:11,
from ./include/linux/skbuff.h:13,
from ./include/linux/if_ether.h:19,
from ./include/linux/etherdevice.h:20:
./arch/mips/include/asm/bitops.h: In function ‘test_and_set_bit_lock’:
./arch/mips/include/asm/bitops.h:46:16: warning: declaration of ‘orig’ shadows a previous local [-Wshadow]
46 | unsigned long orig, temp; \
| ^~~~
./arch/mips/include/asm/bitops.h:190:10: note: in expansion of macro ‘__test_bit_op’
190 | orig = __test_bit_op(*m, "%0",
| ^~~~~~~~~~~~~
./arch/mips/include/asm/bitops.h:185:21: note: shadowed declaration is here
185 | unsigned long res, orig;
| ^~~~
./arch/mips/include/asm/bitops.h: In function ‘test_and_clear_bit’:
./arch/mips/include/asm/bitops.h:46:16: warning: declaration of ‘orig’ shadows a previous local [-Wshadow]
46 | unsigned long orig, temp; \
| ^~~~
./arch/mips/include/asm/bitops.h:236:9: note: in expansion of macro ‘__test_bit_op’
236 | res = __test_bit_op(*m, "%1",
| ^~~~~~~~~~~~~
./arch/mips/include/asm/bitops.h:229:21: note: shadowed declaration is here
229 | unsigned long res, orig;
| ^~~~
./arch/mips/include/asm/bitops.h:46:16: warning: declaration of ‘orig’ shadows a previous local [-Wshadow]
46 | unsigned long orig, temp; \
| ^~~~
./arch/mips/include/asm/bitops.h:241:10: note: in expansion of macro ‘__test_bit_op’
241 | orig = __test_bit_op(*m, "%0",
| ^~~~~~~~~~~~~
./arch/mips/include/asm/bitops.h:229:21: note: shadowed declaration is here
229 | unsigned long res, orig;
| ^~~~
./arch/mips/include/asm/bitops.h: In function ‘test_and_change_bit’:
./arch/mips/include/asm/bitops.h:46:16: warning: declaration of ‘orig’ shadows a previous local [-Wshadow]
46 | unsigned long orig, temp; \
| ^~~~
./arch/mips/include/asm/bitops.h:273:10: note: in expansion of macro ‘__test_bit_op’
273 | orig = __test_bit_op(*m, "%0",
| ^~~~~~~~~~~~~
./arch/mips/include/asm/bitops.h:266:21: note: shadowed declaration is here
266 | unsigned long res, orig;
| ^~~~
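
As a minimal, self-contained illustration of the pattern the warning flags
(hypothetical macro and names, not the kernel's __test_bit_op()): a
statement-expression macro that declares its own 'orig' shadows the
caller's variable of the same name.

#include <stdio.h>

/* Hypothetical macro: returns the old value while flipping the masked bits. */
#define get_and_flip(word, mask)		\
({						\
	unsigned long orig = (word);		\
	(word) ^= (mask);			\
	orig;					\
})

int main(void)
{
	unsigned long word = 0xf0, orig;

	/* With -Wshadow, the macro's inner 'orig' shadows this one. */
	orig = get_and_flip(word, 0x0fUL);
	printf("old=%#lx new=%#lx\n", orig, word);
	return 0;
}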

Signed-off-by: Alexander Lobakin <alobakin@pm.me>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>


# 99b40ced 08-Jan-2021 Geert Uytterhoeven <geert+renesas@glider.be>

MIPS: bitops: Fix reference to ffz location

Unlike most other architectures, MIPS defines ffz() below ffs().

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>


# 90267377 01-Oct-2019 Paul Burton <paulburton@kernel.org>

MIPS: bitops: Use smp_mb__before_atomic in test_* ops

Use smp_mb__before_atomic() rather than smp_mb__before_llsc() in
test_and_set_bit(), test_and_clear_bit() & test_and_change_bit(). The
_atomic() versions make semantic sense in these cases, and will allow a
later patch to omit redundant barriers for Loongson3 systems that
already include a barrier within __test_bit_op().

Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
Cc: linux-kernel@vger.kernel.org


# 5bb29275 01-Oct-2019 Paul Burton <paulburton@kernel.org>

MIPS: bitops: Emit Loongson3 sync workarounds within asm

Generate the sync instructions required to workaround Loongson3 LL/SC
errata within inline asm blocks, which feels a little safer than doing
it from C where strictly speaking the compiler would be well within its
rights to insert a memory access between the separate asm statements we
previously had, containing sync & ll instructions respectively.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
Cc: linux-kernel@vger.kernel.org


# c042be02 01-Oct-2019 Paul Burton <paulburton@kernel.org>

MIPS: bitops: Use BIT_WORD() & BITS_PER_LONG

Rather than using custom SZLONG_LOG & SZLONG_MASK macros to shift & mask
a bit index to form word & bit offsets respectively, make use of the
standard BIT_WORD() & BITS_PER_LONG macros for the same purpose.

volatile is added to the definition of pointers to the long-sized word
we'll operate on, in order to prevent the compiler complaining that we
cast away the volatile qualifier of the addr argument. This should have
no effect on generated code, which in the LL/SC case is inline asm
anyway & in the non-LLSC case access is constrained by compiler barriers
provided by raw_local_irq_{save,restore}().
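
A hedged sketch of the standard decomposition this switches to (the helper
name is illustrative only):

/* Locate the long-sized word containing bit 'nr' and the mask within it. */
static inline volatile unsigned long *bit_word_sketch(volatile unsigned long *addr,
						      unsigned long nr,
						      unsigned long *mask)
{
	*mask = BIT(nr % BITS_PER_LONG);	/* bit position inside the word */
	return addr + BIT_WORD(nr);		/* BIT_WORD(nr) == nr / BITS_PER_LONG */
}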

Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
Cc: linux-kernel@vger.kernel.org


# cc99987c 01-Oct-2019 Paul Burton <paulburton@kernel.org>

MIPS: bitops: Abstract LL/SC loops

Introduce __bit_op() & __test_bit_op() macros which abstract away the
implementation of LL/SC loops. This cuts down on a lot of duplicate
boilerplate code, and also allows R10000_LLSC_WAR to be handled outside
of the individual bitop functions.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
Cc: linux-kernel@vger.kernel.org


# aad028ca 01-Oct-2019 Paul Burton <paulburton@kernel.org>

MIPS: bitops: Avoid redundant zero-comparison for non-LLSC

The IRQ-disabling non-LLSC fallbacks for bitops on UP systems already
return a zero or one, so there's no need to perform another comparison
against zero. Move these comparisons into the LLSC paths to avoid the
redundant work.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
Cc: linux-kernel@vger.kernel.org


# d6103510 01-Oct-2019 Paul Burton <paulburton@kernel.org>

MIPS: bitops: Use the BIT() macro

Use the BIT() macro in asm/bitops.h rather than open-coding its
equivalent.
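
For reference, BIT() from <linux/bits.h> expands to a long-sized mask,
roughly:

#define BIT(nr)		(1UL << (nr))	/* e.g. BIT(4) == 0x10 */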

Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
Cc: linux-kernel@vger.kernel.org


# a2e66b86 01-Oct-2019 Paul Burton <paulburton@kernel.org>

MIPS: bitops: Allow immediates in test_and_{set,clear,change}_bit

The logical operations or & xor used in the test_and_set_bit_lock(),
test_and_clear_bit() & test_and_change_bit() functions currently force
the value 1<<bit to be placed in a register. If the bit is compile-time
constant & fits within the immediate field of an or/xor instruction (ie.
16 bits) then we can make use of the ori/xori instruction variants &
avoid the use of an extra register. Add the extra "i" constraints in
order to allow use of these immediate encodings.
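
A hedged illustration of the constraint in isolation (illustrative function,
not the kernel macro): with "ir" the compiler may pass either a register or,
for a small compile-time constant, an immediate, letting the assembler emit
ori/xori.

static inline unsigned long or_mask_sketch(unsigned long word, unsigned long mask)
{
	/* "ir": mask may be a register or a 16-bit immediate operand. */
	__asm__("or	%0, %0, %1" : "+r" (word) : "ir" (mask));
	return word;
}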

Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
Cc: linux-kernel@vger.kernel.org


# 6bbe043b 01-Oct-2019 Paul Burton <paulburton@kernel.org>

MIPS: bitops: Implement test_and_set_bit() in terms of _lock variant

The only difference between test_and_set_bit() & test_and_set_bit_lock()
is memory ordering barrier semantics - the former provides a full
barrier whilst the latter only provides acquire semantics.

We can therefore implement test_and_set_bit() in terms of
test_and_set_bit_lock() with the addition of the extra memory barrier.
Do this in order to avoid duplicating logic.
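
A hedged sketch of that composition (not the verbatim kernel code; it uses
the barrier of this era - a later change, listed earlier in this log,
switches these paths to smp_mb__before_atomic()):

static inline int test_and_set_bit_sketch(unsigned long nr,
					  volatile unsigned long *addr)
{
	smp_mb__before_llsc();	/* upgrade acquire ordering to a full barrier */
	return test_and_set_bit_lock(nr, addr);
}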

Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
Cc: linux-kernel@vger.kernel.org


# 27aab272 01-Oct-2019 Paul Burton <paulburton@kernel.org>

MIPS: bitops: ins start position is always an immediate

The start position for an ins instruction is always encoded as an
immediate, so allowing registers to be used by the inline asm makes no
sense. It should never happen anyway since a bit index should always be
small enough to be treated as an immediate, but remove the nonsensical
"r" for sanity.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
Cc: linux-kernel@vger.kernel.org


# 59361e99 01-Oct-2019 Paul Burton <paulburton@kernel.org>

MIPS: bitops: Use MIPS_ISA_REV, not #ifdefs

Rather than #ifdef on CONFIG_CPU_* to determine whether the ins
instruction is supported we can simply check MIPS_ISA_REV to discover
whether we're targeting MIPSr2 or higher. Do so in order to clean up the
code.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
Cc: linux-kernel@vger.kernel.org


# 3d2920cf 01-Oct-2019 Paul Burton <paulburton@kernel.org>

MIPS: bitops: Only use ins for bit 16 or higher

set_bit() can set bits 0-15 using an ori instruction, rather than
loading the value -1 into a register & then using an ins instruction.

That is, rather than the following:

li t0, -1
ll t1, 0(t2)
ins t1, t0, 4, 1
sc t1, 0(t2)

We can have the simpler:

ll t1, 0(t2)
ori t1, t1, 0x10
sc t1, 0(t2)

The or path already allows immediates to be used, so simply restricting
the ins path to bits that don't fit in immediates is sufficient to take
advantage of this.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
Cc: linux-kernel@vger.kernel.org


# fe7cd97e 01-Oct-2019 Paul Burton <paulburton@kernel.org>

MIPS: bitops: Handle !kernel_uses_llsc first

Reorder conditions in our various bitops functions that check
kernel_uses_llsc such that they handle the !kernel_uses_llsc case first.
This allows us to avoid the need to duplicate the kernel_uses_llsc check
in all the other cases. For functions that don't involve barriers common
to the various implementations, we switch to returning from within each
if block making each case easier to read in isolation.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
Cc: linux-kernel@vger.kernel.org


# 42344113 13-Jun-2019 Peter Zijlstra <peterz@infradead.org>

mips/atomic: Fix smp_mb__{before,after}_atomic()

Recent probing at the Linux Kernel Memory Model uncovered a
'surprise'. Strongly ordered architectures where the atomic RmW
primitive implies full memory ordering and
smp_mb__{before,after}_atomic() are a simple barrier() (such as MIPS
without WEAK_REORDERING_BEYOND_LLSC) fail for:

*x = 1;
atomic_inc(u);
smp_mb__after_atomic();
r0 = *y;

Because, while the atomic_inc() implies memory order, it
(surprisingly) does not provide a compiler barrier. This then allows
the compiler to re-order like so:

atomic_inc(u);
*x = 1;
smp_mb__after_atomic();
r0 = *y;

Which the CPU is then allowed to re-order (under TSO rules) like:

atomic_inc(u);
r0 = *y;
*x = 1;

And this very much was not intended. Therefore strengthen the atomic
RmW ops to include a compiler barrier.

Reported-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Paul Burton <paul.burton@mips.com>


# 1c6c1ca3 13-Jun-2019 Peter Zijlstra <peterz@infradead.org>

mips/atomic: Fix loongson_llsc_mb() wreckage

The comment describing the loongson_llsc_mb() reorder case doesn't
make any sense whatsoever. Instruction re-ordering is not an SMP
artifact, but rather a CPU-local phenomenon. Clarify the comment by
explaining that these issues cause a coherence failure.

As for the branch speculation case: if futex_atomic_cmpxchg_inatomic()
needs a barrier at the bne branch target, then surely the normal
__cmpxchg_asm() implementation does too. We cannot rely on the
barriers from cmpxchg() because cmpxchg_local() is implemented with
the same macro, and branch prediction and speculation are likewise
CPU-local.

Fixes: e02e07e3127d ("MIPS: Loongson: Introduce and use loongson_llsc_mb()")
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Huang Pei <huangpei@loongson.cn>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Paul Burton <paul.burton@mips.com>


# e9ea596c 14-May-2019 Masahiro Yamada <yamada.masahiro@socionext.com>

MIPS: mark __fls() and __ffs() as __always_inline

This prepares to move CONFIG_OPTIMIZE_INLINING from x86 to a common
place. We need to eliminate potential issues beforehand.

If it is enabled for mips, the following errors are reported:

arch/mips/mm/sc-mips.o: In function `mips_sc_prefetch_enable.part.2':
sc-mips.c:(.text+0x98): undefined reference to `mips_gcr_base'
sc-mips.c:(.text+0x9c): undefined reference to `mips_gcr_base'
sc-mips.c:(.text+0xbc): undefined reference to `mips_gcr_base'
sc-mips.c:(.text+0xc8): undefined reference to `mips_gcr_base'
sc-mips.c:(.text+0xdc): undefined reference to `mips_gcr_base'
arch/mips/mm/sc-mips.o:sc-mips.c:(.text.unlikely+0x44): more undefined references to `mips_gcr_base'

Link: http://lkml.kernel.org/r/20190423034959.13525-7-yamada.masahiro@socionext.com
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Boris Brezillon <bbrezillon@kernel.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Brian Norris <computersforpeace@gmail.com>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Marek Vasut <marek.vasut@gmail.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mathieu Malaterre <malat@debian.org>
Cc: Miquel Raynal <miquel.raynal@bootlin.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Richard Weinberger <richard@nod.at>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Stefan Agner <stefan@agner.ch>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# e02e07e3 15-Jan-2019 Huacai Chen <chenhuacai@kernel.org>

MIPS: Loongson: Introduce and use loongson_llsc_mb()

On the Loongson-2G/2H/3A/3B there is a hardware flaw: ll/sc and
lld/scd have very weak ordering. We should add sync instructions "before
each ll/lld" and "at the branch target between ll/sc" to work around it.
Otherwise, this flaw will occasionally cause deadlocks (e.g. when doing
heavy load tests with LTP).

Below is the explanation from the CPU designer:

"For Loongson 3 family, when a memory access instruction (load, store,
or prefetch)'s executing occurs between the execution of LL and SC, the
success or failure of SC is not predictable. Although programmer would
not insert memory access instructions between LL and SC, the memory
instructions before LL in program order may be dynamically executed
between the execution of LL and SC, so a memory fence (SYNC) is needed
before LL/LLD to avoid this situation.

Since Loongson-3A R2 (3A2000), we have improved our hardware design to
handle this case. But we later deduced a rare circumstance in which some
speculatively executed memory instructions, due to branch misprediction
between LL/SC, still fall into the above case, so a memory fence (SYNC)
at the branch target (if that target is not between LL/SC) is needed for
Loongson 3A1000, 3B1500, 3A2000 and 3A3000.

Our processor is continually evolving and we aim to remove all these
workaround SYNCs around LL/SC for newer processors."

Here is an example:

Both cpu1 and cpu2 simultaneously run atomic_add by 1 on the same atomic
variable. This bug causes both 'sc' instructions, run by the two cpus (in
atomic_add), to succeed at the same time ('sc' returns 1 on both), so the
variable sometimes ends up *added by 1*, which is wrong and unacceptable
(it should be added by 2).

Why disable fix-loongson3-llsc in the compiler? Because the compiler fix
would cause problems in the kernel's __ex_table section.

This patch fixes all the cases in the kernel, but:

+. The fix at the end of futex_atomic_cmpxchg_inatomic() is for the branch
target of 'bne'. There are other cases in which smp_mb__before_llsc() and
smp_llsc_mb() fix the ll and the branch target coincidentally, such as
atomic_sub_if_positive/cmpxchg/xchg, just like this one.

+. Loongson 3 does not support CONFIG_EDAC_ATOMIC_SCRUB, so there is no
need to touch edac.h.

+. local_ops and cmpxchg_local should not be affected by this bug since
only the owner can write.

+. mips_atomic_set in syscall.c is deprecated and rarely used; just let
it go.
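
A hedged sketch of the barrier's assumed shape (see also the bracketed note
below about the no-op case):

#ifdef CONFIG_CPU_LOONGSON3_WORKAROUNDS
#define loongson_llsc_mb()	__asm__ __volatile__("sync" : : : "memory")
#else
#define loongson_llsc_mb()	do { } while (0)
#endif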

Signed-off-by: Huacai Chen <chenhc@lemote.com>
Signed-off-by: Huang Pei <huangpei@loongson.cn>
[paul.burton@mips.com:
- Simplify the addition of -mno-fix-loongson3-llsc to cflags, and add
a comment describing why it's there.
- Make loongson_llsc_mb() a no-op when
CONFIG_CPU_LOONGSON3_WORKAROUNDS=n, rather than a compiler memory
barrier.
- Add a comment describing the bug & how loongson_llsc_mb() helps
in asm/barrier.h.]
Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: ambrosehua@gmail.com
Cc: Steven J . Hill <Steven.Hill@cavium.com>
Cc: linux-mips@linux-mips.org
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Cc: Li Xuefeng <lixuefeng@loongson.cn>
Cc: Xu Chenghua <xuchenghua@loongson.cn>


# 3fc2579e 03-Jan-2019 Matthew Wilcox <willy@infradead.org>

fls: change parameter to unsigned int

When testing in userspace, UBSAN pointed out that shifting into the sign
bit is undefined behaviour. It doesn't really make sense to ask for the
highest set bit of a negative value, so just turn the argument type into
an unsigned int.

Some architectures (eg ppc) already had it declared as an unsigned int,
so I don't expect too many problems.
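
A hedged sketch of why the unsigned type matters (an illustrative fallback,
not any particular architecture's fls()): the shift-based scan is only well
defined when the argument is unsigned.

static inline int fls_sketch(unsigned int x)
{
	int r = 32;

	if (!x)
		return 0;
	while (!(x & 0x80000000u)) {
		x <<= 1;	/* well defined: no shifting into a sign bit */
		r--;
	}
	return r;
}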

Link: http://lkml.kernel.org/r/20181105221117.31828-1-willy@infradead.org
Signed-off-by: Matthew Wilcox <willy@infradead.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 378ed6f0 08-Nov-2018 Paul Burton <paulburton@kernel.org>

MIPS: Avoid using .set mips0 to restore ISA

We currently have 2 commonly used methods for switching ISA within
assembly code, then restoring the original ISA.

1) Using a pair of .set push & .set pop directives. For example:

.set push
.set mips32r2
<some_insn>
.set pop

2) Using .set mips0 to restore the ISA originally specified on the
command line. For example:

.set mips32r2
<some_insn>
.set mips0

Unfortunately method 2 does not work with nanoMIPS toolchains, where the
assembler rejects the .set mips0 directive like so:

Error: cannot change ISA from nanoMIPS to mips0

In preparation for supporting nanoMIPS builds, switch all instances of
method 2 in generic non-platform-specific code to use push & pop as in
method 1 instead. The .set push & .set pop is arguably cleaner anyway,
and if nothing else it's good to consistently use one method.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/21037/
Cc: linux-mips@linux-mips.org


# 34ae9c91 14-Sep-2017 Chad Reese <kreese@caviumnetworks.com>

MIPS: Add nudges to writes for bit unlocks.

Flushing the writes lets other CPUs waiting for the lock get it sooner.

Signed-off-by: Chad Reese <kreese@caviumnetworks.com>
Signed-off-by: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/17289/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 05490626 15-Apr-2016 Ralf Baechle <ralf@linux-mips.org>

MIPS: Move definitions for 32/64-bit agnostic inline assembler to new file.

Inspired by Markos Chandras' patch. I just didn't want to pull bitops.h
into pgtable.h.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
References: https://patchwork.linux-mips.org/patch/11052/


# 6f6ed482 01-Jun-2015 Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>

MIPS: Replace smp_mb with release barrier function in unlocks.

Replace smp_mb() in arch_write_unlock() and __clear_bit_unlock() with a
smp_mb__before_llsc() call, which provides "release" barrier functionality.

It seems this was missed in commit f252ffd50c97dae87b45f1dbad24f71358ccfbd6
during the introduction of "acquire" and "release" semantics.

[ralf@linux-mips: The original patch submission was labelled a fix, but
actually it replaces a barrier with another, less restrictive type of
barrier, so it doesn't fix any ill behaviour but rather squeezes out a
tad better performance. Further improvements will be possible once
smp_release() has been merged.]

Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: benh@kernel.crashing.org
Cc: will.deacon@arm.com
Cc: linux-kernel@vger.kernel.org
Cc: markos.chandras@imgtec.com
Cc: macro@linux-mips.org
Cc: Steven.Hill@imgtec.com
Cc: alexander.h.duyck@redhat.com
Cc: davem@davemloft.net
Patchwork: https://patchwork.linux-mips.org/patch/10507/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# cb5d4aad 03-Apr-2015 Maciej W. Rozycki <macro@linux-mips.org>

MIPS: bitops.h: Avoid inline asm for constant FLS

GCC is smart enough to substitute the final result for FLS calculations
as implemented in the fallback C code we have in `__fls' and `fls'
applied to constant values. The presence of inline asm defeats the
compiler though, forcing it to emit extraneous CLZ/DCLZ calculation for
processors that support these instructions.

Use `__builtin_constant_p' then to avoid inline asm altogether for
constants.
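
A hedged sketch of the resulting dispatch (illustrative name; assumes a
MIPS32r1+ build where CLZ is available and CLZ of zero yields 32):

static inline int fls_clz_sketch(unsigned int x)
{
	if (__builtin_constant_p(x))
		/* Pure C: GCC folds this to a constant at compile time. */
		return x ? 32 - __builtin_clz(x) : 0;

	/* Runtime path: count leading zeros in hardware. */
	__asm__("clz	%0, %1" : "=r" (x) : "r" (x));
	return 32 - x;
}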

Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/9681/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 87a927ef 20-Nov-2014 Markos Chandras <markos.chandras@imgtec.com>

MIPS: asm: bitops: Update ISA constraints for MIPS R6 support

MIPS R6 changed the opcodes for LL/SC instructions so we need to set
the correct ISA level.

Cc: Matthew Fortune <Matthew.Fortune@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>


# 94bfb75a 25-Jan-2015 Markos Chandras <markos.chandras@imgtec.com>

MIPS: asm: Rename GCC_OFF12_ASM to GCC_OFF_SMALL_ASM

The GCC_OFF12_ASM macro is used for 12-bit immediate constraints, but we
will also use it for 9-bit constraints on MIPS R6, so rename it to
something more appropriate.

Cc: Maciej W. Rozycki <macro@linux-mips.org>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>


# b0984c43 15-Nov-2014 Maciej W. Rozycki <macro@codesourcery.com>

MIPS: Fix microMIPS LL/SC immediate offsets

In the microMIPS encoding some memory access instructions have their
immediate offset reduced to 12 bits only. That does not match the GCC
`R' constraint we use in some places to satisfy the requirement,
resulting in build failures like this:

{standard input}: Assembler messages:
{standard input}:720: Error: macro used $at after ".set noat"
{standard input}:720: Warning: macro instruction expanded into multiple instructions

Fix the problem by defining a macro, `GCC_OFF12_ASM', that expands to
the right constraint depending on whether microMIPS or standard MIPS
code is produced. Also apply the fix where `m' is used: in the worst
case this change does nothing, e.g. where the pointer was already in a
register such as a function argument and no further offset was
requested, and in the best case it avoids an extraneous sequence of up
to two instructions to load the high 20 bits of the address in the LL/SC
loop. This reduces the risk of lock contention, which grows with the
number of instructions in the critical section between LL and SC.

Strictly speaking we could just bulk-replace `R' with `ZC' as the latter
constraint adjusts automatically depending on the ISA selected.
However, it was only introduced with GCC 4.9 and we keep supporting older
compilers for the standard MIPS configuration, hence the slightly more
complicated approach I chose.

The choice of a zero-argument function-like rather than an object-like
macro was made so that it does not look like a function call taking the
C expression used for the constraint as an argument. This is so as not
to confuse the reader or formatting checkers like `checkpatch.pl' and
follows previous practice.
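
A hedged sketch of the selector's assumed shape:

#ifdef CONFIG_CPU_MICROMIPS
#define GCC_OFF12_ASM()	"ZC"	/* 12-bit offsets; needs GCC 4.9 or later */
#else
#define GCC_OFF12_ASM()	"R"	/* classic 16-bit offset constraint */
#endif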

Signed-off-by: Maciej W. Rozycki <macro@codesourcery.com>
Signed-off-by: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/8482/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# db873131 28-Jun-2014 Maciej W. Rozycki <macro@linux-mips.org>

MIPS: asm/bitops.h: Guard CLZ with `.set mips32'

This fixes:

{standard input}: Assembler messages:
{standard input}:145: Error: opcode not supported on this processor: vr5000 (mips4) `clz $2,$2'
{standard input}:920: Error: opcode not supported on this processor: vr5000 (mips4) `clz $7,$9'
{standard input}:1797: Error: opcode not supported on this processor: vr5000 (mips4) `clz $7,$7'
{standard input}:1851: Error: opcode not supported on this processor: vr5000 (mips4) `clz $7,$7'
{standard input}:2831: Error: opcode not supported on this processor: vr5000 (mips4) `clz $7,$7'
{standard input}:4209: Error: opcode not supported on this processor: vr5000 (mips4) `clz $7,$7'
{standard input}:4329: Error: opcode not supported on this processor: vr5000 (mips4) `clz $2,$2'
make[2]: *** [arch/mips/mm/tlbex.o] Error 1

which triggered due to a regression causing the file to be built with
`-march=r5000' rather than `-march=sb1', fixed separately. Nevertheless
the error should not happen; the other uses of CLZ are appropriately
guarded. This change copies the arrangement from one of those other
places.

Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7222/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 91bbefe6 13-Mar-2014 Peter Zijlstra <peterz@infradead.org>

arch,mips: Convert smp_mb__*()

MIPS is interesting and has hardware variants that reorder over ll/sc
as well as those that do not.

Implement the 2 new barrier functions as per the old barriers.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/n/tip-9ph49jbae3hol9v721sbc2g6@git.kernel.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: "Maciej W. Rozycki" <macro@codesourcery.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>


# a809d460 30-Mar-2014 Ralf Baechle <ralf@linux-mips.org>

MIPS: Fix gigaton of warning building with microMIPS.

With binutils 2.24 the attempt to switch from microMIPS mode to MIPS III
mode through .set mips3 results in *lots* of warnings like

{standard input}: Assembler messages:
{standard input}:397: Warning: the 64-bit MIPS architecture does not support the `smartmips' extension

during a kernel build. Fixed by using .set arch=r4000 instead.

This breaks support for building the kernel with binutils 2.13, which was
supported for 32-bit kernels only anyway, and 2.14, which was a bad
vintage for MIPS anyway.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 70342287 21-Jan-2013 Ralf Baechle <ralf@linux-mips.org>

MIPS: Whitespace cleanup.

Having received another series of whitespace patches I decided to do this
once and for all rather than dealing with this kind of patches trickling
in forever.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 92d11594 06-Sep-2012 Jim Quinlan <jim2101024@gmail.com>

MIPS: Remove irqflags.h dependency from bitops.h

The "else clause" of most functions in bitops.h invoked
raw_local_irq_{save,restore}() and in doing so had a dependency on
irqflags.h. This fix moves said code to bitops.c, removing the
dependency.

Signed-off-by: Jim Quinlan <jim2101024@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: David Daney <ddaney.cavm@gmail.com>
Cc: Kevin Cernekee <cernekee@gmail.com>
Cc: Jim Quinlan <jim2101024@gmail.com>
Patchwork: https://patchwork.linux-mips.org/patch/4320/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 9de79c50 06-Sep-2012 Jim Quinlan <jim2101024@gmail.com>

MIPS: bitops.h: Change use of 'unsigned short' to 'int'

[ralf@linux-mips.org: No functional change but it's consistent with how
we use types elsewhere in the code.]

Signed-off-by: Jim Quinlan <jim2101024@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: David Daney <ddaney.cavm@gmail.com>
Cc: Kevin Cernekee <cernekee@gmail.com>
Cc: Jim Quinlan <jim2101024@gmail.com>
Patchwork: https://patchwork.linux-mips.org/patch/4319/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 3592c3cd 18-Jul-2012 Yoichi Yuasa <yuasa@linux-mips.org>

MIPS: Fix bug.h MIPS build regression

Commit 3777808873b0c49c5cf27e44c948dfb02675d578 ("bug.h: need linux/kernel.h
for TAINT_WARN.") breaks all MIPS builds.

CC arch/mips/kernel/machine_kexec.o
In file included from include/linux/kernel.h:20:0,
from include/asm-generic/bug.h:35,
from /home/yuasa/src/linux/kernel/git/linux-2.6/arch/mips/include/asm/bug.h:41,
from /home/yuasa/src/linux/kernel/git/linux-2.6/arch/mips/include/asm/bitops.h:20,
from include/linux/bitops.h:22,
from include/linux/signal.h:38,
from include/linux/elfcore.h:5,
from include/linux/kexec.h:60,
from arch/mips/kernel/machine_kexec.c:9:
include/linux/log2.h: In function '__ilog2_u32':
include/linux/log2.h:34:2: error: implicit declaration of function 'fls' [-Werror=implicit-function-declaration]
include/linux/log2.h: In function '__ilog2_u64':
include/linux/log2.h:42:2: error: implicit declaration of function 'fls64' [-Werror=implicit-function-declaration]
include/linux/log2.h: In function '__roundup_pow_of_two':
include/linux/log2.h:63:2: error: implicit declaration of function 'fls_long' [-Werror=implicit-function-declaration]
In file included from include/linux/bitops.h:22:0,
from include/linux/signal.h:38,
from include/linux/elfcore.h:5,
from include/linux/kexec.h:60,
from arch/mips/kernel/machine_kexec.c:9:
/home/yuasa/src/linux/kernel/git/linux-2.6/arch/mips/include/asm/bitops.h: At top level:
/home/yuasa/src/linux/kernel/git/linux-2.6/arch/mips/include/asm/bitops.h:615:19: error: static declaration of 'fls' follows non-static declaration
include/linux/log2.h:34:9: note: previous implicit declaration of 'fls' was here
In file included from /home/yuasa/src/linux/kernel/git/linux-2.6/arch/mips/include/asm/bitops.h:651:0,
from include/linux/bitops.h:22,
from include/linux/signal.h:38,
from include/linux/elfcore.h:5,
from include/linux/kexec.h:60,
from arch/mips/kernel/machine_kexec.c:9:
include/asm-generic/bitops/fls64.h:18:28: error: static declaration of 'fls64' follows non-static declaration
include/linux/log2.h:42:9: note: previous implicit declaration of 'fls64' was here
In file included from include/linux/signal.h:38:0,
from include/linux/elfcore.h:5,
from include/linux/kexec.h:60,
from arch/mips/kernel/machine_kexec.c:9:
include/linux/bitops.h:160:24: error: conflicting types for 'fls_long'
include/linux/log2.h:63:16: note: previous implicit declaration of 'fls_long' was here
cc1: all warnings being treated as errors

make[2]: *** [arch/mips/kernel/machine_kexec.o] Error 1

Signed-off-by: Yoichi Yuasa <yuasa@linux-mips.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: yuasa@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Cc: Linuxppc-dev <linuxppc-dev@ozlabs.org>
Cc: Linux MIPS Mailing List <linux-mips@linux-mips.org>
Cc: Linux-sh list <linux-sh@vger.kernel.org>
Cc: Chris Zankel <chris@zankel.net>
Patchwork: https://patchwork.linux-mips.org/patch/4000/
Tested-by: John Crispin <blogic@openwrt.org>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 893a0574 18-Jul-2012 Yoichi Yuasa <yuasa@linux-mips.org>

mips: fix bug.h build regression

Commit 377780887 ("bug.h: need linux/kernel.h for TAINT_WARN.") broke
all MIPS builds:

CC arch/mips/kernel/machine_kexec.o
include/linux/log2.h: In function '__ilog2_u32':
include/linux/log2.h:34:2: error: implicit declaration of function 'fls' [-Werror=implicit-function-declaration]
include/linux/log2.h: In function '__ilog2_u64':
include/linux/log2.h:42:2: error: implicit declaration of function 'fls64' [-Werror=implicit-function-declaration]
...

Signed-off-by: Yoichi Yuasa <yuasa@linux-mips.org>
Tested-by: John Crispin <blogic@openwrt.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Daney <ddaney@caviumnetworks.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 61f2e7b0 23-Mar-2011 Akinobu Mita <akinobu.mita@gmail.com>

bitops: remove minix bitops from asm/bitops.h

Minix bit operations are used only by the minix filesystem and are useless
for other modules, because the byte order of the inode and block bitmaps
differs across architectures, as shown below:

m68k:
big-endian 16bit indexed bitmaps

h8300, microblaze, s390, sparc, m68knommu:
big-endian 32 or 64bit indexed bitmaps

m32r, mips, sh, xtensa:
big-endian 32 or 64bit indexed bitmaps for big-endian mode
little-endian bitmaps for little-endian mode

Others:
little-endian bitmaps

In order to move the minix bit operations from asm/bitops.h to
architecture-independent code in the minix filesystem, this provides two
config options.

CONFIG_MINIX_FS_BIG_ENDIAN_16BIT_INDEXED is only selected by m68k.
CONFIG_MINIX_FS_NATIVE_ENDIAN is selected by the architectures which use
native byte order bitmaps (h8300, microblaze, s390, sparc, m68knommu,
m32r, mips, sh, xtensa). The architectures which always use little-endian
bitmaps do not select these options.

Finally, we can remove minix bit operations from asm/bitops.h for all
architectures.

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Greg Ungerer <gerg@uclinux.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Roman Zippel <zippel@linux-m68k.org>
Cc: Andreas Schwab <schwab@linux-m68k.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Michal Simek <monstr@monstr.eu>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Hirokazu Takata <takata@linux-m32r.org>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Cc: Chris Zankel <chris@zankel.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# f312eff8 23-Mar-2011 Akinobu Mita <akinobu.mita@gmail.com>

bitops: remove ext2 non-atomic bitops from asm/bitops.h

As a result of the conversions, there are no users of the ext2 non-atomic
bit operations except for the ext2 filesystem itself. Now we can put them
into architecture-independent code in the ext2 filesystem and remove them
from asm/bitops.h for all architectures.

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: Jan Kara <jack@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 861b5ae7 23-Mar-2011 Akinobu Mita <akinobu.mita@gmail.com>

bitops: introduce little-endian bitops for most architectures

Introduce little-endian bit operations to the big-endian architectures
which do not have native little-endian bit operations and the
little-endian architectures. (alpha, avr32, blackfin, cris, frv, h8300,
ia64, m32r, mips, mn10300, parisc, sh, sparc, tile, x86, xtensa)

These architectures can just include generic implementation
(asm-generic/bitops/le.h).

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Mikael Starvik <starvik@axis.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Matthew Wilcox <willy@debian.org>
Cc: Grant Grundler <grundler@parisc-linux.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Kazumoto Kojima <kkojima@rr.iij4u.or.jp>
Cc: Hirokazu Takata <takata@linux-m32r.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Chris Zankel <chris@zankel.net>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com>
Acked-by: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 7837314d 29-Oct-2010 Ralf Baechle <ralf@linux-mips.org>

MIPS: Get rid of branches to .subsections.

It was a nice optimization - on paper at least. In practice it results in
branches that may exceed the maximum legal range for a branch. We can
fight that problem with -ffunction-sections but -ffunction-sections again
is incompatible with -pg used by the function tracer.

By rewriting the loop around all simple LL/SC blocks in C we reduce the
amount of inline assembler and at the same time allow GCC to often fill
the branch delay slots with something sensible, or apply whatever other
clever optimization it may have up its sleeve.

With this optimization gone we also no longer need -ffunction-sections,
so drop it.
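
A hedged sketch of the resulting structure (not the verbatim kernel code;
32-bit LL/SC and a simplified memory constraint shown for brevity): the
retry loop lives in C, each asm block is a single LL/SC attempt, and GCC
places the branch and fills its delay slot itself.

static inline void set_bit_sketch(unsigned long nr, volatile unsigned long *addr)
{
	volatile unsigned long *m = addr + (nr / BITS_PER_LONG);
	unsigned long temp;

	do {
		__asm__ __volatile__(
		"	.set	push		\n"
		"	.set	mips3		\n"
		"	ll	%0, %1		\n"	/* load-linked */
		"	or	%0, %2		\n"	/* set the requested bit */
		"	sc	%0, %1		\n"	/* store-conditional */
		"	.set	pop		\n"
		: "=&r" (temp), "+m" (*m)
		: "ir" (1UL << (nr % BITS_PER_LONG)));
	} while (unlikely(!temp));
}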

This optimization was originally introduced in 2.6.21, commit
5999eca25c1fd4b9b9aca7833b04d10fe4bc877d (linux-mips.org) resp.
f65e4fa8e0c6022ad58dc88d1b11b12589ed7f9f (kernel.org).

Original fix for the issues which caused me to pull this optimization by
Paul Gortmaker <paul.gortmaker@windriver.com>.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 1a403d1d 25-Jun-2010 David Daney <ddaney@caviumnetworks.com>

MIPS: Create and use asm/arch_hweight.h

Some MIPS ISA processor variants can do hweight operations
efficiently.

Split arch_hweight.h into a separate file, and implement the
operations with __builtin_popcount{,ll} if supported.
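
A hedged sketch of the builtin-backed helpers (assumed shape; the real
header presumably also covers the 8- and 16-bit cases):

static inline unsigned int __arch_hweight32(unsigned int w)
{
	return __builtin_popcount(w);
}

static inline unsigned long __arch_hweight64(__u64 w)
{
	return __builtin_popcountll(w);
}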

Signed-off-by: David Daney <ddaney@caviumnetworks.com>
To: linux-mips@linux-mips.org
Cc: David Daney <ddaney@caviumnetworks.com>
Patchwork: https://patchwork.linux-mips.org/patch/1430/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# f252ffd5 08-Jan-2010 David Daney <ddaney@caviumnetworks.com>

MIPS: New macro smp_mb__before_llsc.

Replace some instances of smp_llsc_mb() with a new macro
smp_mb__before_llsc(). It is used before ll/sc sequences that are
documented as needing write barrier semantics.

The default implementation of smp_mb__before_llsc() is just smp_llsc_mb(),
so there are no changes in semantics.
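
As noted above, the default is simply the existing barrier; in macro form
(hedged sketch of the shape):

#define smp_mb__before_llsc()	smp_llsc_mb()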

Also simplify definition of smp_mb(), smp_rmb(), and smp_wmb() to be just
barrier() in the non-SMP case.

Signed-off-by: David Daney <ddaney@caviumnetworks.com>
To: linux-mips@linux-mips.org
Patchwork: http://patchwork.linux-mips.org/patch/851/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# b791d119 13-Jul-2009 David Daney <ddaney@caviumnetworks.com>

MIPS: Allow kernel use of LL/SC to be separate from the presence of LL/SC.

On some CPUs, it is more efficient to disable and enable interrupts in the
kernel rather than use ll/sc for atomic operations. But if we were to set
cpu_has_llsc to false, we would break the userspace futex interface (in
asm/futex.h).

We separate the two concepts, with a new predicate kernel_uses_llsc, that
lets us disable the kernel's use of ll/sc while still allowing the futex
code to use it.
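
A hedged sketch of the predicate's assumed shape: it defaults to the CPU
feature, but a platform can override it to force the IRQ-disabling kernel
fallback without affecting the futex code's use of cpu_has_llsc.

#ifndef kernel_uses_llsc
#define kernel_uses_llsc	cpu_has_llsc
#endif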

Also there were a couple of cases in bitops.h where we were using ll/sc
unconditionally even if cpu_has_llsc were false.

Signed-off-by: David Daney <ddaney@caviumnetworks.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 47740eb8 18-Apr-2009 Ralf Baechle <ralf@linux-mips.org>

MIPS: Enable CLO / CLZ instructions via separate CPU property

This is useful for IDT RC32332, RC32334 and NEC VR5500 processors which do
not implement the full MIPS32 / MIPS64 architecture.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 4816227b 28-Oct-2008 Ralf Baechle <ralf@linux-mips.org>

MIPS: Clean up MIPSxx-optimized bitop functions

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 384740dc 16-Sep-2008 Ralf Baechle <ralf@linux-mips.org>

MIPS: Move header files to new location below arch/mips/include

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>