History log of /linux-master/arch/powerpc/platforms/pseries/hvCall.S
Revision Date Author Comments
# dfb5f8cb 29-Sep-2023 Athira Rajeev <atrajeev@linux.vnet.ibm.com>

powerpc/pseries: Remove unused r0 in the hcall tracing code

In the plpar_hcall trace code, we currently use r0 to store the
value of r4, but that value is never used afterwards. Hence remove
this unused save to r0 in plpar_hcall and plpar_hcall9.

Suggested-by: Naveen N Rao <naveen@kernel.org>
Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230929172337.7906-2-atrajeev@linux.vnet.ibm.com


# 3b678768 29-Sep-2023 Athira Rajeev <atrajeev@linux.vnet.ibm.com>

powerpc/pseries: Fix STK_PARAM access in the hcall tracing code

On a powerpc pseries system, the following behaviour is observed when
enabling tracing on hcalls:
# cd /sys/kernel/debug/tracing/
# cat events/powerpc/hcall_exit/enable
0
# echo 1 > events/powerpc/hcall_exit/enable

# ls
-bash: fork: Bad address

The above is from a power9 LPAR with the latest kernel. Past this point, a
soft lockup is observed. Initially, while attempting to use
"PERF_TYPE_TRACEPOINT" via perf_event_open, a kernel panic was observed.

perf config used:
================
memset(&pe[1],0,sizeof(struct perf_event_attr));
pe[1].type=PERF_TYPE_TRACEPOINT;
pe[1].size=96;
pe[1].config=0x26ULL; /* 38 raw_syscalls/sys_exit */
pe[1].sample_type=0; /* 0 */
pe[1].read_format=PERF_FORMAT_TOTAL_TIME_ENABLED|PERF_FORMAT_TOTAL_TIME_RUNNING|PERF_FORMAT_ID|PERF_FORMAT_GROUP|0x10ULL; /* 1f */
pe[1].inherit=1;
pe[1].precise_ip=0; /* arbitrary skid */
pe[1].wakeup_events=0;
pe[1].bp_type=HW_BREAKPOINT_EMPTY;
pe[1].config1=0x1ULL;
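
For reference, a minimal standalone sketch of how such a tracepoint event
could be opened via perf_event_open (the tracepoint id value and the
pid/cpu arguments here are illustrative, not taken from the reproducer
above):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

int main(void)
{
        struct perf_event_attr pe;
        int fd;

        memset(&pe, 0, sizeof(pe));
        pe.type = PERF_TYPE_TRACEPOINT;
        pe.size = sizeof(pe);
        pe.config = 0x26ULL;            /* tracepoint id, e.g. read from tracefs */
        pe.read_format = PERF_FORMAT_TOTAL_TIME_ENABLED |
                         PERF_FORMAT_TOTAL_TIME_RUNNING |
                         PERF_FORMAT_ID | PERF_FORMAT_GROUP;
        pe.inherit = 1;

        /* pid = 0 (this task), cpu = -1 (any cpu), no group leader, no flags */
        fd = syscall(__NR_perf_event_open, &pe, 0, -1, -1, 0);
        if (fd < 0) {
                perror("perf_event_open");
                return 1;
        }
        close(fd);
        return 0;
}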

Kernel panic logs:
==================

Kernel attempted to read user page (8) - exploit attempt? (uid: 0)
BUG: Kernel NULL pointer dereference on read at 0x00000008
Faulting instruction address: 0xc0000000004c2814
Oops: Kernel access of bad area, sig: 11 [#1]
LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
Modules linked in: nfnetlink bonding tls rfkill sunrpc dm_service_time dm_multipath pseries_rng xts vmx_crypto xfs libcrc32c sd_mod t10_pi crc64_rocksoft crc64 sg ibmvfc scsi_transport_fc ibmveth dm_mirror dm_region_hash dm_log dm_mod fuse
CPU: 0 PID: 1431 Comm: login Not tainted 6.4.0+ #1
Hardware name: IBM,8375-42A POWER9 (raw) 0x4e0202 0xf000005 of:IBM,FW950.30 (VL950_892) hv:phyp pSeries
NIP page_remove_rmap+0x44/0x320
LR wp_page_copy+0x384/0xec0
Call Trace:
0xc00000001416e400 (unreliable)
wp_page_copy+0x384/0xec0
__handle_mm_fault+0x9d4/0xfb0
handle_mm_fault+0xf0/0x350
___do_page_fault+0x48c/0xc90
hash__do_page_fault+0x30/0x70
do_hash_fault+0x1a4/0x330
data_access_common_virt+0x198/0x1f0
--- interrupt: 300 at 0x7fffae971abc

git bisect tracked this down to the commit below:
'commit baa49d81a94b ("powerpc/pseries: hvcall stack frame overhead")'

This commit changed STACK_FRAME_OVERHEAD (112) to
STACK_FRAME_MIN_SIZE (32), since 32 bytes is the minimum stack frame
size for ELFv2. With the latest kernel, when running with ELFv2,
STACK_FRAME_MIN_SIZE is used when allocating the stack frame.

During plpar_hcall_trace, the first call is made to HCALL_INST_PRECALL,
which saves the registers and allocates a new stack frame. In the
plpar_hcall_trace code, STK_PARAM is accessed in two places:
1. To save r4: std r4,STK_PARAM(R4)(r1)
2. To load r4 back: ld r12,STK_PARAM(R4)(r1)

Because HCALL_INST_PRECALL allocates a new stack frame, all stack
parameter accesses after the precall need an additional
+STACK_FRAME_MIN_SIZE offset. So the store instruction should be:
std r4,STACK_FRAME_MIN_SIZE+STK_PARAM(R4)(r1)

If the "std" is not updated with STACK_FRAME_MIN_SIZE, we end up
overwriting stack contents and causing corruption. But instead of
updating the 'std', we can simply remove it, since HCALL_INST_PRECALL
already saves r4 to the correct location.

Similarly, the load instruction should be:
ld r12,STACK_FRAME_MIN_SIZE+STK_PARAM(R4)(r1)

Fix the load instruction to access the stack parameter with
+STACK_FRAME_MIN_SIZE, and remove the store of r4, since the
precall already saves it correctly.

Cc: stable@vger.kernel.org # v6.2+
Fixes: baa49d81a94b ("powerpc/pseries: hvcall stack frame overhead")
Co-developed-by: Naveen N Rao <naveen@kernel.org>
Signed-off-by: Naveen N Rao <naveen@kernel.org>
Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230929172337.7906-1-atrajeev@linux.vnet.ibm.com


# 61d7ebe0 09-May-2023 Nicholas Piggin <npiggin@gmail.com>

powerpc/pseries: Remove unused hcall tracing instruction

When JUMP_LABEL=n, the tracepoint refcount test in the pre-call stores
the refcount value to the stack, so the same value can be used for the
post-call (presumably to avoid racing with the value concurrently
changing).

On little-endian (ELFv2) that might have just worked by luck, because
32(r1) is STK_PARAM(R3) there and so the saved value gets clobbered by
the tracing code when it's non-zero, but fortunately r3 is the hcall
number and 0 is an invalid hcall number so it should get clobbered by
another non-zero value. In any case, commit cc1adb5f32557
("powerpc/pseries: Use jump labels for hcall tracepoints") removed the
code that actually used the value stored, so now it's just dead code.

It's fragile to be storing to the stack like this, and confusing. Better
remove it.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230509091600.70994-2-npiggin@gmail.com


# 750bd41a 09-May-2023 Nicholas Piggin <npiggin@gmail.com>

powerpc/pseries: Fix hcall tracepoints with JUMP_LABEL=n

With JUMP_LABEL=n, hcall_tracepoint_refcount's address is being tested
instead of its value. This results in the tracing slowpath always being
taken unnecessarily.

Fixes: 9a10ccb29c0a2 ("powerpc/pseries: move hcall_tracepoint_refcount out of .toc")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230509091600.70994-1-npiggin@gmail.com


# 4e991e3c 07-Apr-2023 Nicholas Piggin <npiggin@gmail.com>

powerpc: add CFUNC assembly label annotation

This macro is to be used in assembly where C functions are called.
pcrel addressing mode requires branches to functions with a
localentry value of 1 to have either a trailing nop or @notoc.
This macro permits the latter without changing callers.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Add dummy definitions to fix selftests build]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230408021752.862660-5-npiggin@gmail.com


# baa49d81 27-Nov-2022 Nicholas Piggin <npiggin@gmail.com>

powerpc/pseries: hvcall stack frame overhead

This call may use the min size stack frame. The scratch space used is
in the caller's parameter area frame, not this function's frame.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20221127124942.1665522-6-npiggin@gmail.com


# 9a10ccb2 25-Sep-2022 Nicholas Piggin <npiggin@gmail.com>

powerpc/pseries: move hcall_tracepoint_refcount out of .toc

The .toc section is not really intended for arbitrary data. Writable
data in particular prevents making the TOC read-only after relocation.
Move hcall_tracepoint_refcount into the .data section.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220926053823.2668799-1-npiggin@gmail.com


# 59dc5bfc 17-Jun-2021 Nicholas Piggin <npiggin@gmail.com>

powerpc/64s: avoid reloading (H)SRR registers if they are still valid

When an interrupt is taken, the SRR registers are set to return to where
it left off. Unless they are modified in the meantime, or the return
address or MSR are modified, there is no need to reload these registers
when returning from interrupt.

Introduce per-CPU flags that track the validity of SRR and HSRR
registers. These are cleared when returning from interrupt, when
using the registers for something else (e.g., OPAL calls), when
adjusting the return address or MSR of a context, and when context
switching (which changes the return address and MSR).

This improves the performance of interrupt returns.
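
A hedged sketch of the idea in C (the structure and helper names here
are illustrative, not the kernel's actual per-CPU fields):

#include <linux/types.h>
#include <asm/reg.h>

/* Illustrative per-CPU validity flags: set while SRR0/SRR1 still hold
 * the values written at interrupt entry, cleared whenever they may have
 * been changed (other users of the registers, return address or MSR
 * updates, context switch). */
struct srr_state_example {
        bool srr_valid;
        bool hsrr_valid;
};

static void interrupt_return_example(struct srr_state_example *s,
                                     unsigned long nip, unsigned long msr)
{
        if (!s->srr_valid) {
                /* Something changed the SRRs (or the return NIP/MSR)
                 * since interrupt entry, so reload them. */
                mtspr(SPRN_SRR0, nip);
                mtspr(SPRN_SRR1, msr);
        }
        /* The flag is cleared again as part of returning from the
         * interrupt, per the description above. */
        s->srr_valid = false;
        /* ... followed by the rfid return sequence ... */
}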

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Fold in fixup patch from Nick]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210617155116.2167984-5-npiggin@gmail.com


# 2c8c89b9 08-May-2021 Nicholas Piggin <npiggin@gmail.com>

powerpc/pseries: Fix hcall tracing recursion in pv queued spinlocks

The paravirt queued spinlock slow path adds itself to the queue, then
calls pv_wait to wait for the lock to become free. This is implemented
by calling H_CONFER to donate cycles.

When hcall tracing is enabled, this H_CONFER call can lead to a spin
lock being taken in the tracing code, which results in that lock being
taken again. The second acquisition also goes to the slow path because
it queues behind itself, and so it never makes progress.

An example trace of a deadlock:

__pv_queued_spin_lock_slowpath
trace_clock_global
ring_buffer_lock_reserve
trace_event_buffer_lock_reserve
trace_event_buffer_reserve
trace_event_raw_event_hcall_exit
__trace_hcall_exit
plpar_hcall_norets_trace
__pv_queued_spin_lock_slowpath
trace_clock_global
ring_buffer_lock_reserve
trace_event_buffer_lock_reserve
trace_event_buffer_reserve
trace_event_raw_event_rcu_dyntick
rcu_irq_exit
irq_exit
__do_irq
call_do_irq
do_IRQ
hardware_interrupt_common_virt

Fix this by introducing plpar_hcall_norets_notrace(), and using it for
the SPLPAR virtual processor dispatching hcalls made by the paravirt
spinlock code.
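
A hedged sketch of the resulting call shape (the wrapper name and
arguments here are illustrative, not the exact kernel code):

#include <linux/types.h>
#include <asm/hvcall.h>

/* Donate cycles to the lock holder via the untraced hcall entry point,
 * so the hcall tracing code can never recurse onto the queued spinlock
 * it is currently waiting on. */
static inline void confer_to_holder_example(int holder_cpu, u32 yield_count)
{
        plpar_hcall_norets_notrace(H_CONFER, holder_cpu, yield_count);
}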

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210508101455.1578318-2-npiggin@gmail.com


# 2874c5fd 27-May-2019 Thomas Gleixner <tglx@linutronix.de>

treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 152

Based on 1 normalized pattern(s):

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version

extracted by the scancode license scanner the SPDX license identifier

GPL-2.0-or-later

has been chosen to replace the boilerplate/reference in 3029 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>


# e9666d10 30-Dec-2018 Masahiro Yamada <yamada.masahiro@socionext.com>

jump_label: move 'asm goto' support test to Kconfig

Currently, CONFIG_JUMP_LABEL just means "I _want_ to use jump label".

The jump label is controlled by HAVE_JUMP_LABEL, which is defined
like this:

#if defined(CC_HAVE_ASM_GOTO) && defined(CONFIG_JUMP_LABEL)
# define HAVE_JUMP_LABEL
#endif

We can improve this by testing 'asm goto' support in Kconfig, then
make JUMP_LABEL depend on CC_HAS_ASM_GOTO.

The ugly #ifdef HAVE_JUMP_LABEL will go away, and CONFIG_JUMP_LABEL will
match the real kernel capability.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>


# 2c86cd18 05-Jul-2018 Christophe Leroy <christophe.leroy@c-s.fr>

powerpc: clean inclusions of asm/feature-fixups.h

Files not using feature fixups don't need asm/feature-fixups.h;
files using feature fixups need asm/feature-fixups.h.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>


# eb039161 08-Mar-2017 Tobin C. Harding <me@tobin.cc>

powerpc/asm: Convert .llong directives to .8byte

.llong is an undocumented PPC specific directive. The generic
equivalent is .quad, but even better (because it's self describing) is
.8byte.

Convert all .llong directives to .8byte.

Signed-off-by: Tobin C. Harding <me@tobin.cc>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>


# 58995a9a 08-Apr-2015 Anton Blanchard <anton@samba.org>

powerpc, jump_label: Include linux/jump_label.h to get HAVE_JUMP_LABEL define

Commit 1bc9e47aa8e4 ("powerpc/jump_label: Use HAVE_JUMP_LABEL")
converted uses of CONFIG_JUMP_LABEL to HAVE_JUMP_LABEL in
some assembly files.

HAVE_JUMP_LABEL is defined in linux/jump_label.h, so we need to
include this or we always get the non jump label fallback code.

Signed-off-by: Anton Blanchard <anton@samba.org>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: benh@kernel.crashing.org
Cc: catalin.marinas@arm.com
Cc: davem@davemloft.net
Cc: heiko.carstens@de.ibm.com
Cc: jbaron@akamai.com
Cc: linux@arm.linux.org.uk
Cc: linuxppc-dev@lists.ozlabs.org
Cc: liuj97@gmail.com
Cc: mgorman@suse.de
Cc: mmarek@suse.cz
Cc: paulus@samba.org
Cc: ralf@linux-mips.org
Cc: rostedt@goodmis.org
Cc: schwidefsky@de.ibm.com
Cc: will.deacon@arm.com
Fixes: 1bc9e47aa8e4 ("powerpc/jump_label: Use HAVE_JUMP_LABEL")
Link: http://lkml.kernel.org/r/1428551492-21977-3-git-send-email-anton@samba.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>


# 1bc9e47a 29-Oct-2014 Anton Blanchard <anton@samba.org>

powerpc/jump_label: Use HAVE_JUMP_LABEL

Commit d4fe0965e208 ("powerpc/jump_label: use HAVE_JUMP_LABEL?")
missed a few conversions. Change the remaining uses of
CONFIG_JUMP_LABEL to HAVE_JUMP_LABEL.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>


# aaad4224 02-Jul-2014 Anton Blanchard <anton@samba.org>

powerpc/pseries: Optimise hcall tracepoints

Now that we execute the hcall tracepoint entry and exit code out of
line, we can use the same stack across both functions.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>


# cc1adb5f 02-Jul-2014 Anton Blanchard <anton@samba.org>

powerpc/pseries: Use jump labels for hcall tracepoints

hcall tracepoints add quite a few instructions to our hcall path:

plpar_hcall:
mr r2,r2
mfcr r0
stw r0,8(r1)
b 164 <---- start
ld r12,0(r2)
std r12,32(r1)
cmpdi r12,0
beq 164 <---- end
...

We have an unconditional branch that gets noped out during boot and
a load/compare/branch. We also store the tracepoint value to the
stack for the hcall_exit path to use.

By using jump labels we can simplify this to just a single nop that
gets replaced with a branch when the tracepoint is enabled:

plpar_hcall:
mr r2,r2
mfcr r0
stw r0,8(r1)
nop <----
...

If jump labels are not enabled, we fall back to the old method.
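
For reference, a hedged C-level sketch of the same pattern using the
kernel's generic static-key API (the key and function names here are
illustrative, not the ones used by the hcall code):

#include <linux/jump_label.h>

static DEFINE_STATIC_KEY_FALSE(hcall_trace_example_key);

static void trace_slowpath_example(void)
{
        /* out-of-line tracing work would go here */
}

static void hcall_path_example(void)
{
        /* Compiles to a single nop when tracing is disabled; the nop is
         * patched to a branch when the key is enabled. */
        if (static_branch_unlikely(&hcall_trace_example_key))
                trace_slowpath_example();
}

Enabling tracing would then amount to static_branch_enable() on the key,
which patches the nop in place.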

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>


# c1931e21 13-May-2014 Anton Blanchard <anton@samba.org>

powerpc/pseries: hcall functions are exported to modules, need _GLOBAL_TOC()

The hcall macros may call out to C code for tracing, so we need
to set up a valid r2. This fixes an oops found when testing
ibmvscsi as a module.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>


# b1576fec 03-Feb-2014 Anton Blanchard <anton@samba.org>

powerpc: No need to use dot symbols when branching to a function

binutils is smart enough to know that a branch to a function
descriptor is actually a branch to the function's text address.

Alan tells me that binutils has been doing this for 9 years.

Signed-off-by: Anton Blanchard <anton@samba.org>


# 44ce6a5e 25-Jun-2012 Michael Neuling <mikey@neuling.org>

powerpc: Merge STK_REG/PARAM/FRAMESIZE

Merge the defines of STACKFRAMESIZE, STK_REG, STK_PARAM from different
places.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>


# c75df6f9 25-Jun-2012 Michael Neuling <mikey@neuling.org>

powerpc: Fix usage of register macros getting ready for %r0 change

Anything that uses a constructed instruction (i.e. from ppc-opcode.h)
needs to use the new R0 macro, as %r0 is not going to work.

Also convert usages of macros where we are just determining an offset
(usually for a load/store), like:
std r14,STK_REG(r14)(r1)
Can't use STK_REG(r14) as %r14 doesn't work in the STK_REG macro since
it's just calculating an offset.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>


# ebb7f616 07-Jan-2012 Li Zhong <zhong@linux.vnet.ibm.com>

powerpc: Fix unpaired __trace_hcall_entry and __trace_hcall_exit

Unpaired calls of __trace_hcall_entry and __trace_hcall_exit could
cause an incorrect preempt count. This can happen because the global
variable hcall_tracepoint_refcount is checked separately before calling
each of them.

Instead, store the value that was used on entry in the stack frame
and retrieve it from there after the call.

Reported-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
Tested-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>


# 46f52210 18-Nov-2010 Stephen Rothwell <sfr@canb.auug.org.au>

powerpc: Remove second definition of STACK_FRAME_OVERHEAD

Since STACK_FRAME_OVERHEAD is defined in asm/ptrace.h and that
is assembler safe, we can just include that instead of going via
asm-offsets.h.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>


# f90ece28 10-May-2010 Michael Neuling <mikey@neuling.org>

powerpc/pseries: Add hcall to read 4 ptes at a time in real mode

This adds plpar_pte_read_4_raw(), which can be used to read 4 PTEs
from PHYP at a time while in real mode.

It also creates a new hcall9 which can be used in real mode. It's the
same as plpar_hcall9 but minus the tracing hcall statistics which may
require variables outside the RMO.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>


# dd17c8f7 29-Oct-2009 Rusty Russell <rusty@rustcorp.com.au>

percpu: remove per_cpu__ prefix.

Now that the return from alloc_percpu is compatible with the address
of per-cpu vars, it makes sense to hand around the address of per-cpu
variables. To make this sane, we remove the per_cpu__ prefix we created
to stop people accidentally using these vars directly.

Now that we have sparse, we can use that (next patch).

tj: * Updated to convert stuff which were missed by or added after the
original patch.

* Kill per_cpu_var() macro.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Christoph Lameter <cl@linux-foundation.org>


# 6f26353c 26-Oct-2009 Anton Blanchard <anton@samba.org>

powerpc: tracing: Give hypervisor call tracepoints access to arguments

While most users of the hcall tracepoints will only want the opcode
and return code, some will want all the arguments. To avoid the
complexity of using varargs we pass a pointer to the register save
area, which contains all the arguments.
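
A hedged sketch of the probe shape this implies (the function name is
illustrative):

#include <linux/printk.h>

/* The probe receives the opcode plus a pointer to the register save
 * area instead of a varargs list; args[] holds the hcall arguments as
 * saved by the assembly stub. */
static void hcall_entry_probe_example(unsigned long opcode,
                                      unsigned long *args)
{
        pr_debug("hcall 0x%lx arg0=0x%lx arg1=0x%lx\n",
                 opcode, args[0], args[1]);
}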

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>


# c8cd093a 26-Oct-2009 Anton Blanchard <anton@samba.org>

powerpc: tracing: Add hypervisor call tracepoints

Add hcall_entry and hcall_exit tracepoints. This replaces the inline
assembly HCALL_STATS code and converts it to use the new tracepoints.

To keep the disabled case as quick as possible, we embed a status word
in the TOC so we can get at it with a single load. By doing so we
keep the overhead at a minimum. Time taken for a null hcall:

No tracepoint code: 135.79 cycles
Disabled tracepoints: 137.95 cycles

For reference, before this patch enabling HCALL_STATS resulted in a null
hcall of 201.44 cycles!

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>


# 4f5fa2fb 20-Mar-2007 Anton Blanchard <anton@samba.org>

[POWERPC] Bypass hcall stats until cpu features have run

I noticed that we execute hcalls before cpu feature code has run (eg
for setting up the bolted kernel region). This means that we may be
executing code that is not appropriate for the processor we have.
Create an unconditional branch that we nop out all the time to fix this.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>


# b4aea36b 20-Mar-2007 Mohan Kumar M <mohan@in.ibm.com>

[POWERPC] Avoid hypervisor statistics calculation in real mode

kexec invokes plpar_hcall hypervisor call in real mode. plpar_hcall
refers to per cpu variables for accounting hypervisor statistics.
These variables may not be in the RMO region, so accesses to them
in real mode may result in a data storage exception.

This fixes this problem by using a new plpar_hcall_raw function which
does not update the hypervisor call statistics. Thanks to Anton for
suggesting this idea.

Signed-off-by: Mohan Kumar M <mohan@in.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>


# dc40127c 08-Jan-2007 Anton Blanchard <anton@samba.org>

[POWERPC] Fix bugs in the hypervisor call stats code

There were a few issues with the HCALL_STATS code:

- PURR cpu feature checks were backwards
- We iterated one entry off the end of the hcall_stats array
- Remove dead update_hcall_stats() function prototype

I noticed one thing while debugging, and that is we call H_ENTER (to set
up the MMU hashtable in early init) before we have done the cpu fixups.
This means we will execute the PURR SPR reads even on a CPU that isnt
capable of it. I wonder if we can move the CPU feature fixups earlier.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>


# ab87e8dc 08-Jan-2007 Anton Blanchard <anton@samba.org>

[POWERPC] Fix corruption in hcall9

It looks to me like we are corrupting r12 in the hcall9 function.
Although we have r0 free, we can't use offsets against it, so save
away r12 in there instead. r12 holds the ninth return value from
the hypervisor call, so without this fix, the caller will see the
wrong value for the ninth element in the array that gets the return
values.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>


# 57852a85 06-Sep-2006 Mike Kravetz <kravetz@us.ibm.com>

[POWERPC] powerpc: Instrument Hypervisor Calls

Add instrumentation for hypervisor calls on pseries. Call statistics
include number of calls, wall time and cpu cycles (if available) and
are made available via debugfs. Instrumentation code is behind the
HCALL_STATS config option and has no impact if not enabled.

Signed-off-by: Mike Kravetz <kravetz@us.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>


# b9377ffc 18-Jul-2006 Anton Blanchard <anton@samba.org>

[POWERPC] clean up pseries hcall interfaces

Our pseries hcall interfaces are out of control:

plpar_hcall_norets
plpar_hcall
plpar_hcall_8arg_2ret
plpar_hcall_4out
plpar_hcall_7arg_7ret
plpar_hcall_9arg_9ret

Create 3 interfaces to cover all cases:

plpar_hcall_norets: 7 arguments no returns
plpar_hcall: 6 arguments 4 returns
plpar_hcall9: 9 arguments 9 returns

There are only 2 cases in the kernel that need plpar_hcall9, hopefully
we can keep it that way.

Pass in a buffer to stash return parameters so we avoid the &dummy1,
&dummy2 madness.
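
A minimal usage sketch of the consolidated plpar_hcall() interface (the
opcode and the surrounding function are illustrative):

#include <asm/hvcall.h>

/* Illustrative caller: ask the hypervisor for buffered console input.
 * retbuf[] receives the four return words defined by the interface. */
static long read_term_char_example(unsigned long vtermno,
                                   unsigned long *len,
                                   unsigned long *chars)
{
        unsigned long retbuf[PLPAR_HCALL_BUFSIZE];
        long rc;

        rc = plpar_hcall(H_GET_TERM_CHAR, retbuf, vtermno);
        if (rc == H_SUCCESS) {
                *len = retbuf[0];       /* number of characters returned */
                *chars = retbuf[1];     /* first return word of character data */
        }
        return rc;
}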

Signed-off-by: Anton Blanchard <anton@samba.org>
--
Signed-off-by: Paul Mackerras <paulus@samba.org>


# b13a96cf 30-Mar-2006 Heiko J Schick <schihei@de.ibm.com>

[PATCH] powerpc: Extends HCALL interface for InfiniBand usage

This extends the HCALL interface for InfiniBand usage. I've
made the patch against the linux-2.6 git tree and Segher's patch:
[PATCH] Change H_StudlyCaps to H_SHOUTING_CAPS

We moved this into the common powerpc code based on comments we
got after posting the first eHCA InfiniBand device driver patch.

Signed-off-by: Heiko j Schick <schickhj@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>


# 2ef9481e 23-Jan-2006 Jon Mason <jdmason@us.ibm.com>

[PATCH] powerpc: trivial: modify comments to refer to new location of files

This patch removes all self references and fixes references to files
in the now defunct arch/ppc64 tree. I think this accomplishes
everything wanted, though there might be a few references I missed.

Signed-off-by: Jon Mason <jdmason@us.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>


# 69a80d3f 10-Oct-2005 Paul Mackerras <paulus@samba.org>

powerpc: move pSeries files to arch/powerpc/platforms/pseries

Signed-off-by: Paul Mackerras <paulus@samba.org>