History log of /linux-master/arch/arm64/include/asm/spectre.h
# 04bc786d 23-Jan-2023 Kent Overstreet <kent.overstreet@linux.dev>

arm64: Fix circular header dependency

Replace linux/percpu.h include with asm/percpu.h to avoid circular
dependency.
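
The fix amounts to a one-line include swap in asm/spectre.h:

  -#include <linux/percpu.h>
  +#include <asm/percpu.h>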

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>


# af645439 16-Oct-2023 Mark Rutland <mark.rutland@arm.com>

arm64: Avoid cpus_have_const_cap() for ARM64_SPECTRE_V2

In arm64_apply_bp_hardening() we use cpus_have_const_cap() to check for
ARM64_SPECTRE_V2, but this is not necessary and alternative_has_cap_*()
would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.

The cpus_have_const_cap() check in arm64_apply_bp_hardening() is
intended to avoid the overhead of looking up and invoking a per-cpu
function pointer when no branch predictor hardening is required. The
arm64_apply_bp_hardening() function itself is called in two distinct
flows:

1) When handling certain exceptions taken from EL0, where the PC could
be a TTBR1 address and hence might have trained a branch predictor.

As cpucaps are detected and alternatives are patched long before it
is possible to execute userspace, it is not necessary to use
cpus_have_const_cap() for these cases, and cpus_have_final_cap() or
alternative_has_cap() would be preferable.

2) When switching between tasks in check_and_switch_context().

This can be called before cpucaps are detected and alternatives are
patched, but this is long before the kernel mounts filesystems or
accepts any input. At this stage the kernel hasn't loaded any secrets
and there is no potential for hostile branch predictor training. Once
cpucaps have been finalized and alternatives have been patched,
switching tasks will invalidate any prior predictions. Hence it is
not necessary to use cpus_have_const_cap() for this case.

This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls
at runtime.
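
A minimal sketch of the resulting check (the helper names are those quoted
above; the body approximates the header rather than quoting it):

  static __always_inline void arm64_apply_bp_hardening(void)
  {
          struct bp_hardening_data *d;

          /*
           * alternative_has_cap_unlikely() avoids testing the system_cpucaps
           * bitmap; before patching it behaves as if the cap is not set.
           */
          if (!alternative_has_cap_unlikely(ARM64_SPECTRE_V2))
                  return;

          d = this_cpu_ptr(&bp_hardening_data);
          if (d->fn)
                  d->fn();
  }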

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>


# 010089e9 16-May-2023 Arnd Bergmann <arnd@arndb.de>

arm64: spectre: provide prototypes for internal functions

The helpers in proton-pack.c are called from assembler and have
no prototype in a header, which causes a W=1 warning:

arch/arm64/kernel/proton-pack.c:568:13: error: no previous prototype for 'spectre_v4_patch_fw_mitigation_enable' [-Werror=missing-prototypes]
arch/arm64/kernel/proton-pack.c:588:13: error: no previous prototype for 'smccc_patch_fw_mitigation_conduit' [-Werror=missing-prototypes]
arch/arm64/kernel/proton-pack.c:1064:14: error: no previous prototype for 'spectre_bhb_patch_loop_mitigation_enable' [-Werror=missing-prototypes]
arch/arm64/kernel/proton-pack.c:1075:14: error: no previous prototype for 'spectre_bhb_patch_fw_mitigation_enabled' [-Werror=missing-prototypes]

Add these to asm/spectre.h, which contains related declarations
already.
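
A sketch of the added declarations (the function names are taken from the
warnings above; the parameter list shown is the usual arm64 alternative
callback signature and is assumed here, not quoted from the patch):

  void spectre_v4_patch_fw_mitigation_enable(struct alt_instr *alt, __le32 *origptr,
                                             __le32 *updptr, int nr_inst);
  void smccc_patch_fw_mitigation_conduit(struct alt_instr *alt, __le32 *origptr,
                                         __le32 *updptr, int nr_inst);
  void spectre_bhb_patch_loop_mitigation_enable(struct alt_instr *alt, __le32 *origptr,
                                                __le32 *updptr, int nr_inst);
  void spectre_bhb_patch_fw_mitigation_enabled(struct alt_instr *alt, __le32 *origptr,
                                               __le32 *updptr, int nr_inst);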

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20230516160642.523862-6-arnd@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>


# bff8f413 19-Oct-2022 Mark Rutland <mark.rutland@arm.com>

arm64: factor out EL1 SSBS emulation hook

Currently call_undef_hook() is used to handle UNDEFINED exceptions from
EL0 and EL1. As support for deprecated instructions may be enabled
independently, the handlers for individual instructions are organised as
a linked list of struct undef_hook which can be manipulated dynamically.
As this can be manipulated dynamically, the list is protected with a
raw_spinlock which must be acquired when handling UNDEFINED exceptions
or when manipulating the list of handlers.

This locking is unfortunate as it serialises handling of UNDEFINED
exceptions, and requires RCU to be enabled for lockdep, requiring the
use of RCU_NONIDLE() in the resume path of cpu_suspend() since commit:

a2c42bbabbe260b7 ("arm64: spectre: Prevent lockdep splat on v4 mitigation enable path")

The list of UNDEFINED handlers largely consists of handlers for
exceptions taken from EL0, and the only handler for exceptions taken
from EL1 handles `MSR SSBS, #imm` on CPUs which feature PSTATE.SSBS but
lack the corresponding MSR (Immediate) instruction. Other than this we
never expect to take an UNDEFINED exception from EL1 in normal
operation.

This patch reworks the EL1 UNDEF handling to invoke the EL1 SSBS handler
directly, relegating call_undef_hook() to only handle EL0 UNDEFs. This
removes redundant work to iterate the list for EL1 UNDEFs, and removes
the need for locking, permitting EL1 UNDEFs to be handled in parallel
without contention.

The RCU_NONIDLE() call in cpu_suspend() will be removed in a subsequent
patch, as there are other potential issues with the use of
instrumentable code and RCU in the CPU suspend code.

I've tested this by forcing the detection of SSBS on a CPU that doesn't
have it, and verifying that the try_emulate_el1_ssbs() callback is
invoked.
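
A sketch of the reworked EL1 UNDEF path (try_emulate_el1_ssbs() is the hook
named above; the handler name and body reflect the resulting code in today's
tree and are an approximation, not a verbatim quote):

  void do_el1_undef(struct pt_regs *regs, unsigned long esr)
  {
          u32 insn;

          if (aarch64_insn_read((void *)regs->pc, &insn))
                  goto out_err;

          /* Call the sole EL1 hook directly: no list walk, no locking. */
          if (try_emulate_el1_ssbs(regs, insn))
                  return;

  out_err:
          die("Oops - Undefined instruction", regs, esr);
  }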

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20221019144123.612388-4-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>


# 614c0b9f 24-Feb-2022 Mark Rutland <mark.rutland@arm.com>

arm64: prevent instrumentation of bp hardening callbacks

We may call arm64_apply_bp_hardening() early during entry (e.g. in
el0_ia()) before it is safe to run instrumented code. Unfortunately this
may result in running instrumented code in two cases:

* The hardening callbacks called by arm64_apply_bp_hardening() are not
marked as `noinstr`, and have been observed to be instrumented when
compiled with either GCC or LLVM.

* Since arm64_apply_bp_hardening() itself is only marked as `inline`
rather than `__always_inline`, it is possible that the compiler
decides to place it out-of-line, whereupon it may be instrumented.

For example, with defconfig built with clang 13.0.0,
call_hvc_arch_workaround_1() is compiled as:

| <call_hvc_arch_workaround_1>:
| d503233f paciasp
| f81f0ffe str x30, [sp, #-16]!
| 320183e0 mov w0, #0x80008000
| d503201f nop
| d4000002 hvc #0x0
| f84107fe ldr x30, [sp], #16
| d50323bf autiasp
| d65f03c0 ret

... but when CONFIG_FTRACE=y and CONFIG_KCOV=y this is compiled as:

| <call_hvc_arch_workaround_1>:
| d503245f bti c
| d503201f nop
| d503201f nop
| d503233f paciasp
| a9bf7bfd stp x29, x30, [sp, #-16]!
| 910003fd mov x29, sp
| 94000000 bl 0 <__sanitizer_cov_trace_pc>
| 320183e0 mov w0, #0x80008000
| d503201f nop
| d4000002 hvc #0x0
| a8c17bfd ldp x29, x30, [sp], #16
| d50323bf autiasp
| d65f03c0 ret

... with a patchable function entry registered with ftrace, and a direct
call to __sanitizer_cov_trace_pc(). Neither of these are safe early
during entry sequences.

This patch avoids the unsafe instrumentation by marking
arm64_apply_bp_hardening() as `__always_inline` and by marking the
hardening functions as `noinstr`. This avoids the potential for
instrumentation, and causes clang to consistently generate the function
as with the defconfig sample.
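
A minimal sketch of the two annotations applied (the bodies approximate the
code at the time of the patch and are not verbatim quotes):

  /* Hardening callbacks must never be instrumented. */
  static noinstr void call_hvc_arch_workaround_1(void)
  {
          arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
  }

  /* Force inlining so no out-of-line, instrumentable copy can be emitted. */
  static __always_inline void arm64_apply_bp_hardening(void)
  {
          struct bp_hardening_data *d;

          if (!cpus_have_const_cap(ARM64_SPECTRE_V2))
                  return;

          d = this_cpu_ptr(&bp_hardening_data);
          if (d->fn)
                  d->fn();
  }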

Note: in the defconfig compilation, when CONFIG_SVE=y, x30 is spilled to
the stack without being placed in a frame record, which will result in a
missing entry if call_hvc_arch_workaround_1() is backtraced. Similar is
true of qcom_link_stack_sanitisation(), where inline asm spills the LR
to a GPR prior to corrupting it. This is not a significant issue
presently as we will only backtrace here if an exception is taken, and
in such cases we may omit entries for other reasons today.

The relevant hardening functions were introduced in commits:

ec82b567a74fbdff ("arm64: Implement branch predictor hardening for Falkor")
b092201e00206141 ("arm64: Add ARM_SMCCC_ARCH_WORKAROUND_1 BP hardening support")

... and these were subsequently moved in commit:

d4647f0a2ad71110 ("arm64: Rewrite Spectre-v2 mitigation code")

The arm64_apply_bp_hardening() function was introduced in commit:

0f15adbb2861ce6f ("arm64: Add skeleton to harden the branch predictor against aliasing attacks")

... and was subsequently moved and reworked in commit:

6279017e807708a0 ("KVM: arm64: Move BP hardening helpers into spectre.h")

Fixes: ec82b567a74fbdff ("arm64: Implement branch predictor hardening for Falkor")
Fixes: b092201e00206141 ("arm64: Add ARM_SMCCC_ARCH_WORKAROUND_1 BP hardening support")
Fixes: d4647f0a2ad71110 ("arm64: Rewrite Spectre-v2 mitigation code")
Fixes: 0f15adbb2861ce6f ("arm64: Add skeleton to harden the branch predictor against aliasing attacks")
Fixes: 6279017e807708a0 ("KVM: arm64: Move BP hardening helpers into spectre.h")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220224181028.512873-1-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>


# 558c303c 10-Nov-2021 James Morse <james.morse@arm.com>

arm64: Mitigate spectre style branch history side channels

Speculation attacks against some high-performance processors can
make use of branch history to influence future speculation.
When taking an exception from user-space, a sequence of branches
or a firmware call overwrites or invalidates the branch history.

The sequence of branches is added to the vectors, and should appear
before the first indirect branch. For systems using KPTI the sequence
is added to the kpti trampoline where it has a free register as the exit
from the trampoline is via a 'ret'. For systems not using KPTI, the same
register tricks are used to free up a register in the vectors.

For the firmware call, arch-workaround-3 clobbers 4 registers, so
there is no choice but to save them to the EL1 stack. This only happens
for entry from EL0, so if we take an exception due to the stack access,
it will not become re-entrant.

For KVM, the existing branch-predictor-hardening vectors are used.
When a spectre version of these vectors is in use, the firmware call
is sufficient to mitigate against Spectre-BHB. For the non-spectre
versions, the sequence of branches is added to the indirect vector.
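
On the C side, asm/spectre.h gains detection and enablement entry points
following the usual cpu_errata callback shapes; a likely sketch (the exact
names and signatures are an assumption here, not quoted from the patch):

  /* Assumed declarations for the Spectre-BHB detect/enable hooks. */
  bool is_spectre_bhb_affected(const struct arm64_cpu_capabilities *entry, int scope);
  void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *__unused);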

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>


# dee435be 08-Feb-2022 James Morse <james.morse@arm.com>

arm64: proton-pack: Report Spectre-BHB vulnerabilities as part of Spectre-v2

Speculation attacks against some high-performance processors can
make use of branch history to influence future speculation as part of
a spectre-v2 attack. This is not mitigated by CSV2, meaning CPUs that
previously reported 'Not affected' should now be considered only partially
mitigated by CSV2.

Update the value in /sys/devices/system/cpu/vulnerabilities/spectre_v2
to also show the state of the BHB mitigation.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>


# 7f43c201 26-Nov-2020 Marc Zyngier <maz@kernel.org>

arm64: Make the Meltdown mitigation state available

Our Meltdown mitigation state isn't exposed outside of cpufeature.c,
contrary to the rest of the Spectre mitigation state. As we are going
to use it in KVM, expose an arm64_get_meltdown_state() helper which
returns the same possible values as arm64_get_spectre_v?_state().
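
A sketch of the resulting interface (an approximation of the header; the enum
values are the ones shared with the existing Spectre state getters):

  enum mitigation_state {
          SPECTRE_UNAFFECTED,
          SPECTRE_MITIGATED,
          SPECTRE_VULNERABLE,
  };

  enum mitigation_state arm64_get_spectre_v2_state(void);
  enum mitigation_state arm64_get_spectre_v4_state(void);

  /* New: report the Meltdown state with the same possible values. */
  enum mitigation_state arm64_get_meltdown_state(void);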

Signed-off-by: Marc Zyngier <maz@kernel.org>


# 4f6a36fe 13-Nov-2020 Will Deacon <will@kernel.org>

KVM: arm64: Remove redundant hyp vectors entry

The hyp vectors entry corresponding to HYP_VECTOR_DIRECT (i.e. when
neither Spectre-v2 nor Spectre-v3a are present) is unused, as we can
simply dispatch straight to __kvm_hyp_vector in this case.

Remove the redundant vector, and massage the logic for resolving a slot
to a vectors entry.

Reported-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201113113847.21619-11-will@kernel.org


# cd1f56b9 13-Nov-2020 Will Deacon <will@kernel.org>

arm64: spectre: Consolidate spectre-v3a detection

The spectre-v3a mitigation is split between cpu_errata.c and spectre.c,
with the former handling detection of the problem and the latter handling
enabling of the workaround.

Move the detection logic alongside the enabling logic, like we do for the
other spectre mitigations.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Quentin Perret <qperret@google.com>
Link: https://lore.kernel.org/r/20201113113847.21619-10-will@kernel.org


# c4792b6d 13-Nov-2020 Will Deacon <will@kernel.org>

arm64: spectre: Rename ARM64_HARDEN_EL2_VECTORS to ARM64_SPECTRE_V3A

Since ARM64_HARDEN_EL2_VECTORS is really a mitigation for Spectre-v3a,
rename it accordingly for consistency with the v2 and v4 mitigations.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Quentin Perret <qperret@google.com>
Link: https://lore.kernel.org/r/20201113113847.21619-9-will@kernel.org


# b881cdce 13-Nov-2020 Will Deacon <will@kernel.org>

KVM: arm64: Allocate hyp vectors statically

The EL2 vectors installed when a guest is running point at one of the
following configurations for a given CPU:

- Straight at __kvm_hyp_vector
- A trampoline containing an SMC sequence to mitigate Spectre-v2 and
then a direct branch to __kvm_hyp_vector
- A dynamically-allocated trampoline which has an indirect branch to
__kvm_hyp_vector
- A dynamically-allocated trampoline containing an SMC sequence to
mitigate Spectre-v2 and then an indirect branch to __kvm_hyp_vector

The indirect branches mean that VA randomization at EL2 isn't trivially
bypassable using Spectre-v3a (where the vector base is readable by the
guest).

Rather than populate these vectors dynamically, configure everything
statically and use an enumerated type to identify the vector "slot"
corresponding to one of the configurations above. This not only simplifies
the code, but also makes it much easier to implement at EL2 later on.
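
A sketch of the slot enumeration, with one slot per configuration listed
above (the identifiers approximate the patch rather than quoting it):

  enum arm64_hyp_spectre_vector {
          /* Dispatch straight at __kvm_hyp_vector. */
          HYP_VECTOR_DIRECT,
          /* Trampoline with an SMC sequence, then a direct branch. */
          HYP_VECTOR_SPECTRE_DIRECT,
          /* Trampoline with an indirect branch (hides the vector base). */
          HYP_VECTOR_INDIRECT,
          /* Trampoline with an SMC sequence, then an indirect branch. */
          HYP_VECTOR_SPECTRE_INDIRECT,
  };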

Signed-off-by: Will Deacon <will@kernel.org>
[maz: fixed double call to kvm_init_vector_slots() on nVHE]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Quentin Perret <qperret@google.com>
Link: https://lore.kernel.org/r/20201113113847.21619-8-will@kernel.org


# 6279017e 13-Nov-2020 Will Deacon <will@kernel.org>

KVM: arm64: Move BP hardening helpers into spectre.h

The BP hardening helpers are an integral part of the Spectre-v2
mitigation, so move them into asm/spectre.h and inline the
arm64_get_bp_hardening_data() function at the same time.
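
A sketch of the inlined helper after the move (an approximation of
asm/spectre.h, not a verbatim quote):

  DECLARE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);

  static inline struct bp_hardening_data *arm64_get_bp_hardening_data(void)
  {
          return this_cpu_ptr(&bp_hardening_data);
  }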

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Quentin Perret <qperret@google.com>
Link: https://lore.kernel.org/r/20201113113847.21619-6-will@kernel.org


# c2876207 18-Sep-2020 Will Deacon <will@kernel.org>

arm64: Rewrite Spectre-v4 mitigation code

Rewrite the Spectre-v4 mitigation handling code to follow the same
approach as that taken by Spectre-v2.

For now, report to KVM that the system is vulnerable (by forcing
'ssbd_state' to ARM64_SSBD_UNKNOWN), as this will be cleared up in
subsequent steps.

Signed-off-by: Will Deacon <will@kernel.org>


# d4647f0a 15-Sep-2020 Will Deacon <will@kernel.org>

arm64: Rewrite Spectre-v2 mitigation code

The Spectre-v2 mitigation code is pretty unwieldy and hard to maintain.
This is largely due to it being written hastily, without much clue as to
how things would pan out, and also because it ends up mixing policy and
state in such a way that it is very difficult to figure out what's going
on.

Rewrite the Spectre-v2 mitigation so that it clearly separates state from
policy and follows a more structured approach to handling the mitigation.
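
A sketch of the state/policy split in terms of the declarations that end up
in asm/spectre.h (names approximate the rewritten code, not a verbatim quote):

  /* State: what the system currently is. */
  enum mitigation_state arm64_get_spectre_v2_state(void);

  /* Policy and mechanics: detection and mitigation enablement. */
  bool has_spectre_v2(const struct arm64_cpu_capabilities *entry, int scope);
  void spectre_v2_enable_mitigation(const struct arm64_cpu_capabilities *__unused);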

Signed-off-by: Will Deacon <will@kernel.org>