History log of /linux-master/arch/arm64/include/asm/asm-extable.h
# bacac637 21-Jun-2022 Tong Tiangen <tongtiangen@huawei.com>

arm64: extable: cleanup redundant extable type EX_TYPE_FIXUP

Currently, the extable type EX_TYPE_FIXUP has no users, so we can
safely remove it.

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20220621072638.1273594-7-tongtiangen@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>


# e4208e80 21-Jun-2022 Tong Tiangen <tongtiangen@huawei.com>

arm64: extable: move _cond_extable to _cond_uaccess_extable

Currently, we use _cond_extable for the cache maintenance uaccess helper
caches_clean_inval_user_pou(), so this should be moved over to
EX_TYPE_UACCESS_ERR_ZERO, with _cond_extable renamed to
_cond_uaccess_extable for clarity.
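
A sketch of the renamed macro, which only emits a uaccess-typed entry
when a fixup label is actually supplied:

  /*
   * Create a uaccess extable entry for `insn` if `fixup` is provided,
   * otherwise do nothing.
   */
  .macro	_cond_uaccess_extable, insn, fixup
  .ifnc		\fixup,
  _asm_extable_uaccess	\insn, \fixup
  .endif
  .endm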

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20220621072638.1273594-6-tongtiangen@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>


# c4ed0d73 21-Jun-2022 Tong Tiangen <tongtiangen@huawei.com>

arm64: extable: make uaccess helper use extable type EX_TYPE_UACCESS_ERR_ZERO

Currently, the extable type used by __arch_copy_from/to_user() is
EX_TYPE_FIXUP. In fact, it is clearer to use the meaningful
EX_TYPE_UACCESS_* types.
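
As a sketch of the kind of change involved, the USER() annotation
wrapper in <asm/asm-uaccess.h> switches from the generic fixup macro to
the uaccess-typed one:

  /* before: generic EX_TYPE_FIXUP entry */
  #define USER(l, x...)			\
  9999:	x;				\
  	_asm_extable	9999b, l

  /* after: EX_TYPE_UACCESS_ERR_ZERO entry (err/zero both WZR) */
  #define USER(l, x...)			\
  9999:	x;				\
  	_asm_extable_uaccess	9999b, l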

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Link: https://lore.kernel.org/r/20220621072638.1273594-5-tongtiangen@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>


# 59e8a1ce 21-Jun-2022 Mark Rutland <mark.rutland@arm.com>

arm64: asm-extable: add asm uaccess helpers

In subsequent patches we want to explicitly annotate uaccess fixups in
assembly files.

We have existing helpers for this for inline assembly, but due to
differing stringification requirements it's not possible to have a
single definition that we can use for both inline asm and plain asm
files. So as with other cases (e.g. gpr-num.h), we must provide
separate helpers for plain asm and inline asm.

So that we can do so, this patch adds helpers to define
EX_TYPE_UACCESS_ERR_ZERO fixups in plain assembly. These correspond 1-1
with the inline assembly versions except for the absence of
stringification. No plain assembly helpers are added for
EX_TYPE_LOAD_UNALIGNED_ZEROPAD fixups as these only exist for a single C
function.

For copy_{to,from}_user() we'll need uaccess fixups that don't write an
error or zero value to any GPR, so I've added
_ASM_EXTABLE_UACCESS(insn, fixup), where both the error and zero
registers are WZR.
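
As a sketch, the new plain-asm helpers mirror the inline-asm ones,
minus the stringification:

  #define _ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, err, zero)	\
  	__ASM_EXTABLE_RAW(insn, fixup,				\
  			  EX_TYPE_UACCESS_ERR_ZERO,		\
  			  (					\
  			    EX_DATA_REG(ERR, err) |		\
  			    EX_DATA_REG(ZERO, zero)		\
  			  ))

  /* err and zero both WZR: the fixup doesn't modify any GPR */
  #define _ASM_EXTABLE_UACCESS(insn, fixup)			\
  	_ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, wzr, wzr)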

For clarity, the existing `_asm_extable` assembly macro is now defined
in terms of the _ASM_EXTABLE() CPP macro, making the CPP macros
canonical in all cases.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Link: https://lore.kernel.org/r/20220621072638.1273594-4-tongtiangen@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>


# 5519d7de 21-Jun-2022 Mark Rutland <mark.rutland@arm.com>

arm64: asm-extable: move data fields

In subsequent patches we'll need to fill in extable data fields in
regular assembly files. In preparation for this, move the definitions of
the extable data fields earlier in asm-extable.h so that they are
defined for both assembly and C files.
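
Concretely, these are field definitions along the lines of:

  /* usable from both C and assembly once moved */
  #define EX_DATA_REG_ERR_SHIFT	0
  #define EX_DATA_REG_ERR	GENMASK(4, 0)
  #define EX_DATA_REG_ZERO_SHIFT	5
  #define EX_DATA_REG_ZERO	GENMASK(9, 5)

(the DATA/ADDR pair used by load_unaligned_zeropad() is defined
similarly).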

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Link: https://lore.kernel.org/r/20220621072638.1273594-3-tongtiangen@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>


# 4953fc3d 21-Jun-2022 Tong Tiangen <tongtiangen@huawei.com>

arm64: extable: add new extable type EX_TYPE_KACCESS_ERR_ZERO support

Currently, the extable type EX_TYPE_UACCESS_ERR_ZERO is used by
__get/put_kernel_nofault(), but those helpers are not uaccess helpers,
so we add a new extable type EX_TYPE_KACCESS_ERR_ZERO which can be used
by __get/put_kernel_nofault().

This is also in preparation for distinguishing the two types in the
machine check safe process.
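
As a sketch, the get/put helpers can then select the uaccess or kaccess
variant by token-pasting a U/K prefix, along the lines of
__get_mem_asm() in <asm/uaccess.h>:

  #define __get_mem_asm(load, reg, x, addr, err, type)		\
  	asm volatile(						\
  	"1:	" load "	" reg "1, [%2]\n"		\
  	"2:\n"							\
  	_ASM_EXTABLE_##type##ACCESS_ERR_ZERO(1b, 2b, %w0, %w1)	\
  	: "+r" (err), "=r" (x)					\
  	: "r" (addr))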

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20220621072638.1273594-2-tongtiangen@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>


# 753b3236 19-Oct-2021 Mark Rutland <mark.rutland@arm.com>

arm64: extable: add load_unaligned_zeropad() handler

For inline assembly, we place exception fixups out-of-line in the
`.fixup` section such that these are out of the way of the fast path.
This has a few drawbacks:

* Since the fixup code is anonymous, backtraces will symbolize fixups as
offsets from the nearest prior symbol, currently
`__entry_tramp_text_end`. This is confusing, and painful to debug
without access to the relevant vmlinux.

* Since the exception handler adjusts the PC to execute the fixup, and
the fixup uses a direct branch back into the function it fixes,
backtraces of fixups miss the original function. This is confusing,
and violates requirements for RELIABLE_STACKTRACE (and therefore
LIVEPATCH).

* Inline assembly and associated fixups are generated from templates,
and we have many copies of logically identical fixups which only
differ in which specific registers are written to and which address is
branched to at the end of the fixup. This is potentially wasteful of
I-cache resources, and makes it hard to add additional logic to fixups
without significant bloat.

* In the case of load_unaligned_zeropad(), the logic in the fixup
requires a temporary register that we must allocate even in the
fast-path where it will not be used.

This patch addresses all four concerns for load_unaligned_zeropad() fixups
by adding a dedicated exception handler which performs the fixup logic
in exception context and then returns to just after the faulting
instruction. For the moment, the fixup logic is identical to the old
assembly fixup logic, but in future we could enhance this by taking the
ESR and FAR into account to constrain the faults we try to fix up, or to
specialize fixups for MTE tag check faults.
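
Roughly, the handler recovers the data and address registers from the
entry's data field, redoes the load from an aligned address in
exception context, shifts out the bytes that faulted, and resumes at
the fixup address:

  static bool
  ex_handler_load_unaligned_zeropad(const struct exception_table_entry *ex,
  				    struct pt_regs *regs)
  {
  	int reg_data = FIELD_GET(EX_DATA_REG_DATA, ex->data);
  	int reg_addr = FIELD_GET(EX_DATA_REG_ADDR, ex->data);
  	unsigned long data, addr, offset;

  	addr = pt_regs_read_reg(regs, reg_addr);

  	offset = addr & 0x7UL;
  	addr &= ~0x7UL;

  	/* the aligned load cannot fault */
  	data = *(unsigned long *)addr;

  #ifndef __AARCH64EB__
  	data >>= 8 * offset;	/* LE: drop bytes below addr, zero-extend */
  #else
  	data <<= 8 * offset;	/* BE: drop bytes below addr, zero-fill */
  #endif

  	pt_regs_write_reg(regs, reg_data, data);

  	regs->pc = get_ex_fixup(ex);
  	return true;
  }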

Other than backtracing, there should be no functional change as a result
of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-13-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>


# 2e77a62c 19-Oct-2021 Mark Rutland <mark.rutland@arm.com>

arm64: extable: add a dedicated uaccess handler

For inline assembly, we place exception fixups out-of-line in the
`.fixup` section such that these are out of the way of the fast path.
This has a few drawbacks:

* Since the fixup code is anonymous, backtraces will symbolize fixups as
offsets from the nearest prior symbol, currently
`__entry_tramp_text_end`. This is confusing, and painful to debug
without access to the relevant vmlinux.

* Since the exception handler adjusts the PC to execute the fixup, and
the fixup uses a direct branch back into the function it fixes,
backtraces of fixups miss the original function. This is confusing,
and violates requirements for RELIABLE_STACKTRACE (and therefore
LIVEPATCH).

* Inline assembly and associated fixups are generated from templates,
and we have many copies of logically identical fixups which only
differ in which specific registers are written to and which address is
branched to at the end of the fixup. This is potentially wasteful of
I-cache resources, and makes it hard to add additional logic to fixups
without significant bloat.

This patch addresses all three concerns for inline uaccess fixups by
adding a dedicated exception handler which updates registers in
exception context and then returns back into the function which
faulted, removing the need for fixups specialized to each faulting
instruction.
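
In sketch form, the handler writes -EFAULT to the recorded error
register, zeroes the recorded data register, and resumes at the fixup
address:

  static bool
  ex_handler_uaccess_err_zero(const struct exception_table_entry *ex,
  			      struct pt_regs *regs)
  {
  	int reg_err = FIELD_GET(EX_DATA_REG_ERR, ex->data);
  	int reg_zero = FIELD_GET(EX_DATA_REG_ZERO, ex->data);

  	/* writes to WZR are discarded, so a fixup may opt out of either */
  	pt_regs_write_reg(regs, reg_err, -EFAULT);
  	pt_regs_write_reg(regs, reg_zero, 0);

  	regs->pc = get_ex_fixup(ex);
  	return true;
  }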

Other than backtracing, there should be no functional change as a result
of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-12-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>


# d6e2cc56 19-Oct-2021 Mark Rutland <mark.rutland@arm.com>

arm64: extable: add `type` and `data` fields

Subsequent patches will add specialized handlers for fixups, in addition
to the simple PC fixup and BPF handlers we have today. In preparation,
this patch adds a new `type` field to struct exception_table_entry, and
uses this to distinguish the fixup and BPF cases. A `data` field is also
added so that subsequent patches can associate data specific to each
exception site (e.g. register numbers).
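
The resulting entry layout is, as a sketch:

  struct exception_table_entry
  {
  	int insn, fixup;	/* relative offsets, as before */
  	short type, data;	/* new: handler type + per-site data */
  };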

Handlers are named ex_handler_*() for consistency, following the example
of x86. At the same time, get_ex_fixup() is split out into a helper so
that it can be used by other ex_handler_*() functions in subsequent
patches.
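
For example, the simple PC-fixup case then reduces to a trivial handler
built on the new helper (sketch):

  static inline unsigned long
  get_ex_fixup(const struct exception_table_entry *ex)
  {
  	return ((unsigned long)&ex->fixup + ex->fixup);
  }

  static bool ex_handler_fixup(const struct exception_table_entry *ex,
  			       struct pt_regs *regs)
  {
  	regs->pc = get_ex_fixup(ex);
  	return true;
  }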

This patch will increase the size of the exception tables, which will be
remedied by subsequent patches removing redundant fixup code. There
should be no functional change as a result of this patch.

Since each entry is now 12 bytes in size, we must reduce the alignment
of each entry from `.align 3` (i.e. 8 bytes) to `.align 2` (i.e. 4
bytes), which is the natural alignment of the `insn` and `fixup` fields.
The current 8-byte alignment is a holdover from when the `insn` and
`fixup` fields were 8 bytes each, and while not harmful it has not been
necessary since commit:

6c94f27ac847ff8e ("arm64: switch to relative exception tables")

Similarly, RO_EXCEPTION_TABLE_ALIGN is dropped to 4 bytes.

Concurrently with this patch, x86's exception table entry format is
being updated (similarly to a 12-byte format, with 32 bits of absolute
data). Once both have been merged it should be possible to unify the
sorttable logic for the two.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: James Morse <james.morse@arm.com>
Cc: Jean-Philippe Brucker <jean-philippe@linaro.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-11-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>


# 819771cc 19-Oct-2021 Mark Rutland <mark.rutland@arm.com>

arm64: extable: consolidate definitions

In subsequent patches we'll alter the structure and usage of struct
exception_table_entry. For inline assembly, we create these using the
`_ASM_EXTABLE()` CPP macro defined in <asm/uaccess.h>, and for plain
assembly code we use the `_asm_extable()` GAS macro defined in
<asm/assembler.h>, which are largely identical save for different
escaping and stringification requirements.

This patch moves the common definitions to a new <asm/asm-extable.h>
header, so that it's easier to keep the two in-sync, and to remove the
implication that these are only used for uaccess helpers (as e.g.
load_unaligned_zeropad() is only used on kernel memory, and depends upon
`_ASM_EXTABLE()`).

At the same time, a few minor modifications are made for clarity and in
preparation for subsequent patches:

* The structure creation is factored out into an `__ASM_EXTABLE_RAW()`
macro. This will make it easier to support different fixup variants in
subsequent patches without needing to update all users of
`_ASM_EXTABLE()`, and makes it easier to see that the CPP and GAS
variants of the macros are structurally identical (see the sketch after
this list).

For the CPP macro, the stringification of fields is left to the
wrapper macro, `_ASM_EXTABLE()`, as in subsequent patches it will be
necessary to stringify fields in wrapper macros to safely concatenate
strings which cannot be token-pasted together in CPP.

* The fields of the structure are created separately on their own lines.
This will make it easier to add/remove/modify individual fields
clearly.

* Additional parentheses are added around the use of macro arguments in
field definitions to avoid any potential problems with evaluation due
to operator precedence, and to make errors upon misuse clearer.

* USER() is moved into <asm/asm-uaccess.h>, as it is not required by all
assembly code, and is already referred to by comments in that file.
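
A sketch of the factored-out macros (at this point an entry is still
just the two relative offsets):

  #define __ASM_EXTABLE_RAW(insn, fixup)		\
  	".pushsection	__ex_table, \"a\"\n"		\
  	".align		3\n"				\
  	".long		((" insn ") - .)\n"		\
  	".long		((" fixup ") - .)\n"		\
  	".popsection\n"

  #define _ASM_EXTABLE(insn, fixup)	__ASM_EXTABLE_RAW(#insn, #fixup)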

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-8-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>