History log of /linux-master/drivers/firmware/efi/libstub/arm64-stub.c
# 6b56beb5 22-Jul-2023 Alexandre Ghiti <alexghiti@rivosinc.com>

arm64: libstub: Move KASLR handling functions to kaslr.c

This prepares for riscv to use the same functions to handle the physical
kernel move when KASLR is enabled.

Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Conor Dooley <conor.dooley@microchip.com>
Tested-by: Song Shuai <songshuaishuai@tinylab.org>
Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
Tested-by: Sami Tolvanen <samitolvanen@google.com>
Link: https://lore.kernel.org/r/20230722123850.634544-4-alexghiti@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>


# bc5ddcef 07-Aug-2023 Ard Biesheuvel <ardb@kernel.org>

efi/libstub: Add limit argument to efi_random_alloc()

x86 will need to limit the kernel memory allocation to the lowest 512
MiB of memory, to match the behavior of the existing bare metal KASLR
physical randomization logic. So in preparation for that, add a limit
parameter to efi_random_alloc() and wire it up.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20230807162720.545787-22-ardb@kernel.org
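
A rough sketch of the resulting interface (parameter naming is illustrative,
not a verbatim quote of the patch):

  /* alloc_limit: upper bound on the allocation; existing callers pass
   * ULONG_MAX to preserve their current behavior */
  efi_status_t efi_random_alloc(unsigned long size, unsigned long align,
                                unsigned long *addr, unsigned long random_seed,
                                int memory_type, unsigned long alloc_limit);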


# fc3608aa 21-Mar-2023 Ard Biesheuvel <ardb@kernel.org>

efi/libstub: Use relocated version of kernel's struct screen_info

In some cases, we expose the kernel's struct screen_info to the EFI stub
directly, so it gets populated before even entering the kernel. This
means the early console is available as soon as the early param parsing
happens, which is nice. It also means we need two different ways to pass
this information, as this trick only works if the EFI stub is baked into
the core kernel image, which is not always the case.

Huacai reports that the preparatory refactoring that was needed to
implement this alternative method for zboot resulted in a non-functional
efifb earlycon for other cases as well, due to the reordering of the
kernel image relocation with the population of the screen_info struct:
the latter now takes place after copying the image to its new location,
which means we copy the old, uninitialized state.

So let's ensure that the same-image version of alloc_screen_info()
produces the correct screen_info pointer, by taking the displacement of
the loaded image into account.

Reported-by: Huacai Chen <chenhuacai@loongson.cn>
Tested-by: Huacai Chen <chenhuacai@loongson.cn>
Link: https://lore.kernel.org/linux-efi/20230310021749.921041-1-chenhuacai@loongson.cn/
Fixes: 42c8ea3dca094ab8 ("efi: libstub: Factor out EFI stub entrypoint into separate file")
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
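
A minimal sketch of the fix, with image_displacement as a hypothetical
stand-in for however the stub records how far the image was moved:

  struct screen_info *alloc_screen_info(void)
  {
          /* &screen_info was computed against the image's link-time
           * placement; apply the same displacement the image itself
           * was subjected to when it was copied */
          return (void *)&screen_info + image_displacement;
  }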


# 3c60f67b 09-Mar-2023 Ard Biesheuvel <ardb@kernel.org>

efi/libstub: arm64: Remap relocated image with strict permissions

After relocating the executable image, use the EFI memory attributes
protocol to remap the code and data regions with the appropriate
permissions.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
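
A minimal sketch of the approach, assuming the (optional) EFI memory
attributes protocol is exposed by the firmware; code_base/code_size and
data_base/data_size are illustrative variables describing the relocated
image:

  efi_guid_t guid = EFI_MEMORY_ATTRIBUTE_PROTOCOL_GUID;
  efi_memory_attribute_protocol_t *memattr;
  efi_status_t status;

  status = efi_bs_call(locate_protocol, &guid, NULL, (void **)&memattr);
  if (status != EFI_SUCCESS)
          return;         /* protocol absent: leave the mapping as-is */

  /* enforce W^X: code becomes read-only, data becomes non-executable */
  memattr->set_memory_attributes(memattr, code_base, code_size, EFI_MEMORY_RO);
  memattr->set_memory_attributes(memattr, data_base, data_size, EFI_MEMORY_XP);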


# 61786170 11-Jan-2023 Ard Biesheuvel <ardb@kernel.org>

efi: arm64: enter with MMU and caches enabled

Instead of cleaning the entire loaded kernel image to the PoC and
disabling the MMU and caches before branching to the kernel's bare metal
entry point, we can leave the MMU and caches enabled, and rely on EFI's
cacheable 1:1 mapping of all of system RAM (which is mandated by the
spec) to populate the initial page tables.

This removes the need for managing coherency in software, which is
tedious and error prone.

Note that we still need to clean the executable region of the image to
the PoU if this is required for I/D coherency, but only if we actually
decided to move the image in memory, as otherwise, this will have been
taken care of by the loader.

This change affects both the builtin EFI stub as well as the zboot
decompressor, which now carries the entire EFI stub along with the
decompression code and the compressed image.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20230111102236.1430401-7-ardb@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
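
The residual cache maintenance reduces to something like the following
sketch (names illustrative; it assumes a PoU cleaning helper such as
arm64's caches_clean_inval_pou() is reachable from this part of the stub):

  /* EFI's 1:1 mapping of system RAM is cacheable, so no clean to the PoC
   * is needed; just make the I-side view of the copied image consistent,
   * and only if we moved it - otherwise the loader has done this already */
  if (image_was_moved)
          caches_clean_inval_pou(new_base, new_base + kernel_codesize);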


# a37dac5c 05-Dec-2022 Ard Biesheuvel <ardb@kernel.org>

arm64: efi: Limit allocations to 48-bit addressable physical region

The UEFI spec does not mention or reason about the configured size of
the virtual address space at all, but it does mention that all memory
should be identity mapped using a page size of 4 KiB.

This means that a LPA2 capable system that has any system memory outside
of the 48-bit addressable physical range and follows the spec to the
letter may serve page allocation requests from regions of memory that
the kernel cannot access unless it was built with LPA2 support and
enables it at runtime.

So let's ensure that all page allocations are limited to the 48-bit
range.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
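
The cap can be expressed as a simple constant; roughly (as defined for
arm64, to the best of our knowledge):

  /* arch/arm64/include/asm/efi.h: keep all EFI page allocations below
   * the 48-bit physical addressing limit */
  #define EFI_ALLOC_LIMIT         ((1UL << 48) - 1)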


# 9cf42bca 02-Aug-2022 Ard Biesheuvel <ardb@kernel.org>

efi: libstub: use EFI_LOADER_CODE region when moving the kernel in memory

The EFI spec is not very clear about which permissions are being given
when allocating pages of a certain type. However, it is quite obvious
that EFI_LOADER_CODE is more likely to permit execution than
EFI_LOADER_DATA, which becomes relevant once we permit booting the
kernel proper with the firmware's 1:1 mapping still active.

Ostensibly, recent systems such as the Surface Pro X grant executable
permissions to EFI_LOADER_CODE regions but not EFI_LOADER_DATA regions.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
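
In terms of the boot services API, the change boils down to passing a
different memory type when allocating; sketch:

  efi_physical_addr_t addr = ULONG_MAX;

  /* EFI_LOADER_CODE rather than EFI_LOADER_DATA, so firmware that
   * distinguishes the two maps the region with execute permissions */
  status = efi_bs_call(allocate_pages, EFI_ALLOCATE_MAX_ADDRESS,
                       EFI_LOADER_CODE, nr_pages, &addr);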


# d9ffe524 13-Oct-2022 Ard Biesheuvel <ardb@kernel.org>

efi/arm64: libstub: Split off kernel image relocation for builtin stub

The arm64 build of the EFI stub is part of the core kernel image, and
therefore accesses section markers directly when it needs to figure out
the sizes of the various sections.

The zboot decompressor does not have access to those symbols, but does
not really need them either. So let's move handle_kernel_image() into a
separate file (or rather, move everything else into a separate file) so
that the zboot build does not pull in unused code that links to symbols
that it does not define.

While at it, introduce a helper routine that the generic zboot loader
will need to invoke after decompressing the image but before invoking
it, to ensure that the I-side view of memory is consistent.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>


# 895bc3a1 12-Oct-2022 Ard Biesheuvel <ardb@kernel.org>

efi: libstub: Factor out min alignment and preferred kernel load address

Factor out the expressions that describe the preferred placement of the
loaded image as well as the minimum alignment so we can reuse them in
the decompressor.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
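
A sketch of the factored-out minimum alignment helper (the constants and
the efi_nokaslr handling reflect our reading of the arm64 code, not a
verbatim quote):

  static inline unsigned long efi_get_kimg_min_align(void)
  {
          extern bool efi_nokaslr;

          /* without KASLR the image must sit on a 2 MiB boundary, while
           * a KASLR kernel copes with any 64 KiB aligned placement */
          return efi_nokaslr ? MIN_KIMG_ALIGN : EFI_KIMG_ALIGN;
  }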


# aaeb3fc6 17-Oct-2022 Ard Biesheuvel <ardb@kernel.org>

arm64: efi: Move dcache cleaning of loaded image out of efi_enter_kernel()

The efi_enter_kernel() routine will be shared between the existing EFI
stub and the zboot decompressor, and the version of
dcache_clean_to_poc() that the core kernel exports to the stub will not
be available in the latter case.

So move the handling into the .c file which will remain part of the stub
build that integrates directly with the kernel proper.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>


# 550b33cf 10-Nov-2022 Ard Biesheuvel <ardb@kernel.org>

arm64: efi: Force the use of SetVirtualAddressMap() on Altra machines

Ampere Altra machines are reported to misbehave when the SetTime() EFI
runtime service is called after ExitBootServices() but before calling
SetVirtualAddressMap(). Given that the latter is horrid, pointless and
explicitly documented as optional by the EFI spec, we no longer invoke
it at boot if the configured size of the VA space guarantees that the
EFI runtime memory regions can remain mapped 1:1 like they are at boot
time.

On Ampere Altra machines, this results in SetTime() calls issued by the
rtc-efi driver triggering synchronous exceptions during boot. We can
now recover from those without bringing down the system entirely, due to
commit 23715a26c8d81291 ("arm64: efi: Recover from synchronous
exceptions occurring in firmware"). However, it would be better to avoid
the issue entirely, given that the firmware appears to remain in a funny
state after this.

So attempt to identify these machines based on the 'family' field in the
type #1 SMBIOS record, and call SetVirtualAddressMap() unconditionally
in that case.

Tested-by: Alexandru Elisei <alexandru.elisei@gmail.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
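
Conceptually (with a hypothetical smbios_get_family() helper standing in
for the stub's SMBIOS record parsing):

  static bool force_set_virtual_address_map(void)
  {
          /* 'family' field of the SMBIOS type #1 (System Information)
           * record; smbios_get_family() is a hypothetical stand-in */
          const char *family = smbios_get_family();

          return family && strstr(family, "Altra");
  }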


# d3549a93 16-Sep-2022 Ard Biesheuvel <ardb@kernel.org>

efi/arm64: libstub: avoid SetVirtualAddressMap() when possible

EFI's SetVirtualAddressMap() runtime service is a horrid hack that we'd
like to avoid using, if possible. For 64-bit architectures such as
arm64, the user and kernel mappings are entirely disjoint, and given
that we use the user region for mapping the UEFI runtime regions when
running under the OS, we don't rely on SetVirtualAddressMap() in the
conventional way, i.e., to permit kernel mappings of the OS to coexist
with kernel region mappings of the firmware regions. This means that, in
principle, we should be able to avoid SetVirtualAddressMap() altogether,
and simply use the 1:1 mapping that UEFI uses at boot time. (Note that
omitting SetVirtualAddressMap() is explicitly permitted by the UEFI
spec).

However, there is a corner case on arm64, which, if configured for
3-level paging (or 2-level paging when using 64k pages), may not be able
to cover the entire range of firmware mappings (which might contain both
memory and MMIO peripheral mappings).

So let's avoid SetVirtualAddressMap() on arm64, but only if the VA space
is guaranteed to be of sufficient size.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>


# 171539f5 15-Sep-2022 Ard Biesheuvel <ardb@kernel.org>

efi: libstub: install boot-time memory map as config table

Expose the EFI boot time memory map to the kernel via a configuration
table. This is arch agnostic and enables future changes that remove the
dependency on DT on architectures that don't otherwise rely on it.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
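
Installing a configuration table is a single boot service call; sketch
(memmap points at a pool allocation holding the copied memory map):

  efi_guid_t guid = LINUX_EFI_BOOT_MEMMAP_GUID;

  status = efi_bs_call(install_configuration_table, &guid, memmap);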


# eab31265 03-Jun-2022 Ard Biesheuvel <ardb@kernel.org>

efi: libstub: simplify efi_get_memory_map() and struct efi_boot_memmap

Currently, struct efi_boot_memmap is a struct that is passed around
between callers of efi_get_memory_map() and the users of the resulting
data, and which carries pointers to various variables whose values are
provided by the EFI GetMemoryMap() boot service.

This is overly complex, and it is much easier to carry these values in
the struct itself. So turn the struct into one that carries these data
items directly, including a flex array for the variable number of EFI
memory descriptors that the boot service may return.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
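
The reworked struct carries the GetMemoryMap() outputs by value, roughly:

  struct efi_boot_memmap {
          unsigned long           map_size;
          unsigned long           desc_size;
          u32                     desc_ver;
          unsigned long           map_key;
          unsigned long           buff_size;
          efi_memory_desc_t       map[];  /* desc_size-strided descriptors */
  };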


# 2d987e64 05-Sep-2022 Mark Brown <broonie@kernel.org>

arm64/sysreg: Add _EL1 into ID_AA64MMFR0_EL1 definition names

Normally we include the full register name in the defines for fields within
registers but this has not been followed for ID registers. In preparation
for automatic generation of defines add the _EL1s into the defines for
ID_AA64MMFR0_EL1 to follow the convention. No functional changes.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-5-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>


# 07768c55 19-Mar-2022 Ard Biesheuvel <ardb@kernel.org>

efi/arm64: libstub: run image in place if randomized by the loader

If the loader has already placed the EFI kernel image randomly in
physical memory, and indicates having done so by installing the 'fixed
placement' protocol onto the image handle, don't bother randomizing the
placement again in the EFI stub.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
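
In outline (the protocol carries no interface; its mere presence on the
image handle is the signal, and 'randomize' is an illustrative local):

  efi_guid_t fixed = LINUX_EFI_LOADED_IMAGE_FIXED_GUID;
  void *p;

  /* the loader randomized the physical placement already */
  if (efi_bs_call(handle_protocol, image_handle, &fixed, &p) == EFI_SUCCESS)
          randomize = false;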


# 416a9f84 19-Mar-2022 Ard Biesheuvel <ardb@kernel.org>

efi: libstub: pass image handle to handle_kernel_image()

In a future patch, arm64's implementation of handle_kernel_image() will
omit randomizing the placement of the kernel if the load address was
chosen randomly by the loader. In order to do this, it needs to locate a
protocol on the image handle, so pass it to handle_kernel_image().

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>


# e9b7c3a4 19-Jan-2022 Mihai Carabas <mihai.carabas@oracle.com>

efi/libstub: arm64: Fix image check alignment at entry

The kernel image is aligned to SEGMENT_ALIGN, and this is the alignment
populated in the PE header:

  arch/arm64/kernel/efi-header.S: .long SEGMENT_ALIGN // SectionAlignment

EFI_KIMG_ALIGN, however, is defined as:

  (SEGMENT_ALIGN > THREAD_ALIGN ? SEGMENT_ALIGN : THREAD_ALIGN)

so it also depends on THREAD_ALIGN. On newer builds, the alignment warning
introduced by commit c32ac11da3f8 started to appear even though the loader
does honor the PE header (which states SEGMENT_ALIGN), because the entry
check used EFI_KIMG_ALIGN rather than the alignment advertised to the
loader. Check the image alignment against SEGMENT_ALIGN instead.

Fixes: c32ac11da3f8 ("efi/libstub: arm64: Double check image alignment at entry")
Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>


# c32ac11d 26-Jul-2021 Ard Biesheuvel <ardb@kernel.org>

efi/libstub: arm64: Double check image alignment at entry

On arm64, the stub only moves the kernel image around in memory if
needed, which is typically only for KASLR, given that relocatable
kernels (which is the default) can run from any 64k aligned address,
which is also the minimum alignment communicated to EFI via the PE/COFF
header.

Unfortunately, some loaders appear to ignore this header, and load the
kernel at some arbitrary offset in memory. We can deal with this, but
let's check for this condition anyway, so non-compliant code can be
spotted and fixed.

Cc: <stable@vger.kernel.org> # v5.10+
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
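
The check amounts to a sketch like:

  u64 min_kimg_align = efi_nokaslr ? MIN_KIMG_ALIGN : EFI_KIMG_ALIGN;

  if (!IS_ALIGNED((u64)_text, min_kimg_align)) {
          /* non-compliant loader: complain, then relocate as usual */
          efi_err("FIRMWARE BUG: kernel image not aligned on %luk boundary\n",
                  min_kimg_align >> 10);
  }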


# ff80ef5b 26-Jul-2021 Ard Biesheuvel <ardb@kernel.org>

efi/libstub: arm64: Warn when efi_random_alloc() fails

Randomization of the physical load address of the kernel image relies on
efi_random_alloc() returning successfully, and currently, we ignore any
failures and just carry on, using the ordinary, non-randomized page
allocator routine. This means we never find out if a failure occurs,
which could harm security, so let's at least warn about this condition.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
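
Sketch of the failure path (arguments reflect the interface of this era of
the code):

  status = efi_random_alloc(*reserve_size, min_kimg_align, reserve_addr,
                            phys_seed);
  if (status != EFI_SUCCESS)
          efi_warn("efi_random_alloc() failed! (0x%lx)\n", status);
  /* fall back to the ordinary, non-randomized page allocator */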


# 3a262423 21-Jul-2021 Ard Biesheuvel <ardb@kernel.org>

efi/libstub: arm64: Relax 2M alignment again for relocatable kernels

Commit 82046702e288 ("efi/libstub/arm64: Replace 'preferred' offset with
alignment check") simplified the way the stub moves the kernel image
around in memory before booting it, given that a relocatable image does
not need to be copied to a 2M aligned offset if it was loaded on a 64k
boundary by EFI.

Commit d32de9130f6c ("efi/arm64: libstub: Deal gracefully with
EFI_RNG_PROTOCOL failure") inadvertently defeated this logic by
overriding the value of efi_nokaslr if EFI_RNG_PROTOCOL is not
available, which was mistaken by the loader logic as an explicit request
on the part of the user to disable KASLR and any associated relocation
of an Image not loaded on a 2M boundary.

So let's reinstate this functionality, by capturing the value of
efi_nokaslr at function entry to choose the minimum alignment.

Fixes: d32de9130f6c ("efi/arm64: libstub: Deal gracefully with EFI_RNG_PROTOCOL failure")
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
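
The essence of the fix is latching the flag once, before the RNG probing
can flip it; sketch:

  /* capture at entry: a later 'efi_nokaslr = true' fallback (set when
   * EFI_RNG_PROTOCOL is missing) must not be mistaken for an explicit
   * user request, and must not raise the minimum alignment to 2 MiB */
  u64 min_kimg_align = efi_nokaslr ? MIN_KIMG_ALIGN : EFI_KIMG_ALIGN;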


# 5b94046e 26-Jul-2021 Ard Biesheuvel <ardb@kernel.org>

efi/libstub: arm64: Force Image reallocation if BSS was not reserved

Distro versions of GRUB replace the usual LoadImage/StartImage calls
used to load the kernel image with some local code that fails to honor
the allocation requirements described in the PE/COFF header, as it
does not account for the image's BSS section at all: it fails to
allocate space for it, and fails to zero initialize it.

Since the EFI stub itself is allocated in the .init segment, which is
in the middle of the image, its BSS section is not impacted by this,
and the main consequence of this omission is that the BSS section may
overlap with memory regions that are already used by the firmware.

So let's warn about this condition, and force image reallocation to
occur in this case, which works around the problem.

Fixes: 82046702e288 ("efi/libstub/arm64: Replace 'preferred' offset with alignment check")
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>


# 26f55386 09-Mar-2021 James Morse <james.morse@arm.com>

arm64/mm: Fix __enable_mmu() for new TGRAN range values

As per ARM ARM DDI 0487G.a, when FEAT_LPA2 is implemented, ID_AA64MMFR0_EL1
might contain a range of values to describe supported translation granules
(4K and 16K page sizes in particular) instead of just enabled or disabled
values. This changes the __enable_mmu() function to handle the complete
acceptable range of values (depending on whether the field is signed or
unsigned), now represented by the ID_AA64MMFR0_TGRAN_SUPPORTED_[MIN..MAX]
pair. While here, also fix similar situations in the EFI stub and KVM.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: kvmarm@lists.cs.columbia.edu
Cc: linux-efi@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Acked-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/1615355590-21102-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
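
For the stub, the test becomes a range comparison rather than an equality
check; a rough sketch (field extraction simplified, and a real
implementation must honor whether the field is signed or unsigned):

  u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
  s64 tgran = (mmfr0 >> ID_AA64MMFR0_TGRAN_SHIFT) & 0xf;

  if (tgran < ID_AA64MMFR0_TGRAN_SUPPORTED_MIN ||
      tgran > ID_AA64MMFR0_TGRAN_SUPPORTED_MAX)
          return EFI_UNSUPPORTED;         /* granule not supported */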


# 1c761ee9 20-Jan-2021 Mark Brown <broonie@kernel.org>

efi/arm64: Update debug prints to reflect other entropy sources

Currently the EFI stub prints a diagnostic on boot saying that KASLR will
be disabled if it is unable to use the EFI RNG protocol to obtain a seed
for KASLR. With the addition of support for v8.5-RNG and the SMCCC RNG
protocol, it is now possible for KASLR to obtain entropy even if the EFI
RNG protocol is unsupported on the system, and the main kernel now
explicitly reports whether KASLR is active. This can result in a boot log
where the stub says KASLR has been disabled while the main kernel says
that it is enabled, which is confusing for users.

Remove the explicit reference to KASLR from the diagnostics; the warnings
are still useful, as EFI is the only source of entropy the stub uses when
randomizing the physical address of the kernel, and the other sources may
not be available.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210120163810.14973-1-broonie@kernel.org
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>


# d32de913 26-Sep-2020 Ard Biesheuvel <ardb@kernel.org>

efi/arm64: libstub: Deal gracefully with EFI_RNG_PROTOCOL failure

Currently, on arm64, we abort on any failure from efi_get_random_bytes()
other than EFI_NOT_FOUND when it comes to setting the physical seed for
KASLR, but ignore such failures when obtaining the seed for virtual
KASLR or for early seeding of the kernel's entropy pool via the config
table. This is inconsistent, and may lead to unexpected boot failures.

So let's permit any failure for the physical seed, and simply report
the error code if it does not equal EFI_NOT_FOUND.

Cc: <stable@vger.kernel.org> # v5.8+
Reported-by: Heinrich Schuchardt <xypron.glpk@gmx.de>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
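
After the change, the physical seed path tolerates any error and merely
reports codes other than EFI_NOT_FOUND; roughly:

  status = efi_get_random_bytes(sizeof(phys_seed), (u8 *)&phys_seed);
  if (status == EFI_NOT_FOUND) {
          efi_info("EFI_RNG_PROTOCOL unavailable\n");
  } else if (status != EFI_SUCCESS) {
          efi_err("efi_get_random_bytes() failed (0x%lx)\n", status);
  }
  /* in either failure case, carry on without physical randomization */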


# 762cd288 09-Sep-2020 Ard Biesheuvel <ardb@kernel.org>

efi/libstub: arm32: Use low allocation for the uncompressed kernel

Before commit

d0f9ca9be11f25ef ("ARM: decompressor: run decompressor in place if loaded via UEFI")

we were rather limited in the choice of base address for the uncompressed
kernel, as we were relying on the logic in the decompressor that blindly
rounds down the decompressor execution address to the next multiple of 128
MiB, and decompresses the kernel there. For this reason, we have a lot of
complicated memory region handling code, to ensure that this memory window
is available, even though it could be occupied by reserved regions or
other allocations that may or may not collide with the uncompressed image.

Today, we simply pass the target address for the decompressed image to the
decompressor directly, and so we can choose a suitable window just by
finding a 16 MiB aligned region, while taking TEXT_OFFSET and the region
for the swapper page tables into account.

So let's get rid of the complicated logic, and instead, use the existing
bottom up allocation routine to allocate a suitable window as low as
possible, and carve out a memory region that has the right properties.

Note that this removes any dependencies on the 'dram_base' argument to
handle_kernel_image(), and so this is removed as well. Given that this
was the only remaining use of dram_base, the code that produces it is
removed entirely as well.

Reviewed-by: Maxim Uvarov <maxim.uvarov@linaro.org>
Tested-by: Maxim Uvarov <maxim.uvarov@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>


# 120dc60d 25-Aug-2020 Ard Biesheuvel <ardb@kernel.org>

arm64: get rid of TEXT_OFFSET

TEXT_OFFSET serves no purpose, and for this reason, it was redefined
as 0x0 in the v5.8 timeframe. Since this does not appear to have caused
any issues that require us to revisit that decision, let's get rid of the
macro entirely, along with any references to it.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20200825135440.11288-1-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>


# 7c116db2 09-Jul-2020 Will Deacon <will@kernel.org>

efi/libstub/arm64: Retain 2MB kernel Image alignment if !KASLR

Since commit 82046702e288 ("efi/libstub/arm64: Replace 'preferred' offset
with alignment check"), loading a relocatable arm64 kernel at a physical
address which is not 2MB aligned and subsequently booting with EFI will
leave the Image in-place, relying on the kernel to relocate itself early
during boot. In conjunction with commit dd4bc6076587 ("arm64: warn on
incorrect placement of the kernel by the bootloader"), which enables
CONFIG_RELOCATABLE by default, this effectively means that entering an
arm64 kernel loaded at an alignment smaller than 2MB with EFI (e.g. using
QEMU) will result in silent relocation at runtime.

Unfortunately, this has a subtle but confusing effect for developers
trying to inspect the PC value during a crash and comparing it to the
symbol addresses in vmlinux using tools such as 'nm' or 'addr2line';
all text addresses will be displaced by a sub-2MB offset, resulting in
the wrong symbol being identified in many cases. Passing "nokaslr" on
the command line or disabling "CONFIG_RANDOMIZE_BASE" does not help,
since the EFI stub only copies the kernel Image to a 2MB boundary if it
is not relocatable.

Adjust the EFI stub for arm64 so that the minimum Image alignment is 2MB
unless KASLR is in use.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: David Brazdil <dbrazdil@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>


# 793473c2 30-Apr-2020 Arvind Sankar <nivedita@alum.mit.edu>

efi/libstub: Move pr_efi/pr_efi_err into efi namespace

Rename pr_efi to efi_info and pr_efi_err to efi_err to make it more
obvious that they are part of the EFI stub and not generic printk infra.

Suggested-by: Joe Perches <joe@perches.com>
Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Link: https://lore.kernel.org/r/20200430182843.2510180-4-nivedita@alum.mit.edu
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>


# 980771f6 16-Apr-2020 Ard Biesheuvel <ardb@kernel.org>

efi/libstub: Drop __pure getters for EFI stub options

The practice of using __pure getter functions to access global
variables in the EFI stub dates back to the time when we had to
carefully prevent GOT entries from being emitted, because we
could not rely on the toolchain to do this for us.

Today, we use the hidden visibility pragma for all EFI stub source
files, which now all live in the same subdirectory, and we apply a
sanity check on the objects, so we can get rid of these getter
functions and simply refer to global data objects directly.

So switch over the remaining boolean variables carrying options set
on the kernel command line.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>


# e71356fe 31-Mar-2020 Ard Biesheuvel <ardb@kernel.org>

efi/libstub/arm64: Switch to ordinary page allocator for kernel image

It is no longer necessary to locate the kernel as low as possible in
physical memory, and so we can switch from efi_low_alloc() [which is
a rather nasty concoction on top of GetMemoryMap()] to a new helper
called efi_allocate_pages_aligned(), which simply rounds up the size
to account for the alignment, and frees the misaligned pages again.

So considering that the kernel can live anywhere in the physical
address space, as long as its alignment requirements are met, let's
switch to efi_allocate_pages_aligned() to allocate the pages.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
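
A sketch of the helper's mechanics (the trailing calls that free the
misaligned head and tail pages are omitted for brevity):

  efi_status_t efi_allocate_pages_aligned(unsigned long size,
                                          unsigned long *addr,
                                          unsigned long max,
                                          unsigned long align)
  {
          efi_physical_addr_t alloc_addr = max;
          int slack = align / EFI_PAGE_SIZE - 1;  /* extra pages for alignment */
          efi_status_t status;

          status = efi_bs_call(allocate_pages, EFI_ALLOCATE_MAX_ADDRESS,
                               EFI_LOADER_DATA,
                               DIV_ROUND_UP(size, EFI_PAGE_SIZE) + slack,
                               &alloc_addr);
          if (status != EFI_SUCCESS)
                  return status;

          *addr = ALIGN((unsigned long)alloc_addr, align);
          /* free the unused pages below and above the aligned window here */
          return EFI_SUCCESS;
  }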


# 5d12da9d 13-Apr-2020 Ard Biesheuvel <ardb@kernel.org>

efi/libstub/arm64: Simplify randomized loading of kernel image

The KASLR code path in the arm64 version of the EFI stub incorporates
some overly complicated logic to randomly allocate a region of the right
alignment: there is no need to randomize the placement of the kernel
modulo 2 MiB separately from the placement of the 2 MiB aligned allocation
itself - we can simply follow the same logic used by the non-randomized
placement, which is to allocate at the correct alignment, and only take
TEXT_OFFSET into account if it is not a round multiple of the alignment.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>


# 82046702 27-Mar-2020 Ard Biesheuvel <ardb@kernel.org>

efi/libstub/arm64: Replace 'preferred' offset with alignment check

The notion of a 'preferred' load offset for the kernel dates back to the
times when the kernel's primary mapping overlapped with the linear region,
and memory below it could not be used at all.

Today, the arm64 kernel does not really care where it is loaded in physical
memory, as long as the alignment requirements are met, and so there is no
point in unconditionally moving the kernel to a new location in memory at
boot. Instead, as sketched below this entry, we can:

 - check for a KASLR seed, and randomly reallocate the kernel if one is
   provided
 - otherwise, check whether the alignment requirements are met for the
   current placement of the kernel, and just run it in place if they are
 - finally, do an ordinary page allocation and reallocate the kernel to a
   suitably aligned buffer anywhere in memory.

By the same reasoning, there is no need to take TEXT_OFFSET into account
if it is a round multiple of the minimum alignment, which is the usual
case for relocatable kernels with TEXT_OFFSET randomization disabled.
Otherwise, it suffices to use the relative misalignment of TEXT_OFFSET
when reallocating the kernel.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
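
The resulting decision tree in handle_kernel_image() looks roughly like
this (call signatures abbreviated to this era of the code):

  if (!efi_nokaslr && phys_seed != 0) {
          /* 1: KASLR seed available: move the image to a random location */
          status = efi_random_alloc(*reserve_size, min_kimg_align,
                                    reserve_addr, phys_seed);
  } else if (IS_ALIGNED((u64)_text, min_kimg_align)) {
          /* 2: placement already meets the requirements: run in place */
          return EFI_SUCCESS;
  } else {
          /* 3: ordinary allocation, suitably aligned, anywhere in memory */
          status = efi_allocate_pages_aligned(*reserve_size, reserve_addr,
                                              ULONG_MAX, min_kimg_align);
  }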


# e16e65a0 29-Mar-2020 Ard Biesheuvel <ardb@kernel.org>

arm64: remove CONFIG_DEBUG_ALIGN_RODATA feature

When CONFIG_DEBUG_ALIGN_RODATA is enabled, kernel segments mapped with
different permissions (r-x for .text, r-- for .rodata, rw- for .data,
etc) are rounded up to 2 MiB so they can be mapped more efficiently.
In particular, it permits the segments to be mapped using level 2
block entries when using 4k pages, which is expected to result in less
TLB pressure.

However, the mappings for the bulk of the kernel will use level 2
entries anyway, and the misaligned fringes are organized such that they
can take advantage of the contiguous bit, and use far fewer level 3
entries than would be needed otherwise.

This makes the value of this feature dubious at best, and since it is not
enabled in defconfig or in the distro configs, it does not appear to be
in wide use either. So let's just remove it.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Laura Abbott <labbott@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>


# c2136dce 29-Mar-2020 Ard Biesheuvel <ardb@kernel.org>

efi/libstub/arm64: Avoid image_base value from efi_loaded_image

Commit:

9f9223778ef3 ("efi/libstub/arm: Make efi_entry() an ordinary PE/COFF entrypoint")

did some code refactoring to get rid of the EFI entry point assembler
code, and in the process, it got rid of the assignment of image_addr
to the value of _text. Instead, it switched to using the image_base
field of the efi_loaded_image struct provided by UEFI, which should
contain the same value.

However, Michael reports that this is not the case: older GRUB builds
corrupt this value in some way, and since we can easily switch back to
referring to _text to discover this value, let's simply do that.

While at it, fix another issue in commit 9f9223778ef3, which may result
in the unassigned image_addr to be misidentified as the preferred load
offset of the kernel, which is unlikely but will cause a boot crash if
it does occur.

Finally, let's add a warning if the _text vs. image_base discrepancy is
detected, so we can tell more easily how widespread this issue actually
is.

Reported-by: Michael Kelley <mikelley@microsoft.com>
Tested-by: Michael Kelley <mikelley@microsoft.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-efi@vger.kernel.org


# 6f05106e 10-Feb-2020 Ard Biesheuvel <ardb@kernel.org>

efi/libstub: Use hidden visibility for all source files

Instead of setting the visibility pragma for a small set of symbol
declarations that could result in absolute references that we cannot
support in the stub, declare hidden visibility for all code in the
EFI stub, which is more robust and future proof.

To ensure that the #pragma is taken into account before any other
includes are processed, put it in a header file of its own and
include it via the compiler command line using the -include option.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
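
The mechanism is a one-line header prepended to every translation unit;
sketch (file name and option as we understand the stub's build at the
time):

  /* drivers/firmware/efi/libstub/hidden.h, pulled into each object by
   * passing -include with the path to this header on the compiler
   * command line */
  #pragma GCC visibility push(hidden)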


# 9f922377 16-Feb-2020 Ard Biesheuvel <ardb@kernel.org>

efi/libstub/arm: Make efi_entry() an ordinary PE/COFF entrypoint

Expose efi_entry() as the PE/COFF entrypoint directly, instead of
jumping into a wrapper that fiddles with stack buffers and other
stuff that the compiler is much better at. The only reason this
code exists is to obtain a pointer to the base of the image, but
we can get the same value from the loaded_image protocol, which
we already need for other reasons anyway.

Update the return type as well, to make it consistent with what
is required for a PE/COFF executable entrypoint.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
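
The entry point now has the standard PE/COFF shape; a sketch of the
prototype (using the __efiapi calling convention annotation):

  /* called directly by the firmware's image loader */
  efi_status_t __efiapi efi_entry(efi_handle_t handle,
                                  efi_system_table_t *systab);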


# 966291f6 24-Dec-2019 Ard Biesheuvel <ardb@kernel.org>

efi/libstub: Rename efi_call_early/_runtime macros to be more intuitive

The macros efi_call_early and efi_call_runtime are used to call EFI
boot services and runtime services, respectively. However, the naming
is confusing, given that the early vs runtime distinction may suggest
that these are used for calling the same set of services either early
or late (== at runtime), while in reality, the sets of services they
can be used with are completely disjoint, and efi_call_runtime is also
only usable in 'early' code.

So do a global sweep to replace all occurrences with efi_bs_call or
efi_rt_call, respectively, where BS and RT match the idiom used by
the UEFI spec to refer to boot time or runtime services.

While at it, use 'func' as the macro parameter name for the function
pointers, which is less likely to collide and cause weird build errors.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Arvind Sankar <nivedita@alum.mit.edu>
Cc: Borislav Petkov <bp@alien8.de>
Cc: James Morse <james.morse@arm.com>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: https://lkml.kernel.org/r/20191224151025.32482-24-ardb@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>


# cd33a5c1 24-Dec-2019 Ard Biesheuvel <ardb@kernel.org>

efi/libstub: Remove 'sys_table_arg' from all function prototypes

We have a helper efi_system_table() that gives us the address of the
EFI system table in memory, so there is no longer point in passing
it around from each function to the next.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Arvind Sankar <nivedita@alum.mit.edu>
Cc: Borislav Petkov <bp@alien8.de>
Cc: James Morse <james.morse@arm.com>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: https://lkml.kernel.org/r/20191224151025.32482-20-ardb@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>


# 8173ec79 24-Dec-2019 Ard Biesheuvel <ardb@kernel.org>

efi/libstub: Drop sys_table_arg from printk routines

As a first step towards getting rid of the need to pass around a function
parameter 'sys_table_arg' pointing to the EFI system table, remove the
references to it in the printing code, which represents the majority of
the use cases.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Arvind Sankar <nivedita@alum.mit.edu>
Cc: Borislav Petkov <bp@alien8.de>
Cc: James Morse <james.morse@arm.com>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: https://lkml.kernel.org/r/20191224151025.32482-19-ardb@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>


# 4febfb8d 02-Feb-2019 Ard Biesheuvel <ardb@kernel.org>

efi: Replace GPL license boilerplate with SPDX headers

Replace all GPL license blurbs with an equivalent SPDX header (most
files are GPLv2, some are GPLv2+). While at it, drop some outdated
header changelogs as well.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Alexander Graf <agraf@suse.de>
Cc: Bjorn Andersson <bjorn.andersson@linaro.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Heinrich Schuchardt <xypron.glpk@gmx.de>
Cc: Jeffrey Hugo <jhugo@codeaurora.org>
Cc: Lee Jones <lee.jones@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Peter Jones <pjones@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/20190202094119.13230-7-ard.biesheuvel@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
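
For a GPLv2 file, the replaced boilerplate collapses to a single line at
the top of the file:

  // SPDX-License-Identifier: GPL-2.0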


# 4f74d72a 18-May-2018 Mark Rutland <mark.rutland@arm.com>

efi/libstub/arm64: Handle randomized TEXT_OFFSET

When CONFIG_RANDOMIZE_TEXT_OFFSET=y, TEXT_OFFSET is an arbitrary
multiple of PAGE_SIZE in the interval [0, 2MB).

The EFI stub does not account for the potential misalignment of
TEXT_OFFSET relative to EFI_KIMG_ALIGN, and produces a randomized
physical offset which is always a round multiple of EFI_KIMG_ALIGN.
This may result in statically allocated objects whose alignment exceeds
PAGE_SIZE appearing misaligned in memory. This has been observed to
result in spurious stack overflow reports and failure to make use of
the IRQ stacks, and theoretically could result in a number of other
issues.

We can OR in the low bits of TEXT_OFFSET to ensure that we have the
necessary offset (and hence preserve the misalignment of TEXT_OFFSET
relative to EFI_KIMG_ALIGN), so let's do that.

Reported-by: Kim Phillips <kim.phillips@arm.com>
Tested-by: Kim Phillips <kim.phillips@arm.com>
[ardb: clarify comment and commit log, drop unneeded parens]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Fixes: 6f26b3671184c36d ("arm64: kaslr: increase randomization granularity")
Link: http://lkml.kernel.org/r/20180518140841.9731-2-ard.biesheuvel@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
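
Since the randomized base is a round multiple of EFI_KIMG_ALIGN, OR-ing in
the low bits is equivalent to adding them; sketch:

  /* reserve_addr is EFI_KIMG_ALIGN aligned, so OR == ADD here; this
   * preserves TEXT_OFFSET's misalignment relative to EFI_KIMG_ALIGN */
  *image_addr = *reserve_addr | (TEXT_OFFSET % EFI_KIMG_ALIGN);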


# 0426a4e6 18-Aug-2017 Ard Biesheuvel <ardb@kernel.org>

efi/libstub/arm64: Force 'hidden' visibility for section markers

To prevent the compiler from emitting absolute references to the section
markers when running in PIC mode, override the visibility to 'hidden' for
all contents of asm/sections.h.

Tested-by: Matthias Kaehlcke <mka@chromium.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/20170818194947.19347-4-ard.biesheuvel@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>


# 170976bc 14-Jul-2017 Mark Rutland <mark.rutland@arm.com>

efi/arm64: add EFI_KIMG_ALIGN

The EFI stub is intimately coupled with the kernel, and takes advantage
of this by relocating the kernel at a weaker alignment than the
documented boot protocol mandates.

However, it does so by assuming it can align the kernel to the segment
alignment, and assumes that this is 64K. In subsequent patches, we'll
have to consider other details to determine this de-facto alignment
constraint.

This patch adds a new EFI_KIMG_ALIGN definition that will track the
kernel's de-facto alignment requirements. Subsequent patches will modify
this as required.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
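
As introduced here, the definition simply tracks the segment alignment
(later changes raised it to account for THREAD_ALIGN, as quoted in the
e9b7c3a4 entry above):

  /* arch/arm64/include/asm/efi.h */
  #define EFI_KIMG_ALIGN  SEGMENT_ALIGN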


# 60f38de7 04-Apr-2017 Ard Biesheuvel <ardb@kernel.org>

efi/libstub: Unify command line param parsing

Merge the parsing of the command line carried out in arm-stub.c with
the handling in efi_parse_options(). Note that this also fixes the
missing handling of CONFIG_CMDLINE_FORCE=y, in which case the builtin
command line should supersede the one passed by the firmware.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: bhe@redhat.com
Cc: bhsharma@redhat.com
Cc: bp@alien8.de
Cc: eugene@hp.com
Cc: evgeny.kalugin@intel.com
Cc: jhugo@codeaurora.org
Cc: leif.lindholm@linaro.org
Cc: linux-efi@vger.kernel.org
Cc: mark.rutland@arm.com
Cc: roy.franz@cavium.com
Cc: rruigrok@codeaurora.org
Link: http://lkml.kernel.org/r/20170404160910.28115-1-ard.biesheuvel@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>


# 6f26b367 18-Apr-2016 Ard Biesheuvel <ardb@kernel.org>

arm64: kaslr: increase randomization granularity

Currently, our KASLR implementation randomizes the placement of the core
kernel at 2 MB granularity. This is based on the arm64 kernel boot
protocol, which mandates that the kernel is loaded TEXT_OFFSET bytes above
a 2 MB aligned base address. This requirement is a result of the fact that
the block size used by the early mapping code may be 2 MB at the most (for
a 4 KB granule kernel).

But we can do better than that: since a KASLR kernel needs to be relocated
in any case, we can tolerate a physical misalignment as long as it matches
the virtual misalignment relative to this 2 MB block size, and the code to
deal with this is already in place.

Since we align the kernel segments to 64 KB, let's randomize the physical
offset at 64 KB granularity as well (unless CONFIG_DEBUG_ALIGN_RODATA is
enabled). This way, the page table and TLB footprint is not affected.

The higher granularity allows for 5 bits of additional entropy to be used.

Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>


# 2b5fe07a 26-Jan-2016 Ard Biesheuvel <ardb@kernel.org>

arm64: efi: invoke EFI_RNG_PROTOCOL to supply KASLR randomness

Since arm64 does not use a decompressor that supplies an execution
environment where it is feasible to some extent to provide a source of
randomness, the arm64 KASLR kernel depends on the bootloader to supply
some random bits in the /chosen/kaslr-seed DT property upon kernel entry.

On UEFI systems, we can use the EFI_RNG_PROTOCOL, if supplied, to obtain
some random bits. At the same time, use it to randomize the offset of the
kernel Image in physical memory.

Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
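
Obtaining the seed is a protocol lookup plus a single call; sketch:

  efi_guid_t rng_proto = EFI_RNG_PROTOCOL_GUID;
  efi_rng_protocol_t *rng = NULL;
  u64 phys_seed = 0;

  status = efi_bs_call(locate_protocol, &rng_proto, NULL, (void **)&rng);
  if (status == EFI_SUCCESS)
          status = rng->get_rng(rng, NULL, sizeof(phys_seed),
                                (u8 *)&phys_seed);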


# 42b55734 16-Feb-2016 Ard Biesheuvel <ardb@kernel.org>

efi/arm64: Check for h/w support before booting a >4 KB granular kernel

A kernel built with support for a page size that is not supported by the
hardware it runs on cannot boot to a state where it can inform the user
about the failure.

If we happen to be booting via UEFI, we can fail gracefully, so check
whether the currently configured page size is supported by the hardware
before entering the kernel proper. Note that UEFI mandates support for
4 KB pages, so in that case, no check is needed.

Tested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
Reviewed-by: Jeremy Linton <jeremy.linton@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/1455712566-16727-10-git-send-email-matt@codeblueprint.co.uk
Signed-off-by: Ingo Molnar <mingo@kernel.org>
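
In essence (register read shown; the TGRAN field decoding is simplified
behind a hypothetical tgran_supported() helper):

  static bool hw_supports_configured_granule(void)
  {
          u64 mmfr0;

          if (IS_ENABLED(CONFIG_ARM64_4K_PAGES))
                  return true;    /* 4 KB support is mandated by UEFI */

          mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
          /* inspect the TGRAN field matching PAGE_SIZE (16 KB or 64 KB) */
          return tgran_supported(mmfr0);
  }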


# dae31fd2 16-Feb-2016 Ard Biesheuvel <ardb@kernel.org>

efi/arm64: Drop __init annotation from handle_kernel_image()

After moving arm64-stub.c to libstub/, all of its sections are emitted
as .init.xxx sections automatically, and the __init annotation of
handle_kernel_image() causes it to end up in .init.init.text, which is
not recognized as an __init section by the linker scripts. So drop the
annotation.

Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/1455712566-16727-5-git-send-email-matt@codeblueprint.co.uk
Signed-off-by: Ingo Molnar <mingo@kernel.org>


# bf457786 23-Oct-2015 Ard Biesheuvel <ardb@kernel.org>

arm64/efi: move arm64 specific stub C code to libstub

Now that we have added special handling to the C files in libstub, move
the one remaining arm64-specific EFI stub C file to libstub as well, so
that it gets the same treatment. This should prevent future
changes from resulting in binaries that may execute incorrectly in
UEFI context.

With efi-entry.S the only remaining EFI stub source file under
arch/arm64, we can also simplify the Makefile logic somewhat.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>