#
443574b0 |
|
24-Mar-2024 |
Pu Lehui <pulehui@huawei.com> |
riscv, bpf: Fix kfunc parameters incompatibility between bpf and riscv abi We encountered a failing case when running selftests in no_alu32 mode. The failing case is `kfunc_call/kfunc_call_test4` and its source is shown below: ``` long bpf_kfunc_call_test4(signed char a, short b, int c, long d) __ksym; int kfunc_call_test4(struct __sk_buff *skb) { ... tmp = bpf_kfunc_call_test4(-3, -30, -200, -1000); ... } ``` Its corresponding asm code is: ``` 0: r1 = -3 1: r2 = -30 2: r3 = 0xffffff38 # opcode: 18 03 00 00 38 ff ff ff 00 00 00 00 00 00 00 00 4: r4 = -1000 5: call bpf_kfunc_call_test4 ``` Insn 2 is parsed as a ld_imm64 insn that emits the imm 0x00000000ffffff38, which is converted to int type and then passed to bpf_kfunc_call_test4. But since the value is zero-extended under the bpf calling convention, the riscv jit treats it directly as an unsigned 32-bit value, and the test fails with "actual 4294966063 != expected -1234". The root cause is an incompatibility between the bpf and riscv ABIs: bpf zero-extends a 32-bit uint, whereas riscv64 requires sign-extension for both int and uint. We can solve this problem by sign-extending the 32-bit parameters before the kfunc call. The issue is related to [0], and thanks to Yonghong and Alexei. Link: https://github.com/llvm/llvm-project/pull/84874 [0] Fixes: d40c3847b485 ("riscv, bpf: Add kfunc support for RV64") Signed-off-by: Pu Lehui <pulehui@huawei.com> Tested-by: Puranjay Mohan <puranjay12@gmail.com> Reviewed-by: Puranjay Mohan <puranjay12@gmail.com> Link: https://lore.kernel.org/r/20240324103306.2202954-1-pulehui@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
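The shape of the fix can be sketched as below; emit_sextw() is the JIT's 32-bit sign-extension wrapper (added by a commit earlier in this log), while the helper name, the btf_func_model walk, and the `RV_REG_A0 + i` argument-register mapping are illustrative assumptions rather than the exact upstream diff:

```c
/* Hedged sketch: before jumping to a kfunc, sign-extend every argument
 * that the kfunc declares as a 32-bit integer, since BPF zero-extends
 * 32-bit values while the RISC-V calling convention expects them
 * sign-extended in 64-bit registers.
 */
static void emit_kfunc_arg_sext(const struct btf_func_model *fm,
				struct rv_jit_context *ctx)
{
	int i;

	for (i = 0; i < fm->nr_args; i++) {
		/* int-sized argument: insert a sext.w on the arg register */
		if (fm->arg_size[i] == sizeof(int))
			emit_sextw(RV_REG_A0 + i, RV_REG_A0 + i, ctx);
	}
}
```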
|
#
e63985ec |
|
03-Mar-2024 |
Puranjay Mohan <puranjay12@gmail.com> |
bpf, riscv64/cfi: Support kCFI + BPF on riscv64 The riscv BPF JIT doesn't emit proper kCFI prologues for BPF programs and struct_ops trampolines when CONFIG_CFI_CLANG is enabled. This causes CFI failures when calling BPF programs and can even crash the kernel due to invalid memory accesses. Example crash: root@rv-selftester:~/bpf# ./test_progs -a dummy_st_ops Unable to handle kernel paging request at virtual address ffffffff78204ffc Oops [#1] Modules linked in: bpf_testmod(OE) [....] CPU: 3 PID: 356 Comm: test_progs Tainted: P OE 6.8.0-rc1 #1 Hardware name: riscv-virtio,qemu (DT) epc : bpf_struct_ops_test_run+0x28c/0x5fc ra : bpf_struct_ops_test_run+0x26c/0x5fc epc : ffffffff82958010 ra : ffffffff82957ff0 sp : ff200000007abc80 gp : ffffffff868d6218 tp : ff6000008d87b840 t0 : 000000000000000f t1 : 0000000000000000 t2 : 000000002005793e s0 : ff200000007abcf0 s1 : ff6000008a90fee0 a0 : 0000000000000000 a1 : 0000000000000000 a2 : 0000000000000000 a3 : 0000000000000000 a4 : 0000000000000000 a5 : ffffffff868dba26 a6 : 0000000000000001 a7 : 0000000052464e43 s2 : 00007ffffc0a95f0 s3 : ff6000008a90fe80 s4 : ff60000084c24c00 s5 : ffffffff78205000 s6 : ff60000088750648 s7 : ff20000000035008 s8 : fffffffffffffff4 s9 : ffffffff86200610 s10: 0000000000000000 s11: 0000000000000000 t3 : ffffffff8483dc30 t4 : ffffffff8483dc10 t5 : ffffffff8483dbf0 t6 : ffffffff8483dbd0 status: 0000000200000120 badaddr: ffffffff78204ffc cause: 000000000000000d [<ffffffff82958010>] bpf_struct_ops_test_run+0x28c/0x5fc [<ffffffff805083ee>] bpf_prog_test_run+0x170/0x548 [<ffffffff805029c8>] __sys_bpf+0x2d2/0x378 [<ffffffff804ff570>] __riscv_sys_bpf+0x5c/0x120 [<ffffffff8000e8fe>] syscall_handler+0x62/0xe4 [<ffffffff83362df6>] do_trap_ecall_u+0xc6/0x27c [<ffffffff833822c4>] ret_from_exception+0x0/0x64 Code: b603 0109 b683 0189 b703 0209 8493 0609 157d 8d65 (a303) ffca ---[ end trace 0000000000000000 ]--- Kernel panic - not syncing: Fatal exception SMP: stopping secondary CPUs Implement proper kCFI prologues for the BPF programs and callbacks and drop __nocfi for riscv64. Fix the trampoline generation code to emit kCFI prologue when a struct_ops trampoline is being prepared. Signed-off-by: Puranjay Mohan <puranjay12@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Björn Töpel <bjorn@kernel.org> Link: https://lore.kernel.org/bpf/20240303170207.82201-2-puranjay12@gmail.com
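A minimal sketch of the prologue piece, assuming the kCFI convention the commit describes: the type hash is placed in the word immediately preceding the function entry, where an indirect caller checks it. The helper name follows the JIT's emit_* style but is an assumption here:

```c
/* Hedged sketch: emit the kCFI type hash just before the first real
 * instruction of a BPF program or struct_ops trampoline, so kCFI
 * callers that read the u32 ahead of the branch target see a match.
 */
static void emit_kcfi(u32 hash, struct rv_jit_context *ctx)
{
	if (IS_ENABLED(CONFIG_CFI_CLANG))
		emit(hash, ctx);
}
```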
|
#
69065aa1 |
|
29-Jan-2024 |
Pu Lehui <pulehui@huawei.com> |
riscv, bpf: Enable inline bpf_kptr_xchg() for RV64 The RV64 JIT supports 64-bit BPF_XCHG atomic instructions. At the same time, the underlying implementation of xchg() and atomic64_xchg() on RV64 is raw_xchg(), which supports 64-bit operands. Inlining bpf_kptr_xchg() therefore preserves its semantics, so let's inline it for better performance. Signed-off-by: Pu Lehui <pulehui@huawei.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Björn Töpel <bjorn@kernel.org> Link: https://lore.kernel.org/bpf/20240130124659.670321-2-pulehui@huaweicloud.com
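The enabling side is small; a hedged sketch, assuming the bpf_jit_supports_ptr_xchg() override mechanism that the inlining feature keys off:

```c
/* Hedged sketch: report to the BPF core that this JIT can inline
 * bpf_kptr_xchg(); the verifier then rewrites the helper call into a
 * 64-bit BPF_XCHG atomic, which RV64 already JITs to amoswap.d.
 */
bool bpf_jit_supports_ptr_xchg(void)
{
	return true;
}
```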
|
#
06a33d02 |
|
15-Jan-2024 |
Pu Lehui <pulehui@huawei.com> |
riscv, bpf: Optimize bswap insns with Zbb support Optimize bswap instructions using the Zbb rev8 instruction combined with srli. Also optimize 16-bit zero-extension with Zbb support. Signed-off-by: Pu Lehui <pulehui@huawei.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: Björn Töpel <bjorn@rivosinc.com> Acked-by: Björn Töpel <bjorn@kernel.org> Link: https://lore.kernel.org/bpf/20240115131235.2914289-7-pulehui@huaweicloud.com
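A hedged sketch of the Zbb lowering, assuming rv_rev8()/rv_srli()-style encoders in the JIT's naming convention: rev8 byte-reverses the whole 64-bit register, so the 16- and 32-bit swaps only need a logical right shift to discard the unused low bytes:

```c
/* Hedged sketch: bswap via Zbb rev8 plus srli. */
static void emit_bswap_zbb(u8 rd, s32 imm, struct rv_jit_context *ctx)
{
	emit(rv_rev8(rd, rd), ctx);             /* reverse all 8 bytes */
	if (imm == 32)
		emit(rv_srli(rd, rd, 32), ctx); /* keep swapped low word */
	else if (imm == 16)
		emit(rv_srli(rd, rd, 48), ctx); /* keep swapped low half */
	/* imm == 64: rev8 alone is the full 64-bit swap */
}
```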
|
#
519fb722 |
|
15-Jan-2024 |
Pu Lehui <pulehui@huawei.com> |
riscv, bpf: Optimize sign-extension mov insns with Zbb support Add 8-bit and 16-bit sign-extension wrappers with Zbb support to optimize sign-extension mov instructions. Signed-off-by: Pu Lehui <pulehui@huawei.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: Björn Töpel <bjorn@rivosinc.com> Acked-by: Björn Töpel <bjorn@kernel.org> Link: https://lore.kernel.org/bpf/20240115131235.2914289-6-pulehui@huaweicloud.com
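A hedged sketch of one of the wrappers, assuming a rvzbb_enabled() capability check and an rv_sextb() encoder along the lines of the JIT's existing helpers; without Zbb, the classic slli/srai shift pair does the same job:

```c
/* Hedged sketch: 8-bit sign extension with a Zbb fast path. */
static void emit_sextb(u8 rd, u8 rs, struct rv_jit_context *ctx)
{
	if (rvzbb_enabled()) {
		emit(rv_sextb(rd, rs), ctx); /* single Zbb sext.b */
		return;
	}
	emit_slli(rd, rs, 56, ctx);          /* move byte to the top */
	emit_srai(rd, rd, 56, ctx);          /* arithmetic shift back */
}
```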
|
#
361db44c |
|
15-Jan-2024 |
Pu Lehui <pulehui@huawei.com> |
riscv, bpf: Simplify sext and zext logics in branch instructions The current branch instructions use many extension helpers, and the implementation is a bit complicated. Simplify this logic with two simple extension helpers that take an alternate register. Signed-off-by: Pu Lehui <pulehui@huawei.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: Björn Töpel <bjorn@rivosinc.com> Acked-by: Björn Töpel <bjorn@kernel.org> Link: https://lore.kernel.org/bpf/20240115131235.2914289-4-pulehui@huaweicloud.com
|
#
914c7a5f |
|
15-Jan-2024 |
Pu Lehui <pulehui@huawei.com> |
riscv, bpf: Unify 32-bit zero-extension to emit_zextw For code unification, add emit_zextw wrapper to unify all the 32-bit zero-extension operations. Signed-off-by: Pu Lehui <pulehui@huawei.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: Björn Töpel <bjorn@rivosinc.com> Acked-by: Björn Töpel <bjorn@kernel.org> Link: https://lore.kernel.org/bpf/20240115131235.2914289-3-pulehui@huaweicloud.com
|
#
e33758f7 |
|
15-Jan-2024 |
Pu Lehui <pulehui@huawei.com> |
riscv, bpf: Unify 32-bit sign-extension to emit_sextw For code unification, add emit_sextw wrapper to unify all the 32-bit sign-extension operations. Signed-off-by: Pu Lehui <pulehui@huawei.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: Björn Töpel <bjorn@rivosinc.com> Acked-by: Björn Töpel <bjorn@kernel.org> Link: https://lore.kernel.org/bpf/20240115131235.2914289-2-pulehui@huaweicloud.com
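The two wrappers from this and the previous commit are tiny; a hedged sketch of their likely bodies, using the standard RV64 idioms (a shift pair for zero-extension, addiw with a zero immediate for sign-extension):

```c
/* Hedged sketch of the unification wrappers. */
static void emit_zextw(u8 rd, u8 rs, struct rv_jit_context *ctx)
{
	emit_slli(rd, rs, 32, ctx); /* clear upper 32 bits ... */
	emit_srli(rd, rd, 32, ctx); /* ... via logical shift pair */
}

static void emit_sextw(u8 rd, u8 rs, struct rv_jit_context *ctx)
{
	emit_addiw(rd, rs, 0, ctx); /* addiw rd, rs, 0 == sext.w */
}
```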
|
#
1732ebc4 |
|
22-Jan-2024 |
Pu Lehui <pulehui@huawei.com> |
riscv, bpf: Fix unpredictable kernel crash about RV64 struct_ops We encountered a kernel crash triggered by the bpf_tcp_ca testcase, as shown below: Unable to handle kernel paging request at virtual address ff60000088554500 Oops [#1] ... CPU: 3 PID: 458 Comm: test_progs Tainted: G OE 6.8.0-rc1-kselftest_plain #1 Hardware name: riscv-virtio,qemu (DT) epc : 0xff60000088554500 ra : tcp_ack+0x288/0x1232 epc : ff60000088554500 ra : ffffffff80cc7166 sp : ff2000000117ba50 gp : ffffffff82587b60 tp : ff60000087be0040 t0 : ff60000088554500 t1 : ffffffff801ed24e t2 : 0000000000000000 s0 : ff2000000117bbc0 s1 : 0000000000000500 a0 : ff20000000691000 a1 : 0000000000000018 a2 : 0000000000000001 a3 : ff60000087be03a0 a4 : 0000000000000000 a5 : 0000000000000000 a6 : 0000000000000021 a7 : ffffffff8263f880 s2 : 000000004ac3c13b s3 : 000000004ac3c13a s4 : 0000000000008200 s5 : 0000000000000001 s6 : 0000000000000104 s7 : ff2000000117bb00 s8 : ff600000885544c0 s9 : 0000000000000000 s10: ff60000086ff0b80 s11: 000055557983a9c0 t3 : 0000000000000000 t4 : 000000000000ffc4 t5 : ffffffff8154f170 t6 : 0000000000000030 status: 0000000200000120 badaddr: ff60000088554500 cause: 000000000000000c Code: c796 67d7 0000 0000 0052 0002 c13b 4ac3 0000 0000 (0001) 0000 ---[ end trace 0000000000000000 ]--- The reason is that commit 2cd3e3772e41 ("x86/cfi,bpf: Fix bpf_struct_ops CFI") changed the func_addr of arch_prepare_bpf_trampoline in struct_ops from NULL to non-NULL, while on RV64 we use func_addr to differentiate between struct_ops and regular trampolines. When the struct_ops testcase is triggered, the JIT emits the wrong prologue and epilogue, leading to unpredictable issues. After commit 2cd3e3772e41, we can use BPF_TRAMP_F_INDIRECT to distinguish them, as it is always set for struct_ops. Fixes: 2cd3e3772e41 ("x86/cfi,bpf: Fix bpf_struct_ops CFI") Signed-off-by: Pu Lehui <pulehui@huawei.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: Björn Töpel <bjorn@rivosinc.com> Acked-by: Björn Töpel <bjorn@kernel.org> Link: https://lore.kernel.org/bpf/20240123023207.1917284-1-pulehui@huaweicloud.com
|
#
26ef208c |
|
06-Dec-2023 |
Song Liu <song@kernel.org> |
bpf: Use arch_bpf_trampoline_size Instead of blindly allocating PAGE_SIZE for each trampoline, check the size of the trampoline with arch_bpf_trampoline_size(). This size is saved in bpf_tramp_image->size and used for modmem charge/uncharge. The fallback arch_alloc_bpf_trampoline() still allocates a whole page because we need to use set_memory_* to protect the memory. The struct_ops trampoline still uses a whole page for multiple trampolines. With this size check at the caller (regular trampoline and struct_ops trampoline), remove arch_bpf_trampoline_size() from arch_prepare_bpf_trampoline() in the archs. Also, update bpf_image_ksym_add() to handle symbols of different sizes. Signed-off-by: Song Liu <song@kernel.org> Acked-by: Ilya Leoshkevich <iii@linux.ibm.com> Tested-by: Ilya Leoshkevich <iii@linux.ibm.com> # on s390x Acked-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Björn Töpel <bjorn@rivosinc.com> Tested-by: Björn Töpel <bjorn@rivosinc.com> # on riscv Link: https://lore.kernel.org/r/20231206224054.492250-7-song@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
#
96d1b7c0 |
|
06-Dec-2023 |
Song Liu <song@kernel.org> |
bpf: Add arch_bpf_trampoline_size() This helper will be used to calculate the size of the trampoline before allocating the memory. arch_prepare_bpf_trampoline() for arm64 and riscv64 can use arch_bpf_trampoline_size() to check that the trampoline fits in the image. OTOH, arch_prepare_bpf_trampoline() for s390 has to call the JIT process twice, so it cannot use arch_bpf_trampoline_size(). Signed-off-by: Song Liu <song@kernel.org> Acked-by: Ilya Leoshkevich <iii@linux.ibm.com> Tested-by: Ilya Leoshkevich <iii@linux.ibm.com> # on s390x Acked-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Björn Töpel <bjorn@rivosinc.com> Tested-by: Björn Töpel <bjorn@rivosinc.com> # on riscv Link: https://lore.kernel.org/r/20231206224054.492250-6-song@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
#
7112cd26 |
|
04-Oct-2023 |
Björn Töpel <bjorn@rivosinc.com> |
riscv, bpf: Track both a0 (RISC-V ABI) and a5 (BPF) return values The RISC-V BPF JIT uses a5 for BPF return values, which are zero-extended, whereas the RISC-V ABI uses a0, which is sign-extended. In other words, a5 and a0 can differ, and are used in different contexts. The BPF trampoline is used for both BPF programs and regular kernel functions, so make sure that the RISC-V BPF trampoline saves and restores both a0 and a5. Fixes: 49b5e77ae3e2 ("riscv, bpf: Add bpf trampoline support for RV64") Signed-off-by: Björn Töpel <bjorn@rivosinc.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20231004120706.52848-3-bjorn@kernel.org
|
#
2f1b0d3d |
|
04-Oct-2023 |
Björn Töpel <bjorn@rivosinc.com> |
riscv, bpf: Sign-extend return values The RISC-V architecture does not expose sub-registers, and holds all 32-bit values in a sign-extended format [1] [2]: | The compiler and calling convention maintain an invariant that all | 32-bit values are held in a sign-extended format in 64-bit | registers. Even 32-bit unsigned integers extend bit 31 into bits | 63 through 32. Consequently, conversion between unsigned and | signed 32-bit integers is a no-op, as is conversion from a signed | 32-bit integer to a signed 64-bit integer. BPF, on the other hand, exposes sub-registers and uses zero-extension (similar to arm64/x86). This has led to some subtle bugs, where a BPF JITted program has not sign-extended the a0 register (the return value in RISC-V land) before passing the return value up to the kernel, e.g.: | int from_bpf(void); | | long foo(void) | { | return from_bpf(); | } Here, a0 would be 0xffff_ffff instead of the expected 0xffff_ffff_ffff_ffff. Internally, the RISC-V JIT uses a5 as a dedicated register for BPF return values. Keep a5 zero-extended, but explicitly sign-extend a0 (which is used outside BPF land). Now that a0 (RISC-V ABI) and a5 (BPF ABI) differ, a0 is only moved to a5 for non-BPF native calls (BPF_PSEUDO_CALL). Fixes: 2353ecc6f91f ("bpf, riscv: add BPF JIT for RV64G") Signed-off-by: Björn Töpel <bjorn@rivosinc.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://github.com/riscv/riscv-isa-manual/releases/download/riscv-isa-release-056b6ff-2023-10-02/unpriv-isa-asciidoc.pdf # [1] Link: https://github.com/riscv-non-isa/riscv-elf-psabi-doc/releases/download/draft-20230929-e5c800e661a53efe3c2678d71a306323b60eb13b/riscv-abi.pdf # [2] Link: https://lore.kernel.org/bpf/20231004120706.52848-2-bjorn@kernel.org
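At the instruction level the fix boils down to a couple of emitted instructions around the epilogue; a hedged sketch (emit_mv/emit_addiw follow the JIT's helper style, and the wrapper function name and exact placement are assumptions):

```c
/* Hedged sketch: a5 keeps the zero-extended BPF return value, while
 * a0 must be sign-extended per the RISC-V calling convention before
 * non-BPF kernel code reads it.
 */
static void emit_bpf_retval(struct rv_jit_context *ctx)
{
	emit_mv(RV_REG_A0, RV_REG_A5, ctx);       /* BPF retval -> a0 */
	emit_addiw(RV_REG_A0, RV_REG_A0, 0, ctx); /* sext.w a0 for the ABI */
}
```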
|
#
48a8f78c |
|
31-Aug-2023 |
Puranjay Mohan <puranjay12@gmail.com> |
bpf, riscv: use prog pack allocator in the BPF JIT Use bpf_jit_binary_pack_alloc() for memory management of JIT binaries in the RISCV BPF JIT. bpf_jit_binary_pack_alloc() creates a pair of RW and RX buffers. The JIT writes the program into the RW buffer. When the JIT is done, the program is copied to the final RX buffer with bpf_jit_binary_pack_finalize(). Implement bpf_arch_text_copy() and bpf_arch_text_invalidate() for the RISCV JIT, as these functions are required by the bpf_jit_binary_pack allocator. Signed-off-by: Puranjay Mohan <puranjay12@gmail.com> Reviewed-by: Song Liu <song@kernel.org> Reviewed-by: Pu Lehui <pulehui@huawei.com> Acked-by: Björn Töpel <bjorn@kernel.org> Tested-by: Björn Töpel <bjorn@rivosinc.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/r/20230831131229.497941-5-puranjay12@gmail.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
|
#
83cc63af |
|
24-Aug-2023 |
Pu Lehui <pulehui@huawei.com> |
riscv, bpf: Support unconditional bswap insn Add support for the unconditional bswap instruction. Since riscv is always little-endian, treat the unconditional scenario the same as the big-endian conversion. Signed-off-by: Pu Lehui <pulehui@huawei.com> Acked-by: Björn Töpel <bjorn@kernel.org> Link: https://lore.kernel.org/r/20230824095001.3408573-7-pulehui@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
#
3e18ff4b |
|
24-Aug-2023 |
Pu Lehui <pulehui@huawei.com> |
riscv, bpf: Support signed div/mod insns Add support for signed div/mod instructions on RV64. Signed-off-by: Pu Lehui <pulehui@huawei.com> Acked-by: Björn Töpel <bjorn@kernel.org> Link: https://lore.kernel.org/r/20230824095001.3408573-6-pulehui@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
#
d9839f16 |
|
24-Aug-2023 |
Pu Lehui <pulehui@huawei.com> |
riscv, bpf: Support 32-bit offset jmp insn Add support for the 32-bit offset jmp instruction on RV64. Signed-off-by: Pu Lehui <pulehui@huawei.com> Acked-by: Björn Töpel <bjorn@kernel.org> Link: https://lore.kernel.org/r/20230824095001.3408573-5-pulehui@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
#
694896ad |
|
24-Aug-2023 |
Pu Lehui <pulehui@huawei.com> |
riscv, bpf: Support sign-extension mov insns Add support for sign-extension mov instructions on RV64. Signed-off-by: Pu Lehui <pulehui@huawei.com> Acked-by: Björn Töpel <bjorn@kernel.org> Link: https://lore.kernel.org/r/20230824095001.3408573-4-pulehui@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
#
3d06d816 |
|
24-Aug-2023 |
Pu Lehui <pulehui@huawei.com> |
riscv, bpf: Support sign-extension load insns Add support for sign-extension load instructions on RV64. Signed-off-by: Pu Lehui <pulehui@huawei.com> Acked-by: Björn Töpel <bjorn@kernel.org> Link: https://lore.kernel.org/r/20230824095001.3408573-3-pulehui@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
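A hedged sketch of the mapping: BPF's sign-extending loads (BPF_MEMSX) line up one-to-one with RV64's signed load instructions, in contrast to the zero-extending lbu/lhu/lwu forms used for plain BPF_MEM loads:

```c
/* Hedged sketch: BPF_MEMSX size -> signed RV64 load (vs. unsigned). */
switch (BPF_SIZE(insn->code)) {
case BPF_B:
	emit(rv_lb(rd, off, rs), ctx); /* lb: sign-extend byte (vs. lbu) */
	break;
case BPF_H:
	emit(rv_lh(rd, off, rs), ctx); /* lh: sign-extend half (vs. lhu) */
	break;
case BPF_W:
	emit(rv_lw(rd, off, rs), ctx); /* lw: sign-extend word (vs. lwu) */
	break;
}
```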
|
#
469fb2c3 |
|
24-Aug-2023 |
Pu Lehui <pulehui@huawei.com> |
riscv, bpf: Fix missing exception handling and redundant zext for LDX_B/H/W For LDX_B/H/W, when a zext has been inserted by the verifier, the JIT returns 1 and exception handling is never set up. Also, when the offset is a 12-bit value, the redundant zext inserted by the verifier is not removed. Fix both scenarios by moving the removal of the redundant zext down. Signed-off-by: Pu Lehui <pulehui@huawei.com> Link: https://lore.kernel.org/r/20230824095001.3408573-2-pulehui@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
#
25ad1065 |
|
21-Jul-2023 |
Pu Lehui <pulehui@huawei.com> |
riscv, bpf: Adapt bpf trampoline to optimized riscv ftrace framework Commit 6724a76cff85 ("riscv: ftrace: Reduce the detour code size to half") halves the detour code size of kernel functions by using the T0 register, and the upcoming riscv DYNAMIC_FTRACE_WITH_DIRECT_CALLS is based on this optimization, so we need to adapt the riscv bpf trampoline accordingly. One task is to reduce the detour code size of bpf programs; the other is to deal with the return address after the bpf trampoline executes. Meanwhile, we need to construct the frame of the parent function, otherwise the unwinder will miss one layer. The related tests have passed. Signed-off-by: Pu Lehui <pulehui@huawei.com> Tested-by: Björn Töpel <bjorn@rivosinc.com> Link: https://lore.kernel.org/r/20230721100627.2630326-1-pulehui@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
#
d40c3847 |
|
21-Feb-2023 |
Pu Lehui <pulehui@huawei.com> |
riscv, bpf: Add kfunc support for RV64 This patch adds kernel function call support for RV64. Since the offset from RV64 kernel and module functions to bpf programs is almost always within the range of s32, the current RV64 infrastructure is already sufficient for kfunc, so let's turn it on. Suggested-by: Björn Töpel <bjorn@rivosinc.com> Signed-off-by: Pu Lehui <pulehui@huawei.com> Acked-by: Björn Töpel <bjorn@rivosinc.com> Link: https://lore.kernel.org/r/20230221140656.3480496-1-pulehui@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
#
2d311f48 |
|
27-Feb-2023 |
Randy Dunlap <rdunlap@infradead.org> |
riscv, bpf: Fix patch_text implicit declaration bpf_jit_comp64.c uses patch_text(), so add <asm/patch.h> to it to prevent the build error: ../arch/riscv/net/bpf_jit_comp64.c: In function 'bpf_arch_text_poke': ../arch/riscv/net/bpf_jit_comp64.c:691:23: error: implicit declaration of function 'patch_text'; did you mean 'path_get'? [-Werror=implicit-function-declaration] 691 | ret = patch_text(ip, new_insns, ninsns); | ^~~~~~~~~~ Fixes: 596f2e6f9cf4 ("riscv, bpf: Add bpf_arch_text_poke support for RV64") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Pu Lehui <pulehui@huawei.com> Acked-by: Björn Töpel <bjorn@rivosinc.com> Link: https://lore.kernel.org/r/202302271000.Aj4nMXbZ-lkp@intel.com Link: https://lore.kernel.org/bpf/20230227072016.14618-1-rdunlap@infradead.org
|
#
49b5e77a |
|
15-Feb-2023 |
Pu Lehui <pulehui@huawei.com> |
riscv, bpf: Add bpf trampoline support for RV64 The BPF trampoline is critical infrastructure of the BPF subsystem, acting as a mediator between kernel functions and BPF programs. Numerous important features, such as using BPF programs for zero-overhead kernel introspection, rely on this key component. We can't wait to support the bpf trampoline on RV64. The related tests have passed, as has test_verifier with no new failure cases. Signed-off-by: Pu Lehui <pulehui@huawei.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: Björn Töpel <bjorn@rivosinc.com> Acked-by: Björn Töpel <bjorn@rivosinc.com> Link: https://lore.kernel.org/bpf/20230215135205.1411105-5-pulehui@huaweicloud.com
|
#
596f2e6f |
|
15-Feb-2023 |
Pu Lehui <pulehui@huawei.com> |
riscv, bpf: Add bpf_arch_text_poke support for RV64 Implement bpf_arch_text_poke for RV64. For the call scenario, to make the BPF trampoline compatible with both kernel and BPF contexts, we follow the framework of RV64 ftrace and reserve 4 nops at BPF program entry, using auipc+jalr instructions for the function call. However, since patching the auipc+jalr pair is not an atomic operation, we need stop-machine to make sure the instructions are patched in an atomic context. The jump scenario likewise uses an auipc+jalr pair and also needs to be patched in stop-machine context. Signed-off-by: Pu Lehui <pulehui@huawei.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: Björn Töpel <bjorn@rivosinc.com> Acked-by: Björn Töpel <bjorn@rivosinc.com> Link: https://lore.kernel.org/bpf/20230215135205.1411105-4-pulehui@huaweicloud.com
|
#
0fd1fd01 |
|
15-Feb-2023 |
Pu Lehui <pulehui@huawei.com> |
riscv, bpf: Factor out emit_call for kernel and bpf context The current emit_call function is not suitable for kernel function calls, as it stores the return value in the bpf R0 register. We can factor it out for common use. Meanwhile, simplify the selection logic: a fixed function address can use jal or auipc+jalr, while an unfixed one can only use auipc+jalr. Signed-off-by: Pu Lehui <pulehui@huawei.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: Björn Töpel <bjorn@rivosinc.com> Acked-by: Björn Töpel <bjorn@rivosinc.com> Link: https://lore.kernel.org/bpf/20230215135205.1411105-3-pulehui@huaweicloud.com
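A hedged sketch of the factored-out helper, close to the shape the commit describes (the offset is computed against the current JIT cursor, and the fixed/unfixed decision is delegated to emit_jump_and_link):

```c
/* Hedged sketch: common call emission for both kernel and BPF calls;
 * storing the result into BPF R0 is left to the BPF-call wrapper
 * instead of being done here.
 */
static int emit_call(u64 addr, bool fixed_addr, struct rv_jit_context *ctx)
{
	s64 off = 0;
	u64 ip;

	if (addr && ctx->insns) {
		ip = (u64)(long)(ctx->insns + ctx->ninsns);
		off = addr - ip;
	}

	/* fixed_addr: jal if the offset fits, else auipc+jalr;
	 * !fixed_addr: always auipc+jalr
	 */
	return emit_jump_and_link(RV_REG_RA, off, fixed_addr, ctx);
}
```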
|
#
b54b6003 |
|
06-Dec-2022 |
Pu Lehui <pulehui@huawei.com> |
riscv, bpf: Emit fixed-length instructions for BPF_PSEUDO_FUNC For the BPF_PSEUDO_FUNC instruction, the verifier refills imm with the correct addresses of bpf_calls and then runs the last pass of the JIT. Since emit_imm on RV64 is variable-length, emitting an appropriate number of instructions according to the imm, this may break ctx->offset and lead to unpredictable problems, such as inaccurate jumps. So let's fix it with fixed-length instructions. Fixes: 69c087ba6225 ("bpf: Add bpf_for_each_map_elem() helper") Suggested-by: Björn Töpel <bjorn@rivosinc.com> Signed-off-by: Pu Lehui <pulehui@huawei.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Björn Töpel <bjorn@kernel.org> Acked-by: Björn Töpel <bjorn@kernel.org> Link: https://lore.kernel.org/bpf/20221206091410.1584784-1-pulehui@huaweicloud.com
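A hedged sketch of a fixed-length address load in the spirit of the fix: a PC-relative auipc+addi pair is always exactly two instructions, so ctx->offset stays stable across passes even before the verifier has refilled the final address (the helper name and rounding detail are assumptions):

```c
/* Hedged sketch: materialize a function address in constant length. */
static void emit_addr(u8 rd, u64 addr, struct rv_jit_context *ctx)
{
	u64 ip = (u64)(ctx->insns + ctx->ninsns);
	s64 off = addr - ip;
	s64 upper = (off + (1 << 11)) >> 12; /* round so addi's s12 fits */
	s64 lower = off & 0xfff;

	emit(rv_auipc(rd, upper), ctx);    /* rd = pc + (upper << 12) */
	emit(rv_addi(rd, rd, lower), ctx); /* rd += lower */
}
```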
|
#
dd642ccb |
|
10-Apr-2022 |
Pu Lehui <pulehui@huawei.com> |
riscv, bpf: Implement more atomic operations for RV64 This patch implements more BPF atomic operations for RV64. The newly added operations are shown below: atomic[64]_[fetch_]add atomic[64]_[fetch_]and atomic[64]_[fetch_]or atomic[64]_xchg atomic[64]_cmpxchg Since the riscv specification does not provide an AMO instruction for the CAS operation, we use lr/sc instructions for cmpxchg and AMO instructions for the remaining ops. Tests "test_bpf.ko" and "test_progs -t atomic" have passed, as has "test_verifier" with no new failure cases. Signed-off-by: Pu Lehui <pulehui@huawei.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Song Liu <songliubraving@fb.com> Acked-by: Björn Töpel <bjorn@kernel.org> Link: https://lore.kernel.org/bpf/20220410101246.232875-1-pulehui@huawei.com
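The cmpxchg case follows the classic lr/sc retry loop; a hedged sketch of the 64-bit lowering (encoder names such as rv_lr_d/rv_sc_d, their operand order, and the branch distances are assumptions in the JIT's naming style; r0/rd/rs are the case-body's mapped BPF registers):

```c
/* Hedged sketch of BPF_CMPXCHG: compare [rd] against r0, store rs on
 * match, retry if the reservation is lost, and leave the old value in
 * r0 as BPF semantics require.
 */
	emit(rv_addi(RV_REG_T2, r0, 0), ctx);        /* t2 = expected */
	emit(rv_lr_d(r0, 0, rd, 0, 0), ctx);         /* r0 = old = [rd] */
	emit(rv_bne(RV_REG_T2, r0, ninsns_rvoff(8) >> 1), ctx); /* mismatch: done */
	emit(rv_sc_d(RV_REG_T3, rs, rd, 0, 0), ctx); /* try [rd] = rs */
	emit(rv_bne(RV_REG_T3, RV_REG_ZERO, ninsns_rvoff(-6) >> 1), ctx); /* retry */
	emit(rv_fence(0x3, 0x3), ctx);               /* rw,rw barrier on success */
```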
|
#
fc839c6d |
|
10-Jan-2022 |
Jisheng Zhang <jszhang@kernel.org> |
riscv: bpf: Fix eBPF's exception tables eBPF's exception tables need to be converted to the relative format in sync with the switch of the main extable. Suggested-by: Tong Tiangen <tongtiangen@huawei.com> Signed-off-by: Jisheng Zhang <jszhang@kernel.org> Fixes: 1f77ed9422cb ("riscv: switch to relative extable and other improvements") Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
|
#
2bf847db |
|
18-Nov-2021 |
Jisheng Zhang <jszhang@kernel.org> |
riscv: extable: add `type` and `data` fields This is a riscv port of commit d6e2cc564775 ("arm64: extable: add `type` and `data` fields"). Signed-off-by: Jisheng Zhang <jszhang@kernel.org> Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
|
#
ef127bca |
|
18-Nov-2021 |
Jisheng Zhang <jszhang@kernel.org> |
riscv: extable: make fixup_exception() return bool The return values of fixup_exception() and riscv_bpf_fixup_exception() represent a boolean condition rather than an error code, so it's better to return `bool` rather than `int`. Signed-off-by: Jisheng Zhang <jszhang@kernel.org> Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
|
#
c07935cb |
|
18-Nov-2021 |
Jisheng Zhang <jszhang@kernel.org> |
riscv: bpf: move rv_bpf_fixup_exception signature to extable.h Group the signatures of the riscv extable-related functions into one file. Signed-off-by: Jisheng Zhang <jszhang@kernel.org> Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
|
#
ebf7f6f0 |
|
04-Nov-2021 |
Tiezhu Yang <yangtiezhu@loongson.cn> |
bpf: Change value of MAX_TAIL_CALL_CNT from 32 to 33 In the current code, the actual max tail call count is 33, which is greater than MAX_TAIL_CALL_CNT (defined as 32). The actual limit is not consistent with the meaning of MAX_TAIL_CALL_CNT and is thus confusing at first glance. We can see the historical evolution from commit 04fd61ab36ec ("bpf: allow bpf programs to tail-call other bpf programs") and commit f9dabe016b63 ("bpf: Undo off-by-one in interpreter tail call count limit"). In order to avoid changing existing behavior, the actual limit remains 33, which is reasonable. After commit 874be05f525e ("bpf, tests: Add tail call test suite"), we can see there exists a failing testcase. On all archs when CONFIG_BPF_JIT_ALWAYS_ON is not set: # echo 0 > /proc/sys/net/core/bpf_jit_enable # modprobe test_bpf # dmesg | grep -w FAIL Tail call error path, max count reached jited:0 ret 34 != 33 FAIL On some archs: # echo 1 > /proc/sys/net/core/bpf_jit_enable # modprobe test_bpf # dmesg | grep -w FAIL Tail call error path, max count reached jited:1 ret 34 != 33 FAIL Although the above failing testcase has been fixed in commit 18935a72eb25 ("bpf/tests: Fix error in tail call limit tests"), it would still be good to change the value of MAX_TAIL_CALL_CNT from 32 to 33 to make the code more readable. The 32-bit x86 JIT was using a limit of 32, so fix its wrong comments and raise the limit to 33 tail calls to match the updated MAX_TAIL_CALL_CNT constant. For the mips64 JIT, use "ori" instead of "addiu", as suggested by Johan Almbladh. For the riscv JIT, use RV_REG_TCC directly to save one register move, as suggested by Björn Töpel. For the other implementations, there are no functional changes: they keep the current limit of 33, the new value of MAX_TAIL_CALL_CNT now reflects the actual max tail call count, and the related tail call testcases in the test_bpf module and selftests work well for both the interpreter and the JIT. Here are the test results on x86_64: # uname -m x86_64 # echo 0 > /proc/sys/net/core/bpf_jit_enable # modprobe test_bpf test_suite=test_tail_calls # dmesg | tail -1 test_bpf: test_tail_calls: Summary: 8 PASSED, 0 FAILED, [0/8 JIT'ed] # rmmod test_bpf # echo 1 > /proc/sys/net/core/bpf_jit_enable # modprobe test_bpf test_suite=test_tail_calls # dmesg | tail -1 test_bpf: test_tail_calls: Summary: 8 PASSED, 0 FAILED, [8/8 JIT'ed] # rmmod test_bpf # ./test_progs -t tailcalls #142 tailcalls:OK Summary: 1/11 PASSED, 0 SKIPPED, 0 FAILED Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: Johan Almbladh <johan.almbladh@anyfinetworks.com> Tested-by: Ilya Leoshkevich <iii@linux.ibm.com> Acked-by: Björn Töpel <bjorn@kernel.org> Acked-by: Johan Almbladh <johan.almbladh@anyfinetworks.com> Acked-by: Ilya Leoshkevich <iii@linux.ibm.com> Link: https://lore.kernel.org/bpf/1636075800-3264-1-git-send-email-yangtiezhu@loongson.cn
|
#
f47d4ffe |
|
02-Nov-2021 |
Björn Töpel <bjorn@kernel.org> |
riscv, bpf: Fix RV32 broken build, and silence RV64 warning Commit 252c765bd764 ("riscv, bpf: Add BPF exception tables") only addressed RV64, and broke the RV32 build [1]. Fix by gating the exception tables code with CONFIG_ARCH_RV64I. Further, silence a "-Wmissing-prototypes" warning [2] in the RV64 BPF JIT. [1] https://lore.kernel.org/llvm/202111020610.9oy9Rr0G-lkp@intel.com/ [2] https://lore.kernel.org/llvm/202110290334.2zdMyRq4-lkp@intel.com/ Fixes: 252c765bd764 ("riscv, bpf: Add BPF exception tables") Signed-off-by: Björn Töpel <bjorn@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Tong Tiangen <tongtiangen@huawei.com> Link: https://lore.kernel.org/bpf/20211103115453.397209-1-bjorn@kernel.org
|
#
252c765b |
|
27-Oct-2021 |
Tong Tiangen <tongtiangen@huawei.com> |
riscv, bpf: Add BPF exception tables When a tracing BPF program attempts to read memory without using the bpf_probe_read() helper, the verifier marks the load instruction with the BPF_PROBE_MEM flag. Since the riscv JIT does not currently recognize this flag, it falls back to the interpreter. Add support for BPF_PROBE_MEM by appending an exception table to the BPF program. If the load instruction causes a data abort, the fixup infrastructure finds the exception table and fixes up the fault by clearing the destination register and jumping over the faulting instruction. A more generic solution would add a "handler" field to the table entry, like on x86 and s390. The same issue on arm64 was fixed in 800834285361 ("bpf, arm64: Add BPF exception tables"). Signed-off-by: Tong Tiangen <tongtiangen@huawei.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: Pu Lehui <pulehui@huawei.com> Tested-by: Björn Töpel <bjorn@kernel.org> Acked-by: Björn Töpel <bjorn@kernel.org> Link: https://lore.kernel.org/bpf/20211027111822.3801679-1-tongtiangen@huawei.com
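The fixup side can be sketched as follows, modeled on the arm64 scheme the commit references: the extable entry encodes which destination register to clear and how far past the faulting instruction to resume. The mask names and the pt_regs offset table are assumptions in this sketch:

```c
/* Hedged sketch: make a faulting BPF_PROBE_MEM load behave as "read 0". */
bool rv_bpf_fixup_exception(const struct exception_table_entry *ex,
			    struct pt_regs *regs)
{
	off_t offset = FIELD_GET(BPF_FIXUP_OFFSET_MASK, ex->fixup);
	int regs_offset = FIELD_GET(BPF_FIXUP_REG_MASK, ex->fixup);

	/* clear the destination register of the faulting load */
	*(unsigned long *)((void *)regs + pt_regs_offset[regs_offset]) = 0;
	/* resume execution just past the faulting instruction */
	regs->epc = (unsigned long)&ex->fixup - offset;

	return true;
}
```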
|
#
f5e81d11 |
|
13-Jul-2021 |
Daniel Borkmann <daniel@iogearbox.net> |
bpf: Introduce BPF nospec instruction for mitigating Spectre v4 In case of JITs, each of the JIT backends compiles the BPF nospec instruction /either/ to a machine instruction which emits a speculation barrier /or/ to /no/ machine instruction in case the underlying architecture is not affected by Speculative Store Bypass or has different mitigations in place already. This covers both x86 and (implicitly) arm64: In case of x86, we use 'lfence' instruction for mitigation. In case of arm64, we rely on the firmware mitigation as controlled via the ssbd kernel parameter. Whenever the mitigation is enabled, it works for all of the kernel code with no need to provide any additional instructions here (hence only comment in arm64 JIT). Other archs can follow as needed. The BPF nospec instruction is specifically targeting Spectre v4 since i) we don't use a serialization barrier for the Spectre v1 case, and ii) mitigation instructions for v1 and v4 might be different on some archs. The BPF nospec is required for a future commit, where the BPF verifier does annotate intermediate BPF programs with speculation barriers. Co-developed-by: Piotr Krysiuk <piotras@gmail.com> Co-developed-by: Benedict Schlueter <benedict.schlueter@rub.de> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Piotr Krysiuk <piotras@gmail.com> Signed-off-by: Benedict Schlueter <benedict.schlueter@rub.de> Acked-by: Alexei Starovoitov <ast@kernel.org>
|
#
1d27d854 |
|
29-Mar-2021 |
Jisheng Zhang <jszhang@kernel.org> |
riscv: bpf: Move bpf_jit_alloc_exec() and bpf_jit_free_exec() to core We will drop the executable permissions of the code pages from the mapping at allocation time soon. Move bpf_jit_alloc_exec() and bpf_jit_free_exec() to bpf_jit_core.c so that they can be shared by both RV64I and RV32I. Signed-off-by: Jisheng Zhang <jszhang@kernel.org> Acked-by: Luke Nelson <luke.r.nels@gmail.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
|
#
91c960b0 |
|
14-Jan-2021 |
Brendan Jackman <jackmanb@google.com> |
bpf: Rename BPF_XADD and prepare to encode other atomics in .imm A subsequent patch will add additional atomic operations. These new operations will use the same opcode field as the existing XADD, with the immediate discriminating different operations. In preparation, rename the instruction mode BPF_ATOMIC and start calling the zero immediate BPF_ADD. This is possible (doesn't break existing valid BPF progs) because the immediate field is currently reserved MBZ and BPF_ADD is zero. All uses are removed from the tree but the BPF_XADD definition is kept around to avoid breaking builds for people including kernel headers. Signed-off-by: Brendan Jackman <jackmanb@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Björn Töpel <bjorn.topel@gmail.com> Link: https://lore.kernel.org/bpf/20210114181751.768687-5-jackmanb@google.com
|
#
18a4d8c9 |
|
20-Jul-2020 |
Luke Nelson <lukenels@cs.washington.edu> |
bpf, riscv: Use compressed instructions in the rv64 JIT This patch uses the RVC support and encodings from bpf_jit.h to optimize the rv64 jit. The optimizations work by replacing emit(rv_X(...)) with a call to a helper function emit_X, which will emit a compressed version of the instruction when possible, and when RVC is enabled. The JIT continues to pass all tests in lib/test_bpf.c, and introduces no new failures to test_verifier, both with and without RVC enabled. Most changes are straightforward replacements of emit(rv_X(...), ctx) with emit_X(..., ctx), with the following exceptions bearing mention:

* Change emit_imm to sign-extend the value in "lower", since the checks for RVC (and the instructions themselves) treat the value as signed. Otherwise, small negative immediates will not be recognized as encodable using an RVC instruction. For example, without this change, emit_imm(rd, -1, ctx) would cause lower to become 4095, which is not a 6-bit int even though a "c.li rd, -1" instruction suffices.

* For {BPF_MOV,BPF_ADD} BPF_X, drop the use of addiw,addw in the 32-bit cases, since the values are zero-extended into the upper 32 bits by the following instructions anyway, and the addition commutes with zero-extension. (BPF_SUB BPF_X must still use subw, since subtraction does not commute with zero-extension.)

This patch avoids optimizing branches and jumps to use RVC instructions, since surrounding code often makes assumptions about the sizes of emitted instructions. Optimizing these will require changing these functions (e.g., emit_branch) to dynamically compute jump offsets. The following are examples of the JITed code for the verifier selftest "direct packet read test#3 for CGROUP_SKB OK", without and with RVC enabled, respectively. The former uses 178 bytes, and the latter uses 112, for a ~37% reduction in code size for this example.
Without RVC:

```
 0: 02000813 addi a6,zero,32
 4: fd010113 addi sp,sp,-48
 8: 02813423 sd s0,40(sp)
 c: 02913023 sd s1,32(sp)
10: 01213c23 sd s2,24(sp)
14: 01313823 sd s3,16(sp)
18: 01413423 sd s4,8(sp)
1c: 03010413 addi s0,sp,48
20: 03056683 lwu a3,48(a0)
24: 02069693 slli a3,a3,0x20
28: 0206d693 srli a3,a3,0x20
2c: 03456703 lwu a4,52(a0)
30: 02071713 slli a4,a4,0x20
34: 02075713 srli a4,a4,0x20
38: 03856483 lwu s1,56(a0)
3c: 02049493 slli s1,s1,0x20
40: 0204d493 srli s1,s1,0x20
44: 03c56903 lwu s2,60(a0)
48: 02091913 slli s2,s2,0x20
4c: 02095913 srli s2,s2,0x20
50: 04056983 lwu s3,64(a0)
54: 02099993 slli s3,s3,0x20
58: 0209d993 srli s3,s3,0x20
5c: 09056a03 lwu s4,144(a0)
60: 020a1a13 slli s4,s4,0x20
64: 020a5a13 srli s4,s4,0x20
68: 00900313 addi t1,zero,9
6c: 006a7463 bgeu s4,t1,0x74
70: 00000a13 addi s4,zero,0
74: 02d52823 sw a3,48(a0)
78: 02e52a23 sw a4,52(a0)
7c: 02952c23 sw s1,56(a0)
80: 03252e23 sw s2,60(a0)
84: 05352023 sw s3,64(a0)
88: 00000793 addi a5,zero,0
8c: 02813403 ld s0,40(sp)
90: 02013483 ld s1,32(sp)
94: 01813903 ld s2,24(sp)
98: 01013983 ld s3,16(sp)
9c: 00813a03 ld s4,8(sp)
a0: 03010113 addi sp,sp,48
a4: 00078513 addi a0,a5,0
a8: 00008067 jalr zero,0(ra)
```

With RVC:

```
 0: 02000813 addi a6,zero,32
 4: 7179     c.addi16sp sp,-48
 6: f422     c.sdsp s0,40(sp)
 8: f026     c.sdsp s1,32(sp)
 a: ec4a     c.sdsp s2,24(sp)
 c: e84e     c.sdsp s3,16(sp)
 e: e452     c.sdsp s4,8(sp)
10: 1800     c.addi4spn s0,sp,48
12: 03056683 lwu a3,48(a0)
16: 1682     c.slli a3,0x20
18: 9281     c.srli a3,0x20
1a: 03456703 lwu a4,52(a0)
1e: 1702     c.slli a4,0x20
20: 9301     c.srli a4,0x20
22: 03856483 lwu s1,56(a0)
26: 1482     c.slli s1,0x20
28: 9081     c.srli s1,0x20
2a: 03c56903 lwu s2,60(a0)
2e: 1902     c.slli s2,0x20
30: 02095913 srli s2,s2,0x20
34: 04056983 lwu s3,64(a0)
38: 1982     c.slli s3,0x20
3a: 0209d993 srli s3,s3,0x20
3e: 09056a03 lwu s4,144(a0)
42: 1a02     c.slli s4,0x20
44: 020a5a13 srli s4,s4,0x20
48: 4325     c.li t1,9
4a: 006a7363 bgeu s4,t1,0x50
4e: 4a01     c.li s4,0
50: d914     c.sw a3,48(a0)
52: d958     c.sw a4,52(a0)
54: dd04     c.sw s1,56(a0)
56: 03252e23 sw s2,60(a0)
5a: 05352023 sw s3,64(a0)
5e: 4781     c.li a5,0
60: 7422     c.ldsp s0,40(sp)
62: 7482     c.ldsp s1,32(sp)
64: 6962     c.ldsp s2,24(sp)
66: 69c2     c.ldsp s3,16(sp)
68: 6a22     c.ldsp s4,8(sp)
6a: 6145     c.addi16sp sp,48
6c: 853e     c.mv a0,a5
6e: 8082     c.jr ra
```

Signed-off-by: Luke Nelson <luke.r.nels@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Cc: Björn Töpel <bjorn.topel@gmail.com> Link: https://lore.kernel.org/bpf/20200721025241.8077-4-luke.r.nels@gmail.com
|
#
bfabff3c |
|
20-Jul-2020 |
Luke Nelson <lukenels@cs.washington.edu> |
bpf, riscv: Modify JIT ctx to support compressed instructions This patch makes the necessary changes to struct rv_jit_context and to bpf_int_jit_compile to support compressed riscv (RVC) instructions in the BPF JIT. It changes the JIT image to be u16 instead of u32, since RVC instructions are 2 bytes as opposed to 4. It also changes ctx->offset and ctx->ninsns to refer to 2-byte instructions rather than 4-byte ones. The riscv PC is required to be 16-bit aligned with or without RVC, so this is sufficient to refer to any valid riscv offset. The code for computing jump offsets in bytes is updated accordingly, and factored into a new "ninsns_rvoff" function to simplify the code. Signed-off-by: Luke Nelson <luke.r.nels@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200721025241.8077-2-luke.r.nels@gmail.com
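The byte-offset helper mentioned above is a one-liner that follows directly from the description (with RVC, ctx->ninsns counts 2-byte units):

```c
/* Convert a count of 2-byte JIT units into a byte offset. */
static inline int ninsns_rvoff(int ninsns)
{
	return ninsns << 1;
}
```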
|
#
073ca6a0 |
|
05-May-2020 |
Luke Nelson <lukenels@cs.washington.edu> |
bpf, riscv: Optimize BPF_JSET BPF_K using andi on RV64 This patch optimizes BPF_JSET BPF_K by using a RISC-V andi instruction when the BPF immediate fits in 12 bits, instead of first loading the immediate to a temporary register. Examples of generated code with and without this optimization:

BPF_JMP_IMM(BPF_JSET, R1, 2, 1) without optimization:

```
20: li   t1,2
24: and  t1,a0,t1
28: bnez t1,0x30
```

BPF_JMP_IMM(BPF_JSET, R1, 2, 1) with optimization:

```
20: andi t1,a0,2
24: bnez t1,0x2c
```

BPF_JMP32_IMM(BPF_JSET, R1, 2, 1) without optimization:

```
20: li   t1,2
24: mv   t2,a0
28: slli t2,t2,0x20
2c: srli t2,t2,0x20
30: slli t1,t1,0x20
34: srli t1,t1,0x20
38: and  t1,t2,t1
3c: bnez t1,0x44
```

BPF_JMP32_IMM(BPF_JSET, R1, 2, 1) with optimization:

```
20: andi t1,a0,2
24: bnez t1,0x2c
```

In these examples, because the upper 32 bits of the sign-extended immediate are 0, BPF_JMP BPF_JSET and BPF_JMP32 BPF_JSET are equivalent and therefore the JIT produces identical code for them. Co-developed-by: Xi Wang <xi.wang@gmail.com> Signed-off-by: Xi Wang <xi.wang@gmail.com> Signed-off-by: Luke Nelson <luke.r.nels@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Björn Töpel <bjorn.topel@gmail.com> Acked-by: Björn Töpel <bjorn.topel@gmail.com> Link: https://lore.kernel.org/bpf/20200506000320.28965-5-luke.r.nels@gmail.com
|
#
ca349a6a |
|
05-May-2020 |
Luke Nelson <lukenels@cs.washington.edu> |
bpf, riscv: Optimize BPF_JMP BPF_K when imm == 0 on RV64 This patch adds an optimization to BPF_JMP (32- and 64-bit) BPF_K for when the BPF immediate is zero. When the immediate is zero, the code can directly use the RISC-V zero register instead of loading a zero immediate to a temporary register first. Co-developed-by: Xi Wang <xi.wang@gmail.com> Signed-off-by: Xi Wang <xi.wang@gmail.com> Signed-off-by: Luke Nelson <luke.r.nels@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Björn Töpel <bjorn.topel@gmail.com> Acked-by: Björn Töpel <bjorn.topel@gmail.com> Link: https://lore.kernel.org/bpf/20200506000320.28965-4-luke.r.nels@gmail.com
|
#
21a099ab |
|
05-May-2020 |
Luke Nelson <lukenels@cs.washington.edu> |
bpf, riscv: Optimize FROM_LE using verifier_zext on RV64 This patch adds two optimizations for BPF_ALU BPF_END BPF_FROM_LE in the RV64 BPF JIT. First, it enables the verifier zero-extension optimization to avoid zero extension when imm == 32. Second, it avoids generating code for imm == 64, since it is equivalent to a no-op. Co-developed-by: Xi Wang <xi.wang@gmail.com> Signed-off-by: Xi Wang <xi.wang@gmail.com> Signed-off-by: Luke Nelson <luke.r.nels@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Björn Töpel <bjorn.topel@gmail.com> Acked-by: Björn Töpel <bjorn.topel@gmail.com> Link: https://lore.kernel.org/bpf/20200506000320.28965-3-luke.r.nels@gmail.com
|
#
0224b2ac |
|
05-May-2020 |
Luke Nelson <lukenels@cs.washington.edu> |
bpf, riscv: Enable missing verifier_zext optimizations on RV64 Commit 66d0d5a854a6 ("riscv: bpf: eliminate zero extension code-gen") added support for the verifier zero-extension optimization on RV64 and commit 46dd3d7d287b ("bpf, riscv: Enable zext optimization for more RV64G ALU ops") enabled it for more instruction cases. However, BPF_LSH BPF_X and BPF_{LSH,RSH,ARSH} BPF_K are still missing the optimization. This patch enables the zero-extension optimization for these remaining cases. Co-developed-by: Xi Wang <xi.wang@gmail.com> Signed-off-by: Xi Wang <xi.wang@gmail.com> Signed-off-by: Luke Nelson <luke.r.nels@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Björn Töpel <bjorn.topel@gmail.com> Acked-by: Björn Töpel <bjorn.topel@gmail.com> Link: https://lore.kernel.org/bpf/20200506000320.28965-2-luke.r.nels@gmail.com
|
#
489553dd |
|
06-Apr-2020 |
Luke Nelson <lukenels@cs.washington.edu> |
riscv, bpf: Fix offset range checking for auipc+jalr on RV64 The existing code in emit_call on RV64 checks that the PC-relative offset to the function fits in 32 bits before calling emit_jump_and_link to emit an auipc+jalr pair. However, this check is incorrect because offsets in the range [2^31 - 2^11, 2^31 - 1] cannot be encoded using auipc+jalr on RV64 (see discussion [1]). The RISC-V spec has recently been updated to reflect this fact [2, 3]. This patch fixes the problem by moving the check on the offset into emit_jump_and_link and modifying it to the correct range of encodable offsets, which is [-2^31 - 2^11, 2^31 - 2^11). This also enforces the check on the offset to other uses of emit_jump_and_link (e.g., BPF_JA) as well. Currently, this bug is unlikely to be triggered, because the memory region from which JITed images are allocated is close enough to kernel text for the offsets to not become too large; and because the bounds on BPF program size are small enough. This patch prevents this problem from becoming an issue if either of these change. [1]: https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/bwWFhBnnZFQ [2]: https://github.com/riscv/riscv-isa-manual/commit/b1e42e09ac55116dbf9de5e4fb326a5a90e4a993 [3]: https://github.com/riscv/riscv-isa-manual/commit/4c1b2066ebd2965a422e41eb262d0a208a7fea07 Signed-off-by: Luke Nelson <luke.r.nels@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200406221604.18547-1-luke.r.nels@gmail.com
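A hedged sketch of the corrected check, derived from the encoding limits stated above (auipc contributes a signed 32-bit page offset, jalr a signed 12-bit addend):

```c
/* Hedged sketch: PC-relative offsets encodable by an auipc+jalr pair. */
static bool in_auipc_jalr_range(s64 val)
{
	return (-(1L << 31) - (1L << 11)) <= val &&
	       val < ((1L << 31) - (1L << 11));
}
```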
|
#
ca6cb544 |
|
04-Mar-2020 |
Luke Nelson <lukenels@cs.washington.edu> |
riscv, bpf: Factor common RISC-V JIT code This patch factors out code that can be used by both the RV64 and RV32 BPF JITs to a common bpf_jit.h and bpf_jit_core.c. Move struct definitions and macro-like functions to header. Rename rv_sb_insn/rv_uj_insn to rv_b_insn/rv_j_insn to match the RISC-V specification. Move reusable functions emit_body() and bpf_int_jit_compile() to bpf_jit_core.c with minor simplifications. Rename emit_insn() and build_{prologue,epilogue}() to be prefixed with "bpf_jit_" as they are no longer static. Rename bpf_jit_comp.c to bpf_jit_comp64.c to be more explicit. Co-developed-by: Xi Wang <xi.wang@gmail.com> Signed-off-by: Xi Wang <xi.wang@gmail.com> Signed-off-by: Luke Nelson <luke.r.nels@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Björn Töpel <bjorn.topel@gmail.com> Acked-by: Björn Töpel <bjorn.topel@gmail.com> Link: https://lore.kernel.org/bpf/20200305050207.4159-2-luke.r.nels@gmail.com
|