History log of /linux-master/tools/testing/selftests/bpf/progs/profiler.inc.h
# 52dbd67d 08-Feb-2024 Jose E. Marchesi <jose.marchesi@oracle.com>

bpf: Abstract loop unrolling pragmas in BPF selftests

[Changes from V1:
- Avoid conflict by rebasing with latest master.]

Some BPF tests use loop unrolling compiler pragmas that are clang
specific and not supported by GCC. These pragmas, along with their
GCC equivalents, are:

#pragma clang loop unroll_count(N)
#pragma GCC unroll N

#pragma clang loop unroll(full)
#pragma GCC unroll 65534

#pragma clang loop unroll(disable)
#pragma GCC unroll 1

#pragma unroll [aka #pragma clang loop unroll(enable)]

There is no GCC equivalent to this pragma. It enables unrolling on
loops that the compiler would not ordinarily unroll even with
-O2|-funroll-loops, but it is not equivalent to full unrolling
either.

This patch adds a new header progs/bpf_compiler.h that defines the
following macros, which correspond to each pair of compiler-specific
pragmas above:

__pragma_loop_unroll_count(N)
__pragma_loop_unroll_full
__pragma_loop_no_unroll
__pragma_loop_unroll

The selftests using loop unrolling pragmas are then changed to include
the header and use these macros in place of the explicit pragmas.
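For reference, a sketch of how such macros can be implemented with
_Pragma() and compiler detection follows; the actual progs/bpf_compiler.h
may differ in detail, and the loop at the end is illustrative:

/* DO_PRAGMA_() stringizes its (already expanded) argument so that a
 * macro can expand to a pragma */
#define DO_PRAGMA_(X) _Pragma(#X)

#ifdef __clang__
#define __pragma_loop_unroll_count(N) DO_PRAGMA_(clang loop unroll_count(N))
#define __pragma_loop_unroll_full     DO_PRAGMA_(clang loop unroll(full))
#define __pragma_loop_no_unroll       DO_PRAGMA_(clang loop unroll(disable))
#define __pragma_loop_unroll          DO_PRAGMA_(clang loop unroll(enable))
#else
#define __pragma_loop_unroll_count(N) DO_PRAGMA_(GCC unroll N)
#define __pragma_loop_unroll_full     DO_PRAGMA_(GCC unroll 65534)
#define __pragma_loop_no_unroll       DO_PRAGMA_(GCC unroll 1)
/* no exact GCC equivalent; approximated by full unrolling */
#define __pragma_loop_unroll          DO_PRAGMA_(GCC unroll 65534)
#endif

A selftest loop then reads:

	__pragma_loop_unroll_count(4)
	for (int i = 0; i < 4; i++)
		sum += arr[i];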

Tested in bpf-next master.
No regressions.

Signed-off-by: Jose E. Marchesi <jose.marchesi@oracle.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/bpf/20240208203612.29611-1-jose.marchesi@oracle.com


# 7e3811cb 26-Dec-2023 Alexei Starovoitov <ast@kernel.org>

selftests/bpf: Convert profiler.c to bpf_cmp.

Convert profiler[123].c to "volatile compare" to compare the barrier_var()
approach vs bpf_cmp_likely() vs bpf_cmp_unlikely().

bpf_cmp_unlikely() produces correct code, but takes much longer to verify:

./veristat -C -e prog,insns,states before after_with_unlikely
Program Insns (A) Insns (B) Insns (DIFF) States (A) States (B) States (DIFF)
------------------------------------ --------- --------- ------------------ ---------- ---------- -----------------
kprobe__proc_sys_write 1603 19606 +18003 (+1123.08%) 123 1678 +1555 (+1264.23%)
kprobe__vfs_link 11815 70305 +58490 (+495.05%) 971 4967 +3996 (+411.53%)
kprobe__vfs_symlink 5464 42896 +37432 (+685.07%) 434 3126 +2692 (+620.28%)
kprobe_ret__do_filp_open 5641 44578 +38937 (+690.25%) 446 3162 +2716 (+608.97%)
raw_tracepoint__sched_process_exec 2770 35962 +33192 (+1198.27%) 226 3121 +2895 (+1280.97%)
raw_tracepoint__sched_process_exit 1526 2135 +609 (+39.91%) 133 208 +75 (+56.39%)
raw_tracepoint__sched_process_fork 265 337 +72 (+27.17%) 19 24 +5 (+26.32%)
tracepoint__syscalls__sys_enter_kill 18782 140407 +121625 (+647.56%) 1286 12176 +10890 (+846.81%)

bpf_cmp_likely() is equivalent to barrier_var():

./veristat -C -e prog,insns,states before after_with_likely
Program Insns (A) Insns (B) Insns (DIFF) States (A) States (B) States (DIFF)
------------------------------------ --------- --------- -------------- ---------- ---------- -------------
kprobe__proc_sys_write 1603 1663 +60 (+3.74%) 123 127 +4 (+3.25%)
kprobe__vfs_link 11815 12090 +275 (+2.33%) 971 971 +0 (+0.00%)
kprobe__vfs_symlink 5464 5448 -16 (-0.29%) 434 426 -8 (-1.84%)
kprobe_ret__do_filp_open 5641 5739 +98 (+1.74%) 446 446 +0 (+0.00%)
raw_tracepoint__sched_process_exec 2770 2608 -162 (-5.85%) 226 216 -10 (-4.42%)
raw_tracepoint__sched_process_exit 1526 1526 +0 (+0.00%) 133 133 +0 (+0.00%)
raw_tracepoint__sched_process_fork 265 265 +0 (+0.00%) 19 19 +0 (+0.00%)
tracepoint__syscalls__sys_enter_kill 18782 18970 +188 (+1.00%) 1286 1286 +0 (+0.00%)
kprobe__proc_sys_write 2700 2809 +109 (+4.04%) 107 109 +2 (+1.87%)
kprobe__vfs_link 12238 12366 +128 (+1.05%) 267 269 +2 (+0.75%)
kprobe__vfs_symlink 7139 7365 +226 (+3.17%) 167 175 +8 (+4.79%)
kprobe_ret__do_filp_open 7264 7070 -194 (-2.67%) 180 182 +2 (+1.11%)
raw_tracepoint__sched_process_exec 3768 3453 -315 (-8.36%) 211 199 -12 (-5.69%)
raw_tracepoint__sched_process_exit 3138 3138 +0 (+0.00%) 83 83 +0 (+0.00%)
raw_tracepoint__sched_process_fork 265 265 +0 (+0.00%) 19 19 +0 (+0.00%)
tracepoint__syscalls__sys_enter_kill 26679 24327 -2352 (-8.82%) 1067 1037 -30 (-2.81%)
kprobe__proc_sys_write 1833 1833 +0 (+0.00%) 157 157 +0 (+0.00%)
kprobe__vfs_link 9995 10127 +132 (+1.32%) 803 803 +0 (+0.00%)
kprobe__vfs_symlink 5606 5672 +66 (+1.18%) 451 451 +0 (+0.00%)
kprobe_ret__do_filp_open 5716 5782 +66 (+1.15%) 462 462 +0 (+0.00%)
raw_tracepoint__sched_process_exec 3042 3042 +0 (+0.00%) 278 278 +0 (+0.00%)
raw_tracepoint__sched_process_exit 1680 1680 +0 (+0.00%) 146 146 +0 (+0.00%)
raw_tracepoint__sched_process_fork 299 299 +0 (+0.00%) 25 25 +0 (+0.00%)
tracepoint__syscalls__sys_enter_kill 18372 18372 +0 (+0.00%) 1558 1558 +0 (+0.00%)

default (mcpu=v3), no_alu32, cpuv4 have similar differences.

Note one place where bpf_nop_mov() is used to work around the verifier's
lack of a link between a scalar register and its spill to the stack.
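The conversion itself looks roughly like this (a sketch: barrier_var() is
the inline-asm compiler barrier from the selftests' bpf_misc.h,
bpf_cmp_likely() the "volatile compare" macro from bpf_experimental.h;
len, payload and MAX_LEN are illustrative):

	unsigned long len;	/* scalar computed earlier */

	/* old approach: hide len from the optimizer so clang cannot
	 * reshape the bounds check the verifier relies on */
	barrier_var(len);
	if (len <= MAX_LEN)
		payload += len;

	/* new approach: the comparison itself is emitted as inline asm,
	 * pinning the branch shape for the verifier */
	if (bpf_cmp_likely(len, <=, MAX_LEN))
		payload += len;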

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20231226191148.48536-7-alexei.starovoitov@gmail.com


# 495d2d81 26-Dec-2023 Alexei Starovoitov <ast@kernel.org>

selftests/bpf: Attempt to build BPF programs with -Wsign-compare

GCC's -Wall includes -Wsign-compare while clang's does not.
Since BPF programs are built with clang, we need to add this flag explicitly
to catch problematic comparisons like:

int i = -1;
unsigned int j = 1;
if (i < j) // this is false.

long i = -1;
unsigned int j = 1;
if (i < j) // this is true.

C standard for reference:

- If either operand is unsigned long the other shall be converted to unsigned long.

- Otherwise, if one operand is a long int and the other unsigned int, then if a
long int can represent all the values of an unsigned int, the unsigned int
shall be converted to a long int; otherwise both operands shall be converted to
unsigned long int.

- Otherwise, if either operand is long, the other shall be converted to long.

- Otherwise, if either operand is unsigned, the other shall be converted to unsigned.
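Both cases can be reproduced with a small host-side program (plain C, not
BPF; assumes LP64, where long is 64 bits):

#include <stdio.h>

int main(void)
{
	int i32 = -1;
	long i64 = -1;
	unsigned int j = 1;

	/* i32 is converted to unsigned int and becomes UINT_MAX, so the
	 * comparison is false; -Wsign-compare warns here */
	printf("int  -1 < 1u: %d\n", i32 < j);	/* prints 0 */

	/* long can represent every unsigned int value, so j is converted
	 * to long and the comparison behaves as expected */
	printf("long -1 < 1u: %d\n", i64 < j);	/* prints 1 */

	return 0;
}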

Unfortunately clang's -Wsign-compare is very noisy.
It complains about (s32)a == (u32)b, which is safe and doesn't have
surprising behavior.

This patch fixes some of the issues. It needs a follow up to fix the rest.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/bpf/20231226191148.48536-2-alexei.starovoitov@gmail.com


# ba8ea723 12-Oct-2023 Artem Savkov <asavkov@redhat.com>

bpf: Change syscall_nr type to int in struct syscall_tp_t

linux-rt-devel tree contains a patch (b1773eac3f29c ("sched: Add support
for lazy preemption")) that adds an extra member to struct trace_entry.
This causes the offset of the args field in struct trace_event_raw_sys_enter
to be different from the one in struct syscall_trace_enter:

struct trace_event_raw_sys_enter {
struct trace_entry ent; /* 0 12 */

/* XXX last struct has 3 bytes of padding */
/* XXX 4 bytes hole, try to pack */

long int id; /* 16 8 */
long unsigned int args[6]; /* 24 48 */
/* --- cacheline 1 boundary (64 bytes) was 8 bytes ago --- */
char __data[]; /* 72 0 */

/* size: 72, cachelines: 2, members: 4 */
/* sum members: 68, holes: 1, sum holes: 4 */
/* paddings: 1, sum paddings: 3 */
/* last cacheline: 8 bytes */
};

struct syscall_trace_enter {
struct trace_entry ent; /* 0 12 */

/* XXX last struct has 3 bytes of padding */

int nr; /* 12 4 */
long unsigned int args[]; /* 16 0 */

/* size: 16, cachelines: 1, members: 3 */
/* paddings: 1, sum paddings: 3 */
/* last cacheline: 16 bytes */
};

This, in turn, causes perf_event_set_bpf_prog() to fail while running the
bpf test_profiler testcase, because max_ctx_offset is calculated based on
the former struct, while off is based on the latter:

	if (is_tracepoint || is_syscall_tp) {
		int off = trace_event_get_offsets(event->tp_event);

		if (prog->aux->max_ctx_offset > off)
			return -EACCES;
	}

What the bpf program actually gets is a pointer to struct
syscall_tp_t, defined in kernel/trace/trace_syscalls.c. This patch fixes
the problem by aligning struct syscall_tp_t with struct
syscall_trace_(enter|exit) and by changing the tests to use these structs
to dereference the context.
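On the BPF side the idea looks roughly like this (a sketch: struct
trace_entry, SEC() and bpf_printk() come from vmlinux.h and
<bpf/bpf_helpers.h>; the 6-element args bound and all names are
illustrative):

/* mirror struct syscall_trace_enter instead of
 * trace_event_raw_sys_enter, so member offsets match what the kernel
 * actually hands to the program */
struct syscall_enter_ctx {
	struct trace_entry ent;	/* size comes from the running kernel */
	int nr;			/* syscall number, an int as in the kernel */
	unsigned long args[6];	/* bounded stand-in for the flexible array */
};

SEC("tracepoint/syscalls/sys_enter_kill")
int sys_enter_kill(struct syscall_enter_ctx *ctx)
{
	int syscall_nr = ctx->nr;	/* stays correct even when
					 * trace_entry grows, as on -rt */
	long target_pid = ctx->args[0];	/* first argument of kill(2) */

	bpf_printk("kill nr=%d pid=%ld", syscall_nr, target_pid);
	return 0;
}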

Signed-off-by: Artem Savkov <asavkov@redhat.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Link: https://lore.kernel.org/bpf/20231013054219.172920-1-asavkov@redhat.com


# 328bafc9 31-Mar-2023 Jiri Olsa <jolsa@kernel.org>

selftests/bpf: Add err.h header

Move the error macros from profiler.inc.h to a new err.h header.
It will be used in following changes.

Also add a PTR_ERR macro that will be used in following changes.
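The macros follow the kernel's usual error-pointer convention; a sketch of
what such a header can contain (details may differ from the actual err.h):

#ifndef __ERR_H__
#define __ERR_H__

#define MAX_ERRNO 4095

/* kernel convention: the top MAX_ERRNO values of the address space
 * encode negative errnos */
#define IS_ERR_VALUE(x) ((unsigned long)(x) >= (unsigned long)-MAX_ERRNO)

static inline long IS_ERR(const void *ptr)
{
	return IS_ERR_VALUE((unsigned long)ptr);
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;	/* recover the negative errno */
}

#endif /* __ERR_H__ */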

Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20230331093157.1749137-2-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>


# 1b5e3853 27-Jan-2023 Ilya Leoshkevich <iii@linux.ibm.com>

selftests/bpf: Fix profiler on s390x

Use bpf_probe_read_kernel() and bpf_probe_read_kernel_str() instead
of bpf_probe_read() and bpf_probe_read_str().
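The change amounts to making the kernel address space explicit at each
read site, e.g. (a sketch; dst, buf and the source pointers are
illustrative):

	/* before: address-space-ambiguous helpers, which are unavailable
	 * on s390x, where user and kernel address spaces are disjoint */
	bpf_probe_read(&dst, sizeof(dst), kern_ptr);
	bpf_probe_read_str(buf, sizeof(buf), kern_name);

	/* after: explicitly kernel-space reads, available on all archs */
	bpf_probe_read_kernel(&dst, sizeof(dst), kern_ptr);
	bpf_probe_read_kernel_str(buf, sizeof(buf), kern_name);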

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Link: https://lore.kernel.org/r/20230128000650.1516334-21-iii@linux.ibm.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>


# abf08576 12-Jan-2023 Christian Brauner <brauner@kernel.org>

fs: port vfs_*() helpers to struct mnt_idmap

Convert to struct mnt_idmap.

Last cycle we merged the necessary infrastructure in
256c8aed2b42 ("fs: introduce dedicated idmap type for mounts").
This is just the conversion to struct mnt_idmap.

Currently we still pass around the plain namespace that was attached to a
mount. This is in general pretty convenient, but it makes it easy to
conflate namespaces that are relevant on the filesystem level with
namespaces that are relevant on the mount level. Especially for non-vfs
developers without detailed knowledge in this area, this can be a potential
source of bugs.

Once the conversion to struct mnt_idmap is done, all helpers down to the
really low-level helpers will take a struct mnt_idmap argument instead of
two namespace arguments. This way it becomes impossible to conflate the
two, eliminating the possibility of such bugs. All of the vfs and all
filesystems only operate on struct mnt_idmap.
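Schematically, for a helper like vfs_link() the conversion looks as
follows (signatures sketched for illustration and may not match the tree
exactly):

/* before: the mount's idmapping travels as a plain user namespace,
 * easy to mix up with a filesystem-level namespace */
int vfs_link(struct dentry *old_dentry, struct user_namespace *mnt_userns,
	     struct inode *dir, struct dentry *new_dentry,
	     struct inode **delegated_inode);

/* after: a dedicated type that cannot be conflated with a namespace */
int vfs_link(struct dentry *old_dentry, struct mnt_idmap *idmap,
	     struct inode *dir, struct dentry *new_dentry,
	     struct inode **delegated_inode);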

Acked-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Christian Brauner (Microsoft) <brauner@kernel.org>


# e299bcd4 31-Mar-2022 Nikolay Borisov <nborisov@suse.com>

selftests/bpf: Fix vfs_link kprobe definition

Since commit 6521f8917082 ("namei: prepare for idmapped mounts")
vfs_link's prototype has changed, but the kprobe definition in the
profiler selftest wasn't updated in turn. The result is that all
arguments after the first are now passed in different registers, which
means the self-test has been broken ever since. Fix it by updating the
kprobe definition accordingly.
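The shape of the fix (a sketch; BPF_KPROBE() is the argument-unpacking
macro from <bpf/bpf_tracing.h>, and the parameter list mirrors the
post-6521f8917082 prototype):

/* before, matching the old prototype, dir and new_dentry were read
 * from the wrong registers:
 *
 *   int BPF_KPROBE(kprobe__vfs_link, struct dentry *old_dentry,
 *                  struct inode *dir, struct dentry *new_dentry)
 */
SEC("kprobe/vfs_link")
int BPF_KPROBE(kprobe__vfs_link, struct dentry *old_dentry,
	       struct user_namespace *mnt_userns, struct inode *dir,
	       struct dentry *new_dentry)
{
	/* dir and new_dentry now come from the correct registers */
	return 0;
}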

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220331140949.1410056-1-nborisov@suse.com


# fb355812 24-Nov-2020 Andrei Matei <andreimatei1@gmail.com>

bpf: Fix selftest compilation on clang 11

Before this patch, profiler.inc.h wouldn't compile with clang-11, which
predates the __builtin_preserve_enum_value LLVM builtin introduced in
https://reviews.llvm.org/D83242.

Another test that uses this builtin (test_core_enumval) is conditionally
skipped if the compiler is too old. In that spirit, this patch inhibits
part of populate_cgroup_info(), which needs this CO-RE builtin. The
selftests build again on clang-11.

The affected test (the profiler test) doesn't pass on clang-11 because
it's missing https://reviews.llvm.org/D85570, but at least the test suite
as a whole compiles. The test's expected failure is already called out in
the README.
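A sketch of the guarding pattern (the exact condition used in the patch
may differ; cgrp_id is illustrative, and __has_builtin is a clang
feature test that clang-11 already understands):

#if __has_builtin(__builtin_preserve_enum_value)
	/* only compilers that have the builtin ever see this code,
	 * so the file still builds with clang-11 */
	if (bpf_core_enum_value_exists(enum cgroup_subsys_id, pids_cgrp_id))
		cgrp_id = bpf_core_enum_value(enum cgroup_subsys_id,
					      pids_cgrp_id);
#endif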

Signed-off-by: Andrei Matei <andreimatei1@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Tested-by: Florian Lehner <dev@der-flo.net>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20201125035255.17970-1-andreimatei1@gmail.com


# 47254777 22-Oct-2020 Andrii Nakryiko <andrii@kernel.org>

selftest/bpf: Fix profiler test using CO-RE relocation for enums

Instead of hard-coding an invalid pids_cgrp_id, use CO-RE enum relocations
to detect the presence of that enum value (which depends on the host
kernel's Kconfig) and to capture its actual value on the host's kernel.
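The mechanism, roughly (a sketch using the CO-RE helpers from
<bpf/bpf_core_read.h>; the fallback value is illustrative):

	int cgrp_id;

	/* does the host kernel's enum cgroup_subsys_id have a
	 * pids_cgrp_id enumerator at all (i.e. CONFIG_CGROUP_PIDS=y)? */
	if (bpf_core_enum_value_exists(enum cgroup_subsys_id, pids_cgrp_id))
		/* yes: relocate to its value on that kernel */
		cgrp_id = bpf_core_enum_value(enum cgroup_subsys_id,
					      pids_cgrp_id);
	else
		cgrp_id = -1;	/* subsystem compiled out on this host */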

Fixes: 03d4d13fab3f ("selftests/bpf: Add profiler test")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Tested-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20201022202739.3667367-1-andrii@kernel.org


# 03d4d13f 08-Oct-2020 Alexei Starovoitov <ast@kernel.org>

selftests/bpf: Add profiler test

The main purpose of the profiler test is to check different llvm code
generation patterns and to make sure the verifier can load these large
programs.

Note that the profiler.inc.h test doesn't follow strict kernel coding style.
The code was formatted in the kernel style, but variable declarations are
kept as-is to preserve the original llvm IR pattern.

profiler1.c should pass with both older and newer llvm.

profiler[23].c may fail on older llvm versions that don't have:
https://reviews.llvm.org/D85570
because llvm may perform a speculative code motion optimization that
generates code like this:

// r9 is a pointer to map_value
// r7 is a scalar
17: bf 96 00 00 00 00 00 00 r6 = r9
18: 0f 76 00 00 00 00 00 00 r6 += r7
19: a5 07 01 00 01 01 00 00 if r7 < 257 goto +1
20: bf 96 00 00 00 00 00 00 r6 = r9
// r6 is used here

The verifier will reject such code with the error:
"math between map_value pointer and register with unbounded min value is not allowed"
At insn 18, r7 is indeed unbounded. The later insn 19 checks the bounds, and
insn 20 undoes the map_value addition. It is currently impossible for the
verifier to understand such speculative pointer arithmetic, hence llvm
D85570 addresses it on the compiler side.
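In C, the problematic pattern is roughly the following (a sketch; names
are illustrative):

	/* val points into a map value; idx is a scalar with no known
	 * bounds yet */
	char *p = val;
	if (idx < 257)
		p += idx;	/* older llvm speculatively performs this
				 * add before the check (insn 18) and
				 * resets p on the other path (insn 20) */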

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20201009011240.48506-4-alexei.starovoitov@gmail.com