Commit graph

6 commits

Andrii Nakryiko
df8ff35311 libbpf: Merge selftests' bpf_trace_helpers.h into libbpf's bpf_tracing.h
Move the BPF_PROG, BPF_KPROBE, and BPF_KRETPROBE macros into libbpf's bpf_tracing.h
header to make them available to non-selftests users.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200229231112.1240137-5-andriin@fb.com
2020-03-02 16:25:14 -08:00
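
For illustration, after this merge a standalone (non-selftests) BPF program can
use the macros straight from the installed libbpf headers. A minimal sketch,
with do_unlinkat as an arbitrary probe target and an illustrative program name:

/* Minimal sketch: BPF_KPROBE now comes from libbpf's own bpf_tracing.h,
 * so no selftests-local header is needed. The probed function
 * (do_unlinkat) and program name are examples only.
 */
#include <linux/ptrace.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

SEC("kprobe/do_unlinkat")
int BPF_KPROBE(handle_unlinkat, int dfd, void *name)
{
        /* dfd is extracted from pt_regs by the macro-generated wrapper */
        bpf_printk("do_unlinkat: dfd=%d\n", dfd);
        return 0;
}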
Andrii Nakryiko
396f544ed5 selftests/bpf: Fix BPF_KRETPROBE macro and use it in attach_probe test
For kretprobes, there is no point in capturing input arguments from pt_regs:
they will most probably have been clobbered by the time the probed kernel
function returns. So switch BPF_KRETPROBE to accept zero or one argument
(the optional return result).

Fixes: ac065870d9 ("selftests/bpf: Add BPF_PROG, BPF_KPROBE, and BPF_KRETPROBE macros")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200229231112.1240137-4-andriin@fb.com
2020-03-02 16:25:14 -08:00
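
A sketch of the macro after this fix, where the single optional argument is
the probed function's return value (probed function and names are
illustrative; the header location follows the companion merge above):

/* Sketch: BPF_KRETPROBE now takes zero or one argument, the optional
 * return value of the probed function; input arguments are deliberately
 * not supported, since the registers are clobbered by return time.
 */
#include <linux/ptrace.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

SEC("kretprobe/do_unlinkat")
int BPF_KRETPROBE(handle_unlinkat_exit, long ret)
{
        bpf_printk("do_unlinkat returned %ld\n", ret);
        return 0;
}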
Toke Høiland-Jørgensen
3e689141e6 selftests: Use consistent include paths for libbpf
Fix all selftests to include libbpf header files with the bpf/ prefix, to
be consistent with external users of the library. Also ensure that all
includes of exported libbpf header files (those that are exported on 'make
install' of the library) use bracketed includes instead of quoted ones.

To avoid breaking the build, keep the old include path until everything has
been changed to the new one; a subsequent patch will remove it.

Fixes: 6910d7d386 ("selftests/bpf: Ensure bpf_helper_defs.h are taken from selftests dir")
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/157952560568.1683545.9649335788846513446.stgit@toke.dk
2020-01-20 16:37:45 -08:00
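
Concretely, the mechanical change in each selftest looks like this, with
bpf_helpers.h as one representative exported header:

/* before: quoted include, relying on a selftests-local include path */
#include "bpf_helpers.h"

/* after: bracketed include with the bpf/ prefix, matching how external
 * users of the installed library include its headers
 */
#include <bpf/bpf_helpers.h>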
Andrii Nakryiko
ac065870d9 selftests/bpf: Add BPF_PROG, BPF_KPROBE, and BPF_KRETPROBE macros
Streamline the BPF_TRACE_x macro by moving the return type and section
attribute definition out of the macro itself. That makes those functions look
similar in source code to other BPF programs. Additionally, simplify its usage
by determining the number of arguments automatically (so there is just a
single BPF_TRACE instead of a family of BPF_TRACE_1, BPF_TRACE_2, etc.). Also,
allow a more natural function argument syntax, without commas in between
argument type and name.

Given this helper is useful not only for tracing tp_btf/fentry/fexit programs,
but could also be used for LSM programs and others following the same pattern,
rename the BPF_TRACE macro to the more generic BPF_PROG. Existing BPF_TRACE_x
usages in selftests are converted to the new BPF_PROG macro.

Following the same pattern, define BPF_KPROBE and BPF_KRETPROBE macros for
nicer handling of kprobe/kretprobe arguments, respectively. BPF_KRETPROBE
adopts the same convention used by fexit programs: the last defined argument
is the probed function's return result.

v4->v5:
- fix test_overhead test (__set_task_comm is void) (Alexei);

v3->v4:
- rebased and fixed one more BPF_TRACE_x occurrence (Alexei);

v2->v3:
- rename to shorter and as generic BPF_PROG (Alexei);

v1->v2:
- verified GCC handles pragmas as expected;
- added descriptions to macros;
- converted new STRUCT_OPS selftest to BPF_HANDLER (worked as expected);
- added the original context as a 'ctx' parameter, for cases where it has to
  be passed into BPF helpers. This might cause an accidental naming collision,
  unfortunately, but at least it's easy to work around. Fortunately, this
  situation produces a quite legible compilation error:

progs/bpf_dctcp.c:46:6: error: redefinition of 'ctx' with a different type: 'int' vs 'unsigned long long *'
        int ctx = 123;
            ^
progs/bpf_dctcp.c:42:6: note: previous definition is here
void BPF_HANDLER(dctcp_init, struct sock *sk)
     ^
./bpf_trace_helpers.h:58:32: note: expanded from macro 'BPF_HANDLER'
____##name(unsigned long long *ctx, ##args)

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20200110211634.1614739-1-andriin@fb.com
2020-01-10 14:02:07 -08:00
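
To illustrate the resulting style, here is a minimal sketch of an fentry
program modeled on the __set_task_comm selftests mentioned above (program name
is illustrative, and current header locations are used); the return type and
SEC() attribute now sit outside the macro, and the argument count is inferred
automatically:

/* Sketch of BPF_PROG usage, modeled on the __set_task_comm programs in
 * selftests; the program name is illustrative.
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

SEC("fentry/__set_task_comm")
int BPF_PROG(handle_set_task_comm, struct task_struct *tsk,
             const char *buf, bool exec)
{
        /* the raw u64 array remains reachable by the name 'ctx' here,
         * in case it must be passed to a BPF helper
         */
        return 0;
}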
Martin KaFai Lau
f9a7cf6eb1 bpf: Introduce BPF_TRACE_x helper for the tracing tests
For BPF_PROG_TYPE_TRACING, the bpf_prog's ctx is an array of u64.
This patch borrows the idea from BPF_CALL_x in filter.h to
convert each u64 to the arg type of the traced function.

The new BPF_TRACE_x has an argument to specify the return type of a bpf_prog.
It will be used by future TCP-ops bpf_progs that may return "void".

The new macros are defined in the new header file "bpf_trace_helpers.h".
It is under selftests/bpf/ for now.  It could be moved to libbpf later
after seeing more upcoming non-tracing use cases.

The tests are also changed to use these new macros. Hence, the
k[s]u8/16/32/64 types are no longer needed and are removed from
bpf_helpers.h.

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191123202504.1502696-1-kafai@fb.com
2019-11-24 17:12:11 -08:00
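
Conceptually, the macro generates a wrapper that casts each u64 slot of the
raw context to the corresponding argument type of the traced function. A rough
hand-expanded sketch (not the literal macro output; names are illustrative):

/* Rough hand-expanded sketch of what BPF_TRACE_x generates for a traced
 * function with two pointer arguments.
 */
#include <bpf/bpf_helpers.h>

struct task_struct; /* opaque here; only the pointer value is used */

char LICENSE[] SEC("license") = "GPL";

SEC("fentry/__set_task_comm")
int handle_set_task_comm(unsigned long long *ctx)
{
        /* each argument of the traced function occupies one u64 slot */
        struct task_struct *tsk = (struct task_struct *)ctx[0];
        const char *buf = (const char *)ctx[1];

        bpf_printk("task %p renamed to %s\n", tsk, buf);
        return 0;
}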
Alexei Starovoitov
c4781e37c6 selftests/bpf: Add BPF trampoline performance test
Add a test that benchmarks different ways of attaching a BPF program to a
kernel function. Here are the results for a 2.4GHz x86 CPU on a kernel
without mitigations:
$ ./test_progs -n 49 -v|grep events
task_rename base	2743K events per sec
task_rename kprobe	2419K events per sec
task_rename kretprobe	1876K events per sec
task_rename raw_tp	2578K events per sec
task_rename fentry	2710K events per sec
task_rename fexit	2685K events per sec

On a kernel with retpoline:
$ ./test_progs -n 49 -v|grep events
task_rename base	2401K events per sec
task_rename kprobe	1930K events per sec
task_rename kretprobe	1485K events per sec
task_rename raw_tp	2053K events per sec
task_rename fentry	2351K events per sec
task_rename fexit	2185K events per sec

All 5 approaches:
- kprobe/kretprobe in __set_task_comm()
- raw tracepoint in trace_task_rename()
- fentry/fexit in __set_task_comm()
are roughly equivalent.

__set_task_comm() by itself is quite fast, so any extra instructions add up.
Until the BPF trampoline was introduced, the fastest mechanism was the raw
tracepoint, with kprobe via ftrace second best. kretprobe is slow due to the
trap. The new fentry/fexit methods via BPF trampoline are clearly the fastest,
and the difference is more pronounced with retpoline on, since the BPF
trampoline doesn't use indirect jumps.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20191122011515.255371-1-ast@kernel.org
2019-11-24 16:58:46 -08:00
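
For reference, the attach mechanisms compared above can be sketched side by
side, modeled on the benchmark's programs with bodies reduced to stubs
(program names illustrative, current include paths used):

/* The five attach mechanisms benchmarked, targeting the same rename
 * path in the kernel.
 */
#include <linux/bpf.h>
#include <linux/ptrace.h>
#include <bpf/bpf_helpers.h>

char LICENSE[] SEC("license") = "GPL";

SEC("kprobe/__set_task_comm")
int prog1(struct pt_regs *ctx) { return 0; }

SEC("kretprobe/__set_task_comm")
int prog2(struct pt_regs *ctx) { return 0; }

SEC("raw_tp/task_rename")
int prog3(struct bpf_raw_tracepoint_args *ctx) { return 0; }

SEC("fentry/__set_task_comm")
int prog4(unsigned long long *ctx) { return 0; }

SEC("fexit/__set_task_comm")
int prog5(unsigned long long *ctx) { return 0; }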