linux/kernel/bpf
Andrii Nakryiko c50c0b57a5 bpf: fix mark_all_scalars_precise use in mark_chain_precision
When precision backtracking bails out due to some unsupported sequence
of instructions (e.g., stack access through a register other than r10), we
need to mark all SCALAR registers as precise to be safe. Currently,
though, we mark SCALARs precise only starting from the state in which we
detected the unsupported condition, which could be one of the parent states of
the actual current state. This can leave some registers not marked as precise
even though they should be. So make sure we start marking scalars as precise
from the current state (env->cur_state).

Further, we don't currently detect the situation where we end up with some
stack slots marked as needing precision, but have run out of available
states in which to find the instructions that populate those stack slots. This is
akin to the `i >= func->allocated_stack / BPF_REG_SIZE` check and should be
handled similarly by falling back to marking all SCALARs precise. Add
this check when we run out of states.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20230505043317.3629845-8-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-05-04 22:35:35 -07:00
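
To make the fallback concrete, here is a small userspace toy model (plain C, not kernel code; every name except mark_all_scalars_precise() and env->cur_state, which the commit message itself mentions, is made up for illustration). It sketches why the conservative marking has to start from the current, deepest state and walk up through its parents: starting from a parent state would leave scalars in the newer states unmarked.

/* Toy model (userspace, not kernel code) of the conservative fallback the
 * commit describes: when precision backtracking gives up, every SCALAR
 * register in every state from the *current* state up through all parent
 * states must be marked precise.  All struct and field names here are
 * illustrative, not the kernel's. */
#include <stdbool.h>
#include <stdio.h>

#define TOY_NREGS 11 /* r0..r10 */

struct toy_reg {
	bool is_scalar;
	bool precise;
};

struct toy_state {
	struct toy_state *parent;     /* older verifier state, or NULL */
	struct toy_reg regs[TOY_NREGS];
};

/* Roughly mimics what mark_all_scalars_precise(env, st) is described as
 * doing: walk st and all of its parents, forcing every scalar register to
 * precise.  Passing the current (deepest) state here is the point of the
 * fix: starting from a parent would skip the newer states below it. */
static void toy_mark_all_scalars_precise(struct toy_state *st)
{
	for (; st; st = st->parent)
		for (int i = 0; i < TOY_NREGS; i++)
			if (st->regs[i].is_scalar)
				st->regs[i].precise = true;
}

int main(void)
{
	struct toy_state parent = { .parent = NULL };
	struct toy_state cur = { .parent = &parent };

	parent.regs[1].is_scalar = true;
	cur.regs[2].is_scalar = true;

	/* The fallback must start from `cur` (the analogue of
	 * env->cur_state), not from `parent`, so cur's r2 is marked too. */
	toy_mark_all_scalars_precise(&cur);

	printf("parent r1 precise=%d, cur r2 precise=%d\n",
	       parent.regs[1].precise, cur.regs[2].precise);
	return 0;
}

With any C compiler, both registers come out precise; if toy_mark_all_scalars_precise() were instead called on &parent, cur's r2 would stay imprecise, which is the gap the commit closes.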
preload bpf: iterators: Split iterators.lskel.h into little- and big- endian versions 2023-01-28 12:45:15 -08:00
arraymap.c bpf: return long from bpf_map_ops funcs 2023-03-22 15:11:30 -07:00
bloom_filter.c bpf: compute hashes in bloom filter similar to hashmap 2023-04-02 08:44:49 -07:00
bpf_cgrp_storage.c bpf: Teach verifier that certain helpers accept NULL pointer. 2023-04-04 16:57:16 -07:00
bpf_inode_storage.c Networking changes for 6.4. 2023-04-26 16:07:23 -07:00
bpf_iter.c bpf: implement numbers iterator 2023-03-08 16:19:51 -08:00
bpf_local_storage.c bpf: Handle NULL in bpf_local_storage_free. 2023-04-12 10:27:50 -07:00
bpf_lru_list.c bpf_lru_list: Read double-checked variable once without lock 2021-02-10 15:54:26 -08:00
bpf_lru_list.h printk: stop including cache.h from printk.h 2022-05-13 07:20:07 -07:00
bpf_lsm.c bpf: Fix the kernel crash caused by bpf_setsockopt(). 2023-01-26 23:26:40 -08:00
bpf_struct_ops.c bpf: Check IS_ERR for the bpf_map_get() return value 2023-03-24 12:40:47 -07:00
bpf_struct_ops_types.h bpf: Add dummy BPF STRUCT_OPS for test purpose 2021-11-01 14:10:00 -07:00
bpf_task_storage.c bpf: Teach verifier that certain helpers accept NULL pointer. 2023-04-04 16:57:16 -07:00
btf.c bpf: minimal support for programs hooked into netfilter framework 2023-04-21 11:34:14 -07:00
cgroup.c bpf: Don't EFAULT for getsockopt with optval=NULL 2023-04-21 17:09:53 +02:00
cgroup_iter.c bpf: Pin the start cgroup in cgroup_iter_seq_init() 2022-11-21 17:40:42 +01:00
core.c bpf: Support 64-bit pointers to kfuncs 2023-04-13 21:36:41 -07:00
cpumap.c bpf: return long from bpf_map_ops funcs 2023-03-22 15:11:30 -07:00
cpumask.c bpf: Treat KF_RELEASE kfuncs as KF_TRUSTED_ARGS 2023-03-25 16:56:22 -07:00
devmap.c bpf: return long from bpf_map_ops funcs 2023-03-22 15:11:30 -07:00
disasm.c bpf: Relicense disassembler as GPL-2.0-only OR BSD-2-Clause 2021-09-02 14:49:23 +02:00
disasm.h bpf: Relicense disassembler as GPL-2.0-only OR BSD-2-Clause 2021-09-02 14:49:23 +02:00
dispatcher.c bpf: Synchronize dispatcher update with bpf_dispatcher_xdp_func 2022-12-14 12:02:14 -08:00
hashtab.c bpf: optimize hashmap lookups when key_size is divisible by 4 2023-04-01 15:08:19 -07:00
helpers.c bpf: Add bpf_dynptr_clone 2023-04-27 10:40:47 +02:00
inode.c fs: port inode_init_owner() to mnt_idmap 2023-01-19 09:24:28 +01:00
Kconfig rcu: Make the TASKS_RCU Kconfig option be selected 2022-04-20 16:52:58 -07:00
link_iter.c bpf: Add bpf_link iterator 2022-05-10 11:20:45 -07:00
local_storage.c bpf: return long from bpf_map_ops funcs 2023-03-22 15:11:30 -07:00
log.c bpf: Relax log_buf NULL conditions when log_level>0 is requested 2023-04-11 18:05:44 +02:00
lpm_trie.c bpf: return long from bpf_map_ops funcs 2023-03-22 15:11:30 -07:00
Makefile bpf: Split off basic BPF verifier log into separate file 2023-04-11 18:05:42 +02:00
map_in_map.c bpf: Remove btf_field_offs, use btf_record's fields instead 2023-04-15 17:36:49 -07:00
map_in_map.h bpf: Add map_meta_equal map ops 2020-08-28 15:41:30 +02:00
map_iter.c bpf: Introduce MEM_RDONLY flag 2021-12-18 13:27:41 -08:00
memalloc.c bpf: Add a few bpf mem allocator functions 2023-03-25 19:52:51 -07:00
mmap_unlock_work.h bpf: Introduce helper bpf_find_vma 2021-11-07 11:54:51 -08:00
net_namespace.c net: Add includes masked by netdevice.h including uapi/bpf.h 2021-12-29 20:03:05 -08:00
offload.c bpf: offload map memory usage 2023-03-07 09:33:43 -08:00
percpu_freelist.c bpf: Initialize same number of free nodes for each pcpu_freelist 2022-11-11 12:05:14 -08:00
percpu_freelist.h bpf: Use raw_spin_trylock() for pcpu_freelist_push/pop in NMI 2020-10-06 00:04:11 +02:00
prog_iter.c bpf: Refactor bpf_iter_reg to have separate seq_info member 2020-07-25 20:16:32 -07:00
queue_stack_maps.c bpf: return long from bpf_map_ops funcs 2023-03-22 15:11:30 -07:00
reuseport_array.c bpf: return long from bpf_map_ops funcs 2023-03-22 15:11:30 -07:00
ringbuf.c bpf: return long from bpf_map_ops funcs 2023-03-22 15:11:30 -07:00
stackmap.c bpf: return long from bpf_map_ops funcs 2023-03-22 15:11:30 -07:00
syscall.c bpf: Print a warning only if writing to unprivileged_bpf_disabled. 2023-05-02 16:20:31 -07:00
sysfs_btf.c bpf: Load and verify kernel module BTFs 2020-11-10 15:25:53 -08:00
task_iter.c bpf: keep a reference to the mm, in case the task is dead. 2022-12-28 14:11:48 -08:00
tnum.c bpf, tnums: Provably sound, faster, and more precise algorithm for tnum_mul 2021-06-01 13:34:15 +02:00
trampoline.c Networking changes for 6.4. 2023-04-26 16:07:23 -07:00
verifier.c bpf: fix mark_all_scalars_precise use in mark_chain_precision 2023-05-04 22:35:35 -07:00