bpf: fix control-flow graph checking in privileged mode
When a BPF program is verified in privileged mode, the BPF verifier allows
bounded loops. This means that, from the CFG point of view, there are
definitely some back-edges. The original commit adjusted the check_cfg()
logic to not detect back-edges in the control-flow graph if they result
from conditional jumps, with the idea that the subsequent full BPF
verification process will determine whether such loops are bounded or
not, and either accept or reject the BPF program. At least that's my
reading of the intent.
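
For illustration only (this snippet is not part of the patch; it is written
in the inline-asm selftest style used further below, and the program itself
is made up): the conditional jump back to l0_%= forms a back-edge in the
CFG, yet the loop clearly terminates after ten iterations, something only
the full verification pass can establish.

SEC("socket")
__naked void simple_bounded_loop(void)
{
	asm volatile ("					\
	r0 = 0;						\
l0_%=:	r0 += 1;					\
	if r0 < 10 goto l0_%=;				\
	exit;						\
"	::: __clobber_all);
}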

Unfortunately, the implementation of this idea doesn't work correctly in
all possible situations. A conditional jump might not result in an
immediate back-edge; just a few unconditional instructions later we can
arrive at a back-edge. In such situations check_cfg() would reject the
BPF program even in privileged mode, despite it possibly being a bounded
loop. The next patch adds a simple program demonstrating such a scenario.
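
The problematic shape is roughly the following (a sketch only; the exact
selftest body is not reproduced in this diff, but compare the "bounded
loop, start in the middle" test updated below). The back-edge here is not
the conditional jump itself: it is the fall-through of the plain
r0 += 1 instruction back into the still-being-explored conditional jump,
and that kind of fall-through edge used to be pushed with loop_ok=false.

SEC("socket")
__naked void loop_entered_in_the_middle(void)
{
	asm volatile ("					\
	r0 = 0;						\
	goto l1_%=;					\
l0_%=:	r0 += 1;					\
l1_%=:	if r0 < 100 goto l0_%=;				\
	exit;						\
"	::: __clobber_all);
}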

So this patch fixes the limitation by tracking not just the immediate
conditional jump, but also all subsequent instructions reached within
such a conditional branch. For that we store a new flag, CONDITIONAL,
alongside the existing DISCOVERED, EXPLORED, BRANCH, and FALLTHROUGH
flags. Conditional jump instructions force the CONDITIONAL flag, and in
all other situations we "inherit" the flag based on whether we arrived at
the given instruction with the CONDITIONAL flag during the discovery step.
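
To make the propagation concrete, here is a small user-space sketch (plain
C, not kernel code; the toy instruction kinds SEQ/JA/JCOND/EXIT, the example
program, and the recursive DFS are all invented for illustration, whereas
the kernel uses an explicit stack): the CONDITIONAL bit forced on the edges
of a conditional jump is inherited by every instruction discovered through
them, so a back-edge reached later through such an instruction is tolerated
when the privileged knob is set.

#include <stdio.h>

enum { CONDITIONAL = 0x01, FALLTHROUGH = 0x02, BRANCH = 0x04,
       DISCOVERED = 0x10, EXPLORED = 0x20 };

enum kind { SEQ, JA, JCOND, EXIT };          /* toy instruction kinds */

struct insn { enum kind kind; int off; };    /* off: BPF-style jump offset */

static struct insn prog[] = {
	{ SEQ,   0 },   /* 0: r0 = 0              */
	{ JCOND, 2 },   /* 1: if r0 > 10 goto 4   */
	{ SEQ,   0 },   /* 2: r0 += 1             */
	{ JA,   -3 },   /* 3: goto 1 (back-edge)  */
	{ EXIT,  0 },   /* 4: exit                */
};
#define PROG_LEN 5

static int state[PROG_LEN];
static int privileged = 1;                   /* stand-in for env->bpf_capable */

static int explore(int w);

/* push one edge t -> w; 0 = ok, -1 = rejected back-edge */
static int visit(int t, int w, int e)
{
	if (w < 0 || w >= PROG_LEN)
		return -1;
	if (state[w] == 0) {                 /* tree edge */
		state[w] = DISCOVERED | (e & CONDITIONAL);   /* inherit */
		if (explore(w))
			return -1;
		state[w] = EXPLORED;
		return 0;
	}
	if (state[w] & DISCOVERED) {         /* still on DFS path: back-edge */
		if ((e & CONDITIONAL) && privileged)
			return 0;            /* leave it to the main verifier */
		printf("back-edge from %d to %d\n", t, w);
		return -1;
	}
	return 0;                            /* already EXPLORED */
}

static int explore(int w)
{
	int cond = state[w] & CONDITIONAL;   /* inherited from discovery edge */

	switch (prog[w].kind) {
	case SEQ:
		return visit(w, w + 1, FALLTHROUGH | cond);
	case JA:
		return visit(w, w + prog[w].off + 1, FALLTHROUGH | cond);
	case JCOND:                          /* force CONDITIONAL on both edges */
		if (visit(w, w + 1, FALLTHROUGH | CONDITIONAL))
			return -1;
		return visit(w, w + prog[w].off + 1, BRANCH | CONDITIONAL);
	case EXIT:
		return 0;
	}
	return 0;
}

int main(void)
{
	state[0] = DISCOVERED;
	int ret = explore(0);

	state[0] = EXPLORED;
	printf("check_cfg-like pass: %s\n", ret ? "rejected" : "accepted");
	return 0;
}

With privileged = 1 the toy accepts the bounded loop; with privileged = 0
it reports the back-edge from instruction 3 to 1 and rejects, mirroring
the unprivileged behavior.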

Note, this approach doesn't detect some obvious infinite loops during
check_cfg() if they happen inside a conditional code path. This could be
fixed with a more sophisticated DFS state implementation, where we'd
remember some sort of "conditional epoch", so that if a sequence of jumps
or sequential instructions leads to a back-edge within the same epoch,
that indicates a loop within the same branch.

But I didn't add that, for two reasons. First, subsequent BPF verifier
logic will detect this and prevent it anyway, and it's just as easy to
construct the same infinite loop out of conditional jumps alone, so there
isn't much of a difference in not supporting this detection here.
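
For example, a program shaped like the following hypothetical,
selftest-style sketch (bpf_get_prandom_u32 is used only to make the branch
outcome unknown) sails through check_cfg(), because the self-loop is
reached through conditional edges, but the main verification pass still
walks into the live branch and rejects the unbounded loop:

SEC("socket")
__naked void infinite_loop_in_cond_path(void)
{
	asm volatile ("					\
	call %[bpf_get_prandom_u32];			\
	if r0 == 0 goto l0_%=;				\
	r0 = 0;						\
	exit;						\
l0_%=:	goto l0_%=;					\
"	:
	: __imm(bpf_get_prandom_u32)
	: __clobber_all);
}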

Second, this conditional jump branch might never be taken and thus be
dead code. And it seems desirable to be able to express "this shall never
be executed, otherwise we'll while(true){}" logic as a kind of
unreachable guard.
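
A hypothetical sketch of that pattern (names and constants invented): the
guard condition is provably false, so the main verifier never simulates
the while(true){} block and dead code elimination can drop it, while
check_cfg() no longer rejects the program up front:

SEC("socket")
__naked void unreachable_guard(void)
{
	asm volatile ("					\
	r1 = 1;						\
	if r1 != 1 goto l0_%=;				\
	r0 = 0;						\
	exit;						\
l0_%=:	goto l0_%=;					\
"	::: __clobber_all);
}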

So I'm keeping things simple and allowing this dead-code-elimination
approach to keep working.

Note a few test changes. For unknown reasons, we have a few tests that
are specified to detect a back-edge in privileged mode, but looking at
their code it seems like the right outcome is passing check_cfg() and
letting subsequent verification make a decision about bounded or
unbounded looping.

The bounded recursion case is also interesting. The example should pass,
as recursion is limited to just a few levels, so we never reach the
maximum number of nested frames and never exhaust the maximum stack
depth. But the way the max stack depth logic works today, it falsely
detects this as exceeding the max nested frame count. This patch series
doesn't attempt to fix that orthogonal problem, so we just adjust the
expected verifier failure.

Fixes: 2589726 ("bpf: introduce bounded loops")
Reported-by: Hao Sun <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
anakryiko authored and d-e-s-o committed Nov 9, 2023
1 parent f035e03 commit a590388
Showing 3 changed files with 29 additions and 29 deletions.
kernel/bpf/verifier.c: 21 additions, 24 deletions
@@ -15454,8 +15454,9 @@ static int check_return_code(struct bpf_verifier_env *env, int regno)
 enum {
 	DISCOVERED = 0x10,
 	EXPLORED = 0x20,
-	FALLTHROUGH = 1,
-	BRANCH = 2,
+	CONDITIONAL = 0x01,
+	FALLTHROUGH = 0x02,
+	BRANCH = 0x04,
 };
 
 static void mark_prune_point(struct bpf_verifier_env *env, int idx)
@@ -15489,16 +15490,15 @@ enum {
  * w - next instruction
  * e - edge
  */
-static int push_insn(int t, int w, int e, struct bpf_verifier_env *env,
-		     bool loop_ok)
+static int push_insn(int t, int w, int e, struct bpf_verifier_env *env)
 {
 	int *insn_stack = env->cfg.insn_stack;
 	int *insn_state = env->cfg.insn_state;
 
-	if (e == FALLTHROUGH && insn_state[t] >= (DISCOVERED | FALLTHROUGH))
+	if ((e & FALLTHROUGH) && insn_state[t] >= (DISCOVERED | FALLTHROUGH))
 		return DONE_EXPLORING;
 
-	if (e == BRANCH && insn_state[t] >= (DISCOVERED | BRANCH))
+	if ((e & BRANCH) && insn_state[t] >= (DISCOVERED | BRANCH))
 		return DONE_EXPLORING;
 
 	if (w < 0 || w >= env->prog->len) {
@@ -15507,7 +15507,7 @@ static int push_insn(int t, int w, int e, struct bpf_verifier_env *env,
 		return -EINVAL;
 	}
 
-	if (e == BRANCH) {
+	if (e & BRANCH) {
 		/* mark branch target for state pruning */
 		mark_prune_point(env, w);
 		mark_jmp_point(env, w);
@@ -15516,13 +15516,13 @@ static int push_insn(int t, int w, int e, struct bpf_verifier_env *env,
 	if (insn_state[w] == 0) {
 		/* tree-edge */
 		insn_state[t] = DISCOVERED | e;
-		insn_state[w] = DISCOVERED;
+		insn_state[w] = DISCOVERED | (e & CONDITIONAL);
 		if (env->cfg.cur_stack >= env->prog->len)
 			return -E2BIG;
 		insn_stack[env->cfg.cur_stack++] = w;
 		return KEEP_EXPLORING;
-	} else if ((insn_state[w] & 0xF0) == DISCOVERED) {
-		if (loop_ok && env->bpf_capable)
+	} else if (insn_state[w] & DISCOVERED) {
+		if ((e & CONDITIONAL) && env->bpf_capable)
 			return DONE_EXPLORING;
 		verbose_linfo(env, t, "%d: ", t);
 		verbose_linfo(env, w, "%d: ", w);
@@ -15542,10 +15542,11 @@ static int visit_func_call_insn(int t, struct bpf_insn *insns,
 				struct bpf_verifier_env *env,
 				bool visit_callee)
 {
-	int ret, insn_sz;
+	int ret, insn_sz, cond;
 
+	cond = env->cfg.insn_state[t] & CONDITIONAL;
 	insn_sz = bpf_is_ldimm64(&insns[t]) ? 2 : 1;
-	ret = push_insn(t, t + insn_sz, FALLTHROUGH, env, false);
+	ret = push_insn(t, t + insn_sz, FALLTHROUGH | cond, env);
 	if (ret)
 		return ret;
 
@@ -15555,12 +15556,7 @@ static int visit_func_call_insn(int t, struct bpf_insn *insns,
 
 	if (visit_callee) {
 		mark_prune_point(env, t);
-		ret = push_insn(t, t + insns[t].imm + 1, BRANCH, env,
-				/* It's ok to allow recursion from CFG point of
-				 * view. __check_func_call() will do the actual
-				 * check.
-				 */
-				bpf_pseudo_func(insns + t));
+		ret = push_insn(t, t + insns[t].imm + 1, BRANCH | cond, env);
 	}
 	return ret;
 }
@@ -15573,16 +15569,18 @@ static int visit_func_call_insn(int t, struct bpf_insn *insns,
 static int visit_insn(int t, struct bpf_verifier_env *env)
 {
 	struct bpf_insn *insns = env->prog->insnsi, *insn = &insns[t];
-	int ret, off, insn_sz;
+	int ret, off, insn_sz, cond;
 
 	if (bpf_pseudo_func(insn))
 		return visit_func_call_insn(t, insns, env, true);
 
+	cond = env->cfg.insn_state[t] & CONDITIONAL;
+
 	/* All non-branch instructions have a single fall-through edge. */
 	if (BPF_CLASS(insn->code) != BPF_JMP &&
 	    BPF_CLASS(insn->code) != BPF_JMP32) {
 		insn_sz = bpf_is_ldimm64(insn) ? 2 : 1;
-		return push_insn(t, t + insn_sz, FALLTHROUGH, env, false);
+		return push_insn(t, t + insn_sz, FALLTHROUGH | cond, env);
 	}
 
 	switch (BPF_OP(insn->code)) {
@@ -15629,8 +15627,7 @@ static int visit_insn(int t, struct bpf_verifier_env *env)
 			off = insn->imm;
 
 		/* unconditional jump with single edge */
-		ret = push_insn(t, t + off + 1, FALLTHROUGH, env,
-				true);
+		ret = push_insn(t, t + off + 1, FALLTHROUGH | cond, env);
 		if (ret)
 			return ret;
 
@@ -15643,11 +15640,11 @@ static int visit_insn(int t, struct bpf_verifier_env *env)
 		/* conditional jump with two edges */
 		mark_prune_point(env, t);
 
-		ret = push_insn(t, t + 1, FALLTHROUGH, env, true);
+		ret = push_insn(t, t + 1, FALLTHROUGH | CONDITIONAL, env);
 		if (ret)
 			return ret;
 
-		return push_insn(t, t + insn->off + 1, BRANCH, env, true);
+		return push_insn(t, t + insn->off + 1, BRANCH | CONDITIONAL, env);
 	}
 }
 
Expand Down
tools/testing/selftests/bpf/progs/verifier_cfg.c: 2 additions, 2 deletions
@@ -57,7 +57,7 @@ __naked void out_of_range_jump2(void)
 
 SEC("socket")
 __description("loop (back-edge)")
-__failure __msg("unreachable insn 1")
+__failure __msg("back-edge")
 __msg_unpriv("back-edge")
 __naked void loop_back_edge(void)
 {
@@ -69,7 +69,7 @@ l0_%=: goto l0_%=; \
 
 SEC("socket")
 __description("loop2 (back-edge)")
-__failure __msg("unreachable insn 4")
+__failure __msg("back-edge")
 __msg_unpriv("back-edge")
 __naked void loop2_back_edge(void)
 {
tools/testing/selftests/bpf/progs/verifier_loops1.c: 6 additions, 3 deletions
@@ -75,9 +75,10 @@ l0_%=: r0 += 1; \
 "	::: __clobber_all);
 }
 
-SEC("tracepoint")
+SEC("socket")
 __description("bounded loop, start in the middle")
-__failure __msg("back-edge")
+__success
+__failure_unpriv __msg_unpriv("back-edge")
 __naked void loop_start_in_the_middle(void)
 {
 	asm volatile ("					\
@@ -136,7 +137,9 @@ l0_%=: exit; \
 
 SEC("tracepoint")
 __description("bounded recursion")
-__failure __msg("back-edge")
+__failure
+/* verifier limitation in detecting max stack depth */
+__msg("the call stack of 8 frames is too deep !")
 __naked void bounded_recursion(void)
 {
 	asm volatile ("					\
