kprobes: Fix to handle forcibly unoptimized kprobes on freeing_list
authorMasami Hiramatsu (Google) <mhiramat@kernel.org>
Mon, 20 Feb 2023 23:49:16 +0000 (08:49 +0900)
committerMasami Hiramatsu (Google) <mhiramat@kernel.org>
Mon, 20 Feb 2023 23:49:16 +0000 (08:49 +0900)
Since forcibly unoptimized kprobes are put on the freeing_list directly
in unoptimize_kprobe(), do_unoptimize_kprobes() must continue to check
the freeing_list even if the unoptimizing_list is empty.

This bug can happen if a kprobe is put on an instruction that is in the
middle of the jump-replaced instruction sequence of an optprobe, *and* that
optprobe has recently been unregistered and queued on the unoptimizing_list.
In this case, the optprobe is forcibly (that is, immediately) unoptimized
and put on the freeing_list, expecting it to be handled by
do_unoptimize_kprobes().
But if there are no other optprobes on the unoptimizing_list, the current
code returns from do_unoptimize_kprobes() early and never handles the
optprobe on the freeing_list. The optprobe then hits the WARN_ON_ONCE()
in do_free_cleaned_kprobes(), because it was not handled by the disarming
loop of do_unoptimize_kprobes().

To solve this issue, do not return from do_unoptimize_kprobes() immediately
even if unoptimizing_list is empty.

Moreover, this change affects another case: kill_optimized_kprobe() expects
kprobe_optimizer() to just free the optprobe on the freeing_list.
So I changed it to just list_move() the optprobe to the freeing_list if it
is queued on the unoptimizing_list. Also, do_unoptimize_kprobes() now skips
arch_disarm_kprobe() if the probe on the freeing_list has the gone flag set.

Link: https://lore.kernel.org/all/Y8URdIfVr3pq2X8w@xpf.sh.intel.com/
Link: https://lore.kernel.org/all/167448024501.3253718.13037333683110512967.stgit@devnote3/
Fixes: e4add247789e ("kprobes: Fix optimize_kprobe()/unoptimize_kprobe() cancellation logic")
Reported-by: Pengfei Xu <pengfei.xu@intel.com>
Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Cc: stable@vger.kernel.org
Acked-by: Steven Rostedt (Google) <rostedt@goodmis.org>
kernel/kprobes.c

index 1c18ecf9f98b10a10443bda16a54da648275560a..6b6aff00b3b6f9f65aa110da6d228f5400a8eaa4 100644 (file)
@@ -555,17 +555,15 @@ static void do_unoptimize_kprobes(void)
        /* See comment in do_optimize_kprobes() */
        lockdep_assert_cpus_held();
 
-       /* Unoptimization must be done anytime */
-       if (list_empty(&unoptimizing_list))
-               return;
+       if (!list_empty(&unoptimizing_list))
+               arch_unoptimize_kprobes(&unoptimizing_list, &freeing_list);
 
-       arch_unoptimize_kprobes(&unoptimizing_list, &freeing_list);
-       /* Loop on 'freeing_list' for disarming */
+       /* Loop on 'freeing_list' for disarming and removing from kprobe hash list */
        list_for_each_entry_safe(op, tmp, &freeing_list, list) {
                /* Switching from detour code to origin */
                op->kp.flags &= ~KPROBE_FLAG_OPTIMIZED;
-               /* Disarm probes if marked disabled */
-               if (kprobe_disabled(&op->kp))
+               /* Disarm probes if marked disabled and not gone */
+               if (kprobe_disabled(&op->kp) && !kprobe_gone(&op->kp))
                        arch_disarm_kprobe(&op->kp);
                if (kprobe_unused(&op->kp)) {
                        /*
@@ -797,14 +795,13 @@ static void kill_optimized_kprobe(struct kprobe *p)
        op->kp.flags &= ~KPROBE_FLAG_OPTIMIZED;
 
        if (kprobe_unused(p)) {
-               /* Enqueue if it is unused */
-               list_add(&op->list, &freeing_list);
                /*
-                * Remove unused probes from the hash list. After waiting
-                * for synchronization, this probe is reclaimed.
-                * (reclaiming is done by do_free_cleaned_kprobes().)
+                * Unused kprobe is on unoptimizing or freeing list. We move it
+                * to freeing_list and let the kprobe_optimizer() remove it from
+                * the kprobe hash list and free it.
                 */
-               hlist_del_rcu(&op->kp.hlist);
+               if (optprobe_queued_unopt(op))
+                       list_move(&op->list, &freeing_list);
        }
 
        /* Don't touch the code, because it is already freed. */