
[Ada 1/6] watchpoint regression debugging with remote protocol (bare metal)


We have noticed a regression in our watchpoint support when debugging
a program running on a bare metal platform through the remote protocol,
when the program uses what we call the Ravenscar Runtime.

This runtime is a subset of the Ada runtime defined by the Ravenscar
Profile.  One of the nice things about this runtime is that it provides
tasking, Ada's equivalent of the concept of threads in C (tasks are in
fact often mapped to threads, when threads are available).  For bare
metal targets, however, there is no OS, and therefore no thread layer.
What we did, then, was add a ravenscar-thread layer, which has insider
knowledge of the runtime: enough to get the list of threads, but also
all the information necessary to perform thread switching.

For the record, the commit which caused the regression is:

    commit 799a2abe613be0645b84f5aaa050f2f91e6ae3f7
    Date:   Mon Nov 30 16:05:16 2015 +0000
    Subject: remote: stop reason and watchpoint data address per thread

    Running local-watch-wrong-thread.exp with "maint set target-non-stop
    on" exposes that gdb/remote.c only records whether the target stopped
    for a breakpoint/watchpoint plus the watchpoint data address *for the
    last reported remote event*.  But in non-stop mode, we need to keep
    that info per-thread, as each thread can end up with its own
    last-status pending.

Our testcase is very simple. We have a package defining a global
variable named "Watch"...

    package Pck is
       Watch : Integer := 1974;
    end Pck;

... and a main subprogram which changes its value

    procedure Foo is
    begin
       Pck.Watch := Pck.Watch + 1;
    end Foo;

To reproduce, we built our program as usual, started it in QEMU,
and then connected GDB to QEMU...

    (gdb) target remote :4444
    (gdb) break _ada_foo
    (gdb) cont  <--- this is to make sure the program is started
                     and the variable we want to watch is initialized

... at which point we try to use a watchpoint on our global variable:

    (gdb) watch watch

... expecting that, upon resuming the execution with a "cont", we
would get a watchpoint-hit notification, such as...

    (gdb) cont
    Hardware watchpoint 2: watch

    Old value = 1974
    New value = 1975
    0xfff00258 in foo () at /[...]/foo.adb:6
    6       end Foo;

... but unfortunately, we get a SIGTRAP instead:

    (gdb) cont
    Program received signal SIGTRAP, Trace/breakpoint trap.
    foo () at /[...]/foo.adb:6
        6   end Foo;

What happens is that, on the one hand, the change in remote.c
now stores the watchpoint-hit notification info in the thread
it was reported for; and on the other hand, we have a ravenscar-thread
layer which manages the thread list on top of the remote protocol
layer.  The two get disconnected, and this eventually results
in GDB not realizing that we hit a watchpoint.  Here is how:

First, once connected and just before inserting our watchpoint,
the ravenscar-thread layer has built the list of threads by
extracting some info from inferior memory, giving us the following
two threads:

      (gdb) info threads
      Id   Target Id         Frame
      1    Thread 0 "0Q@" (Ravenscar task) foo () at /[...]/foo.adb:5
    * 2    Thread 0x24618 (Ravenscar task) foo () at /[...]/foo.adb:5

The first thread is the only thread QEMU told GDB about.  The second
one is a thread that the ravenscar-thread layer added.  QEMU has no way
to know about such threads, since they are really embedded inside
the program; that's why we have the ravenscar layer, which uses
insider knowledge to extract the list of threads.
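
For illustration only, here is a rough sketch of how such a thread-stratum
layer can register runtime threads with GDB.  The function name and the way
the tid field is derived are assumptions made for this sketch, not the
actual ravenscar-thread.c code:

    /* Hypothetical sketch (not the actual ravenscar-thread.c code):
       register one Ada task, found by reading the runtime's task list
       in inferior memory, as a GDB thread.  The ptid layout (pid taken
       from the real target's ptid, tid from the task control block
       address) is an assumption for illustration.  */

    static void
    example_register_ravenscar_task (CORE_ADDR task_addr)
    {
      ptid_t task_ptid = ptid_build (ptid_get_pid (base_ptid),
                                     0 /* lwp */, task_addr /* tid */);

      if (!in_thread_list (task_ptid))
        add_thread (task_ptid);  /* The remote side never hears of this ptid.  */
    }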

Next, we insert a watchpoint, which applies to all threads. No problem
so far.

Then we continue, meaning that GDB sends a Z2 packet to QEMU to
get the watchpoint inserted, then a vCont to resume the program's
execution.  The program hits the watchpoint, and therefore QEMU
reports it back:

        Packet received: T05thread:01;watch:000022c4;
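
For reference, the fields in that stop reply break down as follows:

        T05              stop reply: the program stopped with signal 5 (SIGTRAP)
        thread:01;       the thread the stop applies to (remote thread number 1)
        watch:000022c4;  data address of the hardware watchpoint that triggered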

Since QEMU knows about one thread and one thread only, it stands
to reason that it would say that the event applies to thread:01,
which is our first thread in the "info threads" listing. That
thread has a ptid of {42000, lwp=1, tid=0}.

This is where Pedro's change kicks in: Seeing this event, and
having determined that the event was reported for thread 01,
and therefore ptid {42000, lwp=1, tid=0}, it saves the watchpoint-hit
event info in the private area of that thread/ptid. Once this is
done, remote.c's event-wait layer returns.
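
To give a rough idea (a simplified sketch, not a verbatim quote of
remote.c; field and variable names are approximate), the information
recorded per thread looks something like this:

    /* Simplified sketch of the per-thread private data remote.c keeps
       since that commit.  */

    struct private_thread_info
    {
      /* ... other fields ... */
      enum target_stop_reason stop_reason;
      CORE_ADDR watch_data_address;
    };

    /* On receiving "T05thread:01;watch:000022c4;", remote.c looks up
       the thread_info for thread 01 and records, roughly:

         priv->stop_reason = TARGET_STOPPED_BY_WATCHPOINT;
         priv->watch_data_address = 0x22c4;  */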

Enter the ravenscar-thread layer's event-wait, which does a little
dance: it delegates the wait to the underlying layers, using a ptid
that those layers know about, and then, once the target_beneath's
to_wait is done, it tries to figure out which thread is now the
active thread.  The code looks like this:

  1.    inferior_ptid = base_ptid;
  2.    beneath->to_wait (beneath, base_ptid, status, 0);
  3.    [...]
  4.        ravenscar_update_inferior_ptid ();
  5.
  6.    return inferior_ptid;

Line 1 is where we reset inferior_ptid to the ptid that
the target_beneath layer knows about, allowing us to then
call its to_wait implementation (line 2).  Then, upon
return, we call ravenscar_update_inferior_ptid, which reads
inferior memory to determine which thread is actually active,
setting inferior_ptid accordingly.  Finally, we return that
inferior_ptid, which, again, is a ptid that neither QEMU nor
the remote.c layer knows about.
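
The thread-switching part of that dance is, in spirit, something like
the sketch below.  The helper name and the ptid layout are hypothetical,
not the actual ravenscar-thread.c implementation:

    /* Rough sketch of the idea behind ravenscar_update_inferior_ptid.  */

    static CORE_ADDR example_read_running_task (void);  /* reads inferior memory */

    static void
    example_update_inferior_ptid (void)
    {
      CORE_ADDR running_task = example_read_running_task ();

      /* Make the ravenscar thread current.  Neither QEMU nor remote.c
         has ever heard of this ptid.  */
      inferior_ptid = ptid_build (ptid_get_pid (base_ptid),
                                  0 /* lwp */, running_task /* tid */);
    }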

Upon return, we eventually arrive at the part where we try
to handle the inferior event: we discover that we got a SIGTRAP
and, as part of its handling, we call watchpoints_triggered,
which calls target_stopped_by_watchpoint, which eventually calls
remote_stopped_by_watchpoint, where Pedro's change kicks in
again:

    struct thread_info *thread = inferior_thread ();
    return (thread->priv != NULL
            && thread->priv->stop_reason == TARGET_STOPPED_BY_WATCHPOINT);

Because the ravenscar-thread layer changed inferior_ptid
to the ptid of the active (ravenscar) thread, inferior_thread now
returns that thread, and we look at its private data.  But this is
not the thread that QEMU reported the watchpoint-hit on, and thus
the function answers "no watchpoint hit, mister".  Hence GDB does
not understand the SIGTRAP, and reports it verbatim.

The way we chose to fix the issue is by making sure that the
ravenscar-thread layer doesn't let the remote layer be called
with inferior_ptid being set to a thread that the remote layer
does not know about.

gdb/ChangeLog:

        * ravenscar-thread.c (ravenscar_stopped_by_sw_breakpoint)
        (ravenscar_stopped_by_hw_breakpoint, ravenscar_stopped_by_watchpoint)
        (ravenscar_stopped_data_address, ravenscar_core_of_thread):
        New functions.
        (init_ravenscar_thread_ops): Set the to_stopped_by_sw_breakpoint,
        to_stopped_by_hw_breakpoint, to_stopped_by_watchpoint,
        to_stopped_data_address and to_core_of_thread fields of
        ravenscar_ops.
---
 gdb/ravenscar-thread.c | 82 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 82 insertions(+)

diff --git a/gdb/ravenscar-thread.c b/gdb/ravenscar-thread.c
index 850b84d2b5..7edfa29263 100644
--- a/gdb/ravenscar-thread.c
+++ b/gdb/ravenscar-thread.c
@@ -326,6 +326,66 @@ ravenscar_prepare_to_store (struct target_ops *self,
     }
 }
 
+/* Implement the to_stopped_by_sw_breakpoint target_ops "method".  */
+
+static int
+ravenscar_stopped_by_sw_breakpoint (struct target_ops *ops)
+{
+  ptid_t saved_ptid = inferior_ptid;
+  struct target_ops *beneath = find_target_beneath (ops);
+  int result;
+
+  inferior_ptid = base_ptid;
+  result = beneath->to_stopped_by_sw_breakpoint (beneath);
+  inferior_ptid = saved_ptid;
+  return result;
+}
+
+/* Implement the to_stopped_by_hw_breakpoint target_ops "method".  */
+
+static int
+ravenscar_stopped_by_hw_breakpoint (struct target_ops *ops)
+{
+  ptid_t saved_ptid = inferior_ptid;
+  struct target_ops *beneath = find_target_beneath (ops);
+  int result;
+
+  inferior_ptid = base_ptid;
+  result = beneath->to_stopped_by_hw_breakpoint (beneath);
+  inferior_ptid = saved_ptid;
+  return result;
+}
+
+/* Implement the to_stopped_by_watchpoint target_ops "method".  */
+
+static int
+ravenscar_stopped_by_watchpoint (struct target_ops *ops)
+{
+  ptid_t saved_ptid = inferior_ptid;
+  struct target_ops *beneath = find_target_beneath (ops);
+  int result;
+
+  inferior_ptid = base_ptid;
+  result = beneath->to_stopped_by_watchpoint (beneath);
+  inferior_ptid = saved_ptid;
+  return result;
+}
+
+/* Implement the to_stopped_data_address target_ops "method".  */
+
+static int
+ravenscar_stopped_data_address (struct target_ops *ops, CORE_ADDR *addr_p)
+{
+  ptid_t saved_ptid = inferior_ptid;
+  struct target_ops *beneath = find_target_beneath (ops);
+  int result;
+
+  inferior_ptid = base_ptid;
+  result = beneath->to_stopped_data_address (beneath, addr_p);
+  inferior_ptid = saved_ptid;
+  return result;
+}
+
 static void
 ravenscar_mourn_inferior (struct target_ops *ops)
 {
@@ -336,6 +396,21 @@ ravenscar_mourn_inferior (struct target_ops *ops)
   unpush_target (&ravenscar_ops);
 }
 
+/* Implement the to_core_of_thread target_ops "method".  */
+
+static int
+ravenscar_core_of_thread (struct target_ops *ops, ptid_t ptid)
+{
+  ptid_t saved_ptid = inferior_ptid;
+  struct target_ops *beneath = find_target_beneath (ops);
+  int result;
+
+  inferior_ptid = base_ptid;
+  result = beneath->to_core_of_thread (beneath, inferior_ptid);
+  inferior_ptid = saved_ptid;
+  return result;
+}
+
 /* Observer on inferior_created: push ravenscar thread stratum if needed.  */
 
 static void
@@ -369,6 +444,12 @@ init_ravenscar_thread_ops (void)
   ravenscar_ops.to_fetch_registers = ravenscar_fetch_registers;
   ravenscar_ops.to_store_registers = ravenscar_store_registers;
   ravenscar_ops.to_prepare_to_store = ravenscar_prepare_to_store;
+  ravenscar_ops.to_stopped_by_sw_breakpoint
+    = ravenscar_stopped_by_sw_breakpoint;
+  ravenscar_ops.to_stopped_by_hw_breakpoint
+    = ravenscar_stopped_by_hw_breakpoint;
+  ravenscar_ops.to_stopped_by_watchpoint = ravenscar_stopped_by_watchpoint;
+  ravenscar_ops.to_stopped_data_address = ravenscar_stopped_data_address;
   ravenscar_ops.to_thread_alive = ravenscar_thread_alive;
   ravenscar_ops.to_update_thread_list = ravenscar_update_thread_list;
   ravenscar_ops.to_pid_to_str = ravenscar_pid_to_str;
@@ -381,6 +462,7 @@ init_ravenscar_thread_ops (void)
   ravenscar_ops.to_has_registers = default_child_has_registers;
   ravenscar_ops.to_has_execution = default_child_has_execution;
   ravenscar_ops.to_stratum = thread_stratum;
+  ravenscar_ops.to_core_of_thread = ravenscar_core_of_thread;
   ravenscar_ops.to_magic = OPS_MAGIC;
 }
 
-- 
2.11.0

