This is the mail archive of the gdb@sourceware.org mailing list for the GDB project.



remote software breakpoint technical detail


Hi, all. I'm new to GDB and have a question about the remote software
breakpoint implementation. Either my port of GDB has a bug, or there is
a bug in our hardware implementation.

We have an OpenRISC silicon (http://www.opencores.org). I'm using GDB 5.0.

Suppose the instruction cache has been disabled in the very beginning.

Here is what I observed:
1) the user sets a breakpoint ('b') at instruction foo
2) the user continues ('c') the execution
3) gdb replaces instruction foo with a 'breakpoint instruction', which
will stall the processor
4) gdb unstalls the processor
5) the processor fetches the breakpoint instruction into the execution
pipeline and points the PC to the next instruction
6) the breakpoint instruction is decoded and recognized, and the processor stalls
7) gdb restores instruction foo
8) the user issues a single instruction step ('si') and expects
instruction foo to be executed next, but...

The question is:

What value of pc should be expected after step 5 completes?

If $pc == foo+4, foo won't be executed; the following instruction will
run instead, which is incorrect.

If $pc == foo, the breakpoint instruction _has already been_ fetched
into the execution pipeline at step 5, so what makes the CPU *fetch
again* the instruction restored by gdb at step 7? Must GDB or the
hardware be designed to do so?

--
Tzu-Chien Chiu - SMedia Technology Corp.
URL: http://www.csie.nctu.edu.tw/~jwchiu/

