
Re: tracepoint bytecode question


Der Herr Hofrat <der.herr@hofr.at> writes:
>  After writing up tracepoints in a minimal version that actually kind of
>  works (gdb-6.3), I noticed that there seem to be a lot of problems with
>  the actual bytecode generated. Simple things like
>
>  collect $reg
>  collect variable_name
>
>  work OK; if it gets any more complicated, the bytecode seems to be wrong,
>
>  e.g.
>
>  collect x->y
>  collect var1 + var2
>
>  will produce wrong offsets and thus garbage traces; the same thing happens
>  if compiled with optimization...
>
>  So what I would like to know is: how can I figure out what the bytecode
>  should look like, or whether it is correct and it's my interpreter that
>  is screwing up? Is there any more detailed document on the way bytecode
>  gets calculated, or am I down to "read the source"? Calculating the X
>  packet output on paper, the addresses that are finally recorded seem to
>  be wrong. Scanning the code path, I found the following in the source,
>  which maybe someone could explain?

Well, you can have GDB print the bytecode for any expression with the
'maint agent' command.  The definition of the bytecode language is in
Appendix E of the GDB manual ("The GDB Agent Expression Mechanism").
'info addr' ought to show you where a variable is located, but it
currently does not; instead, you can dump DWARF debugging information
with 'readelf -wi'.
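
For example, a session might look roughly like this (the scope address,
operand values, and exact opcodes below are made up for illustration,
assuming var1 and var2 are global ints; the real encoding depends on
your target and GDB version):

    (gdb) maint agent var1 + var2
    Scope: 0x80483f0
      0  const32 134520868
      5  ref32
      6  ext 32
      8  const32 134520872
     13  ref32
     14  ext 32
     16  add
     17  end

Each line gives the offset of an opcode within the expression, then the
opcode name and its operands; Appendix E describes how the stack machine
evaluates each one.  'readelf -wi your-program' dumps the .debug_info
section, so you can compare a variable's DW_AT_location against the
address the bytecode actually computes.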

If you've come across a variable whose location is being compiled to
bytecode incorrectly, please let us know and post what you've found.
It's okay if you don't have a patch; it's valuable just to have an
example we can work from.

If you can, you should try working with GDB 6.4.



>
> void
> legacy_virtual_frame_pointer (CORE_ADDR pc,
>                               int *frame_regnum,
>                               LONGEST *frame_offset)
> {
>   /* FIXME: cagney/2002-09-13: This code is used when identifying the
>      frame pointer of the current PC.  It is assuming that a single
>      register and an offset can determine this.  I think it should
>      instead generate a byte code expression as that would work better
>      with things like Dwarf2's CFI.  */
>   if (DEPRECATED_FP_REGNUM >= 0 && DEPRECATED_FP_REGNUM < NUM_REGS)
>     *frame_regnum = DEPRECATED_FP_REGNUM;
>   else if (SP_REGNUM >= 0 && SP_REGNUM < NUM_REGS)
>     *frame_regnum = SP_REGNUM;
>   else
>     /* Should this be an internal error?  I guess so, it is reflecting
>        an architectural limitation in the current design.  */
>     internal_error (__FILE__, __LINE__, "No virtual frame pointer available");
>   *frame_offset = 0;
> }
>
> Is the offset really always 0?
> Also, the register used here is SP_REGNUM (4 on i386), but shouldn't it be 5?
>
> The code in question has not really changed in 6.4, so this should hold
> for the current gdb. I did not want to move on to a newer version without
> this working first.

Hmm.  I'm surprised we're using this at all.  Is GDB really producing
symbols of type LOC_ARG and LOC_REF_ARG?
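
(Those are the symbol classes that reach legacy_virtual_frame_pointer;
if I remember right, DWARF 2 location expressions are normally compiled
to bytecode along a different path.)  For what it's worth, the reason
those two outputs matter is that for such a symbol the generated
bytecode is roughly "reg <frame_regnum>; const <frame_offset>; add;
const <symbol's offset>; add", so the collected address is the sum of
all three.  As a minimal sketch of that arithmetic (not GDB source; the
function name and typedefs are stand-ins for illustration):

typedef unsigned long CORE_ADDR;   /* stand-in for GDB's CORE_ADDR */
typedef long LONGEST;              /* stand-in for GDB's LONGEST */

/* Illustration only: the address a frame-relative collect effectively
   computes when a tracepoint is hit.  reg_value is the runtime value
   of the register chosen by legacy_virtual_frame_pointer above.  */
CORE_ADDR
collected_addr (CORE_ADDR reg_value,   /* contents of *frame_regnum */
                LONGEST frame_offset,  /* *frame_offset, always 0 above */
                LONGEST sym_offset)    /* symbol's offset from the debug info */
{
  return reg_value + (CORE_ADDR) frame_offset + (CORE_ADDR) sym_offset;
}

So if the fallback picks the stack pointer (register 4 on i386) while
the compiler described the variable relative to the frame pointer
(register 5), every address comes out wrong by whatever distance
separates the two registers at that point, which would explain the
garbage you're seeing.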

