This is the mail archive of the gdb-patches@sources.redhat.com mailing list for the GDB project.



Re: [RFC] Infinite backtraces...


> yeah, relying on return pointer register values seems a bit iffy. I
> suspect it might be zero in your case only by luck. on hppa-linux, for
> example:

Indeed, so far I haven't been able to find any documentation that would
guarantee that. So it's definitely iffy.

While working on something else, I just stumbled on this (in gdb 5.3):

  /* If this is a threaded application, and we see the
     routine "__pthread_exit", treat it as the stack root
     for this thread. */

I haven't run this through the debugger to confirm it, but it looks
like we used to stop the unwinding based on the name of the symbol
associated with the frame. Not very elegant, to say the least, but
it could be effective for hpux (it would no longer be just
architecture-dependent).

> (gdb) bt
> #0  thread_function (arg=0x0)
>     at /home/tausq/gdb/gdb-cvs/gdb/testsuite/gdb.threads/manythreads.c:32
> #1  0x405ee4b8 in pthread_start_thread () from /lib/libpthread.so.0
> #2  0x405ee540 in pthread_start_thread_event () from /lib/libpthread.so.0
> #3  0x40878514 in clone () from /lib/libc.so.6
> #4  0x40878514 in clone () from /lib/libc.so.6
> Previous frame identical to this frame (corrupt stack?)
> 
> so it terminates only because we are lucky... :(

It's funny that we share the same definition of being "lucky" :-).

> for hppa-linux, i believe the correct fix is to fix glibc so that the
> clone() procedure sets the "can't unwind" flag in the unwind record and
> then using a mechanism similar to what you proposed, we can stop the
> backtrace. 

Yes, if you can get that, I think that's by far the best approach.
Let the unwind information tell you instead of guessing.

> in your particular case, i'm curious to know how we get from a pc=0
> frame to a previous frame. that seems like a bug to me?

Not sure. I haven't really looked at this in detail, since this part
of the call stack was bogus anyway. But my guess is that the fallback
unwinder kicked in (since we shouldn't find any unwind entry for that
PC) and just unwound blindly.

> i like the idea of a new method. perhaps the default implementation
> could be instead the "main" and "entry point" logic that's currently in
> the core frame code, and targets can overload and enhance this method
> accordingly?

I would prefer that the new method be called in addition to the current
logic. Otherwise, architectures overriding the default would have to
reimplement that part again on top of doing their own magic.

BTW: Your backtrace reminds me of something curious:

> #3  0x40878514 in clone () from /lib/libc.so.6
> #4  0x40878514 in clone () from /lib/libc.so.6

I just noticed that I get a lot of duplicated frames in our backtraces.
Another example is the call stack I posted earlier:

    #1  0x0000a2cc in simple.caller (<_task>=0x4001c3a0) at simple.adb:21
    #2  0x0000a268 in simple__callerB___2 () at simple.adb:18
    #3  0x00017184 in system.tasking.stages.task_wrapper ()
    #4  0x00017058 in system__tasking__stages__task_wrapper ()
    #5  0x7aee0f60 in __pthread_create_system () from /usr/lib/libpthread.1
    #6  0x7aee0f08 in __pthread_create_system () from /usr/lib/libpthread.1

Frame #2 is a duplicate of #1, although it's hard to see if you don't
know the GNAT encoding. Same for frame #4 being a duplicate of #3.
Same for #5 and #6.

These frames refer to stubs. With 5.3, we skipped these stubs.
Do you know if including them here was intentional? I find them
confusing, so I'd like to remove them.

-- 
Joel

