This is the mail archive of the gdb-patches@sources.redhat.com mailing list for the GDB project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: [RFD] How to fix FRAME_CHAIN_VALID redefinition in config/i386/tm-i386v4.h ?


> Due to this change:
> 
> 2002-02-10  Andrew Cagney  <ac131313@redhat.com>
> 
> 	* gdbarch.sh: For level one methods, disallow a definition
>         when partially multi-arched.  Add comments explaining rationale.
>         * gdbarch.h: Re-generate.
> 
> native SVR4 based platforms (including Solaris x86) no longer compile,
> as they redefine FRAME_CHAIN_VALID in config/i386/tm-i386v4.h.
> 
> I understand that this redefinition has to go, but I have no idea how to
> get back to the old behaviour cleanly.

Appreciated!

> Any ideas, suggestions ?

To expand a little on the surface problem, a non-multi-arch compile 
has:

	#include tm.h
		might define FRAME_CHAIN_VALID
	#include gdbarch.h
		doesn't define FRAME_CHAIN_VALID
		(as not multi-arch)
	#include frame.h
		defines FRAME_CHAIN_VALID
		if not defined using some convoluted
		#ifdef logic.

In the partial multi-arch case it ends up with:

	#include tm.h
		might define FRAME_CHAIN_VALID
	#include gdbarch.h
		(re)defines FRAME_CHAIN_VALID
	#include frame.h
		gets ignored

The upshot is that FRAME_CHAIN_VALID's definition can silently change 
when the multi-arch switch is thrown.  The above change stops this by 
making the build barf :-/

Looking at frame.h, though, I think I've come across some good news. 
The logic reads:

#if !defined (FRAME_CHAIN_VALID)
#if !defined (FRAME_CHAIN_VALID_ALTERNATE)
#define FRAME_CHAIN_VALID(chain, thisframe) file_frame_chain_valid (chain, thisframe)
#else
/* Use the alternate method of avoiding running up off the end of the frame
   chain or following frames back into the startup code.  See the comments
   in objfiles.h. */
#define FRAME_CHAIN_VALID(chain, thisframe) func_frame_chain_valid (chain, thisframe)
#endif /* FRAME_CHAIN_VALID_ALTERNATE */
#endif /* FRAME_CHAIN_VALID */

Grepping through the code, FRAME_CHAIN_VALID_ALTERNATE appears to have 
quietly disappeared!  Can someone confirm this?  Assuming that is the 
case, the above can be reduced to:

#ifndef FRAME_CHAIN_VALID
#define FRAME_CHAIN_VALID(chain, thisframe) file_frame_chain_valid (chain, thisframe)
#endif /* FRAME_CHAIN_VALID */

and that, in turn, can be moved to gdbarch.* allowing the level-1 
requirement to be dropped.  Doesn't fix the underlying problem though :-(

--

> Three approaches come to mind:
> 
> - Do nothing about it and let SVR4 based platforms backtrace through main.
>   This is the simplest solution, albeit ugly.

I'll immediately apply the above.  It gets you back the old behaviour.

> - Use func_frame_chain_valid instead of file_frame_chain_valid in
>   i386-tdep.c. This would stop backtraces through main on GNU/Linux. See also
>   http://sources.redhat.com/ml/gdb/2002-02/msg00117.html
> 
> - Try to switch the frame_chain_valid method dynamically in i386_gdbarch_init,
>   something like:
> 
>  if (os_ident != ELFOSABI_NONE)
>    set_gdbarch_frame_chain_valid (gdbarch, file_frame_chain_valid);
>  else
>    set_gdbarch_frame_chain_valid (gdbarch, func_frame_chain_valid);
> 
>   This approach would work well for SVR4, but causes interesting problems
>   on GNU/Linux. As core files have no ABI markers, we can't distinguish
>   them, and we get different backtracing behaviour when debugging an
>   executable (GNU/Linux ABI) or a core file (generic ELF ABI), so we
>   simply can't do it.
> 
>   I suspect that we will hit this kind of multiarching problem more often
>   in native setups, where we can't discern the native ABI flavour from the
>   generic one (the various native sigtramp variants come to mind).

Yes.

>   Do we need a hook from XXX_gdbarch_init to some native code ?

It isn't just a native problem.  Consider a solaris-X-arm-linux-gnu GDB 
debugging a remote target that includes threads, shared libraries and 
sigtramps.

The current gdbarch selection mechanism is based on the ABI and ISA but 
not the ``OS''.  (Strictly speaking, shlibs, sigtramps and the like can 
probably be classed as ABI; GDB's architecture selection doesn't 
reflect this.)

> Any ideas, suggestions ?

Not really.  I'm having enough fun pinning BFD down on the semantics of 
bfd_architecture and bfd_machine.

Several thoughts:

- Allow multiple registrations for an architecture (e.g. i386-tdep.c, 
i386-linux-tdep.c, ...) and have gdbarch try the OS-specific one before 
the generic one.

- Let a tdep file specify the ``os'' when registering their architecture 
so that the gdbarch code can select based on that.

- Add an ``os'' field to ``struct gdbarch_info'' which can be set to 
what is known to be the OS.

- Just tweak i386-tdep.c's *gdbarch_init() so that it uses a better 
local (architecture-specific) heuristic.

I suspect a combination of the first three is best.  The moment the 
heuristic is pushed down to the target we end up with inconsistent, 
target-dependent behaviour.

Andrew


