This is the mail archive of the gdb@sources.redhat.com mailing list for the GDB project.



gdb support for Linux vsyscall DSO


[I changed the subject of the thread to distinguish this part of the
discussion from the state of DWARF CFI, which is a prerequisite but
independent.]

It's notable that I didn't say i386 in the subject.  I am helping David
Mosberger implement the same concept on ia64 in Linux 2.5 as well.  When
glibc starts using those entry points on Linux/ia64 (it doesn't yet), then
there will be the same issues for getting at the unwind info.  The vsyscall
DSO plan will be the same, so getting at the ELF sections is addressed the
same way (more on that below).  In the ia64 case, the unwind info is ia64's
flavor rather than DWARF2 flavor, but the use of ELF phdrs and sections to
locate it is precisely analogous.

> It certainly is my intention to make it possible, although it's not
> clear how things should be glued together.  

As you might imagine, I had thought through most of the details of what gdb
needs to do when I made the kernel changes (whose main purpose for me was
to make the gdb support possible).  There are two cases that need to be
addressed.  You've brought up the case of live processes.  There is also
the case of core dumps.

> I've seen your kernel patches and it seems as if there are two
> possibilities:

You omitted possibility #0, which I had intended to preemptively describe
and explain why I rejected it.  That is, to have some file that gdb can
read.  It would be simple to have a /proc file that gives you the DSO image
as a virtual file, and have the dynamic linker's shlib list refer to this
file name.  Then gdb might not need any change at all.  However, this sucks
rocks for remote debugging.  It also doesn't sit right with me because the
vsyscall DSO is actually there and your PC might be in it, even if you are
not using the dynamic linker or glibc at all and have no "info shared" list.

> 1. Reading the complete DSO from target memory, and somehow turning it
>    in a bfd.  That would make it possible to use the existing symbol
>    reading machinery to read symbols and .eh_frame info.

This is what I had anticipated doing, at least the first cut.  Given some
true assumptions about the vsyscall DSO, it is trivial to compute the total
file size from the ELF headers and recover the byte image of the file
exactly.  It can't be too hard to fake up a bfd in memory as if the
contents had been read from a file.

Having a bfd of the DSO file makes it simple to plug it into the existing
code that looks for the sections by name.  I do have one minor concern
about that.  An ELF stripping utility is well within its rights to remove
the section headers entirely from a DSO, and such DSOs work fine.  The
binutils and elfutils strip do not do this (nor ld -s, nor anything else
using bfd), but they could one day.  If the kernel people decide to shave
off 149 bytes from the image by getting rid of the section headers and
.shstrtab, it's hard for me to argue that they really shouldn't do that.
And any stripped ELF DSO could reasonably have its section headers removed
for whatever reason.  I think this would be addressed adequately by having
bfd do the same faking of sections based on ELF phdrs that it does for core
files, on any ELF file that has no section headers but does have phdrs.
But that is not an immediate concern, just something to air.

Let's assume we can fake a bfd containing the DSO file contents by finding
its ELF header in the inferior's memory.  Right now it would be a
digression to go into how we know where to look; I will get into that at
the end.  At this point it seems apropos to describe the core file
situation before getting to your #2.

Core files are sectionless ELF files that have ELF program headers.  In the
vsyscall DSO changes, I made Linux core dumps include the DSO image as part
of the memory dump (just as it appeared in memory) and also copy the DSO's
special phdrs to the core file, with adjusted file offsets.  Hence a new
Linux core dump has both a PT_GNU_EH_FRAME pointing at the DSO's
.eh_frame_hdr section and a PT_DYNAMIC pointing at its .dynamic section.
In these phdrs, p_vaddr and other fields match those phdrs in the DSO, but
p_offset gives the position in the core dump file where that section within
the DSO file image appears.  This means that the existing section-faking
code for ELF core files (including my recent change to it) will give a core
file's bfd sections named "eh_frame_hdr13" and "dynamic12" or suchlike.

I made simple changes to dwarf2read.c to match sections named
"eh_frame_hdr*" as it matches ".eh_frame", and to dwarf2cfi.c to grok the
.eh_frame_hdr format to locate .eh_frame within some other memory section
(it will be in "load11" or something).  The same changes are trivial to
make to new code given the machinery for grokking .eh_frame format and its
pointer encodings.

With that, I think that finding vsyscall unwind info from core dumps is
accomplished just by making an objfile for the core file bfd and plugging
it into the unwind info grokking hooks or making it a symfile or something.
The part I don't know off hand how to do is making sure the objfile/symfile
comes and goes appropriately with attaching/detaching from a core dump.  
I presume that is straightforward for those better versed in gdb.

As it stands now, the vsyscall DSO has normal section headers.  So matching
".eh_frame" or ".eh_frame_hdr" by name in that special bfd is adequate.  If
it (or other DSOs) were fully stripped, then having bfd fake named sections
for these bfds as it does for ELF core file bfds would be sufficient to
trivially handle those as well.

> 2. Write a symbol reader that uses the run-time dynamic linker (struct
>    r_debug, struct link_map) to locate the dynamic section of the
>    shared object, and uses it to read the relevant loaded sections of
>    the DSO from the target and interpret those bits.
> 
> If I'm not mistaken, the latter would also allow us to construct a
>    minimal symbol table for DSOs even if we don't have access to an
> on-disk file.  

Indeed so!  Or even if you just have a completely stripped ELF DSO on disk.
(Or even if you just get so tired of turning off auto-solib-add and using
add-symbol-file, and cursing its inability to add .text's vma to the
loadbase itself so you don't have to run objdump -h every damn time,
umpty-ump times a day debugging subhurds, that you just break down and
implement solib-absolute-prefix and become a compulsive maniac to keep
development and test filesystems synchronized!  Oh, I guess that won't
exactly be happening to anyone else in the future. ;-)

This is roughly the same as what I had thought would be the better
long-term plan than the above section-matching.  However, I would separate
it into two pieces.  I would still advocate using a special case mechanism
to find the vsyscall DSO's ELF header and locate its guts from there.  That
works even if the run-time dynamic linker's data structures are mangled or
missing.  But it is cleaner for the reasons above if that works from the
phdrs out (thence quickly to .dynamic) only, and doesn't rely on finding
section headers in the DSO.

Whether you use dynamic linker data structures or other magic to locate a
DSO's .dynamic section in memory, from there the procedure is the same to
locate its dynamic symbol table.

> We'll probably need some serious hacking on GDB's shared library support
> to make this all possible.

Not necessarily.  It's straightforward to locate the dynamic symbol table
and its string table by reading the .dynamic section, then copy them out of
the inferior and interpret the ELF symbol table format as we already do.
There are two ways to attack it.  One is to read the symbol and string
tables and then poke them directly into gdb symbol table form.  The other
approach is to fake a bfd containing .dynsym and .dynstr sections, plug
that in as an implicit symfile something like how separate debug info files
are attached along with the stripped objfiles they correspond to, and let
the existing infrastructure go to town.

To support everything we have mentioned, these are the steps I would take:

* Soup up the section-faking code now applied to core dumps so that it not
  only creates the dynamic* section but reads its contents to locate and
  fake sections called .dynsym et al.  Then if we treat a core dump as a
  symfile, it will provide the vsyscall DSO's symbols.  If we apply this
  section-faking code to all sectionless ELF files, then fully-stripped
  DSOs will be supported just as DSOs from binutils strip are now.

* For the vsyscall DSO, we read its ELF header from the inferior.  If it
  has section headers as it does now, we can calculate the total file image
  size to include those, fake a bfd containing the whole image, and all the
  existing support for an ELF DSO works (symbols and all).  If it doesn't
  have section headers, or we decide to ignore them, then we use only the
  file's PT_LOAD phdr to read the runtime memory image.  In that case, the
  section-faking code needs to apply to this synthesized bfd both to find
  .eh_frame_hdr via PT_GNU_EH_FRAME (or .IA64_unwind via PT_IA_64_UNWIND)
  and to find dynamic symbols via PT_DYNAMIC.

* For normal DSOs when you can't find the file on disk (or don't want to
  use it; there should probably be a new variant of auto-solib-add to
  control reading symbols from the inferior image as well as disk files),
  you can straightforwardly get the .dynamic section's address from the
  dynamic linker.  You can copy that from the inferior and from it know the
  locations of the .dynsym and .dynstr sections and copy them as well,
  putting it all into a synthesized bfd with those section names faked.
  There is no "proper" way to locate the phdrs, which you need to find the
  .eh_frame information.  However, in practice you can often assume that
  the loadbase (l_addr) points at the shared object's ELF header, and you
  can verify this by reading that memory and sanity-checking the ELF header
  format.  If you do that, you have all the phdrs and can use the PT_LOAD
  headers to fill out the solib bounds normally gotten from the disk file.
  However, prelinked DSOs may not make it easy to find the ELF header
  (l_addr is zero).

These latter ideas (everything from the quoted #2 on) are of secondary
concern.  I went into complete detail about them now only because you
brought it up.  Symbols from the vsyscall DSO in a core dump are nice, but
not essential.  Getting details from normal DSOs without using disk files
is a new feature that is very nice but not necessary, nor related except in
implementation details, to supporting vsyscall stuff.  The steps above are
not part of my immediate goals.

The essential need is to get .eh_frame information from the vsyscall DSO in
both live processes and core dumps.  The section-faking code already
suffices for core dumps.  For the time being, the image provided by the
kernel does have section headers, so it suffices just to synthesize a bfd
containing the whole file image read out of a live inferior's memory.  My
immediate goal is to get things working just using this.  Other things we
can see about later.  For this immediate goal, I would take these steps:

1. Make dwarf-frame.c work with .eh_frame info.  Mark is working on this,
   so I will wait for him or help with getting it done.

2. Make dwarf-frame.c locate .eh_frame via .eh_frame_hdr, matching a section
   named "eh_frame_hdrNN" as core dumps now have.  
   This is pretty trivial after step 1.

3. Modify corelow.c to do something like symbol_add_file with the core bfd.
   If that barfs on some backends, it could be done just for ELF cores.  I
   don't think it needs to be put into a Linux-specific module; if any
   other ELF core files contain useful phdrs they will work the same way.
   It also needs to remove the objfile/symfile again when the core file is
   detached.  On this I could use advice from those more expert on gdb.  I
   think it may suffice to call symbol_file_add on the core file name in
   place of opening the bfd directly, store the objfile pointer somewhere,
   and call free_objfile in core_detach.  But that is only a guess.  I can
   write this bit and observe internally that it works independent of the
   steps above that make it useful, so immediate implementation advice is
   solicited.

4. Write a function that creates an in-memory bfd by reading an ELF header
   and the information it points to from inferior memory.  I'll take a
   whack at this soon.

5. Write Linux backend code to locate the vsyscall DSO's ELF header, use
   that function to synthesize a bfd, and do something like symbol_add_file
   with it.  I'm not sure where this belongs.  It will be Linux-specific
   but not necessarily machine-dependent.  Where it belongs exactly
   probably depends on how exactly the locating is done.  The need to
   attach and detach the synthesized objfile is similar to the core file case.

Heretofore I have avoided mentioning how we locate the vsyscall DSO's ELF
header in the inferior's address space.  It's an unresolved question that
needs discussion, but it's a minor and boring detail among all the issues
involved here.  I saved it for those with the proven stamina to get through
the first 200 lines of my ranting.

The kernel tells the running process itself where to find the vsyscall DSO
image with an ElfNN_auxv_t element of type AT_SYSINFO_EHDR in the aux
vector it writes at the base of the stack at process startup.  For gdb to
determine this value, there are several approaches possible.

* Add a ptrace request just for it, i.e. PTRACE_GETSYSINFO_EHDR or
  something.  That is trivial to add on the kernel side, and easy for
  native Linux backend code to use.  It just seems sort of unsightly.
  A way to get that address from some /proc/PID file would be equivalent.

* Locate the address of the aux vector and read it out of the inferior's
  stack.  The aux vector is placed on the stack just past the environment
  pointers.  AFAIK, gdb doesn't already have a way to know this stack
  address.  It's simple to record it in the kernel without significant new
  overhead, and have a way via either ptrace or /proc to get this address.
  I raise this suggestion because it may be most acceptable to the Linux
  kernel folks to add something with so little kernel work involved.  The
  problem with this is that the program might have clobbered its stack.
  glibc doesn't ordinarily modify it, but a buggy program might clobber it
  by random accident, and any program is within its rights to reuse that
  part of the stack.  It won't have done so at program startup, but if you
  attach to a live process that has clobbered this part of its stack then
  you won't find the vsyscall info so as to unwind from those PCs properly.
  I am curious what gdb hackers' opinions on this danger are.

* Add a way to get the original contents of the aux vector, like
  /proc/PID/auxv on Solaris.  That could be /proc/PID/auxv, or new ptrace
  requests that act like the old PIOCNAUXV and PIOCAUXV.  On Solaris,
  /proc/PID/auxv's contents are not affected by PID clobbering the auxv on
  its stack.  In Linux, about half of the auxv entries are constants the
  kernel can easily fill in anew on the fly, but the other half are
  specific bits about the executable that are not easy to recover later
  from other saved information.  Though AT_SYSINFO_EHDR is all we need, a
  general interface like this really should give the complete set that was
  given to the process.  By far the simplest way to implement that is
  simply to save the array in the kernel.  As the kernel code is now, I can
  do that without any additional copying overhead, but it does add at least
  34 words to the process data structure, which Linux kernel people might
  well reject.  Of these three options, this one is my preference on
  aesthetic grounds but I don't know whether it will happen on the kernel side.

I have not been able to imagine any way to get this magic address (the
vsyscall DSO loadbase) directly from the system that does not require a
special backend call and therefore cause some kind of new headache for
remote debugging.  I don't know what people's thinking is on trying to
translate this kind of thing across the remote protocol.

There is also the option to punt trying to find it directly from the
system, and rely on the dynamic linker's data structures to locate it.  As
mentioned above, I don't like relying on the inferior's data structures not
being frotzed, nor relying on there being a normal dynamic linker to be
able to know about the vsyscall DSO.  Furthermore, the normal dynamic linker
report of l_addr is not helpful because that "loadbase" is the bias
relative to the addresses in the DSO's phdrs, which is 0 for the vsyscall
DSO since it is effectively prelinked.  The only address directly available
is l_ld, the address of its .dynamic section.  There is no proper way from
that to locate the phdrs or ELF file header, so it would have to be some
kludge rounding down or searching back from there or something.  The only
thing favoring this approach is that it requires no new target interfaces
and no new remote debugging complications.

I think this question is pretty open, though not all that exciting.  My
inclination is to implement /proc/PID/auxv (with storage overhead) and see
what the Linux kernel hackers' reaction to that is.  They may suggest that
it work by reading out of the process stack (which is what
/proc/PID/cmdline and /proc/PID/environ do).  I would like to know opinions
from the gdb camp on how bad a thing to do that might be.  Second choice is
to make /proc/PID/maps list the vsyscall DSO mapping in a recognizable way;
that is likely to go over well enough with the kernel hackers.  I would be
all for an entirely different solution that is both robust (not relying on
the inferior's own fungible memory) and remote-friendly, if anyone thinks
of one.


I have raised a lot of crapola in this message.  I hope I have been clear
in specifying the details on which I need some assistance and direction.
Aside from the DWARF2 CFI stuff per se, I am prepared to write all the
necessary code myself and am only soliciting for advice, not for anyone
else to implement it.


Thanks,
Roland

