This is the mail archive of the binutils@sourceware.org
mailing list for the binutils project.
Re: [PATCH] Support gzip compressed exec and core files in gdb
- From: Michael Eager <eager at eagerm dot com>
- To: Pedro Alves <palves at redhat dot com>, Jan Kratochvil <jan dot kratochvil at redhat dot com>
- Cc: "gdb-patches at sourceware dot org" <gdb-patches at sourceware dot org>, binutils <binutils at sourceware dot org>
- Date: Thu, 12 Mar 2015 09:58:02 -0700
- Subject: Re: [PATCH] Support gzip compressed exec and core files in gdb
- Authentication-results: sourceware.org; auth=none
- References: <54FF77D6 dot 7010400 at eagerm dot com> <20150311221329 dot GB11980 at host1 dot jankratochvil dot net> <5500E074 dot 6070002 at eagerm dot com> <55016D6F dot 4010104 at redhat dot com> <5501B1EB dot 5010806 at eagerm dot com> <5501BB08 dot 90503 at redhat dot com>
On 03/12/15 09:12, Pedro Alves wrote:
On 03/12/2015 03:34 PM, Michael Eager wrote:
On 03/12/15 03:41, Pedro Alves wrote:
Waiting for GDB to decompress that once is already painful. Waiting for it
multiple times likely results in cursing and swearing at gdb's slow start
up. Smart users will realize that and end up decompressing the file manually
outside gdb, just once, anyway, thus saving time.
We could "fix" the "multiple times" issue by adding even more smarts,
based on an already-decompressed-files cache or some such. Though of
course, more smarts, more code to maintain.
I had considered adding a command or command line option to specify
the name of the uncompressed file, so that it could be reused.
What's the point then? If you need to do that, then you already
lost all the convenience. Just type "gunzip core.gz && gdb core"
instead of "gdb -tmp-core /tmp/core core.gz".
So I think my point still stands, and IMO, it's a crucial point.
I agree with Jan -- the real convenience would be being able to skip the
long whole-file decompression step altogether, with an on-demand
block-decompress scheme, because gdb in reality doesn't need to touch
the vast majority of the core dump's contents. That would
be a solution that I'd be happy to see implemented.
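[As an editorial illustration only, not code from the thread: the cost behind the on-demand block-decompress argument is easy to see with Python's gzip module. A gzip stream carries no block index, so reaching any offset means decompressing and discarding everything before it; an on-demand scheme would need an indexed or block-compressed format to avoid that.]

```python
import gzip
import io

# Build a 1 MiB "core file" of patterned bytes and gzip it in memory.
raw = bytes(range(256)) * 4096
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode="wb") as f:
    f.write(raw)

# GzipFile supports seek(), but a gzip stream has no block index, so
# reaching an offset means decompressing and discarding everything
# before it -- every seek into the middle of the file is O(offset).
g = gzip.GzipFile(fileobj=io.BytesIO(buf.getvalue()), mode="rb")
g.seek(512 * 1024)          # "read one page from the middle of the core"
page = g.read(4096)
assert page == raw[512 * 1024 : 512 * 1024 + 4096]
```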
That's a solution to a different problem.
I don't think it is. What's the real problem you're solving?
Lots of users, most of whom are not very conversant with gdb, lots of
compressed exec and core files, all compressed with gzip. The
goal is to make using compressed files simple and transparent,
without users having to enter additional commands.
A wonderful patch supporting xz compressed files with an on-demand
block-decompress scheme would be a great solution for some other
use case, one which seems hypothetical at the moment.
If we're just decompressing to /tmp, then we also need to
compare the benefits of a built-in solution against having users
do the same with a user-provided gdb command implemented in one
of gdb's extension languages (python, scheme), e.g., a script
that adds a "decompress-core" command that does the same:
it decompresses whatever compression format, and loads the result
with "core /tmp/FILE".
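[Editorial sketch of the kind of extension command described above, assuming Python and gzip only; the command name "decompress-core" and the helper decompress_to_tmp() are hypothetical, not existing gdb commands.]

```python
import gzip
import os
import shutil
import tempfile

def decompress_to_tmp(path):
    """Decompress the gzip file PATH into a temp file; return its name."""
    fd, out = tempfile.mkstemp(prefix="gdb-core-")
    with gzip.open(path, "rb") as src, os.fdopen(fd, "wb") as dst:
        shutil.copyfileobj(src, dst)
    return out

# Only register the command when actually running inside gdb.
try:
    import gdb

    class DecompressCore(gdb.Command):
        """decompress-core FILE.gz: decompress FILE.gz, then load it as a core."""

        def __init__(self):
            super().__init__("decompress-core", gdb.COMMAND_FILES)

        def invoke(self, arg, from_tty):
            gdb.execute("core %s" % decompress_to_tmp(arg))

    DecompressCore()
except ImportError:
    pass  # not running inside gdb; decompress_to_tmp() is still usable
```

Sourced from .gdbinit, such a command could also be driven from the command line with -ex, e.g. gdb -ex "decompress-core core.gz".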
This requires that users manually decompress files, and makes it
impossible to put the compressed file name on the command line.
No it doesn't: there's '-ex' to run commands on the command
line: gdb -ex "decompress-and-load-core foo.gz"
As I said, this requires users to know whether an exec or core
is compressed and manually enter a command to uncompress it.
It also looks to me like a wart and kludge, rather than having
GDB automatically identify the compression method and do the
operation transparently for the user.
What I'm saying is that it seems to me that whatever
you're doing automatically in GDB can be done automatically
in a script. A gdb command implemented in python can of
course also identify the compression method and support
multiple compression formats, by just calling the
appropriate decompression tool.
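[Editorial sketch of the format identification mentioned above, based on the standard magic numbers for gzip, bzip2, and xz; compression_of() is a hypothetical helper, not gdb API.]

```python
# Leading "magic" bytes identifying the common compression formats.
MAGIC = {
    b"\x1f\x8b": "gzip",
    b"BZh": "bzip2",
    b"\xfd7zXZ\x00": "xz",
}

def compression_of(path):
    """Return the compression format of PATH, or None if unrecognized."""
    with open(path, "rb") as f:
        head = f.read(6)
    for magic, name in MAGIC.items():
        if head.startswith(magic):
            return name
    return None
```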
We have many wrappers around gdb, some of which handle compressed
files, some of which don't. They are all different and
problematic to debug and maintain. Putting support for compressed
files into gdb means that it simply works, and users don't have
to remember which wrapper accepts compressed files and which doesn't.
As I said, I won't strongly object, though I still don't
see the compelling use case that warrants doing this in GDB.
I do foresee the usability problems and support
requests this will lead to.
IMO, whatever the solution, if built in, this is best implemented
in BFD, so that objdump can dump the same files gdb can.
I took that approach initially. But GDB finds and opens files,
not BFD. Moving what GDB is doing into BFD, where it should have
been in the first place (IMO), seemed more problematic.
If there's problems, let's fix them. From Alan's response,
the problem you mention doesn't really exist in the form you described it.
I misspoke/misremembered. It isn't exec_close() which closes the
file, it is bfd_cache_close_all(). The bfd is not closed, only the underlying file.
While BFD does most of the file operations, there isn't a clear
functional boundary between GDB file operations and BFD file
operations. Allowing an opened fd to be passed into BFD makes
doing the decompression in BFD problematic, since BFD doesn't
know where the opened file was found, and the decompress libraries
(at least gzopen()) expects a path, not an opened fd. BFD
wouldn't know how the fd was opened so that it could use the
same flags to open the uncompressed file.
Fixing the problem would mean eliminating functions in BFD
which accepted an open fd and replacing calls to these functions
with modified versions of bfd_open() which accepted additional
flags or did whatever other operations the caller did when opening
the files. GDB opens files in a number of places, such as in
exec_file_attach() using openp() which searches the path for
the file. This would have to be migrated into BFD.
Michael Eager email@example.com
1960 Park Blvd., Palo Alto, CA 94306 650-325-8077