This is the mail archive of the
mailing list for the Archer project.
Re: Inter-CU DWARF size optimizations and gcc -flto
>>>>> "Daniel" == Daniel Jacobowitz <email@example.com> writes:
Daniel> You are correct, it does crush GDB :-) I routinely try - emphasis on
Daniel> try - to use GDB on programs with between 2500 and 5500 shared
Daniel> libraries. It's agonizing. I have another project I want to work on
Daniel> first, and not much time for GDB lately, but this is absolutely on my
Daniel> list to improve.
I am curious how you plan to improve it.
The plan I mentioned upthread is probably pretty good for scaling to
distro-sized programs, say 200 shared libraries or fewer (this is
LibreOffice or Mozilla). Maybe we could get a bit more by putting
minsyms into the index.
I am not so confident it would let gdb scale to 5000 shared libraries.
For that size I've had two ideas.
First, and simplest, punt. Make the user disable automatic reading of
shared library debuginfo (or even minsyms) and make the user explicitly
mention which ones should be used -- either by 'sharedlibrary' or by a
I guess this one would sort of work today. (I haven't tried.)
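For the record, something close to this "punt" approach can be approximated with existing GDB settings today (untested by me at anywhere near this scale):

```
# Don't automatically read symbols from shared libraries as they load.
(gdb) set auto-solib-add off
(gdb) run
# Later, pull in symbols only for the libraries you actually care about;
# 'sharedlibrary' takes an optional regexp matched against library names.
(gdb) sharedlibrary libfoo
```

This avoids the up-front cost of reading debuginfo for thousands of objfiles, at the price of the user having to know which libraries matter for the bug at hand.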
Second, and harder, is the "big data" approach. This would be something
like -- load all the debuginfo into a server, tagged by build-id,
ideally with global type- and symbol-interning; then change gdb to send
queries to the server and get back the minimal DWARF (or DWARF-esque
bits) needed; crucially, this would be a global operation instead of
per-objfile, so that gdb could exploit parallelism on the server side.
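To make the shape of that concrete, here is a hypothetical sketch (all names and structures invented for illustration; nothing like this exists in gdb) of a server-side index keyed by build-id, with global symbol interning so that names shared across thousands of objfiles are stored once, and a lookup that is global rather than per-objfile:

```python
# Hypothetical sketch: a debuginfo server indexed by build-id, with a
# global intern table so identical symbol names across objfiles share
# one stored object, and a global (cross-objfile) query interface.
class DebugInfoServer:
    def __init__(self):
        self._intern = {}       # global string intern table
        self._objfiles = {}     # build-id -> {symbol name -> DWARF-esque blob}

    def _i(self, name):
        # Interning: the first objfile to mention a name stores it; all
        # later objfiles reuse the same object.
        return self._intern.setdefault(name, name)

    def load(self, build_id, symbols):
        # Ingest one objfile's symbols, interning every name.
        self._objfiles[build_id] = {self._i(n): blob
                                    for n, blob in symbols.items()}

    def lookup(self, name, build_ids):
        # One query spans all requested objfiles; a real server could
        # scan them in parallel, which is the whole point.
        name = self._i(name)
        for bid in build_ids:
            blob = self._objfiles.get(bid, {}).get(name)
            if blob is not None:
                yield bid, blob

# Toy usage: two objfiles that both define std::string.
server = DebugInfoServer()
server.load("abc123", {"main": "<die>", "std::string": "<die>"})
server.load("def456", {"helper": "<die>", "std::string": "<die>"})
hits = list(server.lookup("std::string", ["abc123", "def456"]))
```

The interesting property is that `lookup` is a single operation over the whole set of objfiles, so the client (gdb) sends one query and the server decides how to parallelize it.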
Parallelism seems key to me. Parallelism on the machine running gdb
probably wouldn't work out, though, on the theory that there'd be too
much disk contention. Dunno, maybe worth trying.