This is the mail archive of the
mailing list for the Archer project.
Re: Inter-CU DWARF size optimizations and gcc -flto
Daniel> I have no idea. One thing I'd like to revisit is your work on
Daniel> threaded symbol load; I have plenty of cores available, and the
Daniel> machine is pretty much useless to me until my test starts.
This might help; it would be worth trying, at least.
I am mildly skeptical about it working well with a very big program.
It seems like you could get into memory trouble, which would need a
different sort of scaling approach.
Also, with .gdb_index, in my tests the startup time of gdb is dominated
by minsym reading, even banal stuff like sorting them. I think you'd
have to insert some threading bits in there too... easy though.
Daniel> also a lot of room for profiling to identify bad algorithms; I think
Daniel> we spend a lot of time reading the solib list from the inferior
Daniel> (something I thought I and others had fixed thoroughly already...) and
Daniel> I routinely hit inefficient algorithms e.g. during "next".
Yeah, I hadn't even gotten to thinking about anything other than the
symbol-reading side of things.
Tom> First, and simplest, punt. Make the user disable automatic reading of
Tom> shared library debuginfo (or even minsyms) and make the user explicitly
Tom> mention which ones should be used -- either by 'sharedlibrary' or by a
Tom> linespec extension.
Daniel> I am hugely unexcited by this.
Yeah, me too. It would "work", but the user experience would not be good.
Daniel> Something I've been thinking about is that incrementalism is hard in
Daniel> GDB because the symbol tables are so entwined... adding any sort of
Daniel> client/server interface would force us to detangle them, and then
Daniel> individual objects could have a longer life.
The symbol tables are my least favorite part of gdb right now, wresting
the crown from linespec this year. Though maybe that is just because I
don't know all parts equally well ;)