This is the mail archive of the gdb@sources.redhat.com mailing list for the GDB project.
Re: why is gdb 5.2 so slow
- From: Jim Blandy <jimb at redhat dot com>
- To: Andrew Cagney <ac131313 at redhat dot com>
- Cc: "Howell, David P" <david dot p dot howell at intel dot com>, Daniel Jacobowitz <drow at mvista dot com>, wim delvaux <wim dot delvaux at adaptiveplanet dot com>, gdb at sources dot redhat dot com
- Date: 04 Nov 2002 15:29:23 -0500
- Subject: Re: why is gdb 5.2 so slow
- References: <331AD7BED1579543AD146F5A1A44D5251279CE@fmsmsx403.fm.intel.com> <3DC2E819.9000700@redhat.com>
Andrew Cagney <ac131313@redhat.com> writes:
> What would really help is for the kernel to provide an option where it
> rips out any stray breakpoints after a detach. That way GDB could
> safely enable this by default.
I've heard it suggested that, for this behavior, the kernel shouldn't
know about breakpoints specifically, since those need all sorts of
other support (GDB has to be ptracing and waiting, etc.). Instead,
the kernel would provide some way for a debugger to make some memory
writes (e.g., breakpoints) --- but not others (e.g., variable
modifications) --- via a special interface that would revert the
writes when the GDB process exited or died.
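To make the "stray breakpoint" problem concrete, here is a toy simulation (in Python, not GDB's actual code, which does this with ptrace PTRACE_PEEKTEXT/PTRACE_POKETEXT against a live inferior): a software breakpoint is just a saved original byte plus an int3 opcode written over it. If the debugger dies before the restore step, the 0xCC stays behind in the program text, which is exactly what the proposed kernel interface would revert. The class and variable names here are invented for illustration.

```python
# Toy model of software-breakpoint insertion. A real debugger would
# use ptrace(PTRACE_PEEKTEXT/POKETEXT) on the inferior; here a
# bytearray stands in for the debuggee's text segment.

INT3 = 0xCC  # x86 breakpoint opcode

class FakeInferior:
    """Stand-in for a debuggee's text segment."""
    def __init__(self, text: bytes):
        self.text = bytearray(text)

    def peek(self, addr: int) -> int:
        return self.text[addr]

    def poke(self, addr: int, byte: int) -> None:
        self.text[addr] = byte

class Breakpoint:
    def __init__(self, inferior: FakeInferior, addr: int):
        self.inferior = inferior
        self.addr = addr
        self.saved = None  # original byte, needed to undo the write

    def insert(self) -> None:
        self.saved = self.inferior.peek(self.addr)
        self.inferior.poke(self.addr, INT3)

    def remove(self) -> None:
        self.inferior.poke(self.addr, self.saved)

# push rbp; mov rbp,rsp; ret
inferior = FakeInferior(b"\x55\x48\x89\xe5\xc3")
bp = Breakpoint(inferior, 0)
bp.insert()
assert inferior.peek(0) == INT3   # trap planted for the inferior
bp.remove()                       # if GDB dies before this line runs,
assert inferior.peek(0) == 0x55   # ...the stray 0xCC is left behind
```

The point of the kernel-side proposal is that `remove()` lives in the debugger process, so the kernel is the only party that can guarantee the revert happens when that process exits.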
The ways I can think of to implement this involve page table magic,
which makes me wonder if one couldn't actually use them for per-thread
breakpoints and thread hops, too. That is, if the debugger could make
one thread see the program text differently from the others, then it
could pull the breakpoint for that thread alone, while leaving it in
for the others. No thread hop necessary.
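The per-thread idea above can be sketched the same way. In this hypothetical model (names invented for illustration), the stepping thread gets its own copy-on-write-style view of the breakpointed page with the original instruction restored, while every other thread keeps seeing the int3, so there is no window during which the breakpoint is gone for everyone:

```python
# Toy model of per-thread text views: the debugger gives one thread a
# private copy of the page with the breakpoint pulled, while other
# threads keep the shared (breakpointed) view. This is the page-table
# trick the message speculates about, not anything Linux provides.

INT3 = 0xCC

class ThreadView:
    def __init__(self, shared: bytearray):
        self.shared = shared
        self.private = None  # per-thread page copy, if one was made

    def peek(self, addr: int) -> int:
        page = self.private if self.private is not None else self.shared
        return page[addr]

    def give_private_copy(self, addr: int, original_byte: int) -> None:
        """Pull the breakpoint for this thread alone."""
        self.private = bytearray(self.shared)
        self.private[addr] = original_byte

shared_text = bytearray(b"\x55\xc3")   # push rbp; ret
saved = shared_text[0]
shared_text[0] = INT3                  # breakpoint visible to everyone

t1 = ThreadView(shared_text)           # the thread we want to step
t2 = ThreadView(shared_text)           # any other thread

t1.give_private_copy(0, saved)
assert t1.peek(0) == 0x55              # t1 executes the real instruction
assert t2.peek(0) == INT3              # t2 still hits the breakpoint
```

Contrast this with the classic "thread hop", where the debugger must remove the breakpoint from the shared text, single-step the one thread, and reinsert it, leaving a window in which other running threads can sail past the breakpointed address unnoticed.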
I'm out of my depth here, though --- I've never looked at Linux's mm
layer, for example, and don't understand its limitations. Just an
idea.