This is the mail archive of the mailing list for the Archer project.


Re: ptrace improvement ideas

On Tue, 08 Feb 2011 02:58:44 +0100, Roland McGrath wrote:
> > 	linux_proc_pending_signals (/proc/PID/status) -> now 2x sigset_t
> > 	(GDB is only interested in SIGINT from that sigset_t now.)
> I'm surprised that GDB wants to know such a thing.  It makes me suspect
> there is a deeper problem to which looking that up is a workaround.
> Can you explain?

-> linux_nat_has_pending_sigint -> maybe_clear_ignore_sigint

Without it, a single CTRL-C gets reported separately for each process in the process group:

echo 'main(){ fork(); pause(); }' | gcc -x c -; patched-gdb -nx ./a.out 
(gdb) set detach-on-fork off
(gdb) set target-async on
(gdb) set non-stop on
(gdb) run
Starting program: .../a.out 
[New process 19546]
^C <----------------------------------------------------
Program received signal SIGINT, Interrupt.
0x00000032a1cadb40 in __pause_nocancel () at ../sysdeps/unix/syscall-template.S:82
Program received signal SIGINT, Interrupt.
0x00000032a1cadb40 in __pause_nocancel () at ../sysdeps/unix/syscall-template.S:82

(Not reproducible with NPTL threads.)

> > 	linux_nat_core_of_thread_1 (/proc/%d/task/%ld/stat)
> > 	-> PTRACE_GET_CPUCORE -> long return value as the CPU #
> I had no idea GDB cared about such a thing at all.  Why does it?

It may be generally useless on normal platforms, but Ericsson exchanges use
CPU affinities for their tasks, and Eclipse displays the CPU core for that
purpose.  (I have not managed to get the CPU core # displayed in Eclipse yet,
though.)

The MI protocol has to provide it, as the FE (Front End) does not ask for it
explicitly.

> > [nitpick] or PTRACE_GETSIGINFO after waitpid, as GDB does.
> This is a misunderstanding.

OK, sorry, got it now.

> The point of using waitid is that you can see the new si_tgid field, and
> hence receive both the thread-specific ID and the process-wide TGID for a
> tracee that you haven't seen before.

GDB has linux_proc_get_tgid :-) but only as a workaround for nptl/5983 (fixed
by you); it again reads "/proc/%d/status", this time for "Tgid".

> Otherwise, PTRACE_O_INHERIT results
> in spontaneous reports for new IDs that you know nothing about and have no
> way to associate with where they came from.  The si_tgid idea addresses the
> problem for the case of new threads (standard NPTL threads, that
> is--CLONE_THREAD thread creations in the kernel's terms).  It doesn't help
> at all for other kinds of creations, such as a normal fork or vfork.

> That is why I suggested it might actually not be desireable to have
> PTRACE_O_INHERIT apply to all new creations, but instead make it limited to
> CLONE_THREAD creations.

PTRACE_O_INHERIT cannot be used for fork/vfork with `set detach-on-fork on'
(the default mode, which is not multi-inferior), as GDB needs to remove its
software breakpoints from the child before detaching it.  But the goal is
multi-inferior `set detach-on-fork off' anyway, so in that mode
PTRACE_O_INHERIT could apply even to fork/vfork.

> I'm interested in your thoughts on the issue of
> how GDB deals with the first report of an ID it hasn't seen before.  With
> current kernels, the only such situation that's possible is the brief race
> between a new PTRACE_O_TRACE{CLONE,FORK,VFORK} child reporting its first
> stop, and its parent reporting the PTRACE_EVENT_{CLONE,FORK,VFORK} stop for
> that child's creation (at which time PTRACE_GETEVENTMSG tells you the
> association between that parent's creation attempt and the new child).

`stopped_pids' tracks tasks which got reported as stopped before the parent's
PTRACE_EVENT_{CLONE,FORK,VFORK} stop arrived.

With multi-inferior `set detach-on-fork off', PTRACE_O_INHERIT could cover
both clones and forks/vforks, couldn't it?  There would then no longer be any
need to track `stopped_pids'.

PTRACE_O_TRACEEXEC still needs to stop the inferior, as GDB needs to reinsert
all the software breakpoints at their newly computed addresses.

