This is the mail archive of the gdb@sourceware.org mailing list for the GDB project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]

Re: Single stepping and threads


On Sat, 2006-12-02 at 16:27 +0000, Rob Quill wrote:
> On 30/11/06, Michael Snyder <Michael.Snyder@palmsource.com> wrote:
> > On Wed, 2006-11-29 at 08:38 -0800, Joel Brobecker wrote:
> > > > > I would say yes. A step should be a few instructions, while stepping
> > > > > over a call is potentially a much larger number of instructions.
> > > > > As a result, stepping over without letting the other threads go would
> > > > > more likely cause a lock.
> > > >
> > > > I think you mean "no" then?
> > >
> > > Oops, sorry, I meant "no".
> > >
> > > One of my coworkers expressed his opinion as follow:
> > >
> > > <<
> > > I would find it confusing if "step" and "next" behave differently with
> > > respect to threads, because they seem like basically the same thing.
> > > "Next is just like step, except that it goes over calls" seems simple to
> > > me. "Next is just like step, except that it goes over calls, and has
> > > some subtle difference regarding threads" seems more complicated to me.
> > >
> > > So I would suggest leaving the default as "off", or else changing it
> > > to "on".
> >
> > Default on would be a disaster -- most threaded programs would
> > not behave even remotely the same under the debugger as they would
> > solo.
> >
> > In fact, many would deadlock almost immediately.
> 
> I have a question regarding this. In concurrent programming (as we
> were taught it), the principle was that the interleaving of
> instructions from threads was random.

It's not really random, it's "unpredictable" -- within the 
constraints of the model.  If you violate the model, say, by
having a direct pipeline to the kernel scheduler, you could
predict task switches perfectly.

Any given scheduler has an algorithm for deciding when to 
switch threads.  Some have a choice of algorithms, and many
can be "tuned".  Most or all are affected by outside events, 
e.g. programs blocking on a device.

The first point to be made here is that gdb's participation in
the system *changes* the system -- so that the quasi-random
task switches will happen at different times and in potentially
different orders than otherwise.
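For reference, the trade-off being debated in this thread is exposed in GDB
as the scheduler-locking setting.  A brief illustration -- the available
modes and the default may vary by GDB version, so check your version's
documentation:

```
(gdb) show scheduler-locking
(gdb) set scheduler-locking off    # all threads may run while stepping
(gdb) set scheduler-locking on     # only the stepped thread runs
(gdb) set scheduler-locking step   # lock the scheduler for "step", but
                                   # let other threads run on "continue"
```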


>  So, if "on" were the default, and a few steps were done in GDB --
> in fact, as many as it took to deadlock the program -- surely it is
> possible (however unlikely) that, when the program is run without
> GDB, the interleaving is the same as the one GDB forced, and the
> code would deadlock. That would make the code bad, rather than the
> debugger.

No.  Say you have a producer thread and a consumer thread.
You decide to debug the consumer thread in isolation (i.e.
the producer thread cannot run).

You come to a point where the consumer thread blocks, waiting
for the producer thread to produce something.

Normally, this is not a deadlock.  The consumer will sleep, and 
the producer will wake up and produce something.  But the producer
now CANNOT run, therefore we are deadlocked.



