This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.



Re: [PATCH 2/3] skip_prologue (amd64)


On 12/05/2013 02:07 PM, Yao Qi wrote:
> On 12/05/2013 08:00 PM, Pedro Alves wrote:
>> I think we can.  My view here is that handling an event
>> is a quick and short lived operation.  GDB bursts a few reads
>> in sequence, and then moves on to the next event.  In that
>> scenario, you get as much stale results with or without a cache.
> 
> I disagree.  Results may be stale with a cache; without a cache, 
> results may be different, not stale.  They differ because they are 
> read at different times, but all of them are valid.  Each is a 
> snapshot of a piece of memory at a certain moment.

Sigh, I don't know why I wrote "stale" there.  I meant "wrong,
inconsistent, useless, whatnot".  As in, if a thread changes
memory while GDB is reading it, you can get incoherent/self-inconsistent
results.  E.g., even if, between the inferior's threads, writes to
'struct { int a; int b; } ab;' are coordinated, say, with a mutex,
when printing 'ab', GDB can end up reading one chunk of the
structure's contents before the write, and another chunk after
the write, and present that Frankenstein value to the user.
You can get such undefined results with or without a cache,
because the "certain moment" will be different for each of
the partial reads.  Even a single partial read is not guaranteed
to be atomic.

>> IOW, even without the cache, running threads can change memory as
>> GDB reads it, and so the chances of hitting stale data with or
>> without a cache are practically the same.  OTOH, distinct target
>> events (and commands, etc.) can trigger quite apart (time-wise),
>> and that breaks the odd balance -- not flushing the cache
>> between events increases the changes of hitting stale data,
> 
> I suspect you meant "chances" instead of "changes".

Yes.

> 
>> compared to not having a cache.
> 
> Flushing the cache decreases the likelihood of getting stale data, but
> can't completely remove it.  

Right.  The trick, IMO, is selecting flush points such that the
chances of getting an incoherent value/memory chunk are
practically the same with or without a cache.  Places where
GDB needs to be sure to get a coherent, instantaneous
snapshot view of memory need to handle that specially (we do
that nowhere presently), e.g., by pausing all (affected) threads,
or perhaps even something smarter (say, with kernel help, the
debugger declares its intention of reading a memory range,
and the kernel makes sure the associated pages don't change
from the debugger's view, COW-ing the pages if some inferior
thread wants to change them while the debugger is accessing them).

> I am fine with using the cache in non-stop mode, as
> it helps performance, so we have to compromise.

Right.

>>>> Besides the predicate "is any thread running", another is "no thread is
>>>> resumed since last flushing".  Cache should be flushed when either is
>>>> true.
>> Not sure I understood that.
> 
> I meant that even if "no thread is running now", we may have to flush 
> the cache if "they were resumed" (and have all stopped now).

OK, I see what you mean.  I was assuming "all stopped now"
means "we've seen all threads report stops", but there are
indeed other ways to implement that predicate, and that
case does need to be considered.

-- 
Pedro Alves

