This is the mail archive of the gdb@sourceware.org mailing list for the GDB project.



Re: Problem with breakpoint addresses


On Fri, 2006-10-13 at 09:19 +0100, Andrew STUBBS wrote:
> Michael Snyder wrote:
> > What's the size of $r1, and what's the size of an address?
> > By converting $r1 to an address, you're applying an implied cast.
> > If that doesn't give the expected result (e.g. because $r1 is signed),
> > then you need to use an explicit cast.
> 
> Registers are 32 bits, addresses are 32 bits. It's just something in GDB 
> that uses 64 bits. It might be because sh-elf also supports sh64.
> 
> In any case, it is successfully setting the breakpoint and then failing 
> to recognise it when it is hit. That isn't the behaviour I would like. 
> If it failed to set the breakpoint at all, then requiring an explicit 
> cast might be fair enough, if the user thought addresses were 64 bits.

Hmmm.  Well, gdb's internal representation of a target address is
a typedef, CORE_ADDR, and usually it equates to a 64-bit integer.
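
For illustration, here is a minimal sketch (not GDB's actual code paths,
and the address value is hypothetical) of how a signed 32-bit register
value can sign-extend when widened into a 64-bit CORE_ADDR, so that the
stored breakpoint address no longer matches the 32-bit PC reported when
the breakpoint is hit:

#include <stdint.h>
#include <stdio.h>

typedef uint64_t CORE_ADDR;     /* CORE_ADDR is 64 bits in this build.  */

int
main (void)
{
  int32_t r1 = (int32_t) 0x8c000100;        /* raw register contents,
                                               high bit set            */

  CORE_ADDR implied = (CORE_ADDR) r1;       /* implied (signed) cast   */
  CORE_ADDR explicit_cast = (uint32_t) r1;  /* explicit unsigned cast  */

  printf ("implied cast:  0x%016llx\n", (unsigned long long) implied);
  /* -> 0xffffffff8c000100: sign-extended, won't match a 32-bit PC.  */
  printf ("explicit cast: 0x%016llx\n", (unsigned long long) explicit_cast);
  /* -> 0x000000008c000100: what the user actually meant.  */
  return 0;
}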

Seems like, if we know that for a given architecture an actual
target address is only 32 bits, we should always make sure to
store only 32 bits into a CORE_ADDR.
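
Something along these lines, as a sketch only; the helper name and the
addr_bit parameter are hypothetical, the point is just masking the value
down to the architecture's real address width before it is stored:

#include <stdint.h>

typedef uint64_t CORE_ADDR;

/* Keep only the low ADDR_BIT bits of ADDR (hypothetical helper).  */
static CORE_ADDR
truncate_to_addr_bits (CORE_ADDR addr, int addr_bit)
{
  if (addr_bit < 64)
    addr &= ((CORE_ADDR) 1 << addr_bit) - 1;
  return addr;
}

On a 32-bit SH target, addr_bit would be 32, so the sign-extended
0xffffffff8c000100 above would come back as 0x8c000100 and compare
equal to the PC at the breakpoint.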

Maybe it's time we made CORE_ADDR into a first-class object, with
accessor methods.  I know, I know, we can only pretend to do that in
C, but we already treat a number of things like objects.
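
Purely illustrative, but "pretending" in C could look like wrapping the
value in a struct and only going through accessors that enforce the
architecture's address width (all names here are made up):

#include <stdint.h>

struct core_addr
{
  uint64_t value;   /* always stored truncated to addr_bit bits */
  int addr_bit;     /* significant address bits for this arch   */
};

static struct core_addr
core_addr_from_raw (uint64_t raw, int addr_bit)
{
  struct core_addr a;
  a.addr_bit = addr_bit;
  a.value = addr_bit < 64 ? raw & (((uint64_t) 1 << addr_bit) - 1) : raw;
  return a;
}

static uint64_t
core_addr_value (const struct core_addr *a)
{
  return a->value;
}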


