This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.



Re: [PATCH 0/7] Support reading/writing memory on architectures with non 8-bits bytes


On 15-04-09 12:29 PM, Eli Zaretskii wrote:
>> Date: Thu, 9 Apr 2015 11:39:06 -0400
>> From: Simon Marchi <simon.marchi@ericsson.com>
>> CC: <gdb-patches@sourceware.org>
>>
>>> I wonder: wouldn't it be possible to keep the current "byte == 8 bits"
>>> notion, and instead to change the way addresses are interpreted by the
>>> target back-end?
>>>
>>> IOW, do we really need to expose this issue all the way to the higher
>>> levels of GDB application code?
>>
>> I don't think there is an elegant way of making this work without gdb
>> knowing at least a bit about it. If you don't make some changes at one
>> level, you'll end up needing to make the equivalent changes at some other
>> level (still in gdb core).
> 
> I didn't mean to imply that this could work without changes on _some_
> level.  The question is what level, and whether or not we expose this
> to the application level, where commands are interpreted.
>
>> From what I understand, your suggestion would be to treat addresses as
>> indexes of octets in memory. So, to read target bytes at addresses 3
>> and 4, I would have to ask gdb for 4 "gdb" bytes starting at address 6.
>>
>>                                               size == 2
>>                                         v-------------------v
>>           +---------+---------+---------+---------+---------+---------+
>> real idx  |    0    |    1    |    2    |    3    |    4    |    5    |
>>           +----+----+----+----+----+----+----+----+----+----+----+----+
>> octet idx |  0 |  1 |  2 |  3 |  4 |  5 |  6 |  7 |  8 |  9 | 10 | 11 |
>>           +----+----+----+----+----+----+----+----+----+----+----+----+
>>                                         ^-------------------^
>>                                               size == 4
>>
>> The backend would then divide everything by two and read 2 target bytes
>> starting at address 3.
> 
> Something like that, yes.

Ok.

>> If we require the user or the front-end to do that conversion, we just push
>> the responsibility over the fence to them.
> 
> I don't follow: how does the above place any requirements on the user?

This was my train of thought:

- The gdb core (the target interface, I suppose?) would use octet indexing
  and octet sizes, and the backend would compensate for that. I take it we
  agree on that point, given your "Something like that, yes".
- The functions handling the commands (the application level?) would be
  agnostic about the byte size, meaning they would not do any adjustment
  for it.
- Therefore, if we take mi_cmd_data_read_memory_bytes as an example, the
  front-end would have to pass double the address and double the size to
  get the desired result (see the example just below).
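
To take the figures from the diagram above: to see the 2 target bytes at
real address 3, the front-end would have to issue something like

  -data-read-memory-bytes 6 4

instead of the "-data-read-memory-bytes 3 2" it would naturally write.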

If you want the gdb core to keep using addresses and sizes in octets, the
conversion needs to be done somewhere: in the head of the user, in the
front-end, or in the command handling function before it passes control to
a gdb core function.
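
Just to make "done somewhere" concrete, here is a rough sketch of the
adjustment a 16-bit-byte backend would have to make if the core handed it
octet addresses and lengths (the function name is made up; CORE_ADDR and
ULONGEST are the usual types from defs.h):

  /* Map an octet-based request back to target-byte units. Using the
     figures from the diagram above, (addr == 6, len == 4) in octets
     becomes (addr == 3, len == 2) in 16-bit target bytes.  */

  static void
  octet_request_to_target_bytes (CORE_ADDR octet_addr, ULONGEST octet_len,
                                 int octets_per_byte,
                                 CORE_ADDR *byte_addr, ULONGEST *byte_len)
  {
    *byte_addr = octet_addr / octets_per_byte;  /* 6 / 2 == 3 */
    *byte_len = octet_len / octets_per_byte;    /* 4 / 2 == 2 */
  }

Every backend with non-8-bit bytes would need its own copy of that logic,
which is essentially the "equivalent changes at some other level" I
mentioned earlier.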

>> For the developer working with that system 8 hours per day, a size
>> of 1 is one 16-bits byte. His debugger should understand that
>> language.
> 
> By "size" do you mean the result of "sizeof"?  That could still
> measure in target-side units, I see no contradiction there.  I just
> don't see why do we need to call that unit a "byte".
> 
>> If I have a pointer p (char *p) and I want to examine memory starting at p,
>> I would do "x/10h p". That wouldn't give me what I want, as it would give me
>> memory at p/2.
> 
> I don't see how it follows from my suggestion that 10 here must mean
> 80 bits.  It could continue meaning 10 16-bit units.

Sorry about that, I should have just used "x p"; the /10h part was not part
of my point. Following my previous point, where the user would have needed
to specify double the address, asking to read at address p would have shown
memory starting at target address p/2.

>> Also, the gdb code in the context of these platforms becomes instantly more
>> hackish if you say that the address variable is not really the address we want
>> to read, but the double.
> 
> I didn't say that, either.

That's what I understood. If the backend needs to adjust the address by
dividing it by two, it means that the address parameter it received was
double the actual address...

>> Another problem: the DWARF information describes the types using sizes in
>> target bytes (at least in our case, other implementations could do it
>> differently I suppose). The "char" type has a size of 1 (1 x 16-bits).
> 
> That's fine, just don't call that a "byte".  Call it a "word".

I actually started by using "word" throughout the code, but then I found it
even more ambiguous than "byte". In the context of the x command, a word is
defined as four bytes, so the term still clashes.

>> So, when you "print myvar", gdb would have to know that it needs to convert
>> the size to octets to request the right amount of memory.
> 
> No, it won't.  It sounds like my suggestion was totally misunderstood.

Indeed, I think I missed your point. Re-reading the discussion doesn't help. Could
you clarify a bit how you envision things would work at various levels in gdb? If
we don't understand each other clearly, this discussion won't go anywhere useful.

>> I think the solution we propose is the one that models the best the debugged
>> system and therefore is the least hacky.
> 
> My problem with your solution is that you require the user to change
> her thinking about what a "byte" and "word" are.

It doesn't change anything for the existing users of GDB: a byte will
continue to be 8 bits on those platforms, so they don't need to change
anything about how they think.

I would assume that somebody developing for a system with 16-bit bytes is
very well aware of that fact; it is quite fundamental. They won't be shocked
if the debugger shows 16 bits when they ask to read 1 byte. Quite the
opposite, actually: it will feel like a natural extension of the compiler.
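
For what it's worth, the native toolchain for such a target already
presents exactly that model. A minimal sketch of what the developer sees
there (the target itself is hypothetical):

  #include <limits.h>
  #include <stdio.h>

  int
  main (void)
  {
    /* On a 16-bit-byte target the compiler reports CHAR_BIT == 16
       while sizeof (char) stays 1: a size of 1 already means one
       16-bit byte.  */
    printf ("CHAR_BIT = %d, sizeof (char) = %d\n",
            CHAR_BIT, (int) sizeof (char));
    return 0;
  }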

> GDB is moving to
> being able to debug several different targets at the same time, and I
> worry about the user's sanity when one of those targets is of the kind
> you are describing.  E.g., suppose we will have a command to copy
> memory from one target to another: how do we count the size of the
> buffer then?

About that particular example, I guess such a command would have no other
choice but to count in the greatest common divisor of the two memory units,
the octet. If there were a command to copy data from one memory to another,
this situation would already arise with the AVR ATmega chips, where the SRAM
data memory has 8-bit units but the flash program memory has 16-bit units.
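
To put numbers on it (purely illustrative): copying 64 flash words from
such a chip would be a transfer of 64 * 2 = 128 octets, which lands as 128
bytes on the 8-bit SRAM side. A hypothetical helper for such a copy command
could count like this:

  /* Sketch only: convert a length expressed in one memory's own
     addressable units into octets, the common unit for a cross-memory
     copy. octets_per_unit would be 1 for the AVR SRAM and 2 for the
     AVR flash.  */

  static ULONGEST
  units_to_octets (ULONGEST nunits, int octets_per_unit)
  {
    return nunits * octets_per_unit;
  }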

