Debugger support for __float128 type?


Hello,

I've been looking into supporting __float128 in the debugger, since we're
now introducing this type on PowerPC.  Initially, I simply wanted to do
whatever GDB does on Intel, but it turns out debugging __float128 doesn't
work on Intel either ...
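
For concreteness, a minimal test case looks something like this
(assuming a GCC with __float128 support; the file name is arbitrary):

        /* float128.c -- compile with "gcc -g float128.c"; use
           "gcc -g -dA -S float128.c" instead to get assembler output
           annotated with the DWARF DIEs, as quoted below.  */
        __float128 f128 = 1.0Q;
        long double ld   = 1.0L;

        int main (void)
        {
          return 0;
        }

Asking GDB to "print f128" in a binary built like this runs into the
problems described below.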

The most obvious question is, how should the type be represented in
DWARF debug info in the first place?  Currently, GCC on i386 generates:

        .uleb128 0x3    # (DIE (0x2d) DW_TAG_base_type)
        .byte   0xc     # DW_AT_byte_size
        .byte   0x4     # DW_AT_encoding
        .long   .LASF0  # DW_AT_name: "long double"

and

        .uleb128 0x3    # (DIE (0x4c) DW_TAG_base_type)
        .byte   0x10    # DW_AT_byte_size
        .byte   0x4     # DW_AT_encoding
        .long   .LASF1  # DW_AT_name: "__float128"

On x86_64, __float128 is encoded the same way, but long double is:

        .uleb128 0x3    # (DIE (0x31) DW_TAG_base_type)
        .byte   0x10    # DW_AT_byte_size
        .byte   0x4     # DW_AT_encoding
        .long   .LASF0  # DW_AT_name: "long double"

Now, GDB doesn't recognize __float128 on either platform, but on i386
it could at least in theory distinguish the two via DW_AT_byte_size.

But on x86_64 (and also on PowerPC), long double and __float128 have
identical DWARF encodings, except for the name.

Looking at the current DWARF standard, it's not really clear how to
make the distinction either.  The standard has no way to specify any
particular floating-point format; the only attributes for a base type
with DW_ATE_float encoding relate to its size.

(For the Intel case, one option might be to represent the fact that
long double has only 80 data bits, with the rest being padding, via
some combination of the DW_AT_bit_size and DW_AT_bit_offset or
DW_AT_data_bit_offset attributes.  But that wouldn't help for PowerPC,
since there both long double and __float128 really use 128 data bits,
just with different encodings.)
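
To illustrate that Intel idea, the long double DIE might look
something like this (purely hypothetical output; GCC does not emit
this today, and the offsets and labels are made up):

        .uleb128 0x3    # (DIE (0x2d) DW_TAG_base_type)
        .byte   0xc     # DW_AT_byte_size
        .byte   0x50    # DW_AT_bit_size: 80 data bits
        .byte   0x0     # DW_AT_data_bit_offset
        .byte   0x4     # DW_AT_encoding
        .long   .LASF0  # DW_AT_name: "long double"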

Some options might be:

- Extend the official DWARF standard in some way

- Use a private extension (e.g. from the platform-reserved
  DW_AT_encoding value range)

- Have the debugger just hard-code a special case based
  on the __float128 name (a sketch of this follows below)
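
As a sketch of that last option (hypothetical code, not what GDB does
today; the function and constant names are made up):

        #include <string.h>

        enum fp_format
        {
          FP_X87_EXTENDED,     /* Intel 80-bit extended         */
          FP_IBM_LONG_DOUBLE,  /* PowerPC IBM double-double     */
          FP_IEEE_BINARY128,   /* IEEE-754 quad precision       */
          FP_UNKNOWN
        };

        /* Pick a float format from what DWARF actually gives us for a
           DW_ATE_float base type: the byte size, the target, and, as a
           last resort, the type's name.  */
        enum fp_format
        choose_fp_format (unsigned int byte_size, const char *name,
                          int is_powerpc)
        {
          if (name != NULL && strcmp (name, "__float128") == 0)
            return FP_IEEE_BINARY128;
          if (byte_size == 16)
            return is_powerpc ? FP_IBM_LONG_DOUBLE : FP_X87_EXTENDED;
          if (byte_size == 12 || byte_size == 10)
            return FP_X87_EXTENDED;
          return FP_UNKNOWN;
        }

The obvious downside is that this keys the semantics off whatever name
the producer happens to emit, which is exactly the kind of hack a
proper DWARF extension would avoid.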

Am I missing something here?  Any suggestions welcome ...

By the way, is there interest in fixing this problem for Intel?  I
notice there is an open GDB bug on the issue, but nothing seems to
have happened so far: https://sourceware.org/bugzilla/show_bug.cgi?id=14857

Bye,
Ulrich

-- 
  Dr. Ulrich Weigand
  GNU/Linux compilers and toolchain
  Ulrich.Weigand@de.ibm.com

