This is the mail archive of the
binutils@sources.redhat.com
mailing list for the binutils project.
Re: [wip:binutils] Large corefile support
Andrew Cagney <ac131313@redhat.com> writes:
> > Admittedly the issue is confused now, because file_ptr is typedefed to
> > be bfd_signed_vma. But we don't need to perpetuate the confusion. I
> > think we should define file_ptr to be off_t or off64_t, and then
> > consistently use ftello or ftello64 and friends if they are available.
>
> So always use 64-bit file I/O when available?
Yes, I suppose that is what I am suggesting. Does anybody think that
would be a bad idea?
> Note that because off64_t can be conditionally compiled, file_ptr will
> need to be defined using the underlying type. Otherwise it will
> make a mess of BFD's "bfd.h" file.
Good point. Perhaps the right approach would be something like:
if (64-bit type available && 64-bit file I/O available)
typedef long long file_ptr;
else
typedef long file_ptr;
> Hmm, makes me also wonder if, in a second pass, a rename of file_ptr
> to a more name space proof bfd_offset (better?) is required.
That would be nice, but seems like an independent issue.
> > So I guess I would like to see a bit more to your patch--change the
> > definition of file_ptr, too. Despite the comment, we know how to set
> > the type based on host definitions tested by autoconf--see, e.g., the
> > handling of BFD_HOST_64_BIT.
>
> Hmm, there's AC_COMPILE_CHECK_SIZEOF but I don't see how to convince
> it to both #define _GNU_SOURCE and #include <stdio.h>.
We don't actually care what the size is; we just want to make sure it
is large enough. Look at how bfd/configure.in checks for long long.
(If you really want the actual size, just look at what
AC_COMPILE_CHECK_SIZEOF actually does: it calls AC_TRY_COMPILE on
switch (0) case 0: case (sizeof (long) == $ac_size):;
for various values of ac_size.)
Ian