This is the mail archive of the binutils@sourceware.cygnus.com mailing list for the binutils project.



Re: USE_MMAP


   Date: Sun, 13 Jun 1999 00:34:35 +0200 (CEST)
   From: Krister Walfridsson <cato@df.lth.se>

   Is USE_MMAP in bfd meant to be used? NetBSD's toolchain is defining it,
   and it causes some trouble for at least nm on a.out platforms.

I didn't think anybody used USE_MMAP.  The last time I fiddled around
with it (it was Ken who wrote it originally), the resulting linker was
actually slower, so it didn't seem worth pushing it (this was on
SunOS).  mmap always sounds good, but I don't think there is any
inherent reason for it to be faster than file I/O, and BFD doesn't
need any of the other advantages of using it.  Does the NetBSD
toolchain actually run faster with USE_MMAP defined?

   The problem is that nm gets a pointer to minisyms in display_rel_file().
   The minisyms are memory mapped, but nm wants to free() the memory anyway.

   What is the correct way to solve it? Don't use USE_MMAP? Add #ifdef USE_MMAP 
   in nm too? Add some kind of release_minisymbols() to bfd?

You're right, there is some confusion going on here.  I defined
bfd_read_minisymbols to return memory allocated using malloc, so it is
correct for nm to free it.  If USE_MMAP is not defined, then the a.out
read_minisymbols routine does return a malloc storage block (see
aout_get_external_symbols).  However, if USE_MMAP is defined, the
block is not allocated using malloc.  So there is a bug.
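
To make the mismatch concrete, here is roughly what the nm side boils
down to (a sketch only, not the actual display_rel_file source, and
the bfd_read_minisymbols prototype is written in modern C style):

    #include <stdio.h>
    #include <stdlib.h>
    #include "bfd.h"

    /* Sketch of what nm does with minisymbols; the printing and error
       handling are simplified.  */
    static void
    show_symbols (bfd *abfd, int dynamic)
    {
      void *minisyms;
      unsigned int size;
      long symcount;

      symcount = bfd_read_minisymbols (abfd, dynamic, &minisyms, &size);
      if (symcount < 0)
        {
          fprintf (stderr, "%s: bfd_read_minisymbols failed\n",
                   bfd_get_filename (abfd));
          return;
        }

      /* ... walk the minisymbols and print each one ... */

      /* nm assumes the block is malloc storage, as the interface was
         defined.  With USE_MMAP the a.out back end hands back a pointer
         into an mmap()ed region instead, so this free() is invalid.  */
      free (minisyms);
    }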

This could be fixed in a few ways.  Not using USE_MMAP would be the
simplest.  Copying the minisymbols into a malloc storage block when
USE_MMAP is defined would also work, though it would not be very
efficient.  Otherwise, I think the interface would have to be changed
somehow, perhaps with something like the release_minisymbols routine
you suggest.
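
Just to sketch the copying option (the helper name and its arguments
below are made up for illustration, not code from the a.out back end):

    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>

    /* Hypothetical helper: duplicate the mmap()ed external symbol table
       into malloc storage and drop the mapping, so the pointer handed
       back through bfd_read_minisymbols can safely be free()d.  */
    static void *
    copy_mapped_symbols (void *map_base, size_t map_len,
                         size_t sym_offset, size_t sym_size)
    {
      void *copy = malloc (sym_size);

      if (copy == NULL)
        return NULL;

      memcpy (copy, (char *) map_base + sym_offset, sym_size);

      /* Once the copy exists the mapping is no longer needed.  */
      munmap (map_base, map_len);

      return copy;
    }

Of course that copies the whole table anyway, which is why it would
not be an efficient answer.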

Ian
