Re: [rfc][3/3] Remote core file generation: memory map


Pedro Alves wrote:
> On Tuesday 08 November 2011 17:25:36, Ulrich Weigand wrote:
> > > The problem is that the way GDB uses the memory map is completely
> > > incompatible with the presence of multiple address spaces.
> > > 
> > > There is a single instance of the map (kept in a global variable
> > > mem_region_list in memattr.c), which is used for any access in
> > > any address space.  lookup_mem_region takes only a CORE_ADDR;
> > > the "info mem" commands only operate on addresses with no notion
> > > of address spaces.  
> 
> That's mostly because we never really needed to consider making it
> per-process/inferior/exec before, and managed to just look the
> other way.  Targets that do multi-process don't use the map at present.
> I'm sure there are other things that live in globals but should
> be per-inferior or per-address-space, waiting for someone to trip on
> them, and eventually get fixed.  :-)

Yes, that's what I thought :-)
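
To make the mismatch concrete, here is a standalone toy example (not the
actual memattr.c code; every name in it is made up for illustration).
With a single global list, every address space necessarily gets the same
answer for a given address; a per-address-space lookup could return
different attributes for the same address in different processes:

/* Toy illustration only; not GDB's memattr.c, all names invented.  */
#include <stdio.h>

struct region { unsigned long start, len; int readonly; };

/* Today's shape: one global list, looked up by address alone, so every
   address space gets the same answer.  */
static struct region global_map[] = { { 0x1000, 0x1000, 1 } };

static struct region *
lookup (unsigned long addr)
{
  return addr - global_map[0].start < global_map[0].len
         ? &global_map[0] : NULL;
}

/* Per-address-space shape: the lookup is additionally keyed by the
   address space, so the same address can have different attributes
   in different processes.  */
struct aspace { struct region *regions; int n; };

static struct region *
lookup_aspace (struct aspace *as, unsigned long addr)
{
  for (int i = 0; i < as->n; i++)
    if (addr - as->regions[i].start < as->regions[i].len)
      return &as->regions[i];
  return NULL;
}

int
main (void)
{
  /* Second process: same address range, but read-write.  */
  struct region p2[] = { { 0x1000, 0x1000, 0 } };
  struct aspace as2 = { p2, 1 };

  printf ("global  : readonly=%d\n", lookup (0x1800)->readonly);
  printf ("aspace 2: readonly=%d\n", lookup_aspace (&as2, 0x1800)->readonly);
  return 0;
}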

> > This seems to me to be an argument *for* splitting the contents into
> > two maps: the system map, which is static and cached (and used for
> > each memory access), and the per-process map, which is dynamic
> > and uncached (and only used rarely, in response to infrequently
> > used user commands) ...
> 
> On e.g. uclinux / no-MMU, you could have the system memory map
> return the properties of the whole system's memory, and gdb could
> use that for all memory accesses; but when generating a core of
> a single process, we're only interested in the memory "mapped"
> to that process.  So I tend to agree.

OK, another good point.
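
To spell out the split I have in mind, here is another standalone sketch
(again not real GDB code; the names are invented).  The system map stays
static and cached and answers attribute queries on every access, while
the per-process map is fetched fresh, uncached, only when a command like
generate-core-file actually needs it:

/* Standalone sketch; the names are invented, not GDB API.  */
#include <stdio.h>
#include <stdlib.h>

struct region { unsigned long start, len; };

/* System map: static, read once and cached; consulted on every memory
   access to decide attributes.  */
static const struct region system_map[] = { { 0x0, 0x80000000 } };

static int
in_system_map (unsigned long addr)
{
  return addr - system_map[0].start < system_map[0].len;
}

/* Per-process map: dynamic, so never cached; re-fetched from the target
   each time a command actually needs it.  Faked here; in reality this
   would query the target (e.g. read /proc/PID/maps).  */
static struct region *
fetch_process_mappings (int pid, int *count)
{
  struct region *r = malloc (2 * sizeof (*r));

  (void) pid;
  r[0] = (struct region) { 0x400000, 0x1000 };  /* text */
  r[1] = (struct region) { 0x601000, 0x1000 };  /* data */
  *count = 2;
  return r;
}

/* generate-core-file only cares about the process's own mappings,
   fetched fresh at the point of use.  */
static void
generate_core_file (int pid)
{
  int count;
  struct region *maps = fetch_process_mappings (pid, &count);

  for (int i = 0; i < count; i++)
    if (in_system_map (maps[i].start))
      printf ("dump 0x%lx +0x%lx\n", maps[i].start, maps[i].len);
  free (maps);
}

int
main (void)
{
  generate_core_file (1234);
  return 0;
}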

> We could also make the existing memory map per-process/aspace,
> and define it to describe only the process's map (a process is a
> virtualization of the system's resources, after all).  The dynamic
> nature of a process's memory map then becomes a cache management
> policy decision.  E.g., at times when we know the map can't change
> (everything is stopped, or via a user knob), this would automatically
> enable the dcache for all RO regions (mostly .text).  We can still
> do this while having a two-map mechanism, though.
> 
> It doesn't seem there's one true answer to this, but I'm
> leaning toward a new target object.

OK.  In the meantime, I've noticed the discussion going on in parallel
about the "info core mappings" command.  If we implement this, we end
up in the somewhat weird situation that we can show mappings for native
processes and for core files, but not for remotely attached processes,
even if the target is also Linux ...

It would appear to me that this command actually just needs the very
same data I need here for the generate-core-file command, namely the
current list of memory mappings.

If we create a new target object for VMA memory mappings, maybe we
should then have a standard "info mappings" (or similar) command
implemented in GDB *common code* that works the same way on native,
core-file, *and* gdbserver targets; in fact, on all targets
that provide the new target object (which may need to be a bit
richer, e.g. also provide mapped file names)?
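
Roughly what I am picturing, as an illustrative sketch only (none of the
names below exist in GDB): the command itself lives in common code and
merely walks whatever mapping list the target hands back, so native,
core-file, and gdbserver targets would all provide it once they
implement the new, richer target object:

/* Illustrative sketch; none of these names are existing GDB API.  */
#include <stdio.h>

/* What the richer target object would have to provide per mapping:
   address range, permissions, and the mapped file name (if any).  */
struct mapping
{
  unsigned long start, end;
  const char *perms;
  const char *filename;   /* NULL for anonymous mappings.  */
};

/* Each target fills this in from its own source: /proc/PID/maps for
   native, the segments of a core file, or the new target object
   fetched over the remote protocol.  */
typedef int (*fetch_mappings_ftype) (struct mapping **maps, int *count);

/* The command itself is target independent: it only consumes the list.  */
static void
info_mappings_command (fetch_mappings_ftype fetch)
{
  struct mapping *maps;
  int count;

  if (fetch (&maps, &count) != 0)
    {
      printf ("Target does not provide memory mappings.\n");
      return;
    }

  printf ("%-12s %-12s %-6s %s\n", "Start", "End", "Perms", "File");
  for (int i = 0; i < count; i++)
    printf ("0x%-10lx 0x%-10lx %-6s %s\n",
            maps[i].start, maps[i].end, maps[i].perms,
            maps[i].filename ? maps[i].filename : "(anonymous)");
}

/* Fake "target" for demonstration purposes.  */
static int
fake_fetch (struct mapping **maps, int *count)
{
  static struct mapping m[] = {
    { 0x400000, 0x401000, "r-xp", "/bin/true" },
    { 0x601000, 0x602000, "rw-p", NULL },
  };

  *maps = m;
  *count = 2;
  return 0;
}

int
main (void)
{
  info_mappings_command (fake_fetch);
  return 0;
}

The per-target work would then boil down to the fetch routine, while the
command and its output format live in one place.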

Bye,
Ulrich

-- 
  Dr. Ulrich Weigand
  GNU Toolchain for Linux on System z and Cell BE
  Ulrich.Weigand@de.ibm.com

