Re: PR10000: emit _stp_relocate* calculations correctly for kernel/module global $data


On Wed, 2009-04-22 at 13:07 -0700, Roland McGrath wrote:
> I am saying that runtime module segment bounds are all you need and that
> you should not be trying to do anything for which you would care about
> individual sections' bounds.

OK, we will figure out something to get those, one way or another.
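
Something like the following sketch is what I have in mind (field names
are from the 2.6-era include/linux/module.h, and this is just an
illustration, not our actual runtime code):

#include <linux/module.h>

/* Does addr fall inside mod's core or init segment?  Uses the 2.6-era
   struct module fields (later kernels reorganized these).  */
static bool addr_in_module_segments(const struct module *mod,
				    unsigned long addr)
{
	return (addr - (unsigned long) mod->module_core < mod->core_size)
	    || (addr - (unsigned long) mod->module_init < mod->init_size);
}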

> > Then we can treat these just like ET_DYN shared libraries.
> 
> Almost.  The kernel modules have two "floating" segments (core and init),
> i.e. the offset between core and init addresses is not static.  So there
> are two separate contiguous chunks to consider at runtime.
> 
> In ET_DYN objects, there might be several segments in theory (not just
> magically two), but there is a static offset between them.  So there is
> only one contiguous address range to consider.

Right. For our purposes there is just one giant segment for a mapped-in
ET_DYN object; it might span several sub-segments, but together those
cover one contiguous address range, for which we carry one symbol table
and one unwind table.
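
In other words, something like this sketch (invented names, not our
real runtime structures): one load base plus one table sorted by offset
covers the whole mapped-in object.

#include <stddef.h>

/* Illustrative symbol table entry: offset is relative to the load
   address; entries are sorted ascending by offset.  */
struct sym_entry { unsigned long offset; const char *name; };

/* Resolve pc against an ET_DYN object mapped at load_base: one base,
   one table, one contiguous range.  */
static const char *
lookup_dyn (unsigned long pc, unsigned long load_base,
	    const struct sym_entry *tab, size_t n)
{
  unsigned long rel = pc - load_base;
  const char *best = NULL;
  for (size_t i = 0; i < n && tab[i].offset <= rel; i++)
    best = tab[i].name;
  return best;
}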

> > Is it possible to find out during offline mode which sections will go
> > into which kernel segment (core/init)? That would probably simplify some
> > of our code.
> 
> There is nothing that libdwfl or anything generic to ELF stuff knows about
> this.  So again you would just encode specific knowledge about the Linux
> kernel module loader in code specific to Linux kernel modules.  It just
> segregates them by which sections have names beginning with ".init".
> 
> But I don't see why you need to know that at all.  At runtime, you need to
> know which module a PC belongs to, end of story.
> 
> At stap module load time, you need to apply the correct adjustment to each
> address value, no different from probe PCs.  At translation time, you need
> to know what to tell that load time logic to do, i.e. the libdwfl
> relocation base info, no different from probe PCs.
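
For my own reference, the ".init" rule Roland describes boils down to
something like this sketch (not the actual kernel loader code):

#include <string.h>
#include <stdbool.h>

/* The Linux module loader places sections whose names begin with
   ".init" in the discardable init segment; everything else goes into
   the core segment.  */
static bool
in_init_segment (const char *secname)
{
  return strncmp (secname, ".init", 5) == 0;
}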

OK, it seems I will have to study how the probe insertion code tracks
these addresses, then.

The main problem I have is that we keep symbol data for functions and
data per "relocation section". For ET_EXEC and ET_DYN that is just one
table, with addresses relative to the load address; we only need to
track where the object is loaded at runtime (and there we can easily
get the size of the mapped-in segment). With ET_REL we have multiple
"relocation sections", so we have multiple symbol tables: one for each
index as given by dwfl_module_relocate_address (or, more accurately,
per section name returned by dwfl_module_relocation_info() for that
index), up to dwfl_module_relocations(). All these tables are relative
to the base address those sections will be mapped to in memory.

So if we aren't going to track where these sections are mapped into
kernel memory, we need to figure out how the sections map into the
module's kernel segments at runtime, and/or we need to keep the tables
relative to the segment start. I am still slightly confused about how
the "relocation sections" map to the "runtime segments" once the module
is loaded into the kernel.
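
For reference, the libdwfl calls above combine like this minimal sketch
(assuming an already-attached Dwfl_Module; the usual Dwfl setup and
most error handling are omitted):

#include <elfutils/libdwfl.h>
#include <stdio.h>

/* Turn an address inside mod into a (relocation base index, offset)
   pair, the key our per-"relocation section" tables would use.  */
static void
show_reloc_base (Dwfl_Module *mod, Dwarf_Addr addr)
{
  int n = dwfl_module_relocations (mod);	/* 0 for ET_EXEC, 1 for
						   ET_DYN, one per section
						   for ET_REL.  */
  Dwarf_Addr off = addr;
  int idx = dwfl_module_relocate_address (mod, &off);
  if (n < 0 || idx < 0)
    {
      fprintf (stderr, "libdwfl: %s\n", dwfl_errmsg (-1));
      return;
    }
  Elf32_Word shndx;
  const char *sec = dwfl_module_relocation_info (mod, idx, &shndx);
  printf ("%#llx => base %d (%s) + %#llx\n",
	  (unsigned long long) addr, idx,
	  sec != NULL ? sec : "<segment>",
	  (unsigned long long) off);
}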

Cheers,

Mark

