This is the mail archive of the
elfutils-devel@sourceware.org
mailing list for the elfutils project.
Re: example for using libdw(fl) for in-process stack unwinding?
- From: Mark Wielaard <mjw at redhat dot com>
- To: elfutils-devel at lists dot fedorahosted dot org
- Date: Fri, 10 Jun 2016 13:04:14 +0200
- Subject: Re: example for using libdw(fl) for in-process stack unwinding?
On Thu, 2016-06-09 at 17:54 +0200, Milian Wolff wrote:
> from [1] I got that libdw(fl) can be used for unwinding as an alternative to
> libunwind, i.e. `dwfl_getthread_frames()`. Apparently, it is even considerably
> faster in the context of Linux perf. I'd like to try that out and compare it
> to libunwind in the context of my heaptrack tracer.
>
> [1]: https://lwn.net/Articles/579508/
>
> But, so far, I could not find an example for using libdw(fl) for in-process
> stack unwinding. All examples I can find out there use libunwind for the
> unwinding and libdw only for the DWARF debug information interpretation.
The libdwfl interface was designed for out-of-process/core-file
unwinding. In theory it should also work for in-process unwinding and
there have been some attempts to make that easy. But you need to create
a Dwfl and attach state for your own process, which is not entirely
trivial.
> I've tried my shot at implementing a trivial example around
> `dwfl_getthread_frames` but struggle with the API a lot. It is quite involved,
> contrary to a simple `unw_backtrace`, or even to the manual stepping with
> libunwind over the `unw_local_addr_space`. The documentation of libdw(fl)
> often refers to terms that I have no clue about as I'm not deeply acquainted
> with the DWARF and ELF specs. Problems I'm facing are:
>
> - Am I correct in assuming that in-process is the opposite of "offline use"
> referred to in the libdwfl API documentation?
Yes. "Offline" means you report whole ELF modules and let libdwfl figure
out the in-memory layout. "Online" means you report the modules at the
addresses of a specific layout (as in a running process).
> * If so, what should I set `Dwfl_Callbacks::section_address` to?
section_address can usually be NULL. It is only used when dealing with
ET_REL files. Normal processes are made up of ET_EXEC and ET_DYN files
(executables and shared libraries). Dwfl can also be used to introspect
the kernel; kernel modules are ET_REL files, which need special rules
to compute their memory layout. Normal processes don't need that.
> - How do I attach state in-process? `dwfl_attach_state` sounds like the
> correct choice, as `dwfl_linux_proc_attach` mentions ptrace which I don't
> want/need. So, assuming it's `dwfl_attach_state`:
>
> What is the correct way to get an `Elf *` for my current executable? Do I
> really `open("/proc/self/exe")` and pass that to `elf_begin`? What Elf_Cmd
> should I use? ELF_C_READ?
That should work. But you should also be able to just use
dwfl_linux_proc_report (dwfl, getpid()) or even dwfl_linux_proc_attach
(dwfl, getpid(), false), which reconstructs the whole process layout
from /proc/pid/maps. The latter is used for example in
tests/dwfl-proc-attach.c.
> How should the obligatory callbacks of Dwfl_Thread_Callbacks be implemented?
>
> * next_thread: I'm only interested in the current thread, so doing something
> similar to perf should be possible here
> * memory_read: just cast the address to a pointer and dereference it?
> * set_initial_registers: no clue, really
This is the big issue. And we don't yet have a standard callback for
that. Ben Gamari (added to CC) has been working on that. You can find
his patches at https://github.com/bgamari/elfutils/commits/local-unwind
with some discussion in the archives:
https://lists.fedorahosted.org/archives/list/elfutils-devel@lists.fedorahosted.org/thread/VDZY5DA6QEYYXLR4NWUY77NHE43HBSKH/
https://lists.fedorahosted.org/archives/list/elfutils-devel@lists.fedorahosted.org/thread/VFTKJQ3LS4WN3RVMZES3BOPTHR5IPHU6/
> Is there an easy-to-grasp example out there somewhere for me to follow on how
> to use libdw(fl) for in-process stack unwinding?
Hope the above helps. But feel free to ask more questions.
Cheers,
Mark