This is the mail archive of the gdb@sourceware.org mailing list for the GDB project.
Function fingerprinting for useful backtraces in absence of debuginfo
- From: Martin Milata <mmilata at redhat dot com>
- To: Jan Kratochvil <jan dot kratochvil at redhat dot com>
- Cc: Tom Tromey <tromey at redhat dot com>, gdb at sourceware dot org, Karel Klic <kklic at redhat dot com>
- Date: Thu, 15 Sep 2011 14:32:31 +0200
- Subject: Function fingerprinting for useful backtraces in absence of debuginfo
Karel probably told you about this, but since more people are CC'd, I'll
add a brief introduction.
In ABRT, we would like to be able to check whether two coredumps are from
the same bug in the source code without using debuginfo. We have an idea
of how to do this which involves computing some kind of fingerprint from
the assembly of a function. Now we need someone who has good insight into
compilation and assembly in general to take a look at it and tell us
what he thinks. A more detailed description is below.
Thanks for your time,
How would you check if two coredumps are from the same bug in source
code, but without using debuginfo?
In ABRT, we are working on coredump duplicate detection that is run at
the time of a crash. We want to avoid filling users' hard drives with
unnecessary coredumps from repeated crashes. At crash time, program
binaries are available, but debuginfo packages are not. Duplicate
coredumps should be detected even when the binary or shared library in
use has been updated to a newer version (i.e. patched and recompiled),
and when the package has been rebuilt with a newer gcc.
The approach under consideration is to create a 'canonical backtrace'
from the coredump and its binaries without using the debuginfo. Having
a backtrace is useful as we have good duplicate detection algorithms
for backtraces. So the question is how to generate a solid backtrace
from a coredump. For each stack frame in a given core dump, we can
determine:
* The name of the function, if the corresponding binary is compiled
with function symbols (as is the case with the libraries) together
with offset into the function.
* Build ID of the binary together with offset of the instruction
pointer from the start of the executable segment of the file. This
should allow us to compare the pointers even if the text segments
were loaded at different addresses (prelink/aslr).
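The two frame identities above can be sketched in a few lines of Python. This is purely illustrative (the function and field names are hypothetical, not ABRT's actual code): a frame is either (symbol name, offset into the function) when symbols exist, or (Build ID, offset from the start of the text segment) otherwise.

```python
def canonical_frame(ip, module):
    """Map a raw instruction pointer to a comparable frame identity.

    `module` is a hypothetical dict describing the mapped ELF object:
      load_addr  - address the text segment was mapped at (varies with
                   prelink/ASLR),
      text_start - file-relative start of the text segment,
      build_id   - NT_GNU_BUILD_ID as a hex string,
      symbols    - optional list of (value, size, name) function symbols.
    """
    # Rebase the IP to a file-relative offset, cancelling out ASLR/prelink.
    offset = ip - module["load_addr"] + module["text_start"]
    # Prefer a symbolic identity when function symbols are present
    # (as is usually the case for shared libraries).
    for value, size, name in module.get("symbols", []):
        if value <= offset < value + size:
            return ("sym", name, offset - value)
    # Fall back to a Build-ID-relative offset: stable across different
    # load addresses, but only comparable for the same build.
    return ("bid", module["build_id"], offset - module["text_start"])

# Two crashes of the same build at different load addresses compare equal:
libc = {"load_addr": 0x7f0000000000, "text_start": 0x20000,
        "build_id": "ab12cd34", "symbols": [(0x21000, 0x80, "malloc")]}
frame_a = canonical_frame(0x7f0000001010, libc)
libc_aslr = dict(libc, load_addr=0x7f1234560000)
frame_b = canonical_frame(0x7f1234561010, libc_aslr)
assert frame_a == frame_b == ("sym", "malloc", 0x10)
```

As the assertion shows, the identity is invariant under relocation of the text segment, which is exactly the property needed to compare frames across crashes.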
This means that we can compare two stack frames that either belong to
libraries with function symbols available or to the same build of an
executable (that has a Build ID). We are not able to compare stack
frames from two executables built from slightly different source or
with different compiler options, because the instruction pointer
offsets differ between builds.
The proposed solution to this problem is to take the instruction
pointer from each stack frame, look at the .eh_frame section of the
corresponding ELF file to determine the boundaries of the function it
points to, and then compute a fingerprint of this function. Such a
fingerprint should be the same for two sequences of instructions that
were compiled from the same source code (and different for two
sequences compiled from different code).
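The boundary lookup can be sketched with the stdlib alone, assuming the (initial_location, address_range) pairs of the FDEs have already been extracted from .eh_frame (e.g. with `readelf --debug-dump=frames` or pyelftools); the function name here is hypothetical. .eh_frame works for this even in stripped binaries because the compiler emits one FDE per function for stack unwinding.

```python
import bisect

def function_bounds(ip_offset, fde_ranges):
    """Find the (start, end) of the function containing ip_offset.

    fde_ranges: list of (start, length) pairs, one per FDE parsed
    out of .eh_frame, sorted by start address.
    """
    starts = [start for start, _ in fde_ranges]
    # Index of the last FDE starting at or before ip_offset.
    i = bisect.bisect_right(starts, ip_offset) - 1
    if i >= 0:
        start, length = fde_ranges[i]
        if start <= ip_offset < start + length:
            return (start, start + length)
    return None  # IP falls outside every FDE (e.g. PLT stubs, padding)

fdes = [(0x1000, 0x40), (0x1040, 0x200), (0x1240, 0x10)]
assert function_bounds(0x1100, fdes) == (0x1040, 0x1240)
assert function_bounds(0x0fff, fdes) is None
```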
This is obviously not possible in general, but we thought we should be
able to devise something that will work in most cases. The prototype
we put together computes the fingerprint as several properties of the
function:
(Call graph properties)
* List of the library functions called.
* Whether the function calls some other functions in the file.
* Whether the function calls itself.
(Presence of types of instructions)
* Conditional jumps based on an equality test / signed comparison /
unsigned comparison.
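The properties above can be sketched as a pass over a disassembled function. This is a toy model, not the prototype's real code: it assumes the instructions have already been decoded into (mnemonic, operand) pairs (e.g. by parsing `objdump -d`), and it classifies x86 conditional jumps by mnemonic, with je/jne following an equality test, jl/jg etc. a signed comparison, and jb/ja etc. an unsigned one.

```python
def fingerprint(insns, self_name):
    """Compute call-graph and instruction-class properties of a function.

    insns: list of (mnemonic, operand) string pairs; self_name is the
    function's own symbol.  Illustrative only - a real implementation
    needs an actual disassembler and PLT/relocation handling.
    """
    SIGNED = {"jl", "jle", "jg", "jge"}
    UNSIGNED = {"jb", "jbe", "ja", "jae"}
    EQUALITY = {"je", "jne"}

    libcalls = set()
    calls_internal = calls_self = False
    jump_classes = set()
    for mnem, operand in insns:
        if mnem == "call":
            if operand == self_name:
                calls_self = True               # recursion
            elif operand.endswith("@plt"):
                libcalls.add(operand)           # call through the PLT => library function
            else:
                calls_internal = True           # call within the same file
        elif mnem in SIGNED:
            jump_classes.add("signed")
        elif mnem in UNSIGNED:
            jump_classes.add("unsigned")
        elif mnem in EQUALITY:
            jump_classes.add("equality")
    return (frozenset(libcalls), calls_internal, calls_self,
            frozenset(jump_classes))

insns = [("cmp", "eax,ebx"), ("je", ".L1"), ("call", "malloc@plt"),
         ("call", "helper"), ("jl", ".L2"), ("call", "self_fn")]
assert fingerprint(insns, "self_fn") == (
    frozenset({"malloc@plt"}), True, True,
    frozenset({"equality", "signed"}))
```

Two functions then count as duplicates when their fingerprint tuples compare equal; the sets are deliberately order-insensitive so that instruction scheduling differences between compiler versions do not change the result.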
This way, we are able to get the same fingerprint for just under 90% of
pairs of the same function from a handful of programs we tested, with a
~3% probability of two different random functions having the same
fingerprint.
What we need
Unfortunately, I have pretty much no experience with assembly and have
only a vague knowledge of compiler optimization techniques. The above
fingerprinting scheme is mostly based on trial and error and wild
guesses.
So the question is: How to improve this function fingerprinting
scheme? Is there a better approach for coredump duplicate detection?