This is the mail archive of the
gdb@sourceware.org
mailing list for the GDB project.
RE: Static/dynamic userspace/kernel trace
> Having a very low-overhead pre-filter of the trace output
> using full expressions based on context variables, keeping
> statistics in aggregate state variables, and deciding
> what to push through the trace output buffer using formatted
> output and data kept in associative arrays helps a lot. Since
> all of this can be done without incurring extra I/O,
> context switches, or external post-filtering, it makes
> interpreting/analyzing the actual trace data much easier and
> lowers the overhead. It might also help in your use case,
> since you don't have to push multiple megabytes of trace data
> off a machine but can tailor the trace buffers to hold only a
> couple of KB of targeted output.
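To make the idea above concrete, here is a minimal sketch (in Python, purely for illustration; the event names, keys, and threshold are invented): per-key statistics live in an associative array, and only events passing a cheap in-process predicate are pushed to the output buffer, so the full event stream never leaves the process.

```python
from collections import defaultdict

# Hypothetical event hook: statistics are kept for every event in an
# associative array, but only "interesting" events (here, unusually
# large I/O) are formatted and pushed to the trace output buffer.

stats = defaultdict(lambda: {"count": 0, "total_bytes": 0})
trace_buffer = []  # stands in for the real trace output buffer

def on_event(pid, syscall, nbytes):
    """Called for every traced event; must stay cheap."""
    s = stats[(pid, syscall)]
    s["count"] += 1
    s["total_bytes"] += nbytes
    # Pre-filter: only emit a record when the event is worth keeping.
    if nbytes > 4096:
        trace_buffer.append(f"pid={pid} {syscall} nbytes={nbytes}")

# Simulated stream: 1000 small reads plus 3 large ones.
for _ in range(1000):
    on_event(42, "read", 512)
for _ in range(3):
    on_event(42, "read", 65536)

print(len(trace_buffer))             # only the 3 large events are emitted
print(stats[(42, "read")]["count"])  # but the statistics cover all 1003
```

The output buffer ends up with 3 records instead of 1003, while the aggregate statistics still describe the whole stream.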
We already use conditional tracing in LTTng/kprobes and GDB tracepoints, but more elaborate conditions would be useful. It would be worth comparing the different Linux conditional-tracing options (user space and kernel, dynamic and static) to see how each of them could be improved.
Our problem is that if we evaluate a very elaborate condition, or do live analysis of the data before logging, the overhead in CPU cycles becomes too high.
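For reference, GDB tracepoints already support conditions evaluated on the target side, which keeps the filtering in-process but still costs cycles per event. A sketch of the command sequence (the source location and condition are made up for illustration):

```
(gdb) trace server.c:120 if (len > 4096)
(gdb) actions
> collect buf, len
> end
(gdb) tstart
```

The condition is compiled to agent-expression bytecode and evaluated where the tracepoint fires, so only matching events are collected into the trace buffer; the per-event evaluation cost is exactly the overhead discussed above.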