
Re: Experiences with kprobes


William Cohen wrote:
I wrote some simple tests to check the overhead of kprobes and jprobes. I have run them on Athlon and Pentium III machines, but I haven't run them on a Pentium IV, and it could be that the costs are higher there. Could you give these a try on your Pentium IV machine? The following URL has an attachment with the software for measuring overhead:

http://sources.redhat.com/ml/systemtap/current/msg00093.html

Here is the output I get:

jprobe overhead count 1000000, number of cycles -1747898027

It seems weird that jprobes take a negative time to complete. We have a time machine! ;-)

The negative number is just a signed 32-bit counter wrapping around. Changing int to unsigned (for the format strings and the result variables, clocks_per_iter and [kj]probes_triggers) gives sane numbers:

kprobe registered
kprobe start 287891875710, stop 292750426140
kprobe overhead count 1000000, number of cycles 563583134
kprobe overhead of cycles 563 per iteration
kprobe start 292797438780, stop 297732067650
kprobe overhead count 1000000, number of cycles 639661574
kprobe overhead of cycles 639 per iteration
kprobe start 297779038447, stop 302713671202
kprobe overhead count 1000000, number of cycles 639665459
kprobe overhead of cycles 639 per iteration
kprobe unregistered
plant jprobe at e086b140, handler addr e086b000
jprobe start 304225050330, stop 311066753032
jprobe overhead count 1000000, number of cycles 2546735406
jprobe overhead of cycles 2546 per iteration
jprobe start 311114384910, stop 318032310577
jprobe overhead count 1000000, number of cycles 2622958371
jprobe overhead of cycles 2622 per iteration
jprobe start 318079932187, stop 324920699910
jprobe overhead count 1000000, number of cycles 2545800427
jprobe overhead of cycles 2545 per iteration
jprobe unregistered
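
For reference, the change amounts to something like the sketch below. report_overhead, trigger, and the loop structure are my reconstruction, not Will's actual module code; the point is only that ~2.5e9 cycles over 10^6 iterations overflows a signed 32-bit int (max ~2.1e9) but still fits in an unsigned int (max ~4.3e9):

#include <linux/kernel.h>
#include <asm/msr.h>                    /* rdtscll() on 2.6 x86 */

#define ITERS 1000000

/* Hypothetical reconstruction of the timing loop, not the real module. */
static void report_overhead(const char *name, void (*trigger)(void))
{
        unsigned long long start, stop;
        unsigned int cycles, clocks_per_iter;  /* was: int -- overflowed */
        int i;

        rdtscll(start);
        for (i = 0; i < ITERS; i++)
                trigger();              /* each call hits the planted probe */
        rdtscll(stop);

        cycles = (unsigned int)(stop - start);
        clocks_per_iter = cycles / ITERS;
        /* format strings changed from %d to %u to match */
        printk("%s overhead count %d, number of cycles %u\n",
               name, ITERS, cycles);
        printk("%s overhead of cycles %u per iteration\n",
               name, clocks_per_iter);
}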

Doing experiment for kprobe-timing

Daemon started.
Profiler running.
Stopping profiling.
Killing daemon.
CPU: P4 / Xeon, speed 2993.08 MHz (estimated)
Counted GLOBAL_POWER_EVENTS events (time during which processor is not stopped) with a unit mask of 0x01 (mandatory) count 100000
samples % symbol name
7486 86.2542 debug
240 2.7653 debug_stack_correct
120 1.3826 page_fault


Doing experiment for jprobe-timing

Daemon started.
Profiler running.
Stopping profiling.
Killing daemon.
CPU: P4 / Xeon, speed 2993.08 MHz (estimated)
Counted GLOBAL_POWER_EVENTS events (time during which processor is not stopped) with a unit mask of 0x01 (mandatory) count 100000
samples % symbol name
2750 42.8549 int3
2595 40.4395 debug
110 1.7142 page_fault
84 1.3090 do_wp_page
72 1.1220 setjmp_pre_handler



jprobes seem excessively heavy at roughly 2500 cycles per hit; they account for most of the overhead I see (3x2500 + 3x600 = 9300).
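
That matches what the profile shows: a jprobe is a kprobe whose pre-handler (setjmp_pre_handler) copies the registers and part of the stack, jumps to your handler, and then takes a second int3 trap when the handler calls jprobe_return(), so you pay for two traps plus the state copy instead of the kprobe's single int3/debug round trip. A minimal sketch of the two registration paths, assuming the 2.6 kprobes API; the handler bodies and argument lists are placeholders, and .addr still has to be pointed at a real probed instruction:

#include <linux/module.h>
#include <linux/kprobes.h>

/* kprobe path: one int3 trap into pre_handler, then a single-step
 * (the "debug" exception that dominates the kprobe profile). */
static int pre(struct kprobe *p, struct pt_regs *regs)
{
        return 0;               /* measurement/trace work goes here */
}

static struct kprobe kp = {
        .pre_handler = pre,
        /* .addr must be set to the probed instruction */
};

/* jprobe path: same initial trap, but setjmp_pre_handler saves regs
 * and stack, calls my_handler with the probed function's arguments,
 * and jprobe_return() fires a second int3 to longjmp back. */
static long my_handler(unsigned long arg0, unsigned long arg1)
{
        /* signature must match the probed function; args are illustrative */
        jprobe_return();        /* mandatory: never return normally */
        return 0;
}

static struct jprobe jp = {
        .entry = (kprobe_opcode_t *)my_handler,
        /* .kp.addr set to the probed function, as for the kprobe above */
};

static int __init probes_init(void)
{
        register_kprobe(&kp);   /* error handling elided in this sketch */
        register_jprobe(&jp);
        return 0;
}

static void __exit probes_exit(void)
{
        unregister_jprobe(&jp);
        unregister_kprobe(&kp);
}

module_init(probes_init);
module_exit(probes_exit);
MODULE_LICENSE("GPL");

The extra int3 from jprobe_return() plus the register/stack copy in setjmp_pre_handler would be where the additional ~1900 cycles per hit go, which is consistent with int3 and debug splitting the jprobe profile roughly evenly while debug alone dominates the kprobe one.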


Baruch

