This is the mail archive of the systemtap@sourceware.org mailing list for the systemtap project.



Tracking vm activity


Hi,

I am playing around with getting information about page faults, and I noticed that there is a probe alias for the entry to the page fault code but no matching probe point for the return. The return value is useful for determining what kind of page fault occurred (major or minor). The attached patch adds a corresponding probe point for the return. Any comments on the patch?

This would be useful for the following scenario: a probe on vm.pagefault gets the faulting address, and a probe on vm.pagefault.return gets information about how the fault was handled. The analysis could then determine which addresses cause major page faults (real disk accesses); a minimal sketch of such a script follows the patch below. One could also write out a log of the major page faults (and the mmap operations) and track which files caused them. You probably don't want to walk through the mm structures while taking a page fault to determine which file each one came from.

-Will
Index: tapset/memory.stp
===================================================================
RCS file: /cvs/systemtap/src/tapset/memory.stp,v
retrieving revision 1.4
diff -U2 -u -r1.4 memory.stp
--- tapset/memory.stp	7 Nov 2006 09:26:24 -0000	1.4
+++ tapset/memory.stp	21 Mar 2007 14:40:27 -0000
@@ -27,4 +27,10 @@
 }
 
+probe vm.pagefault.return = kernel.function(
+        %( kernel_v >= "2.6.13" %? "__handle_mm_fault" %: "handle_mm_fault" %)
+        ).return
+{
+}
+
 /* Return which node the given address belongs to in a NUMA system */
 function addr_to_node:long(addr:long)  /* pure */ 
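
To make the intended usage concrete, here is a minimal sketch of the scenario above. It is only an illustration and makes a few assumptions: that the patched tapset is installed, that vm.pagefault exposes the faulting address as "address", that $return from handle_mm_fault() is readable in the .return probe, and that the kernel is older than 2.6.23, where VM_FAULT_MAJOR is the plain value 3 rather than a bit flag (check include/linux/mm.h for your kernel before relying on the constant).

# Sketch: correlate vm.pagefault with vm.pagefault.return to count major
# page faults per process and faulting address.
# ASSUMPTION: VM_FAULT_MAJOR == 3 (pre-2.6.23 include/linux/mm.h); later
# kernels turn the VM_FAULT_* values into bit flags.
global VM_FAULT_MAJOR = 3
global fault_addr, major_faults

probe vm.pagefault
{
	# remember the faulting address for this thread
	fault_addr[tid()] = address
}

probe vm.pagefault.return
{
	# $return is what (__)handle_mm_fault() returned
	if ($return == VM_FAULT_MAJOR)
		major_faults[execname(), fault_addr[tid()]]++
	delete fault_addr[tid()]
}

probe end
{
	# print the processes/addresses with the most major faults first
	foreach ([name, addr] in major_faults-)
		printf("%s: major fault at 0x%x (%d times)\n",
		       name, addr, major_faults[name, addr])
}

Keying fault_addr by tid() pairs each return with the entry seen on the same thread. Extending this to log mmap operations and map addresses back to files, as suggested above, would be a separate step.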
