Profiling (computer programming)
In computer science, profiling is the use of specialized software, called a profiler, to gather data about a program's execution. The profiler is used to determine how long certain parts of the program take to execute, how often they are executed, and to generate the call graph (the mathematical graph of which functions call which other functions).
Typically this information is used to identify those portions of the program that take the longest to complete. These time-consuming parts can then be optimized to run faster. Profiling is also a common technique for debugging.
"Program analysis tools are extremely important for understanding program behavior. Computer architects need such tools to evaluate how well programs will perform on new architectures. Software writers need tools to analyze their programs and identify critical pieces of code. Compiler writers often use such tools to find out how well their instruction scheduling or branch prediction algorithm is performing..." (ATOM, PLDI, '94)
History
Profiler-driven program analysis dates back to 1982, with the publication of Gprof: a Call Graph Execution Profiler [1]. The paper outlined a system which later became the GNU profiler, also known as gprof.
In 1994, Amitabh Srivastava and Alan Eustace of Digital Equipment Corporation published a paper describing ATOM [2]. ATOM is a platform for converting a program into its own profiler. That is, at compile time, it inserts code into the program to be analyzed. That inserted code outputs analysis data. This technique, modifying a program to analyze itself, is known as "instrumentation".
In 2004, both the Gprof and ATOM papers appeared on the list of the 20 most influential PLDI papers of all time. [3]
Methods of data gathering
Statistical profilers
Some profilers operate by sampling. A sampling profiler probes the target program's program counter at regular intervals using operating system interrupts. Sampling profiles are typically less accurate and specific, but they allow the target program to run at near full speed.
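The sampling idea can be sketched with a small simulation. A real sampling profiler reads the live program counter on each timer interrupt; here, as an illustrative assumption, a finished run is represented as a list of function names (one per unit of work) and the "interrupt" simply probes every few entries:

```python
from collections import Counter

def execution_trace():
    # Toy stand-in for a program run: one entry per unit of work,
    # recording which function the "program counter" was in.
    # (Illustrative only; a real profiler reads the live PC.)
    return ["hot_loop" if i % 10 != 0 else "rare_helper"
            for i in range(1000)]

def sample_profile(trace, interval):
    # Probe the trace every `interval` steps, as a sampling profiler
    # does on each timer interrupt, and tally where the PC was found.
    return Counter(trace[::interval])

full = Counter(execution_trace())                   # exact: 900 vs 100
sampled = sample_profile(execution_trace(), interval=7)
# the sampled tally identifies the same hot spot at ~1/7 the probing cost
```

Even with only a fraction of the data, the sampled histogram ranks `hot_loop` first, which is all an optimizing programmer usually needs.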
Other profilers instrument the target program with additional instructions to collect the required information. Instrumenting the program can change its performance, causing inaccurate results and heisenbugs. Instrumentation can be very specific, but the more information it collects, the more it slows down the target program.
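A minimal source-level sketch of instrumentation (an analogue of what ATOM does at the binary level; the decorator and the `call_stats` layout are assumptions for illustration, not any real profiler's API):

```python
import time
from functools import wraps

call_stats = {}  # function name -> {"calls": ..., "time": ...}

def instrument(fn):
    # The inserted "analysis code": count calls and accumulate
    # elapsed wall-clock time around every invocation of fn.
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            stats = call_stats.setdefault(fn.__name__,
                                          {"calls": 0, "time": 0.0})
            stats["calls"] += 1
            stats["time"] += time.perf_counter() - start
    return wrapper

@instrument
def work(n):
    return sum(i * i for i in range(n))

for _ in range(5):
    work(10_000)
# call_stats["work"] now records 5 calls and their total elapsed time
```

The wrapper itself adds a small, fixed overhead to every call, which is exactly the perturbation the paragraph above warns about.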
The data gathered by sampling is not exact, but a statistical approximation. The actual amount of error is usually more than one sampling period. In fact, if a value is n times the sampling period, the expected error in it is the square root of n sampling periods. [4]
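The square-root scaling can be illustrated with a small simulation. This is a toy model in which each sampling period independently registers a hit with fixed probability; that independence is a simplifying assumption, not the exact timing behavior of a real sampler:

```python
import random

random.seed(0)  # make the simulation repeatable

def mean_abs_error(true_periods, hit_prob=0.5, trials=5000):
    # Toy model: each of `true_periods` sampling periods independently
    # registers a hit with probability `hit_prob`; the measured value
    # is hits / hit_prob periods. Returns the mean absolute error of
    # that measurement, in sampling periods.
    total = 0.0
    for _ in range(trials):
        hits = sum(random.random() < hit_prob
                   for _ in range(true_periods))
        total += abs(hits / hit_prob - true_periods)
    return total / trials

e100 = mean_abs_error(100)
e400 = mean_abs_error(400)
# quadrupling the measured value roughly doubles the expected error,
# consistent with error growing as the square root of n
```

In this model the hit count is binomial, so its standard deviation grows as the square root of the number of periods, matching the claim above.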
Instrumentation platforms
External links
- gprof, the GNU Profiler, is part of GNU Binutils (which are part of the GNU project). Its output can be visualised with the VCG tools, and the two can be combined using the Call Graph Drawing Interface (CGDI); kprof is another front-end. Primarily for C/C++, but it works well for other languages.
- FunctionCheck is a profiler created "because the well known profiler gprof have some limitations". It uses special GCC features; kprof serves as a front-end. For C/C++.
- Valgrind is a GPL'd system for debugging and profiling x86-Linux programs. It can automatically detect many memory management and threading bugs. Alleyoop is a front-end for Valgrind. It works with any language, as well as assembly.