Performance analysis
In software engineering, performance analysis, today more commonly known as profiling, is the investigation of a program's behavior using information gathered as the program executes (i.e. it is a form of dynamic program analysis, as opposed to static code analysis). The usual goal of performance analysis is to determine which sections of a program to optimize, usually either to increase its speed or to decrease its memory requirements (or sometimes both).

Use of profilers
A profiler is a performance analysis tool that measures the behavior of a program as it executes, particularly the frequency and duration of function calls. (An instruction set simulator, which is by necessity also a profiler, can measure the totality of a program's behaviour from invocation to termination.) The output may be a stream of recorded events (a trace), a statistical summary of the events observed (a profile), or an ongoing interaction with the hypervisor. Profilers use a wide variety of techniques to collect data, including hardware interrupts, code instrumentation, instruction set simulation, operating system hooks, and performance counters. The usage of profilers is 'called out' in the performance engineering process.

As the summation in a profile is often done relative to the source code positions where the events happen, the size of the measurement data is linear in the code size of the program. In contrast, the size of a (full) trace is linear in the program's execution time, which makes it somewhat impractical. For sequential programs, a profile is usually enough, but performance problems in parallel programs (waiting for messages or synchronization issues) often depend on the time relationship of events, thus requiring a full trace to understand the problem.
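The different growth rates of a profile and a trace can be sketched with a toy recorder; the event format and location names below are purely illustrative, not any real profiler's output:

```python
import time

# A "trace" records every event: its size grows with execution time.
trace = []

# A "profile" aggregates counts and durations per source location:
# its size is bounded by the number of instrumented locations in the code.
profile = {}

def record(location, duration):
    trace.append((time.monotonic(), location, duration))   # one entry per event
    count, total = profile.get(location, (0, 0.0))
    profile[location] = (count + 1, total + duration)      # one entry per location

# Simulate 1000 iterations hitting two call sites.
for i in range(1000):
    record("foo:12", 0.001)
    record("bar:34", 0.002)

print(len(trace))    # 2000 entries: grows with run time
print(len(profile))  # 2 entries: grows with code size only
```

Running the loop ten times longer would multiply the trace size by ten while leaving the profile size unchanged, which is why a profile usually suffices for sequential programs.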
"Program analysis tools are extremely important for understanding program behavior. Computer architects need such tools to evaluate how well programs will perform on new architectures. Software writers need tools to analyze their programs and identify critical sections of code. Compiler writers often use such tools to find out how well their instruction scheduling or branch prediction algorithm is performing..." (ATOM, PLDI '94)

History
Performance analysis tools existed on IBM/360 and IBM/370 platforms from the early 1970s, usually based on timer interrupts which recorded the program status word (PSW) at set timer intervals to detect "hot spots" in executing code. This was an early example of sampling (see below). In early 1974, instruction set simulators permitted full trace and other performance monitoring features.

Profiler-driven program analysis on Unix dates back to at least 1979, when Unix systems included a basic tool, "prof", that listed each function and how much of the program's execution time it used. In 1982, gprof extended the concept to a complete call graph analysis ("Gprof: a Call Graph Execution Profiler" [http://docs.freebsd.org/44doc/psd/18.gprof/paper.pdf]).

In 1994, Amitabh Srivastava and Alan Eustace of Digital Equipment Corporation published a paper describing ATOM [http://www-2.cs.cmu.edu/~bumba/filing_cabinet/papers/srivastava-atom.pdf]. ATOM is a platform for converting a program into its own profiler: at compile time, it inserts code into the program to be analyzed, and that inserted code outputs analysis data. This technique of modifying a program to analyze itself is known as "instrumentation".

In 2004, both the gprof and ATOM papers appeared on the list of the 50 most influential PLDI papers of all time. [http://www.cs.utexas.edu/users/mckinley/20-years.html]

Profiler types based on output
Flat profiler
Flat profilers compute the average call times from the calls, and do not break down the call times based on the callee or on context.
Call-Graph profiler
Call-graph profilers show the call times and frequencies of the functions, and also the call chains involved, based on the callee. However, context is not preserved.
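The difference between the two views can be seen with Python's standard-library cProfile and pstats modules: print_stats gives a flat, per-function summary, while print_callers adds caller relationships (a limited call-graph view). A minimal sketch, with throwaway functions invented for the demonstration:

```python
import cProfile
import io
import pstats

def helper():
    # Hypothetical hot function we want to attribute time to.
    return sum(range(1000))

def work():
    return [helper() for _ in range(100)]

pr = cProfile.Profile()
pr.enable()
work()
pr.disable()

out = io.StringIO()
stats = pstats.Stats(pr, stream=out)
stats.sort_stats("cumulative").print_stats(5)   # flat view: time per function
stats.print_callers("helper")                   # caller view: who called helper()
print(out.getvalue())
```

The flat listing shows how much time helper() consumed in total; the caller listing additionally shows that all of it was reached via work(), but not any deeper calling context.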
Methods of data gathering
Event based profilers
The programming languages listed here have event-based profilers:
# .NET: Can attach a profiling agent as a COM server to the CLR. Like Java, the runtime then provides various callbacks into the agent, for trapping events like method JIT / enter / leave, object creation, etc. Particularly powerful in that the profiling agent can rewrite the target application's bytecode in arbitrary ways.
# Java: the JVM Tool Interface (JVMTI, formerly the JVM Profiler Interface, JVMPI) provides hooks to profilers for trapping events like calls, class load/unload, and thread enter/leave.
# Python: Python profiling includes the profile module, hotshot (which is call-graph based), and the 'sys.setprofile()' function to trap events like c_{call,return,exception} and python_{call,return,exception}.
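The Python hook mentioned in the item above can be exercised directly; a minimal sketch that counts call and return events (the counting scheme is illustrative, not a full profiler):

```python
import sys
from collections import Counter

events = Counter()

def tracer(frame, event, arg):
    # event is one of 'call', 'return', 'c_call', 'c_return', 'c_exception'
    events[event] += 1

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

sys.setprofile(tracer)   # install the hook
fib(10)
sys.setprofile(None)     # uninstall before inspecting results

# fib(10) makes 177 recursive calls, each paired with a return event.
print(events["call"], events["return"])
```

A real event-based profiler would additionally read the frame argument to attribute each event to a function and accumulate timings.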
# Ruby: Ruby uses a similar interface to Python's for profiling. A flat profiler is provided in the profile.rb module, and ruby-prof, a C extension, is also available.

Statistical profilers
Some profilers operate by sampling. A sampling profiler probes the target program's program counter at regular intervals using operating system interrupts. Sampling profiles are typically less accurate and specific, but allow the target program to run at near full speed.

Some profilers instead instrument the target program with additional instructions to collect the required information. Instrumenting the program can change its performance, causing inaccurate results and heisenbugs. Instrumenting can potentially be very specific, but slows down the target program as more specific information is collected.

The resulting sampled data are not exact, but a statistical approximation. "The actual amount of error is usually more than one sampling period. In fact, if a value is n times the sampling period, the expected error in it is the square-root of n sampling periods." [http://lgl.epfl.ch/teaching/case_tools/doc/gprof/gprof_12.html]
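The quoted error rule can be made concrete with a small calculation; the 10 ms sampling period and 400 ms measurement below are illustrative figures, not from any particular profiler:

```python
from math import sqrt

period_ms = 10.0      # assumed sampling interval (i.e. 100 Hz)
measured_ms = 400.0   # time attributed to a function by the profiler

n = measured_ms / period_ms           # the value expressed in sampling periods
expected_error_ms = sqrt(n) * period_ms

print(n)                  # 40 sampling periods
print(expected_error_ms)  # about 63 ms, roughly 16% of the measurement
```

The relative error shrinks as sqrt(n)/n = 1/sqrt(n), so longer runs (more samples attributed to the same code) give proportionally more trustworthy figures.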
Some of the most commonly used statistical profilers are GNU's gprof, OProfile, AMD's CodeAnalyst, and SGI's Pixie.

Instrumentation
* Manual: Done by the programmer, e.g. by adding instructions to explicitly calculate runtimes.
* Compiler assisted: Example: "gcc -pg ..." for gprof, "quantify g++ ..." for Quantify
* Binary translation: The tool adds instrumentation to a compiled binary. Example: ATOM
* Runtime instrumentation: The code is instrumented directly before execution. The program run is fully supervised and controlled by the tool. Examples: PIN, Valgrind
* Runtime injection: More lightweight than runtime instrumentation. Code is modified at runtime to have jumps to helper functions. Example: DynInst

Hypervisor/Simulator
* Hypervisor: Data are collected by running the (usually) unmodified program under a hypervisor. Example: SIMMON
* Simulator and hypervisor: Data are collected interactively and selectively by running the program under an instruction set simulator. Examples: SIMON and OLIVER.

Simple manual technique
A variation on the sampling technique, based on deep sampling, places less importance on accurate measurement and more on achieving optimization. It requires no instrumentation and takes only a small number of random time samples (e.g. 20). This can be done by running the program under a debugger and interrupting it manually at unpredictable times using a Break or Esc key. (The samples are taken during the time when the program is being subjectively slow. If needed, a temporary outer loop can be added to make the program run long enough to take manual samples.) At each sample, the entire call stack, consisting of the program counter plus the stack of return addresses, is recorded. The call stack samples are then examined for addresses that appear in multiple samples. "Any address that is on the call stack X% of the time identifies an instruction which, if removed, will save X% of running time."

For example, if the program is spending X = 60% of its time doing something wasteful, such as an unnecessary function call (whether distributed over one invocation or many), it will be seen doing that on roughly 12 out of 20 samples. The more time is being wasted, the fewer samples are needed to see it. Since the exact instruction is indicated by the samples, there is no guesswork and no need to look elsewhere for the problem (no matter how large the software is). When it is repaired, a performance factor of 1/(1-X) is gained; in this case 1/(1-0.6) = 2.5x.
The entire process can be repeated multiple times, until no more wasteful activities are captured, often resulting in substantial cumulative speedups.
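The arithmetic behind the technique can be checked directly. Under the stated assumptions (X = 60%, 20 samples), the hit count follows a binomial distribution, so 12 of 20 is only the expected value; a short sketch:

```python
from math import comb

X = 0.6        # fraction of time spent in the wasteful activity
samples = 20   # number of manual stack samples taken

expected_hits = X * samples   # 12 of 20 samples, on average
speedup = 1 / (1 - X)         # 2.5x once the waste is removed

# Probability of seeing the activity on at least half of the samples,
# from the binomial distribution B(20, 0.6):
p_at_least_10 = sum(
    comb(samples, k) * X**k * (1 - X)**(samples - k)
    for k in range(10, samples + 1)
)

print(expected_hits)
print(speedup)
print(round(p_at_least_10, 3))
```

Even with only 20 samples, a 60% bottleneck is very likely to show up on many of them, which is why so few samples suffice to locate large problems.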
See also

* Algorithmic efficiency
* Benchmark
* List of performance analysis tools
* Performance engineering
* Performance prediction
* PAPI is a portable interface (in the form of a library) to hardware performance counters on modern microprocessors.
* Software slug
* Worst case execution time
* Java performance

References
* Dunlavey, “Performance tuning with instruction-level cost derived from call-stack sampling”, ACM SIGPLAN Notices 42, 8 (August, 2007), pp. 4-8.
* Dunlavey, "Performance Tuning: Slugging It Out!", Dr. Dobb's Journal, Vol. 18, #12, November 1993, pp. 18-26.

External links
* Sample demo "[http://channel9.msdn.com/ShowPost.aspx?PostID=46208 Using VSTS Performance Tools to Speed Up Your App]" by Ian Huff, a developer at Microsoft, demonstrating the profiler in Visual Studio Team System 2005
* Article " [http://www.ibm.com/developerworks/rational/library/05/1004_gupta/ Need for speed -- Eliminating performance bottlenecks] " on doing execution time analysis of Java applications using IBM Rational Application Developer.