Interprocedural optimization

Interprocedural optimization (IPO) is a compiler technique used in computer programming to improve performance in programs containing many frequently used functions of small or medium length. IPO differs from other compiler optimizations in that it analyzes the entire program; other optimizations look at only a single function, or even a single block of code.

IPO seeks to reduce or eliminate duplicate calculations and inefficient use of memory, and to simplify iterative sequences such as loops. If a call to another routine occurs within a loop, IPO analysis may determine that it is best to inline that routine. Additionally, IPO may re-order routines for better memory layout and locality.
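As an illustration (a minimal C sketch; the function and variable names are invented for the example), a small routine called from inside a loop is a natural inlining candidate:

 /* "scale" might live in a different source file; only whole-program
    analysis lets the compiler see its body at this call site. */
 double scale(double v) { return v * 0.5; }

 void halve_all(double *a, int n) {
     for (int i = 0; i < n; i++)
         a[i] = scale(a[i]);        /* a call inside a loop */
 }

 /* After inlining, the loop body is simply a[i] = a[i] * 0.5, which
    the compiler is then free to vectorize or unroll. */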

IPO may also include typical compiler optimizations applied on a whole-program level, for example dead code elimination, which removes code that is never executed. To accomplish this, the compiler tests for branches that are never taken and removes the code in those branches. IPO also tries to ensure better use of constants. Modern compilers offer IPO as a compile-time option. The actual IPO process may occur at any step between the human-readable source code and the finished executable binary program.
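For example (a sketch with invented names), propagating a constant across procedure boundaries can expose a branch that is never taken:

 #include <stdio.h>

 /* Assigned here and never modified by any procedure in the program;
    interprocedural analysis can prove this and substitute the
    constant 0 at every use. */
 static int debug_level = 0;

 void trace(const char *msg) {
     if (debug_level > 0)           /* provably false */
         printf("%s\n", msg);       /* dead code: never executed */
 }

Once the branch is removed, "trace" has an empty body, and its calls can in turn be deleted.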

Analysis

The objective, as ever, is to have the program run as swiftly as possible; the problem, as ever, is that it is not possible for a compiler to analyse a program and always correctly determine what it will do, still less what the programmer may have intended for it to do. By contrast, human programmers start at the other end with a purpose, and attempt to produce a program that will achieve it, preferably without expending a lot of thought in the process. So, the hope is that an optimising compiler will aid us by bridging the gap.

For various reasons, including readability, programs are frequently broken up into a number of procedures, which handle a few general cases. However, the generality of each procedure may result in wasted effort in specific usages. Interprocedural optimisation represents an attempt at reducing this waste.

Suppose you have a procedure that evaluates F(x), and your code requests the result of F(6) and then later, F(6) again. Surely this second evaluation is unnecessary: the result could have been saved and referred to later instead. This assumes that F is a pure function. In particular, this simple optimisation is foiled the moment that the implementation of F(x) is impure; that is, its execution involves reference to parameters other than the explicit argument 6 that may have been changed between the invocations, or side effects such as printing a message to a log, counting the number of evaluations, accumulating the CPU time consumed, and so forth. Losing these side effects by not evaluating a second time may or may not be acceptable: a design issue beyond the scope of compilers.
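In C terms (a sketch; the function is invented for the example), once interprocedural analysis has proven F pure, the second evaluation can be replaced by the saved result:

 /* A pure function: its result depends only on its argument. */
 int F(int x) { return x * x + 1; }

 int twice(void) {
     int r1 = F(6);
     int r2 = F(6);     /* redundant: the compiler may reuse r1 here */
     return r1 + r2;    /* effectively 2 * F(6) */
 }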

More generally, aside from organisation, the second reason to use procedures is to avoid duplication of code that would be the same, or almost the same, each time the actions performed by the procedure are desired. A general approach to optimization would therefore be to reverse this: some or all invocations of a certain procedure are replaced by the respective code, with the parameters appropriately substituted. The compiler will then try to optimize the result.

Example

 Program example;
  integer b;                              %A variable "global" to the procedure "Silly".
  Procedure Silly(a, x)
   if x < 0 then a:=x + b else a:=-6;
  End Silly;                              %Reference to b, not a parameter, makes "Silly" "impure" in general.
  integer a, x;                           %These variables are visible to "Silly" only if parameters.
  x:=7; b:=5;
  Silly(a, x); print x;
  Silly(x, a); print x;
  Silly(b, b); print b;
 End example;

If the parameters to "Silly" are passed by value, the actions of the procedure have no effect on the original variables, and since "Silly" does nothing to its environment (read from a file, write to a file, modify global variables such as "b", etc.) its code plus all invocations may be optimised away entirely, leaving just the "print" statements and the value of "a" undefined.

If instead the parameters are passed by reference, then actions on them within "Silly" do indeed affect the originals. This is usually done by passing the machine address of each parameter to the procedure, so that the procedure's adjustments are made to the original storage area; a variant method is copy-in, copy-out, whereby the procedure works on local copies of the parameters whose values are copied back to the originals on exit from the procedure. If the procedure can reach the same variable by more than one route, as in the invocation "Silly(a,a)", or "Silly(a,b)" where "b" is both a parameter and the global referenced inside the procedure, discrepancies can arise.

In the case of call by reference, procedure "Silly" has an effect. Suppose that its invocations are expanded in place, with parameters identified by address: the code amounts to

 x:=7; b:=5;
 if x < 0 then a:=x + b else a:=-6; print x;   %"a" is changed.
 if a < 0 then x:=a + b else x:=-6; print x;   %Because the parameters are swapped.
 if b < 0 then b:=b + b else b:=-6; print b;   %Two versions of variable "b" in "Silly", plus the global usage.

The compiler could then, in this rather small example, follow the constants along the logic (such as it is) and find that the predicates of the if-statements are constant, and so:

 x:=7; b:=5;
 a:=-6; print 7;    %"b" is not referenced, so this usage remains "pure".
 x:=-1; print -1;   %"b" is referenced...
 b:=-6; print -6;   %"b" is modified via its parameter manifestation.

And since the assignments to "a", "b" and "x" deliver nothing to the outside world - they do not appear in output statements, nor as input to subsequent calculations (whose results in turn "do" lead to output, else they also are needless) - there is no point in this code either, and so the result is

 print 7; print -6; print -6;

If however the parameters were passed by copy-in, copy-out, then "Silly(b,b)" would expand into

 p1:=b; p2:=b;                            %Copy in. Local variables "p1" and "p2" are equal.
 if p2 < 0 then p1:=p2 + b else p1:=-6;   %Thus "p1" may no longer equal "p2".
 b:=p1; b:=p2;                            %Copy out. In left-to-right order, the value from "p1" is overwritten.

In this case, copying the value of "p1" (which has been changed) back to "b" is pointless, because it is immediately overwritten by the value of "p2", which has not been modified within the procedure from its original value of "b", and so the third statement becomes

 print 5;   %Not -6.

Such differences in behaviour are likely to cause puzzlement, exacerbated by questions as to the order in which the parameters are copied: will it be left to right on both copy-in and copy-out? These details are probably not carefully explained in the compiler manual, and if they are, they will likely be passed over as being not relevant to the immediate task and long forgotten by the time a problem arises. If (as is likely) temporary values are provided via a stack storage scheme, then it is likely that the copy-back process will be in the reverse order to the copy-in, which in this example would mean that "p1" would be the last value returned to "b" instead.

Incidentally, when procedures modify their parameters, it is important to be sure that any constants supplied as arguments will not have their values changed (constants can be held in memory just as variables are), lest subsequent usages of that constant (made via reference to its memory location) go awry. This can be accomplished by compiler-generated code copying the constant's value into a temporary variable whose address is passed to the procedure; if its value is modified, no matter, as it is never copied back to the location of the constant. Put another way, a carefully written test program can report on whether parameters are passed by value or by reference and, if copying is used, what sort of copy-in, copy-out scheme applies. However, variation is endless: simple parameters might be passed by copy whereas large aggregates such as arrays might be passed by reference; simple constants such as zero might be generated by special machine codes (such as Clear, or LoadZ), while more complex constants might be stored in memory tagged as read-only, with any attempt at modifying them resulting in immediate program termination, and so on.
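In C-like terms, the compiler-generated protection amounts to the following (a sketch; the names render the article's "Silly" example in C):

 int b = 5;                          /* the global from the example */

 void silly(int *a, int x) {         /* C rendering of "Silly(a,x)" */
     if (x < 0) *a = x + b; else *a = -6;
 }

 void caller(void) {
     int x = 7;
     /* For a by-reference call equivalent to "Silly(5, x)", the
        compiler copies the constant into a temporary: */
     int temp = 5;
     silly(&temp, x);                /* the literal 5 itself is never touched */
     /* "temp" is discarded afterwards; there is no copy-back. */
 }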

This example is extremely simple, although complications are already apparent. More likely there will be many procedures, having a variety of deducible or programmer-declared properties, that may enable the compiler's optimisations to find some advantage. Any parameter to a procedure might be read only, be written to, be both read and written to, or be ignored altogether, giving rise to opportunities such as constants not needing protection via temporary variables; but what happens in any given invocation may well depend on a complex web of considerations. Other procedures, especially function-like procedures, will have certain behaviours that in specific invocations may enable some work to be avoided: for instance, the Gamma function, if invoked with a positive integer argument, could be converted to a calculation involving integer factorials.
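Since Γ(n) = (n-1)! for positive integers n, a call with a known integer argument could be folded to a factorial (a sketch using the standard C99 function tgamma):

 #include <math.h>

 double gamma_of_five(void) {
     /* Γ(5) = 4! = 24; a compiler that recognises the known integer
        argument may fold this to the constant 24.0 instead of
        invoking the general gamma-function machinery at run time. */
     return tgamma(5.0);
 }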

Some computer languages enable (or even require) assertions as to the usage of parameters, and might further offer the opportunity to declare that variables have their values restricted to some set (for instance, 6 < x ≤ 28), thus providing further grist for the optimisation process to grind through, as well as worthwhile checks on the coherence of the source code to detect blunders. But this is never enough: only some variables can be given simple constraints, while others would require complex specifications. How might it be specified that variable "P" is to be a prime number, and if so, whether or not the value 1 is included? Complications are immediate: what are the valid ranges for a day-of-month "D" given that "M" is a month number? And are all violations worthy of immediate termination? Even if all that could be handled, what benefit might follow? And at what cost? Full specifications would amount to a re-statement of the program's function in another form, and quite aside from the time the compiler would consume in processing them, they would thus be subject to bugs. Instead, only simple specifications are allowed, with run-time range checking provided.

In cases where a program reads no input (as in the example), one could imagine the compiler's analysis being carried forward so that the result is no more than a series of print statements, or possibly some loops expediently generating such values. Would it then recognise a program to generate prime numbers and convert it to the best-known method for doing so, or present instead a reference to a library? Unlikely! In general, arbitrarily complex considerations arise (the Entscheidungsproblem) to preclude this, and there is no option but to run the code with limited improvements only.

Flags and implementation

The Intel C/C++ compilers allow whole-program IPO. The flag to enable interprocedural optimizations within a single file is -ip; the flag to enable interprocedural optimization across all files in the program is -ipo. [http://www.tacc.utexas.edu/services/userguides/intel8/cc/c_ug/lin1149.htm]
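A typical whole-program invocation might look like this (file names are illustrative; exact option spellings vary between compiler versions):

 icc -ipo -o program file1.c file2.c    # optimize across both files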

The GNU GCC compiler has function inlining, which is turned on by default at -O3 and can be enabled manually by passing the switch -finline-functions at compile time. [http://gcc.gnu.org/onlinedocs/gcc-4.1.1/gcc/Optimize-Options.html#Optimize-Options] GCC version 4.1 introduced a new infrastructure for interprocedural optimization. [http://gcc.gnu.org/wiki/Interprocedural_optimizations]
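For example, inlining could be requested either way (file names are illustrative):

 gcc -O3 -o program file1.c file2.c                      # -O3 implies -finline-functions
 gcc -O2 -finline-functions -o program file1.c file2.c   # or request it explicitly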

The Microsoft C Compiler, integrated into Visual Studio, also supports interprocedural optimization. [http://msdn.microsoft.com/vstudio/tour/vs2005_guided_tour/VS2005pro/Framework/CPlusAdvancedProgramOptimization.htm]
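In recent Visual C++ versions this is exposed as whole program optimization; the usual switches are /GL at compile time and /LTCG at link time, roughly (file names are illustrative):

 cl /GL /c file1.c file2.c
 link /LTCG /OUT:program.exe file1.obj file2.obj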


