Program optimization

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by TakuyaMurata (talk | contribs) at 14:01, 12 May 2003 (temporary submit to save; I will merge related articles and organize). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

In computer science, optimization is the process of producing the most effective and efficient code for a particular setting. Such code is called optimal code.

By effective and efficient, the goal is generally to maximize execution speed and/or minimize memory space requirements, while keeping the same functionality. The two objectives are sometimes mutually exclusive, in which case a trade-off must be made.

Optimization can be done by some kind of automatic computation, most often found in compilers, or by hand by programmers.

As compiler technology has developed, compilers have often been shown to generate better code than humans do.

What counts as optimal code largely depends on the particular configuration, environment, platform, or computer architecture.

The goal of optimization is to maximize speed (minimize execution time) or to minimize space (memory requirements), mostly at runtime, while preserving functionality completely. Optimization may or may not involve sacrificing some functionality, but most of the time this is undesirable. Despite its name, optimization cannot be guaranteed to find the truly optimal code, so the term is something of a misnomer. The alternative term code improvement is sometimes used instead, although not commonly.

Code optimization usually starts with a rethinking of the algorithm used in the program: more often than not, a particular algorithm can be specifically tailored to a particular problem, yielding better performance than a generic algorithm. For example, the task of sorting a huge list of items is usually done with a quicksort routine, which is one of the most efficient generic algorithms. But if some characteristic of the items is exploitable (for example, they are already arranged in some particular order), a different method, or even a custom-made sort routine, can be used.
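As an illustrative sketch (not part of the original article), the following Python code shows how exploiting a known characteristic of the data can beat a generic algorithm: insertion sort runs in linear time on input that is already nearly in order, whereas a generic sort cannot assume anything about the data.

```python
def insertion_sort(items):
    """Sort in place. Runs in O(n) time when the input is already nearly
    sorted, because the inner while loop then does almost no work."""
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        # Shift larger elements one slot to the right; on nearly sorted
        # input only a handful of shifts ever happen.
        while j >= 0 and items[j] > key:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = key
    return items

# A list already in order except for one swapped pair of neighbours.
data = list(range(1000))
data[10], data[11] = data[11], data[10]
assert insertion_sort(data) == list(range(1000))
```

Here the "characteristic of the items" being exploited is their near-sortedness; a generic quicksort would still pay its full O(n log n) cost on the same input.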

After one is reasonably sure that the best algorithm has been selected, code optimization can start: loops can be unrolled (for maximum efficiency of the processor's cache memory), the smallest possible data types can be used, integer arithmetic can be used instead of floating-point arithmetic, hash tables can replace linear vectors, and so on.

Sometimes, a critical part of the program can be rewritten in a different, faster programming language. For example, it is common for very high-level languages like Python to have modules written in C for greater speed. Programs already written in C can have modules written in assembly.
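The effect of moving a hot loop into C can be approximated without writing any C at all: in the CPython interpreter, built-ins such as sum are implemented in C, so comparing them against an equivalent pure-Python loop gives a rough sense of the speedup (an illustrative sketch, not from the original article).

```python
import timeit

def py_sum(values):
    """Pure-Python summation loop: every iteration is interpreted."""
    total = 0
    for v in values:
        total += v
    return total

values = list(range(100_000))

t_python = timeit.timeit(lambda: py_sum(values), number=50)
# sum() is implemented in C inside CPython, standing in for a
# hand-written C extension module.
t_c = timeit.timeit(lambda: sum(values), number=50)

assert py_sum(values) == sum(values)   # identical result
assert t_c < t_python                  # the C implementation wins
```

In practice the same idea is realised with C extension modules or tools such as ctypes, keeping the bulk of the program in the high-level language and dropping to C only for the critical part.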

The rewriting technique pays off because of a rule of thumb known as the 90/10 law, which states that 90% of the time is spent in 10% of the code, and only 10% of the time in the remaining 90% of the code. So, optimizing just a small part of the program can have a huge effect on its overall speed.
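Finding that critical 10% is a job for a profiler. The following sketch (assumed function names, not from the original article) uses Python's standard cProfile module to show how a single hot function dominates the runtime report, pinpointing where optimization effort should go.

```python
import cProfile
import io
import pstats

def hot_loop(n):
    # The "10% of the code" where nearly all the time is spent.
    total = 0
    for i in range(n):
        total += i * i
    return total

def cold_setup():
    # The "90% of the code" that barely registers in the profile.
    return 200_000

def main():
    return hot_loop(cold_setup())

profiler = cProfile.Profile()
profiler.enable()
main()
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats()
report = stream.getvalue()

# The report names hot_loop as the dominant consumer of time, which is
# exactly the part worth rewriting or optimizing.
assert "hot_loop" in report
```

Once the profiler has identified the hot spot, the techniques above (better algorithms, hash tables, or rewriting that part in C) can be applied where they actually matter.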

It should be noted that code optimization is not an exact science. Typical problems, data sets and requirements, together with the variety of programming languages and processor characteristics, lead to such a large number of possibilities that the programmer can only make educated guesses and arrive at a "good enough" solution.


See also: software optimization, compiler optimization, and performance tuning (those articles should be merged here).