[[Image:Greedy algorithm 36 cents.svg|thumb|280px|right| Greedy algorithms determine the minimum number of coins to give while [[Change-making problem|making change]]. These are the steps a human would take to emulate a greedy algorithm to represent 36 cents using only coins with values {1, 5, 10, 20}. The coin of the highest value, less than the remaining change owed, is the local optimum. (Note that in general the change-making problem requires [[dynamic programming]] or [[Linear_programming#Integer_unknowns|integer programming]] to find an optimal solution; however, most currency systems, including the Euro and US Dollar, are special cases where the greedy strategy does find an optimal solution.)]]
A '''greedy algorithm''' is an [[algorithm]] that follows the [[problem solving]] [[heuristic (computer science)|heuristic]] of making the locally optimal choice at each stage<ref name="NISTg">{{cite web|last=Black|first=Paul E.|title=greedy algorithm|url=http://xlinux.nist.gov/dads//HTML/greedyalgo.html|work=Dictionary of Algorithms and Data Structures|publisher=[[National Institute of Standards and Technology|U.S. National Institute of Standards and Technology]] (NIST) |accessdate=17 August 2012|date=2 February 2005}}</ref> with the hope of finding a global optimum. A greedy strategy does not, in general, produce an optimal solution, but a greedy heuristic may nonetheless yield locally optimal solutions that approximate a globally optimal solution in a reasonable amount of time.
For example, a greedy strategy for the [[traveling salesman problem]] (which is of high computational complexity) is the following heuristic: "At each stage visit the nearest unvisited city". This heuristic need not find a best solution, but it terminates in a reasonable number of steps; finding an optimal solution typically requires unreasonably many steps. In [[mathematical optimization]], greedy algorithms solve [[combinatorial optimization|combinatorial problem]]s having the properties of [[matroid]]s.
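A minimal sketch of this nearest-neighbour heuristic, assuming (for illustration only) that cities are given as 2-D coordinates and distances are Euclidean:
<syntaxhighlight lang="python">
import math

def nearest_neighbour_tour(cities):
    """Greedy TSP heuristic: always visit the closest unvisited city.

    `cities` is assumed to be a list of (x, y) coordinate pairs; the tour
    starts at the first city.  The result is a visiting order, not
    necessarily an optimal tour.
    """
    unvisited = list(range(1, len(cities)))
    tour = [0]
    while unvisited:
        current = cities[tour[-1]]
        # Locally optimal choice: the nearest city not yet visited.
        nearest = min(unvisited, key=lambda i: math.dist(current, cities[i]))
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour

# Example: four cities on a unit square; the heuristic returns one plausible order.
print(nearest_neighbour_tour([(0, 0), (0, 1), (1, 1), (1, 0)]))
</syntaxhighlight>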
==Specifics==
In general, greedy algorithms have five components (combined in the sketch after this list):
# A candidate set, from which a solution is created
# A selection function, which chooses the best candidate to be added to the solution
# A feasibility function, which is used to determine whether a candidate can contribute to a solution
# An objective function, which assigns a value to a solution or a partial solution
# A solution function, which indicates when a complete solution has been found
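A generic sketch of how these five components fit together; the callables and their names are purely illustrative placeholders, not a standard interface:
<syntaxhighlight lang="python">
def greedy(candidates, is_solution, select, is_feasible, objective):
    """Generic greedy skeleton built from the five components above.

    All five arguments are problem-specific callables supplied by the user;
    the names here are illustrative, not a standard API.
    """
    solution = []
    while candidates and not is_solution(solution):
        best = select(candidates)           # selection function
        candidates.remove(best)
        if is_feasible(solution + [best]):  # feasibility function
            solution.append(best)           # the choice is never reconsidered
    return solution, objective(solution)    # objective function scores the result
</syntaxhighlight>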
Greedy algorithms produce good solutions on some [[mathematical problem]]s, but not on others. Most problems for which they work well have two properties:
; '''Greedy choice property''' : We can make whatever choice seems best at the moment and then solve the subproblems that arise later. The choice made by a greedy algorithm may depend on choices made so far, but not on future choices or on all the solutions to the subproblem. It iteratively makes one greedy choice after another, reducing each given problem into a smaller one. In other words, a greedy algorithm never reconsiders its choices. This is the main difference from [[dynamic programming]], which is exhaustive and is guaranteed to find the optimal solution. After every stage, dynamic programming makes decisions based on all the decisions made in the previous stage and may reconsider the previous stage's algorithmic path to the solution.
; '''Optimal substructure''' : "A problem exhibits [[optimal substructure]] if an optimal solution to the problem contains optimal solutions to the sub-problems."<ref>Introduction to Algorithms (Cormen, Leiserson, Rivest, and Stein) 2001, Chapter 16 "Greedy Algorithms".</ref>
===Cases of failure===
{{multiple image
| direction = vertical
| width = 300
| header = Examples of how a greedy algorithm may fail to achieve the optimal solution.
| image1 = Greedy Glouton.svg
| alt1 =
| caption1 = Starting at A, a greedy algorithm will find the local maximum at "m", oblivious of the global maximum at "M".
| image2 = Greedy-search-path-example.gif
| alt2 =
| caption2 =
With a goal of reaching the largest sum, at each step the greedy algorithm will choose what appears to be the optimal immediate choice, so it will choose 12 instead of 3 at the second step and will not reach the best solution, which contains 99.}}
For many other problems, greedy algorithms fail to produce the optimal solution, and may even produce the ''unique worst possible'' solution. One example is the [[traveling salesman problem]] mentioned above: for each number of cities, there is an assignment of distances between the cities for which the nearest neighbor heuristic produces the unique worst possible tour.<ref>(G. Gutin, A. Yeo and A. Zverovich, 2002)</ref>
Imagine the [[Change-making problem|coin example]] with only 25-cent, 10-cent, and 4-cent coins. The greedy algorithm would not be able to make change for 41 cents, since after committing to use one 25-cent coin and one 10-cent coin it would be impossible to use 4-cent coins for the balance of 6 cents, whereas a person or a more sophisticated algorithm could make change for 41 cents with one 25-cent coin and four 4-cent coins.
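A short sketch contrasting the greedy choice on this coin set with a dynamic-programming search that always finds change when it exists (illustrative code, not from the cited sources):
<syntaxhighlight lang="python">
def greedy_change(amount, coins=(25, 10, 4)):
    """Repeatedly take the largest coin that still fits; may get stuck."""
    used = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            used.append(coin)
    return used if amount == 0 else None   # None: greedy could not finish

def optimal_change(amount, coins=(25, 10, 4)):
    """Fewest-coin change by dynamic programming over all sub-amounts."""
    best = [None] * (amount + 1)
    best[0] = []
    for value in range(1, amount + 1):
        for coin in coins:
            if coin <= value and best[value - coin] is not None:
                candidate = best[value - coin] + [coin]
                if best[value] is None or len(candidate) < len(best[value]):
                    best[value] = candidate
    return best[amount]

print(greedy_change(41))    # None: 25 + 10 leaves 6, unreachable with 4-cent coins
print(optimal_change(41))   # five coins: one 25-cent and four 4-cent coins
</syntaxhighlight>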
==Types==
Greedy algorithms can be characterized as being 'short sighted' and 'non-recoverable'. They are ideal only for problems that have 'optimal substructure'. Despite this, for many simple problems (e.g. giving change), greedy algorithms are the best-suited approach. It is important, however, to note that the greedy algorithm can be used as a selection algorithm to prioritize options within a search or branch-and-bound algorithm. There are a few variations of the greedy algorithm:
* Pure greedy algorithms
* Orthogonal greedy algorithms
* Relaxed greedy algorithms
==Applications==
Greedy algorithms mostly (but not always) fail to find the globally optimal solution because they usually do not operate exhaustively on all the data. They can make commitments to certain choices too early, which prevents them from finding the best overall solution later. For example, all known [[greedy coloring]] algorithms for the [[graph coloring problem]], and for all other [[NP-complete]] problems, do not consistently find optimum solutions. Nevertheless, they are useful because they are quick to devise and often give good approximations to the optimum.
If a greedy algorithm can be proven to yield the global optimum for a given problem class, it typically becomes the method of choice because it is faster than other optimization methods like [[dynamic programming]]. Examples of such greedy algorithms are [[Kruskal's algorithm]] and [[Prim's algorithm]] for finding [[minimum spanning tree]]s, and the algorithm for finding optimum [[Huffman tree]]s.
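For instance, Kruskal's algorithm repeatedly commits to the cheapest edge that does not close a cycle; a compact illustrative sketch using a simple union–find structure (not a reference implementation) is:
<syntaxhighlight lang="python">
def kruskal(num_vertices, edges):
    """Minimum spanning tree by Kruskal's greedy rule.

    `edges` is a list of (weight, u, v) tuples with vertices 0..num_vertices-1.
    """
    parent = list(range(num_vertices))

    def find(x):                           # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for weight, u, v in sorted(edges):     # greedy: cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                       # keep the edge only if it joins
            parent[ru] = rv                # two different components
            tree.append((u, v, weight))
    return tree

# Example: a square with one diagonal; the costly diagonal (weight 5) is never chosen.
print(kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 3, 0), (5, 0, 2)]))
</syntaxhighlight>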
The theory of [[matroid]]s, and the more general theory of [[greedoid]]s, provide whole classes of such algorithms.
Greedy algorithms appear in network [[routing]] as well. Using greedy routing, a message is forwarded to the neighboring node that is "closest" to the destination. The notion of a node's location (and hence "closeness") may be determined by its physical location, as in [[geographic routing]] used by [[ad hoc network]]s. Location may also be an entirely artificial construct, as in [[small world routing]] and [[distributed hash table]]s.
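A toy sketch of greedy geographic forwarding, assuming each node knows its own coordinates and those of its neighbours (the data layout here is purely illustrative):
<syntaxhighlight lang="python">
import math

def greedy_route(positions, neighbours, source, destination):
    """Forward a message hop by hop to the neighbour closest to the destination.

    `positions` maps node -> (x, y); `neighbours` maps node -> list of nodes.
    Returns the path taken, or None if no neighbour is closer to the
    destination than the current node (a "local minimum").
    """
    path = [source]
    current = source
    while current != destination:
        # Greedy step: pick the neighbour geographically closest to the target.
        best = min(neighbours[current],
                   key=lambda n: math.dist(positions[n], positions[destination]))
        if math.dist(positions[best], positions[destination]) >= \
                math.dist(positions[current], positions[destination]):
            return None                    # stuck in a local minimum
        path.append(best)
        current = best
    return path
</syntaxhighlight>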
==Examples==
* The [[activity selection problem]] is characteristic of this class of problems, where the goal is to pick the maximum number of activities that do not clash with each other (a sketch of the usual greedy rule appears after this list).
* In the [[Macintosh computer]] game ''[[Crystal Quest]]'' the objective is to collect crystals, in a fashion similar to the [[travelling salesman problem]]. The game has a demo mode, where the game uses a greedy algorithm to go to every crystal. The [[artificial intelligence]] does not account for obstacles, so the demo mode often ends quickly.
* [[Matching pursuit]] is an example of a greedy algorithm applied to signal approximation.
* A greedy algorithm finds the optimal solution to [[Malfatti circles|Malfatti's problem]] of finding three disjoint circles within a given triangle that maximize the total area of the circles; it is conjectured that the same greedy algorithm is optimal for any number of circles.
* A greedy algorithm is used to construct a Huffman tree during [[Huffman coding]], where it finds an optimal solution.
* In [[decision tree learning]], greedy algorithms are commonly used; however, they are not guaranteed to find the optimal solution.
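A minimal sketch of the greedy rule usually applied to the activity selection problem mentioned above, sorting by finish time and keeping every activity compatible with those already chosen (the interval format is an assumption for illustration):
<syntaxhighlight lang="python">
def select_activities(intervals):
    """Greedy activity selection: earliest finish time first.

    `intervals` is assumed to be a list of (start, finish) pairs; the
    returned subset is a maximum set of pairwise non-overlapping activities.
    """
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:        # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen

# Four candidate activities; the greedy rule keeps a maximum compatible set of two.
print(select_activities([(1, 4), (3, 5), (5, 7), (0, 6)]))
</syntaxhighlight>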
==See also==
{{Portal|Computer Science|Mathematics}}
*[[Multi-armed_bandit#Semi-uniform_strategies|Epsilon-greedy strategy]]
*[[Greedy algorithm for Egyptian fractions]]
*[[Greedy source]]
*[[Matroid]]
==Notes==
<references/>
==References==
* ''[[Introduction to Algorithms]]'' (Cormen, Leiserson, and Rivest) 1990, Chapter 17 "Greedy Algorithms" p. 329.
* ''Introduction to Algorithms'' (Cormen, Leiserson, Rivest, and Stein) 2001, Chapter 16 "Greedy Algorithms".
* G. Gutin, A. Yeo and A. Zverovich, Traveling salesman should not be greedy: domination analysis of greedy-type heuristics for the TSP. Discrete Applied Mathematics 117 (2002), 81–86.
* J. Bang-Jensen, G. Gutin and A. Yeo, When the greedy algorithm fails. Discrete Optimization 1 (2004), 121–127.
* G. Bendall and F. Margot, Greedy Type Resistance of Combinatorial Problems, Discrete Optimization 3 (2006), 288–298.
==External links==
* {{springer|title=Greedy algorithm|id=p/g110210}}
* [http://yuval.bar-or.org/index.php?item=9 Greedy algorithm visualization] A visualization of a greedy solution to the N-Queens puzzle by Yuval Baror.
* [http://www.oreillynet.com/onlamp/blog/2008/04/python_greedy_coin_changer_alg.html Python greedy coin] example by Noah Gift.
{{optimization algorithms|combinatorial|state=expanded}}
[[Category:Optimization algorithms and methods]]
[[Category:Combinatorial algorithms]]
[[Category:Matroid theory]]
[[Category:Exchange algorithms]]