Complexity of algorithms
Revision as of 16:12, 21 February 2007
In computer science, Big O Notation is a notation for expressing the worst-case execution time for an algorithm.
It is generally written as O(f(x_1, x_2, …, x_n)), where f is a function of zero or more parameters x_i of the system. It roughly states that in the asymptotic case (i.e. as one or more of the system parameters extend towards their extremes), the actual execution time of the algorithm is bounded by K × f(x_1, x_2, …, x_n) for some positive constant K.
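As an illustrative sketch (the bubble-sort example and the constant K = 1 are our own illustration, not from the article), the bound can be checked empirically by counting the primitive operations an algorithm performs:

```python
def count_steps_bubble_sort(data):
    """Count the comparisons performed by bubble sort (hypothetical example)."""
    steps = 0
    arr = list(data)
    n = len(arr)
    for i in range(n):
        for j in range(n - 1 - i):
            steps += 1  # one comparison is one primitive step
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return steps

# Bubble sort performs n(n-1)/2 comparisons, so it is O(n^2):
# the step count is bounded by K * n^2 with, for instance, K = 1.
for n in (10, 100, 1000):
    assert count_steps_bubble_sort(range(n, 0, -1)) <= 1 * n ** 2
```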
Big O Notation is attractive to computer scientists because it expresses the complexity of an algorithm without restricting the statement to a particular computer architecture or time period; even if computers get faster, or if the algorithm is switched from a PC to an Apple, the worst-case execution time of an algorithm does not change. Additionally, when comparing two algorithms, it is readily apparent which will execute faster in the extreme case.
Most commonly, one will see
- Polynomial time algorithms:
  - O(1) --- Constant time --- the time necessary to perform the algorithm does not change in response to the size of the problem.
  - O(n) --- Linear time --- the time grows linearly with the size (n) of the problem.
  - O(n^2) --- Quadratic time --- the time grows quadratically with the size (n) of the problem. Please note that, by convention, one only writes the highest-degree term for polynomial time.
- Sub-polynomial time algorithms (grow slower than polynomial time algorithms):
  - O(log n) --- Logarithmic time
- Super-polynomial time algorithms (grow faster than polynomial time algorithms):
  - O(n!)
  - O(2^n)
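As a sketch of what the common classes above look like in practice (these particular functions are our own illustrations, not from the article), consider:

```python
def first_element(items):
    """O(1) --- constant time: one step regardless of input size."""
    return items[0]

def contains(items, target):
    """O(n) --- linear time: scans each element once in the worst case."""
    for x in items:
        if x == target:
            return True
    return False

def has_duplicate(items):
    """O(n^2) --- quadratic time: compares every pair of elements."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def binary_search(sorted_items, target):
    """O(log n) --- logarithmic time: halves the search range each step."""
    lo, hi = 0, len(sorted_items)
    while lo < hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return -1
```

Note that binary search requires sorted input; the logarithmic bound comes from discarding half of the remaining range on every iteration.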
Despite the single-parameter examples above, Big O Notation can express execution time as a function of multiple parameters.
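For instance (a hypothetical example of our own, not from the article), an algorithm over two inputs of lengths m and n can be O(m × n):

```python
def count_common(a, b):
    """Naive intersection count: O(m * n) for sequences of lengths m and n."""
    hits = 0
    for x in a:        # m iterations
        for y in b:    # n iterations for each x
            if x == y:
                hits += 1
    return hits
```

Here the bound depends on both parameters: doubling either input roughly doubles the running time.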