Complexity of Algorithms
An algorithm's complexity is determined by the computational power needed to execute it. The computational complexity of an algorithm is often measured by two variables: T (for time complexity) and S (for space complexity, or memory requirement). Both T and S are commonly expressed as functions of n, where n is the size of the input. (There are other measures of complexity: the number of random bits, the communications bandwidth, the amount of data, and so on.)
Generally, the computational complexity of an algorithm is expressed in what is called "big O" notation: the order of magnitude of the computational complexity. It's just the term of the complexity function that grows the fastest as n gets larger; all lower-order terms are ignored. For example, if the time complexity of a given algorithm is 4n^2 + 7n + 12, then the computational complexity is on the order of n^2, expressed as O(n^2).
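To see why discarding the lower-order terms is reasonable, here is a minimal Python sketch (my own illustration, not part of the original text) comparing the full cost function 4n^2 + 7n + 12 against its leading term as n grows:

```python
# Sketch: the leading term of 4n^2 + 7n + 12 dominates as n grows,
# which is why big O notation keeps only the fastest-growing term.
def full_cost(n):
    return 4 * n**2 + 7 * n + 12

def leading_term(n):
    return 4 * n**2

for n in [10, 1_000, 1_000_000]:
    ratio = full_cost(n) / leading_term(n)
    print(f"n = {n:>9}: full/leading = {ratio:.6f}")

# The ratio approaches 1, so the lower-order terms (and, for the
# order-of-magnitude classification, even the constant 4) become
# negligible for large n.
```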
Measuring time complexity this way is system-independent. You don't have to know the exact timings of various instructions or the number of bits used to represent different variables or even the speed of the processor. One computer might be 50 percent faster than another and a third might have a data path twice as wide, but the order-of-magnitude complexity of an algorithm remains the same. This isn't cheating; when you're dealing with algorithms as complex as the ones presented here, everything else is negligible (a constant factor) compared to the order-of-magnitude complexity.
This notation allows you to see how the input size affects the time and space requirements. For example, if T = O(n), then doubling the input size doubles the running time of the algorithm. If T = O(2^n), then adding one bit to the input size doubles the running time of the algorithm (within a constant factor).
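A small Python sketch, using raw operation counts as a stand-in for running time (constant factors ignored, as in big O notation), makes both rules concrete:

```python
# Sketch: how operation counts scale as the input grows, assuming
# T(n) = n for a linear algorithm and T(n) = 2**n for an exponential one.
def linear_ops(n):
    return n

def exponential_ops(n):
    return 2 ** n

n = 20
print(linear_ops(2 * n) / linear_ops(n))            # 2.0 -> doubling n doubles the work
print(exponential_ops(n + 1) / exponential_ops(n))  # 2.0 -> one extra bit doubles the work
```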
Generally, algorithms are classified according to their time or space complexities. An algorithm is constant if its complexity is independent of n: O(1). An algorithm is linear if its time complexity is O(n). Algorithms can also be quadratic, cubic, and so on. All these algorithms are polynomial; their complexity is O(n^m), where m is a constant. The class of algorithms that have a polynomial time complexity are called polynomial-time algorithms.
Algorithms whose complexities are O(t^f(n)), where t is a constant greater than 1 and f(n) is some polynomial function of n, are called exponential. The subset of exponential algorithms whose complexities are O(c^f(n)), where c is a constant and f(n) is more than constant but less than linear, is called superpolynomial.
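A rough numeric comparison helps here. The following sketch uses 2^sqrt(n) as one convenient example of superpolynomial growth (a choice made purely for illustration, not an algorithm discussed in the text), set against a polynomial n^3 and a full exponential 2^n:

```python
import math

# Sketch: polynomial (n^3), superpolynomial (2^sqrt(n)), and exponential (2^n)
# growth side by side. 2^sqrt(n) has the form c^f(n) with f(n) = sqrt(n),
# which grows faster than any constant but slower than linear.
for n in [16, 64, 256, 1024]:
    poly = n ** 3
    superpoly = 2 ** math.isqrt(n)
    print(f"n = {n:>4}: n^3 = {poly:>13,}  "
          f"2^sqrt(n) = {superpoly:>13,}  "
          f"2^n ~ 10^{n * math.log10(2):.0f}")
```

By n = 1024 the superpolynomial column has already overtaken n^3, yet it remains vastly smaller than 2^n.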
Ideally, a cryptographer would like to be able to say that the best algorithm to break an encryption algorithm is of exponential-time complexity. In practice, the strongest statements that can be made, given the current state of the art of computational complexity theory, are of the form "all known cracking algorithms for this cryptosystem are of superpolynomial-time complexity." That is, the cracking algorithms that we know are of superpolynomial-time complexity, but it is not yet possible to prove that no polynomial-time cracking algorithm could ever be discovered. Advances in computational complexity theory may someday make it possible to design algorithms for which the existence of polynomial-time cracking algorithms can be ruled out with mathematical certainty.
As n grows, the time complexity of an algorithm can make an enormous difference in whether the algorithm is practical. Table 11.2 shows the running times for different algorithm classes in which n equals one million. The table ignores constants, but also shows why ignoring constants is reasonable.
Table 11.2 Running Times of Different Classes of Algorithms

Class | Complexity | # of Operations for n = 10^6 | Time at 10^6 O/S
---|---|---|---
Constant | O(1) | 1 | 1 µsec.
Linear | O(n) | 10^6 | 1 sec.
Quadratic | O(n^2) | 10^12 | 11.6 days
Cubic | O(n^3) | 10^18 | 32,000 yrs.
Exponential | O(2^n) | 10^301,030 | 10^301,006 times the age of the universe
Assuming that the unit of time for our computer is a microsecond, the computer can complete a constant algorithm in a microsecond, a linear algorithm in a second, and a quadratic algorithm in 11.6 days. It would take 32,000 years to complete a cubic algorithm; not terribly practical, but a computer built to withstand the next ice age would deliver a solution eventually. Performing the exponential algorithm is futile, no matter how well you extrapolate computing power, parallel processing, or contact with superintelligent aliens.
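Those figures follow from simple division; the sketch below reproduces them, assuming (as the table does) a machine performing 10^6 operations per second on an input of size n = 10^6:

```python
# Sketch: reproduce the rough figures of Table 11.2, assuming a machine
# that performs 10**6 operations per second and an input of size n = 10**6.
OPS_PER_SECOND = 10 ** 6
SECONDS_PER_DAY = 86_400
SECONDS_PER_YEAR = 365 * SECONDS_PER_DAY

n = 10 ** 6
print(n / OPS_PER_SECOND, "seconds")                      # linear: 1 second
print(n**2 / OPS_PER_SECOND / SECONDS_PER_DAY, "days")    # quadratic: ~11.6 days
print(n**3 / OPS_PER_SECOND / SECONDS_PER_YEAR, "years")  # cubic: ~31,700 years
# Exponential: 2**n is a number with roughly 301,030 decimal digits of
# operations -- hopelessly beyond any conceivable amount of computation.
```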
Look at the problem of a brute-force attack against an encryption algorithm. The time complexity of this attack is proportional to the number of possible keys, which is an exponential function of the key length. If n is the length of the key, then the complexity of a brute-force attack is O(2^n). Section 12.3 discusses the controversy surrounding a 56-bit key for DES instead of a 112-bit key. The complexity of a brute-force attack against a 56-bit key is 2^56; against a 112-bit key it is 2^112. The former is possible; the latter isn't.
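To put the two key lengths in perspective, here is a small sketch; the rate of 10^9 keys tested per second is an arbitrary assumption chosen for illustration, not a figure from the text:

```python
# Sketch: time to exhaust a key space by brute force, assuming a
# hypothetical attacker who tests 10**9 keys per second.
KEYS_PER_SECOND = 10 ** 9
SECONDS_PER_YEAR = 365 * 24 * 3600

for bits in (56, 112):
    keyspace = 2 ** bits
    years = keyspace / KEYS_PER_SECOND / SECONDS_PER_YEAR
    print(f"{bits}-bit key: about {years:.3g} years")

# 56 bits: roughly 2 years at this rate (far less with many machines in
# parallel); 112 bits: on the order of 10**17 years -- effectively forever.
```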