A new and novel idea, studied by Puhua Guan, is the use of cellular automata in public-key cryptosystems. This system is still far too new and has not been studied extensively, but a preliminary examination suggests that it may have a cryptographic weakness similar to one seen in other cases. Still, this is a promising area of research. Cellular automata have the property that, even if they are invertible, it is impossible to calculate the predecessor of an arbitrary state by reversing the rule for finding the successor. This sounds a whole lot like a trapdoor one-way function.
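The forward-easy, backward-hard character of cellular automata can be seen even in the simplest case. Below is a minimal sketch of an elementary one-dimensional automaton (Wolfram's Rule 30, used here purely as an illustration, not the automata Guan studied): computing the successor of a state is a trivial local rule, but there is no comparably simple rule that recovers a predecessor, and some states have several.

```python
def step(cells, rule=30):
    """Advance an elementary cellular automaton one generation.

    cells is a list of 0/1 values on a circular (wrap-around) lattice.
    rule is the Wolfram rule number: bit k of the rule gives the new
    cell value for the 3-bit neighborhood pattern k.
    """
    n = len(cells)
    out = []
    for i in range(n):
        # Each new cell depends only on its left neighbor, itself,
        # and its right neighbor.
        neighborhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> neighborhood) & 1)
    return out

print(step([0, 0, 0, 1, 0, 0, 0]))  # → [0, 0, 1, 1, 1, 0, 0]
```

Note that Rule 30 is not invertible at all: the all-zeros state and the all-ones state both step to all zeros, so a predecessor is not even unique. An invertible automaton, as in Guan's proposal, guarantees a unique predecessor exists while keeping it hard to compute.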
Many other public-key algorithms have been proposed and broken over the years. The Matsumoto-Imai algorithm was broken. The Cade algorithm was first proposed in 1985, broken in 1986, and then strengthened in the same year. In addition to these attacks, there are general attacks for decomposing polynomials over finite fields. Any algorithm that gets its security from the composition of polynomials over a finite field should be looked upon with skepticism, if not outright suspicion.
The Yagisawa algorithm combines exponentiation mod p with arithmetic mod p − 1; it was broken. Another public-key algorithm, Tsujii-Kurosawa-Itoh-Fujioka-Matsumoto, is insecure. A third system, Luccio-Mazzone, is insecure. A signature scheme based on birational permutations was broken the day after it was presented. Tatsuaki Okamoto has several signature schemes: one is provably as secure as the Discrete Logarithm Problem, and another is provably as secure as both the Discrete Logarithm Problem and the Factoring Problem. Similar schemes have been proposed elsewhere in the literature.
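The pairing of those two moduli in Yagisawa's scheme is no accident: by Fermat's little theorem, for a prime p and a not divisible by p, a^(p−1) ≡ 1 (mod p), so exponents of elements mod p only matter mod p − 1. A minimal sketch of that fact (toy prime, not Yagisawa's actual construction):

```python
# Fermat's little theorem in action: exponents mod a prime p can be
# reduced mod p - 1 without changing the result. The values below are
# arbitrary toy choices for illustration.
p = 101                 # a small prime
a = 12345               # any base not divisible by p
e = 987_654_321         # a huge exponent

full = pow(a, e, p)             # exponentiation mod p
reduced = pow(a, e % (p - 1), p)  # same thing with the exponent mod p - 1
assert full == reduced
print(full)
```

This is why arithmetic "mod p − 1" naturally appears in any system built on exponentiation mod p: the exponents themselves live in the ring of integers mod p − 1.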
Gustavus Simmons suggested J-algebras as a basis for public-key algorithms [1455,145]. This idea was abandoned after efficient methods for factoring polynomials were invented. Special polynomial semigroups have also been studied [1619,962], but so far nothing has come of it. Harald Niederreiter proposed a public-key algorithm based on shift-register sequences. Another is based on Lyndon words, and another on propositional calculus. And a recent public-key algorithm gets its security from the matrix cover problem. Tatsuaki Okamoto and Kazuo Ohta compare a number of digital signature schemes.
Prospects for creating radically new and different public-key cryptography algorithms seem dim. In 1988 Whitfield Diffie noted that most public-key algorithms are based on one of three hard problems [492,494]:
- 1. The Discrete Logarithm Problem.
- 2. The Factoring Problem.
- 3. The Knapsack Problem.
According to Diffie [492,494], the Discrete Logarithm Problem was suggested by J. Gill, the Factoring Problem by Knuth, and the Knapsack Problem by Diffie himself.
This narrowness in the mathematical foundations of public-key cryptography is worrisome. A breakthrough in either the problem of factoring or of calculating discrete logarithms could render whole classes of public-key algorithms insecure. Diffie points out [492,494] that this risk is mitigated by two factors:
- 1. The operations on which public key cryptography currently depends (multiplying, exponentiating, and factoring) are all fundamental arithmetic phenomena. They have been the subject of intense mathematical scrutiny for centuries, and the increased attention that has resulted from their use in public key cryptosystems has, on balance, enhanced rather than diminished our confidence.
- 2. Our ability to carry out large arithmetic computations has grown steadily and now permits us to implement our systems with numbers sufficient in size to be vulnerable only to a dramatic breakthrough in factoring, logarithms, or root extraction.
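Diffie's second point can be made concrete in a few lines: the forward operations are cheap even on large numbers, while inverting them is not. A toy sketch, in which naive trial division stands in for real factoring algorithms and the primes are illustrative assumptions only:

```python
def trial_division(n):
    """Naive factoring: return the smallest prime factor of n."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n  # n itself is prime

p, q = 999_983, 1_000_003     # two primes (toy-sized for this sketch)
n = p * q                     # multiplying: instantaneous

print(pow(3, n, n))           # modular exponentiation: also instantaneous
print(trial_division(n))      # factoring: already slow here, and utterly
                              # hopeless at real cryptographic sizes
```

Even at these toy sizes, the factoring step does roughly a million iterations of work to undo a multiplication that took one machine operation; at the 1024-bit-and-up sizes used in practice, the gap is astronomical.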
As we have seen, not all public-key algorithms based on these problems are secure. The strength of any public-key algorithm depends on more than the computational complexity of the problem upon which it is based; a hard problem does not necessarily imply a strong algorithm. Adi Shamir listed three reasons why this is so:
- 1. Complexity theory usually deals with single isolated instances of a problem. A cryptanalyst often has a large collection of statistically related problems to solve, such as several ciphertexts encrypted with the same key.
- 2. The computational complexity of a problem is typically measured by its worst-case or average-case behavior. To be useful as a cipher, the problem must be hard to solve in almost all cases.
- 3. An arbitrarily difficult problem cannot necessarily be transformed into a cryptosystem, and it must be possible to insert trapdoor information into the problem so that a shortcut solution is possible with this information and only with this information.
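Shamir's third point, the trapdoor, can be sketched with textbook RSA-style numbers (toy-sized and purely illustrative): everyone can compute the forward direction, but only the holder of the trapdoor information, the factorization of n, can compute the shortcut back.

```python
# Sketch of trapdoor information. Raising to the power e mod n is easy
# for anyone; undoing it requires d, which (as far as is known) can only
# be computed from the factorization of n. Classic toy textbook values,
# far too small for any real use.
p, q = 61, 53
n = p * q                      # 3233, public
e = 17                         # public exponent

c = pow(42, e, n)              # anyone can go forward

# The trapdoor: knowing p and q gives phi(n), hence the inverse exponent d.
phi = (p - 1) * (q - 1)        # 3120
d = pow(e, -1, phi)            # modular inverse; Python 3.8+ supports this
print(pow(c, d, n))            # → 42, recovered only via the trapdoor
```

Without p and q, computing d from n and e alone is believed to be as hard as factoring n, which is exactly the "shortcut with this information and only with this information" that Shamir describes.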