Applied Cryptography, Second Edition: Protocols, Algorithms, and Source Code in C (cloth)
(Publisher: John Wiley & Sons, Inc.)
Author(s): Bruce Schneier
ISBN: 0471128457
Publication Date: 01/01/96



The wrong way to find primes is to generate random numbers and then try to factor them. The right way is to generate random numbers and test if they are prime. There are several probabilistic primality tests: tests that determine whether a number is prime with a given degree of confidence. Assuming this “degree of confidence” is large enough, these sorts of tests are good enough. I’ve heard primes generated in this manner called “industrial-grade primes”: These are numbers that are probably prime with a controllably small chance of error.

Assume a test is set to fail once in 2^50 tries. This means that there is a 1 in 10^15 chance (2^50 is roughly 10^15) that the test falsely indicates that a composite number is prime. (The test will never falsely indicate that a prime number is composite.) If for some reason you need more confidence that the number is prime, you can set the failure level even lower. On the other hand, if you consider that the odds of the number being composite are 300 million times less than the odds of winning top prize in a state lottery, you might not worry about it so much.

Overviews of recent developments in the field can be found in [1256, 206]. Other important papers are [1490, 384, 11, 19, 626, 651, 911].

Solovay-Strassen

Robert Solovay and Volker Strassen developed a probabilistic primality testing algorithm [1490]. Their algorithm uses the Jacobi symbol to test if p is prime:

(1)  Choose a random number, a, less than p.
(2)  If the gcd(a,p) ≠ 1, then p fails the test and is composite.
(3)  Calculate j = a^((p-1)/2) mod p.
(4)  Calculate the Jacobi symbol J(a,p).
(5)  If j ≠ J(a,p), then p is definitely not prime.
(6)  If j = J(a,p), then the likelihood that p is not prime is no more than 50 percent.

A number a that indicates that p is definitely not prime is called a witness to p’s compositeness. If p is composite, the odds of a random a being a witness are no less than 50 percent. Repeat this test t times, with t different random values for a. The odds of a composite number passing all t tests are no more than one in 2^t.
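
To make these steps concrete, here is a rough sketch of one round of the test in C, assuming p fits in a 64-bit word (a real implementation would use a multiple-precision arithmetic library). The helper names mulmod, powmod, gcd64, and jacobi are chosen here for illustration; they are not from the source code listings at the back of this book.

#include <stdint.h>

/* Modular multiplication and exponentiation for word-sized operands.
   (__uint128_t is a GCC/Clang extension, used to avoid overflow.) */
uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m) {
    return (uint64_t)((__uint128_t)a * b % m);
}

uint64_t powmod(uint64_t a, uint64_t e, uint64_t m) {
    uint64_t r = 1;
    a %= m;
    while (e > 0) {
        if (e & 1) r = mulmod(r, a, m);
        a = mulmod(a, a, m);
        e >>= 1;
    }
    return r;
}

uint64_t gcd64(uint64_t a, uint64_t b) {
    while (b) { uint64_t t = a % b; a = b; b = t; }
    return a;
}

/* Jacobi symbol J(a,n) for odd n > 0; returns -1, 0, or 1. */
int jacobi(uint64_t a, uint64_t n) {
    int result = 1;
    a %= n;
    while (a != 0) {
        while ((a & 1) == 0) {              /* factor out 2s */
            a >>= 1;
            if (n % 8 == 3 || n % 8 == 5) result = -result;
        }
        uint64_t t = a; a = n; n = t;       /* quadratic reciprocity */
        if (a % 4 == 3 && n % 4 == 3) result = -result;
        a %= n;
    }
    return (n == 1) ? result : 0;
}

/* One Solovay-Strassen round: 0 means p is definitely composite,
   1 means p passes this round (composite with probability at most 1/2). */
int solovay_strassen_round(uint64_t p, uint64_t a) {
    if (gcd64(a, p) != 1) return 0;              /* step (2) */
    uint64_t j = powmod(a, (p - 1) / 2, p);      /* step (3) */
    int js = jacobi(a, p);                       /* step (4) */
    uint64_t js_mod_p = (js == -1) ? p - 1 : 1;  /* represent -1 as p - 1 */
    return j == js_mod_p;                        /* steps (5) and (6) */
}

Repeating the round t times with independent random values of a gives the 1-in-2^t error bound described above.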

Lehmann

Another, simpler, test was developed independently by Lehmann [945]. Here is how it tests whether p is prime:

(1)  Choose a random number a less than p.
(2)  Calculate a^((p-1)/2) mod p.
(3)  If a^((p-1)/2) ≠ 1 or -1 (mod p), then p is definitely not prime.
(4)  If a^((p-1)/2) ≡ 1 or -1 (mod p), then the likelihood that p is not prime is no more than 50 percent.

Again, the odds of a random a being a witness to p’s compositeness are no less than 50 percent. Repeat this test t times. If the calculation equals 1 or -1, but does not always equal 1, then p is probably prime with an error rate of 1 in 2^t.
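
A sketch of the whole test in C might look like the following; it reuses the powmod helper from the Solovay-Strassen sketch above, and rand() merely stands in for the cryptographically strong random source a real implementation needs.

#include <stdint.h>
#include <stdlib.h>

uint64_t powmod(uint64_t a, uint64_t e, uint64_t m);   /* as sketched above */

/* Lehmann test with t rounds: returns 1 if p is probably prime
   (error rate at most 1 in 2^t), 0 if p is definitely composite. */
int lehmann_test(uint64_t p, int t) {
    int saw_minus_one = 0;            /* the result must not always equal 1 */
    for (int i = 0; i < t; i++) {
        uint64_t a = 2 + (uint64_t)rand() % (p - 3);   /* step (1): random a < p */
        uint64_t r = powmod(a, (p - 1) / 2, p);        /* step (2) */
        if (r != 1 && r != p - 1) return 0;            /* step (3): composite */
        if (r == p - 1) saw_minus_one = 1;             /* step (4) */
    }
    return saw_minus_one;
}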

Rabin-Miller

The algorithm everyone uses—it’s easy—was developed by Michael Rabin, based in part on Gary Miller’s ideas [1093, 1284]. Actually, this is a simplified version of the algorithm recommended in the DSS proposal [1149, 1154].

Choose a random number, p, to test. Calculate b, where b is the number of times 2 divides p - 1 (i.e., 2^b is the largest power of 2 that divides p - 1). Then calculate m, such that p = 1 + 2^b * m.

(1)  Choose a random number, a, such that a is less than p.
(2)  Set j = 0 and set z = a^m mod p.
(3)  If z = 1, or if z = p - 1, then p passes the test and may be prime.
(4)  If j > 0 and z = 1, then p is not prime.
(5)  Set j = j + 1. If j < b and z ≠ p - 1, set z = z^2 mod p and go back to step (4). If z = p - 1, then p passes the test and may be prime.
(6)  If j = b and z ≠ p - 1, then p is not prime.
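
As an illustration, one iteration of these steps might be coded in C as follows, again only for a word-sized odd p, reusing the mulmod and powmod helpers from the Solovay-Strassen sketch.

#include <stdint.h>

uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m);   /* as sketched above */
uint64_t powmod(uint64_t a, uint64_t e, uint64_t m);

/* One Rabin-Miller iteration for odd p > 3 and base a, 1 < a < p.
   Returns 1 if p passes the test (may be prime), 0 if p is not prime. */
int rabin_miller_round(uint64_t p, uint64_t a) {
    uint64_t m = p - 1;
    int b = 0;
    while ((m & 1) == 0) { m >>= 1; b++; }   /* p - 1 = 2^b * m, m odd */

    uint64_t z = powmod(a, m, p);            /* step (2) */
    if (z == 1 || z == p - 1) return 1;      /* step (3): passes */

    for (int j = 1; j < b; j++) {            /* steps (4) and (5) */
        z = mulmod(z, z, p);
        if (z == p - 1) return 1;            /* passes */
        if (z == 1) return 0;                /* nontrivial square root of 1 */
    }
    return 0;                                /* step (6): not prime */
}

Running this with t independent random values of a gives the (1/4)^t bound discussed below.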

The odds of a composite passing decrease faster with this test than with the previous ones. Three-quarters of the possible values of a are guaranteed to be witnesses. This means that a composite number will slip through t tests no more than (1/4)^t of the time, where t is the number of iterations. Actually, these numbers are very pessimistic. For most random numbers, something like 99.9 percent of the possible a values are witnesses [96].

There are even better estimates [417]. For n-bit candidate primes (where n is more than 100), the odds of error in one test are less than 1 in 4n·2^√(k/2). And for a 256-bit n, the odds of error in six tests are less than 1 in 2^51. More theory is in [418].

Practical Considerations

In real-world implementations, prime generation goes quickly.

(1)  Generate a random n-bit number, p.
(2)  Set the high-order and low-order bit to 1. (The high-order bit ensures that the prime is of the required length and the low-order bit ensures that it is odd.)
(3)  Check to make sure p is not divisible by any small primes: 3, 5, 7, 11, and so on. Many implementations test p for divisibility by all primes less than 256. The most efficient is to test for divisibility by all primes less than 2000 [949]. You can do this efficiently using a wheel [863].
(4)  Perform the Rabin-Miller test for some random a. If p passes, generate another random a and go through the test again. Choose a small value of a to make the calculations go quicker. Do five tests [651]. (One might seem like enough, but do five.) If p fails one of the tests, generate another p and try again.

Another option is not to generate a random p each time, but to incrementally search through numbers starting at a random point until you find a prime.

Step (3) is optional, but it is a good idea. Testing a random odd p to make sure it is not divisible by 3, 5, and 7 eliminates 54 percent of the odd numbers before you get to step (4). Testing against all primes less than 100 eliminates 76 percent of the odd numbers; testing against all primes less than 256 eliminates 80 percent. In general, the fraction of odd candidates that is not a multiple of any prime less than n is 1.12/ln n. The larger the n you test up to, the more precomputation is required before you get to the Rabin-Miller test.
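
Putting the pieces together, a sketch of the generation loop in C might look like this, again only for word-sized candidates. The small-prime table is truncated for brevity (the text recommends testing against all primes less than 256, or less than 2000), rand() stands in for a cryptographically strong random source, and rabin_miller_round is the iteration sketched earlier.

#include <stdint.h>
#include <stdlib.h>

int rabin_miller_round(uint64_t p, uint64_t a);   /* as sketched earlier */

/* A short table for step (3); a real implementation would test against
   all primes less than 256, or less than 2000 [949]. */
static const uint32_t small_primes[] = {
    3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67
};

/* Generate a probable n-bit prime (16 <= n <= 63 in this sketch). */
uint64_t generate_prime(int n) {
    for (;;) {
        /* Steps (1) and (2): random n-bit number, high and low bits set. */
        uint64_t p = 0;
        for (int i = 0; i < n; i++)
            p = (p << 1) | (uint64_t)(rand() & 1);
        p |= (1ULL << (n - 1)) | 1ULL;

        /* Step (3): trial division by small primes. */
        int divisible = 0;
        for (size_t i = 0; i < sizeof small_primes / sizeof small_primes[0]; i++)
            if (p % small_primes[i] == 0) { divisible = 1; break; }
        if (divisible) continue;

        /* Step (4): five Rabin-Miller tests with random bases. */
        int passed = 1;
        for (int t = 0; t < 5; t++) {
            uint64_t a = 2 + (uint64_t)rand() % (p - 3);
            if (!rabin_miller_round(p, a)) { passed = 0; break; }
        }
        if (passed) return p;              /* probably prime */
    }
}

The loop structure is the same for multiple-precision candidates; only the arithmetic and the random source change.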

