
1.3 Why elliptic curve cryptography?

There are several criteria that need to be considered when selecting a family of public-key schemes for a specific application. The principal ones are:

1. Functionality. Does the public-key family provide the desired capabilities?

2. Security. What assurances are available that the protocols are secure?

3. Performance. For the desired level of security, do the protocols meet performance objectives?

Other factors that may influence a decision include the existence of best-practice standards developed by accredited standards organizations, the availability of commercial cryptographic products, patent coverage, and the extent of existing deployments.

The RSA, DL and EC families introduced in §1.2 all provide the basic functionality expected of public-key cryptography—encryption, signatures, and key agreement. Over the years, researchers have developed techniques for designing and proving the security of RSA, DL and EC protocols under reasonable assumptions. The fundamental security issue that remains is the hardness of the underlying mathematical problem that is necessary for the security of all protocols in a public-key family—the integer factorization problem for RSA systems, the discrete logarithm problem for DL systems, and the elliptic curve discrete logarithm problem for EC systems. The perceived hardness of these problems directly impacts performance since it dictates the sizes of the domain and key parameters. That in turn affects the performance of the underlying arithmetic operations.

In the remainder of this section, we summarize the state-of-the-art in algorithms for solving the integer factorization, discrete logarithm, and elliptic curve discrete logarithm problems. We then give estimates of parameter sizes providing equivalent levels of security for RSA, DL and EC systems. These comparisons illustrate the appeal of elliptic curve cryptography, especially for applications that have high security requirements.

We begin with an introduction to some relevant concepts from algorithm analysis.

Measuring the efficiency of algorithms

The efficiency of an algorithm is measured by the scarce resources it consumes. Typically the measure used is time, but sometimes other measures such as space and number of processors are also considered. It is reasonable to expect that an algorithm consumes greater resources for larger inputs, and the efficiency of an algorithm is therefore described as a function of the input size. Here, the size is defined to be the number of bits needed to represent the input using a reasonable encoding. For example, an algorithm for factoring an integer n has input size $l = \lfloor \log_2 n \rfloor + 1$ bits.
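
For instance, the integer $n = 1000 = (1111101000)_2$ has bit length $l = \lfloor \log_2 1000 \rfloor + 1 = 10$.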

Expressions for the running time of an algorithm are most useful if they are independent of any particular platform used to implement the algorithm. This is achieved by estimating the number of elementary operations (e.g., bit operations) executed. The (worst-case) running time of an algorithm is an upper bound, expressed as a function of the input size, on the number of elementary steps executed by the algorithm. For example, the method of trial division, which factors an integer n by checking all possible factors up to $\sqrt{n}$, has a running time of approximately $\sqrt{n} \approx 2^{l/2}$ division steps.
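
To make the estimate concrete, here is a minimal sketch of trial division in Python; the function name trial_division and the sample modulus are illustrative choices, not taken from the text.

```python
def trial_division(n):
    """Factor n by testing all candidate divisors up to sqrt(n).

    Performs on the order of sqrt(n) ~ 2^(l/2) trial divisions in the
    worst case, where l is the bit length of n."""
    factors = []
    d = 2
    while d * d <= n:          # only candidates up to sqrt(n) are needed
        while n % d == 0:      # divide out each prime factor completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                  # whatever remains is itself prime
        factors.append(n)
    return factors

# Example: a small modulus is factored almost instantly, but the ~2^(l/2)
# cost makes this approach hopeless for 1024-bit RSA moduli.
print(trial_division(2 * 2147483647))   # [2, 2147483647]
```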

It is often difficult to derive exact expressions for the running time of an algorithm. In these situations, it is convenient to use “big-O” notation. If f and g are two positive real-valued functions defined on the positive integers, then we write f = O(g) when there exist positive constants c and L such that $f(l) \le c\,g(l)$ for all $l \ge L$. Informally, this means that, asymptotically, f(l) grows no faster than g(l) to within a constant multiple. Also useful is the “little-o” notation. We write f = o(g) if for any positive constant c there exists a constant L such that $f(l) \le c\,g(l)$ for $l \ge L$. Informally, this means that f(l) becomes insignificant relative to g(l) for large values of l.
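
As a small worked example: if $f(l) = 3l^2 + 7l + 2$ then $f = O(l^2)$, since $f(l) \le 4l^2$ for all $l \ge 8$; and $l^{10} = o(2^l)$, since for any constant $c > 0$ the inequality $l^{10} \le c \cdot 2^l$ holds once l is sufficiently large.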

The accepted notion of an efficient algorithm is one whose running time is bounded by a polynomial in the input size.

Definition 1.15 Let A be an algorithm whose input has bitlength l.

(i) A is a polynomial-time algorithm if its running time is $O(l^c)$ for some constant c > 0.

(ii) A is an exponential-time algorithm if its running time is not of the form $O(l^c)$ for any c > 0.

(iii) A is a subexponential-time algorithm if its running time is $O(2^{o(l)})$, and A is not a polynomial-time algorithm.

(iv) A is a fully-exponential-time algorithm if its running time is not of the form $O(2^{o(l)})$.

It should be noted that a subexponential-time algorithm is also an exponential-time algorithm and, in particular, is not a polynomial-time algorithm. However, the running time of a subexponential-time algorithm does grow slower than that of a fully-exponential-time algorithm. Subexponential functions commonly arise when analyzing the running times of algorithms for factoring integers and finding discrete logarithms.

Example 1.16 (subexponential-time algorithm) Let A be an algorithm whose input is an integer n or a small set of integers modulo n (so the input size is $O(\log_2 n)$). If the running time of A is of the form

$L_n[\alpha, c] = O\left(e^{(c+o(1))(\log n)^{\alpha}(\log\log n)^{1-\alpha}}\right)$

where c is a positive constant and α is a constant satisfying 0 < α < 1, then A is a subexponential-time algorithm. Observe that if α = 0 then $L_n[0, c]$ is a polynomial expression in $\log_2 n$ (so A is a polynomial-time algorithm), while if α = 1 then $L_n[1, c]$ is a fully-exponential expression in $\log_2 n$ (so A is a fully-exponential-time algorithm). Thus the parameter α is a good benchmark of how close a subexponential-time algorithm is to being efficient (polynomial-time) or inefficient (fully-exponential-time).
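
To get a feel for how slowly $L_n[\frac{1}{3}, c]$ grows compared with a fully-exponential function (α = 1), the expression can be evaluated numerically while ignoring the o(1) term. The following Python sketch does this for a few illustrative bit lengths; the helper name L and the chosen sizes are assumptions made here for illustration.

```python
import math

def L(n_bits, alpha, c):
    """Approximate log2 of L_n[alpha, c] for an n_bits-bit integer n,
    ignoring the o(1) term:
        L_n[alpha, c] = exp(c * (ln n)^alpha * (ln ln n)^(1 - alpha))."""
    ln_n = n_bits * math.log(2)                 # ln n
    exponent = c * ln_n ** alpha * math.log(ln_n) ** (1 - alpha)
    return exponent / math.log(2)               # express the cost as a power of 2

for bits in (512, 1024, 2048, 3072):
    nfs = L(bits, 1 / 3, 1.923)   # subexponential, alpha = 1/3 (NFS-style estimate)
    full = L(bits, 1.0, 1.0)      # alpha = 1: fully exponential in log n
    print(f"{bits}-bit n: L_n[1/3, 1.923] ~ 2^{nfs:.0f},  L_n[1, 1] ~ 2^{full:.0f}")
```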


Solving integer factorization and discrete logarithm problems

We briefly survey the state-of-the-art in algorithms for the integer factorization, discrete logarithm, and elliptic curve discrete logarithm problems.

Algorithms for the integer factorization problem Recall that an instance of the integer factorization problem is an integer n that is the product of two l/2-bit primes; the input size is O(l) bits. The fastest algorithm known for factoring such n is the Number Field Sieve (NFS) which has a subexponential expected running time of

$L_n[\frac{1}{3}, 1.923]$.   (1.5)

The NFS has two stages: a sieving stage where certain relations are collected, and a matrix stage where a large sparse system of linear equations is solved. The sieving stage is easy to parallelize, and can be executed on a collection of workstations on the Internet. However, in order for the sieving to be efficient, each workstation should have a large amount of main memory. The matrix stage is not so easy to parallelize, since the individual processors frequently need to communicate with one another. This stage is more effectively executed on a single massively parallel machine than on a loosely coupled network of workstations.

As of 2003, the largest RSA modulus factored with the NFS was a 530-bit (160-decimal digit) number.

Algorithms for the discrete logarithm problem Recall that the discrete logarithm problem has parameters p and q where p is an l-bit prime and q is a t-bit prime divisor of p − 1; the input size is O(l) bits. The fastest algorithms known for solving the discrete logarithm problem are the Number Field Sieve (NFS) which has a subexponential expected running time of

$L_p[\frac{1}{3}, 1.923]$,   (1.6)

and Pollard’s rho algorithm which has an expected running time of

$\sqrt{\pi q}/2$.   (1.7)

The comments made above for the NFS for integer factorization also apply to the NFS for computing discrete logarithms. Pollard’s rho algorithm can be easily parallelized so that the individual processors do not have to communicate with each other and only occasionally communicate with a central processor. In addition, the algorithm has only very small storage and main memory requirements.
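
As an illustration of the kind of computation involved, here is a minimal single-processor sketch of Pollard's rho for the DLP in a prime-order subgroup of $\mathbb{Z}_p^*$, using Floyd cycle-finding and a simple three-way partition of the group; the toy parameters and the function name pollard_rho_dlp are illustrative assumptions, and a serious attack would use the parallelized distinguished-point variant described above.

```python
def pollard_rho_dlp(g, h, p, q):
    """Solve g^x = h (mod p) for x, where g has prime order q in Z_p^*.

    Three-way partition with Floyd cycle-finding; the expected cost is
    on the order of sqrt(q) multiplications modulo p."""
    def step(y, a, b):
        # Pseudo-random walk preserving the invariant y = g^a * h^b (mod p).
        if y % 3 == 0:
            return (y * y) % p, (2 * a) % q, (2 * b) % q
        elif y % 3 == 1:
            return (y * g) % p, (a + 1) % q, b
        else:
            return (y * h) % p, a, (b + 1) % q

    y, a, b = 1, 0, 0            # tortoise
    Y, A, B = 1, 0, 0            # hare (moves twice as fast)
    while True:
        y, a, b = step(y, a, b)
        Y, A, B = step(*step(Y, A, B))
        if y == Y:               # collision: g^a * h^b = g^A * h^B (mod p)
            if b == B:           # degenerate collision; a retry from a
                return None      # different starting point would be needed
            # x*(b - B) = A - a (mod q), so recover x by modular inversion.
            return (A - a) * pow(b - B, -1, q) % q

# Toy example: p = 2*q + 1 with q = 1019 prime; any quadratic residue
# other than 1, such as g = 4, has order exactly q.
p, q, g = 2039, 1019, 4
h = pow(g, 123, p)
print(pollard_rho_dlp(g, h, p, q))   # 123 (or None in the rare degenerate case)
```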

The method of choice for solving a given instance of the DLP depends on the sizes of the parameters p and q, which in turn determine which of the expressions (1.6) and (1.7) represents the smaller computational effort. In practice, DL parameters are selected so that the expected running times in expressions (1.6) and (1.7) are roughly equal.
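
For example, pairing a 1024-bit prime p with a 160-bit prime q makes the two estimates comparable, both corresponding to an effort on the order of $2^{80}$ operations; this is the traditional DSA-style parameter choice.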

As of 2003, the largest instance of the DLP solved with the NFS is for a 397-bit (120-decimal digit) prime p.

Algorithms for the elliptic curve discrete logarithm problem Recall that the ECDLP asks for the integer $d \in [1, n-1]$ such that $Q = dP$, where n is a t-bit prime, P is a point of order n on an elliptic curve defined over a finite field $\mathbb{F}_p$, and $Q \in \langle P \rangle$. If we assume that $n \approx p$, as is usually the case in practice, then the input size is O(t) bits. The fastest algorithm known for solving the ECDLP is Pollard’s rho algorithm (cf. §4.1) which has an expected running time of

$\sqrt{\pi n}/2$.   (1.8)

The comments above concerning Pollard’s rho algorithm for solving the ordinary discrete logarithm problem also apply to solving the ECDLP.

As of 2003, the largest ECDLP instance solved with Pollard’s rho algorithm is for an elliptic curve over a 109-bit prime field.

Key size comparisons

Estimates are given for parameter sizes providing comparable levels of security for RSA, DL, and EC systems, under the assumption that the algorithms mentioned above are indeed the best ones that exist for the integer factorization, discrete logarithm, and elliptic curve discrete logarithm problems. Thus, we do not account for fundamental breakthroughs in the future such as the discovery of significantly faster algorithms or the building of a large-scale quantum computer.3

If time is the only measure used for the efficiency of an algorithm, then the parameter sizes providing equivalent security levels for RSA, DL and EC systems can be derived using the running times in expressions (1.5), (1.6), (1.7) and (1.8). The parameter sizes, also called key sizes, that provide equivalent security levels for RSA, DL and EC systems as an 80-, 112-, 128-, 192-, and 256-bit symmetric-key encryption scheme are listed in Table 1.1. By a security level of k bits we mean that the best algorithm known for breaking the system takes approximately $2^k$ steps. These five specific security levels were selected because they represent the amount of work required to perform an exhaustive key search on the symmetric-key encryption schemes SKIPJACK, Triple-DES, AES-Small, AES-Medium, and AES-Large, respectively.
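
The flavour of such a comparison can be reproduced with a short calculation: estimate the NFS cost $L_n[\frac{1}{3}, 1.923]$ (ignoring the o(1) term) for a given modulus size, and the Pollard rho cost $\sqrt{\pi n}/2$ for a given group order. The Python sketch below does this for modulus and EC key sizes commonly quoted for these five security levels; those sizes, and the helper names nfs_cost_bits and rho_cost_bits, are assumptions made here for illustration, and the crude estimates do not reproduce Table 1.1 exactly because the o(1) term and other practical factors are ignored.

```python
import math

def nfs_cost_bits(modulus_bits):
    """Rough log2 of the NFS running time L_n[1/3, 1.923] for a
    modulus_bits-bit integer n (the o(1) term is ignored)."""
    ln_n = modulus_bits * math.log(2)
    return 1.923 * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3) / math.log(2)

def rho_cost_bits(order_bits):
    """log2 of sqrt(pi * n) / 2 for a group order n of the given bit length."""
    return 0.5 * (order_bits + math.log2(math.pi)) - 1

# RSA/DL modulus sizes and EC key sizes commonly quoted for the five
# security levels (assumed here for illustration; compare with Table 1.1).
levels = [(80, 1024, 160), (112, 2048, 224), (128, 3072, 256),
          (192, 7680, 384), (256, 15360, 512)]

for k, rsa_bits, ec_bits in levels:
    print(f"{k:3d}-bit level: NFS on {rsa_bits}-bit n ~ 2^{nfs_cost_bits(rsa_bits):.0f}, "
          f"rho on {ec_bits}-bit order ~ 2^{rho_cost_bits(ec_bits):.0f}")
```

The point of the exercise is the difference in growth rates: the EC key size scales roughly linearly with the security level, while the RSA/DL modulus size grows much faster.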

The key size comparisons in Table 1.1 are somewhat unsatisfactory in that they are based only on the time required for the NFS and Pollard’s rho algorithms. In particular, the NFS has several limiting factors including the amount of memory required for

3 Efficient algorithms are known for solving the integer factorization, discrete logarithm, and elliptic curve discrete logarithm problems on quantum computers (see the notes on page 196). However, it is still unknown whether large-scale quantum computers can actually be built.
