Crypto '99

Santa Barbara, CA
August 15–19, 1999

Conference report by
Richard F. Graveman
Telcordia Technologies
Morristown, New Jersey, USA

Crypto '99 is one of two annual conferences sponsored by the International Association for Cryptologic Research. (In 2000, there will be three.) There were over 500 attendees from all over the world. Thirty-eight of 167 submitted papers were accepted, and there were two invited talks as well as the traditional rump session. The agenda and additional information, in particular pointers to more details on some of the rump session talks, are available from the IACR Web site, http://www.iacr.org. Don Beaver was General Chair and Michael Wiener was Program Chair. The proceedings are available as Advances in Cryptology—Crypto '99, Michael Wiener, ed., Springer-Verlag LNCS #1666, 1999.

Session 1: Public Key Cryptanalysis I, Chair: Bart Preneel

On the Security of RSA Padding, Jean-Sébastien Coron, David Naccache, Julien P. Stern

This paper presented a new forgery scenario for RSA signatures. Previous attacks illustrated the importance of a padding function μ applied to a message m before signing. Without padding, existential forgery is easy because RSA exponentiation is multiplicative: (m1·m2)^d = m1^d·m2^d mod n. Davida, De Jonge and Chaum, and Bleichenbacher previously exploited this property in their attacks. On the other hand, several schemes with proofs of security against such forgeries date back to Goldwasser, Micali, and Rivest in the mid-1980s.
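
A toy illustration of this multiplicative property (a minimal sketch with hypothetical tiny parameters, not the paper's attack):

```python
# "Textbook" RSA with no padding: the signature of m1*m2 mod n can be
# forged from the signatures of m1 and m2 alone.
# (pow(e, -1, phi) needs Python 3.8+.)
p, q, e = 1009, 1013, 65537           # toy parameters, illustration only
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))     # private exponent

def sign(m):                          # unpadded RSA signature
    return pow(m, d, n)

m1, m2 = 12345, 6789
forged = (sign(m1) * sign(m2)) % n    # the signer never saw m1*m2
assert forged == sign((m1 * m2) % n)  # yet the forgery verifies
```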

This new work started by considering the padding function as a black box. Signatures consist of pairs of the form (r, r^d). If a new μ(m) can be expressed as a product of previously seen r's, its signature can be forged. The approach is to factor μ(m) and most of the r's and to find triples (m, a, b) such that a·μ(m) − b·n is smooth. Similarly to the way factoring works with the MPQS or NFS, one tries to find enough relations over a factor base to be able to factor a new μ(m).

The authors examined ISO-9796-1 and came within one bit of a successful attack for certain types of messages, and more recently Coppersmith, Halevi, and Jutla completed the attack. For ISO-9796-2 and a modified version of Ecash, this attack is better than all previously known ones. For PKCS #1, version 2.0, this attack is not feasible.

Cryptanalysis of the HFE Public Key Cryptosystem by Relinearization, Aviad Kipnis, Adi Shamir

RSA is based on a single univariate equation X^e = C mod n. This has the advantage of being based on factoring and the disadvantage of being slow for large n. Multivariate schemes, on the other hand, are based on an NP-complete problem and are fast for small fields, but many such systems have been broken.

HFE, presented by Patarin at Eurocrypt '96, uses a small field F with q elements and an extension field K/F of degree n. Many multivariate equations over F correspond to a single univariate equation over K. The trapdoor is the method of hiding the univariate polynomial in the multivariate ones.

For |F| = 2 and |K| = 2^128, the univariate polynomial P is represented as a system of multivariate polynomials over F. One can write G(x) = T(P(S(x))), where G looks random but can be determined, because S and T can be represented as univariate polynomials. Then T^(-1)(G(x)) = P(S(x)), and solving for T yields a system of quadratic equations. To solve it, they developed a new relinearization method that yields a parametric linear expression after a change of variables; a second stage of the process then yields a unique solution.
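
A minimal sketch of the linearization/relinearization idea on a toy quadratic system over the rationals (the attack itself works over finite fields at a much larger scale; all numbers here are illustrative):

```python
import sympy as sp

# Toy quadratic system in x1, x2 (true solution: x1 = 2, x2 = 1):
#   x1^2 + x1*x2 = 6
#   x2^2 - x1*x2 = -1
# Linearization: rename x1^2 -> y11, x1*x2 -> y12, x2^2 -> y22,
# which makes the system linear in the y variables.
y11, y12, y22 = sp.symbols('y11 y12 y22')
sols = sp.solve([y11 + y12 - 6, y22 - y12 + 1], [y11, y22], dict=True)
param = sols[0]                # y11 and y22 parametrized by the free y12

# Relinearization: the renamed variables are not independent;
# y11*y22 == y12**2, since both equal (x1*x2)^2.  Adding such
# relations pins down the parameter.
print(sp.solve(sp.Eq(param[y11] * param[y22], y12**2), y12))  # [3/2, 2]
# y12 = 2 gives y11 = 4, y22 = 1, i.e., x1 = 2, x2 = 1.
```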

Relinearization proved to be a new, powerful technique that may have additional applications beyond attacking HFE for a given degree. It should be pointed out, though, that Patarin's published challenges are of sufficiently high degree to make this particular attack computationally infeasible.

The Hardness of the Hidden Subset Sum Problem and its Cryptographic Implications, Phong Nguyen, Jacques Stern

The subset sum problem, first used in cryptography by Merkle and Hellman, is NP-complete. Lattice techniques can solve low-density subset sums, however. Against the hidden subset sum problem, which additionally hides the weights, exhaustive search is the only known attack. It works by precomputing pairs of the form (k, g^k mod p).

They gave a lattice-based attack for the low-density case and a security proof for higher densities. Their process used an incomplete lattice in Z^m, a transformation, and lattice basis reduction to recover the hidden coefficients, after which the weights can be calculated. This attack works when the density is less than n/log2(M), and it was verified experimentally. But when the density is greater, the output of the generator becomes statistically random in a provable sense. This gives a precise formulation of the security level of the system.
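
The generator under attack can be sketched in a few lines; a hypothetical toy instance showing the density calculation:

```python
# Hidden subset sum generator: fixed hidden weights a_1..a_n; each
# output is the sum of a random hidden subset of them mod M.
import math, secrets

n, M = 16, 2**64
weights = [secrets.randbelow(M) for _ in range(n)]        # hidden

def output():
    bits = [secrets.randbelow(2) for _ in range(n)]       # hidden bits
    return sum(b * a for b, a in zip(bits, weights)) % M

print("density:", n / math.log2(M))   # 0.25 < 1: low-density regime
print([output() for _ in range(4)])
```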

Session 2: Invited Lecture, Information-Theoretic Cryptography, Ueli Maurer

Information theory underlies many different areas of cryptography. It is based on probability theory and allows one to argue about security. An event on a discrete probability space X is a subset of the points. A random variable is a partition of the space. The entropy of a random variable is 0 if one outcome has probability 1, is upper bounded by log2|X|, and is maximal when the distribution is close to uniform. Cryptography makes extensive use of conditional entropy and mutual information, whereby knowledge about one random variable can decrease the uncertainty about another if the events are correlated.

A cipher achieves perfect secrecy if the ciphertext gives no information about the plaintext. Shannon showed that this implies that the entropy of the key must be as great as the entropy of the message. Based on these simple ideas, Maurer gave a simple proof that the one-time pad (OTP) is a perfect cipher.
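
A brute-force numeric check of this fact for 2-bit messages (a minimal sketch, not Maurer's proof):

```python
# One-time pad on 2-bit messages: whatever the message distribution,
# a uniform key makes the ciphertext distribution exactly uniform, so
# the ciphertext reveals nothing about the plaintext.
from collections import Counter
from fractions import Fraction

msg_dist = {0b00: Fraction(1, 2), 0b01: Fraction(1, 4),
            0b10: Fraction(1, 8), 0b11: Fraction(1, 8)}

cipher_dist = Counter()
for m, pm in msg_dist.items():
    for k in range(4):                         # uniform 2-bit key
        cipher_dist[m ^ k] += pm * Fraction(1, 4)

print(dict(cipher_dist))                       # every value is 1/4
```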

A lot of cryptographic research aims to reduce the assumptions one needs: about computational intractability, an adversary's computing power, the honesty of the entities, and physical properties like noise and randomness. Cryptography without computational assumptions avoids the problems of not knowing the right model of computation and not having reasonable lower bounds.

For information theoretic security, key agreement must be based on some correlated information, so public key (PK) systems cannot be information theoretically secure. By using privacy amplification, however, the correlated information and key agreement can be based on a noisy broadcast channel.

Information theoretic cryptographic primitives are n-party devices that take inputs from all the parties and return an output to each based on some probability function of the inputs. Secure multi-party computation is a primitive that simulates a trusted party. A 2-party example is oblivious transfer (OT). Bit commitment (BC) is another example with retained state. Reductions among different primitives are important, e.g., reducing the binary symmetric channel to OT or to BC. Reductions should be useful, natural, efficient, demonstrable without additional assumptions, and also "beautiful." Reducing the noisy random source with eavesdropper to key agreement is a 3-party example. Wyner's wiretap channel and secure broadcast are further n-party examples.

Conditional independence is a concept that allows one to simplify a large number of results. Events are statistically independent if their probabilities are multiplicative. Conditional independence of A and B given C means that A and B are statistically independent under the distribution conditioned on C. Conditional independence holds for random variables if it holds for all events over the random variables. He gave a set of rules for calculating with conditional independence and applied them to security proofs using a pseudo-random function (PRF). Here, a real system is replaced by an idealized model, which is in turn proved to be a perfect system. The proof is based on indistinguishability. Adaptive and non-adaptive strategies can be handled by proving that a non-adaptive strategy is optimal. He illustrated this process for CBC MACs by showing that the success probability of the adversary is at most the probability of random guessing.

He stated in conclusion that computation and communication are physical processes that can be exploited by those building cryptosystems, information theoretic primitives can cover many useful cases, and conditional independence is a powerful information theoretic proof technique.

Session 3: Secure Communication and Computation, Chair: Moti Yung

Information Theoretically Secure Communication in the Limited Storage Space Model, Yonatan Aumann, Michael O. Rabin

The setting is a sender and receiver with a shared secret key and a passive eavesdropper. The key is, however, much smaller than the message. In classical cryptography, the adversary is assumed to be computationally bounded. In this case, the eavesdropper is space-bounded, but the bound is much larger than, say, logspace. The solutions should be simple, efficient, and have information theoretic security.

The protocol uses a large random string available to all players. The shared secret is used to obtain indices into the shared random string to produce a pseudo-OTP. Maurer did similar work in 1992 for the case where the eavesdropper can only access C bits. Cachin and Maurer extended this in 1997. The new result here is that for any strategy by the eavesdropper, the eavesdropper's advantage is exponentially small in terms of a security parameter k, when the eavesdropper is limited to storage space n = 5C. The proof starts by defining a game over one-bit messages.
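
A toy sketch of the pseudo-OTP idea (sizes and names are hypothetical, and this ignores the space-bounded security analysis that is the paper's actual contribution):

```python
# A short shared key selects positions in a long public random string;
# the selected bits form a pad that is XORed onto the message.
import random, secrets

PUBLIC_RANDOM = [secrets.randbelow(2) for _ in range(10**6)]  # broadcast

def pad_bits(key, length):
    rng = random.Random(key)              # shared secret drives selection
    return [PUBLIC_RANDOM[rng.randrange(len(PUBLIC_RANDOM))]
            for _ in range(length)]

def crypt(key, bits):                     # XOR: encrypts and decrypts
    return [b ^ p for b, p in zip(bits, pad_bits(key, len(bits)))]

key, msg = 0xC0FFEE, [1, 0, 1, 1, 0, 0, 1, 0]
assert crypt(key, crypt(key, msg)) == msg
```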

The All-Or-Nothing Nature of Two-Party Secure Computation, Amos Beimel, Tal Malkin, Silvio Micali

Secure computation must be private and correct in spite of malicious parties. The goal is to model the behavior of protocols with a trusted party. Trivial functions can be computed securely without cryptography (projection, constant, XOR), but AND, OT, and MAX are non-trivial with respect to unbounded adversaries. With cryptographic assumptions like factoring or trapdoor functions, non-trivial functions, indeed any function, can be computed securely. Kilian showed that OT is complete for secure computation with respect to factoring. Others have worked with models such as an "honest but curious" as opposed to a malicious adversary.

Their goal was to find non-trivial functions requiring weaker assumptions than OT. But they actually showed that there are no weaker assumptions and there is no hierarchy. Thus, AND and MAX are also complete, as are all non-trivial functions. Impagliazzo and Luby showed that one-way functions (OWFs) are necessary for OT, but they are likely not to be sufficient. This work generalized their result to all non-trivial functions. Finally, they gave a combinatorial characterization of the trivial functions based on truth tables that do not contain an "insecure" minor.

Session 4: Distributed Cryptography, Chair: Kazuo Ohta

Adaptive Security for Threshold Cryptosystems, Ran Canetti, Rosario Gennaro, Stanislaw Jarecki, Hugo Krawczyk, Tal Rabin

Their result is efficient threshold DSS and RSA secure against any adaptive adversary. Threshold cryptography is useful, e.g., for signature schemes, because it eliminates a single point of failure, and malicious faults that remain below a certain threshold cannot corrupt the system. With non-adaptive faults, the adversary has to decide whom to corrupt before the protocol starts. Adaptive adversaries can observe the protocol and coordinate the attack as the protocol runs.

Efficient threshold schemes secure against non-adaptive adversaries were already known for DSS and RSA. Non-adaptiveness is, in the authors' view, a somewhat artificial assumption. Proving security against adaptive adversaries, however, is harder.

He described their solution for DSS by reducing regular DSS security to threshold DSS security. That is, if an adaptive adversary can break the threshold system, forgery is possible in the basic scheme. The construction uses a simulator that calls a DSS oracle to produce a view of the threshold scheme. The simulator must be prepared to handle the adversary's corruption of any single player. The zero knowledge (ZK) proof that is needed is not efficient, so they overcame this problem by constructing a system with erasures.

Two Party RSA Key Generation, Niv Gilboa

The problem is for Alice and Bob to generate two shares d_a and d_b of the private key such that d = d_a + d_b. Boneh and Franklin showed how to do this for three or more parties. Cocks gave heuristic 2-party solutions. Also, J. P. Stern gave a 2-party solution based on OT. This solution is more efficient.

The protocol works by selecting candidates P_a, P_b, Q_a, and Q_b, computing N = (P_a + P_b)(Q_a + Q_b), determining whether N is the product of exactly two primes, and sharing the private exponent.

Three ways to compute N are given, based on OT, oblivious polynomial evaluation, and homomorphic encryption. The technique he showed produces additive shares x and y with x + y = ab in a ring R, without revealing the factors a and b.

It is a useful optimization to do an efficient, initial trial division before the full primality test.
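
A toy outline of the candidate-testing flow with the trial-division prefilter (no privacy here: in the real protocol N is computed from the shares obliviously, and the biprimality test is distributed):

```python
import secrets
from sympy import isprime, primerange

SMALL_PRIMES = list(primerange(2, 1000))

def candidate_share(bits=128):
    return secrets.randbits(bits)

while True:
    Pa, Pb = candidate_share(), candidate_share()
    Qa, Qb = candidate_share(), candidate_share()
    P, Q = Pa + Pb, Qa + Qb
    # cheap trial-division prefilter before any expensive primality test
    if any(P % r == 0 or Q % r == 0 for r in SMALL_PRIMES):
        continue
    if isprime(P) and isprime(Q):      # stand-in for the distributed test
        N = P * Q
        break
print(N.bit_length(), "bit modulus")
```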

Robust Distributed Multiplication without Interaction, Masayuki Abe

Previous work, in both the information theoretic and computational settings, used interaction. The idea here is to design a protocol where each party transmits only once, so two-way interaction is not needed. As long as no disruption occurs, secure channels with broadcast are all that is required. The cost is more local computation and communication. He gave an example by examining the Cramer-Shoup protocol.

The players receive shares of A and B and want to redistribute shares of the product C = AB. The problem of proving that this was done correctly was originally solved by Chaum, who used a four-pass protocol. The main technique presented in this work was to use non-interactive verifiable secret sharing (VSS) according to Pedersen to replace Chaum's ZK proof. This new solution has optimal resiliency (t < n/2). Its soundness depends on the discrete log (DL) problem.
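
Pedersen's commitment, the building block referred to above, is simple enough to sketch (toy parameters; a real instantiation works in a prime-order subgroup with properly generated g and h):

```python
import secrets

p = 2**127 - 1          # toy prime modulus, illustration only
g, h = 3, 7             # assumes log_g(h) is unknown to the committer

def commit(s, r):
    # Pedersen commitment: hiding (r masks s) and binding (opening a
    # commitment two ways would reveal log_g(h)).
    return (pow(g, s, p) * pow(h, r, p)) % p

s = 42                              # the share being committed to
r = secrets.randbelow(p - 1)        # blinding randomness
C = commit(s, r)
assert C == commit(s, r)            # the opening (s, r) verifies C
```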

A Simple Publicly Verifiable Secret Sharing Scheme and its Application to Electronic Voting, Berry Schoenmakers

He presented a simple PVSS scheme with optimal performance under standard intractability assumptions, followed by a sample application.

In secret sharing, a dealer gives the shares to the participants, and later, some of the participants may try to reconstruct the secret. Shamir showed how to do perfect threshold secret sharing based on polynomial interpolation. If, however, some participant uses the wrong share, the wrong secret will be reconstructed. McEliece and Sarwate addressed this problem first with error-correcting codes, later others considered cheating dealers, and then Feldman and Pedersen showed how to do VSS. Finally, Stadler gave a PVSS scheme.
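
Shamir's scheme itself fits in a few lines; a compact sketch over GF(p) with illustrative parameters:

```python
import secrets

P = 2**61 - 1                                    # field prime

def share(secret, t, n):
    # hide the secret in the constant term of a random degree-(t-1) poly
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(points):                 # Lagrange interpolation at 0
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789      # any 3 shares suffice
```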

Public verifiability uses a public broadcast channel. He gave a non-interactive solution and proved computational security in the random oracle model based on the Decision Diffie-Hellman (DDH) assumption. Each participant registers a public key with the dealer, who publishes the encrypted shares together with public non-interactive ZK proofs that all of the shares lie on a polynomial of appropriate degree. Each share carries its own validity proof. At reconstruction, the share plus the proof are released.

Other schemes produce a public key as a by-product, but at the cost of using a double-discrete-log assumption. He concluded by giving an application to electronic voting.

Session 5: Secret-Key Cryptography, Chair: Andrew Klapper

Truncated Differentials and Skipjack, Lars R. Knudsen, M.J.B. Robshaw, David Wagner

Skipjack is a recently declassified 64-bit block cipher with an 80-bit key and 32 rounds. There are eight A rounds, eight B rounds, eight A rounds, and finally eight more B rounds. It has an elegant design. Each round computes a non-linear 16-bit function and XORs the result with another 16 bits of the text. This work applies truncated differentials to the analysis of Skipjack. There are 24-round truncated differentials with probability 1, which can be extended to attacks on up to 28 rounds. The boomerang attack (FSE '99) can be applied as well.

They showed how differences in each of the four 16-bit words in the 64-bit block propagate through 16 rounds. Using a truncated differential of probability 2^-32, they were able to break 16-round Skipjack with 2^17 plaintexts and 2^34 to 2^49 steps. They exploited the interaction at the boundary between the A and B rounds and the bad forward diffusion in the B rounds.

Next, they looked at the middle 16 rounds. After 12 of these rounds, both the first and second 16-bit words have the same differences. So if the two input texts only differ in the second words, they can break the middle 16 rounds with two plaintexts and 2^47 steps.

Finally, they attacked the last 28 rounds using plaintexts that agree in the third 16-bit word. The attack takes 2^41 chosen plaintexts. Extending this attack to 32 rounds, however, looks difficult. They noted that Biham had previously shown an attack on 31-round Skipjack using impossible differentials.

Fast Correlation Attacks based on Turbo Code Techniques, Thomas Johansson, Fredrik Jönsson

This talk was about keystream generators that use one or more linear feedback shift registers (LFSRs), for instance with a non-linear combiner. The secret key is the initial state. Correlation attacks are possible if the output of the entire generator is not independent of one of the LFSRs.

Because general linear codes are hard to decode, they looked for linear codes with embedded convolutional codes. The turbo code attack works by precomputing M such convolutional codes having the same information symbols and then applying each of the M decoders. They also showed that it is possible to decode in parallel. They simulated the performance and showed how a larger M results in better performance.

Highly Nonlinear Resilient Functions Optimizing Siegenthaler's Inequality, Subhamoy Maitra, Palash Sarkar

The motivation is to construct boolean functions, which can be represented by truth tables or algebraic normal forms, with certain properties including high nonlinearity. He defined the degree as the number of variables in the highest order product term and the nonlinearity as the minimum distance from the set of affine functions.
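
These quantities are easy to compute for small functions; a short sketch using the Walsh-Hadamard transform (the example truth table is arbitrary):

```python
def walsh_spectrum(tt):
    w = [1 - 2 * b for b in tt]                  # 0/1 -> +1/-1
    h = 1
    while h < len(w):                            # fast Hadamard transform
        for i in range(0, len(w), 2 * h):
            for j in range(i, i + h):
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    return w

n = 3
f = [0, 1, 0, 0, 1, 0, 1, 1]                # truth table of f(x1,x2,x3)
# nonlinearity = 2^(n-1) - (max |Walsh coefficient|) / 2
nl = 2**(n - 1) - max(abs(v) for v in walsh_spectrum(f)) // 2
print("nonlinearity =", nl)                 # 2 for this example
```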

Their goal was to construct balanced functions with maximum order correlation immunity and algebraic degree. Bent functions, for example, are unsuitable because they are not balanced. However, they used a recursive construction that starts with a bent function to get maximum degree with minimum loss of nonlinearity. In the next step they constructed a balanced function with correlation immune order one. Finally, they showed how to increase this order.

They argued that this construction is more direct than those of Camion et al. from Crypto '91 or Seberry et al. from Eurocrypt '93. They compared the nonlinearity of the recursive and direct constructions and showed that their results have higher nonlinearity than the previous work.

Session 6: Message Authentication Codes, Chair: Xuejia Lai

UMAC: Fast and Secure Message Authentication, John Black, Shai Halevi, Hugo Krawczyk, Ted Krovetz, Phillip Rogaway

A MAC is used as a signature to verify data origin and integrity, but in the private key setting. It consists of generation and verification algorithms, both of which take a message and key as inputs. Security is defined in terms of existential forgery (Bellare, Kilian, and Rogaway). Two popular constructions are CBC-MAC and HMAC. Carter and Wegman constructed a MAC from a universal hash function followed by a cryptographic operation.

A universal hash family is a family of hash functions. This work considered ε-almost-universal hash functions. Several fast hash families are known, e.g., MMH and NMH by Halevi and Krawczyk, which are based on multiplication.

This paper introduced UMAC, which is even faster. It is a fully specified MAC related to NMH. Their work consisted of theory, specification, and code. The key is used to select a hash function from the family. Then parts of the key are added to the message in 32-bit increments, and pairs are multiplied to get 64-bit values, which are in turn added, and so forth. Carries are discarded. The construction had to avoid several problems: high collision probability, need for long keys, parallelism, length constraints, and long hash outputs. They used a Toeplitz construction and several other tricks. Attention was given to efficient implementation and appropriate security level. The performance is roughly five to ten times as fast as HMAC.
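
The inner-product step can be sketched as follows (a toy NMH-style hash in the spirit described above, not the UMAC specification):

```python
import secrets

MASK32, MASK64 = (1 << 32) - 1, (1 << 64) - 1

def nmh_hash(key_words, msg_words):
    # add key words to message words mod 2^32, multiply pairs into
    # 64-bit values, and sum discarding carries (mod 2^64)
    acc = 0
    for i in range(0, len(msg_words), 2):
        a = (msg_words[i] + key_words[i]) & MASK32
        b = (msg_words[i + 1] + key_words[i + 1]) & MASK32
        acc = (acc + a * b) & MASK64
    return acc

msg = [secrets.randbits(32) for _ in range(8)]
key = [secrets.randbits(32) for _ in range(8)]
print(hex(nmh_hash(key, msg)))   # a full MAC then encrypts this output
```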

Square Hash: Fast Message Authentication via Optimized Universal Hash Functions, Mark Etzel, Sarvar Patel, Zulfikar Ramzan

This work also used the Wegman-Carter design and existential forgery paradigm to produce a fast MAC. They defined an unconditionally secure MAC, which was motivated by information theoretic security concepts. They applied ε-almost-Δ-universal hash functions over an additive group, whereby Pr[h(x) − h(y) = δ] < ε for all x ≠ y and all δ.

They started with the MMH family, which breaks the message into l-bit blocks, takes inner products, and applies modular reduction. Their scheme, instead, takes the square of the sum rather than the product at each step. By discarding carry bits, they got a more efficient construction with a measurable reduction in security. They also measured performance in comparison with HMAC and MMH and got a 27% to 33% speedup over MMH and about two to one over HMAC.
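
The core step can be sketched as follows (illustrative parameters; the paper's carry-discarding optimizations are omitted):

```python
import secrets

P = 2**61 - 1                         # toy prime modulus

def square_hash(key_words, msg_words):
    # per block: add the key word and square, instead of MMH's multiply
    return sum(pow((m + k) % P, 2, P)
               for m, k in zip(msg_words, key_words)) % P

msg = [secrets.randbelow(P) for _ in range(8)]
key = [secrets.randbelow(P) for _ in range(8)]
print(square_hash(key, msg))
```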

Constructing VIL-MACs from FIL-MACs: Message Authentication under Weakened Assumptions, Jee Hea An, Mihir Bellare

The model of a MAC and its security is as above. Previous MACs were constructed using block ciphers, HMAC, and universal hash functions. This new work, called NI-MAC, starts by considering FIL-MACs, i.e., "fixed-input-length" MACs. Block ciphers and compression functions operate as pseudorandom functions on fixed length inputs, whereas CBC, HMAC, and UMAC can take variable length inputs. The question is whether secure and practical MACs can be designed, assuming only that the compression function itself is a MAC, not necessarily a pseudorandom function. That is, does the unforgeability of the FIL-MAC imply the unforgeability of the VIL-MAC? Three popular transforms, CBC, Feistel, and a variant of Merkle-Damgård, were examined, and only for the last does the proof go through that unforgeability of the FIL-MAC implies the same for the VIL-MAC. Note that unforgeability is very different from collision resistance, because forgery of any message must be prevented. The resulting NI[ksha1] MAC is slightly slower than HMAC-SHA-1, but it has demonstrable security based on a weaker assumption.

Stateless Evaluation of Pseudorandom Functions: Security beyond the Birthday Barrier, Mihir Bellare, Oded Goldreich, Hugo Krawczyk

Pseudorandom functions (PRFs) are a keyed family from which a given key selects one member, and the selected function is indistinguishable from a truly random function by an attacker who does not know the key. That is, the attacker cannot predict a single bit of f(b) for an unseen input b.

PRFs can be used to build PR generators, encryption (block ciphers associated with PR permutations), message authentication, and challenge-response authentication. The two important properties of PRFs are consistency and independence. Users have to be careful not to use the same argument twice, so a counter can be made part of the input. Otherwise, security degrades quadratically (the so-called birthday paradox). Counters, however, require maintaining state, which may be inconvenient. One alternative approach is to double the input length, e.g., from 64 to 128 bits. Their approach, called the parity method, was to take an m-bit to m-bit PRF, apply it at several points, and XOR. For a 64-bit function, the chance of a birthday collision is reduced from 2^-32 to 2^-58 or 2^-125 when the function is applied twice or three times, respectively.
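
A small sketch of the parity idea (HMAC-SHA-256 truncated to 64 bits stands in for the m-bit PRF; this is an illustrative assumption, not the paper's construction):

```python
import hmac, hashlib, secrets

def prf(key, point):                      # 64-bit-output PRF stand-in
    return hmac.new(key, point, hashlib.sha256).digest()[:8]

def parity_eval(key, t=2):
    # evaluate the PRF at t freshly chosen random points and XOR them;
    # the points travel with the output in a real protocol
    points = [secrets.token_bytes(8) for _ in range(t)]
    out = bytes(8)
    for pt in points:
        out = bytes(a ^ b for a, b in zip(out, prf(key, pt)))
    return points, out

key = secrets.token_bytes(16)
print(parity_eval(key, t=2))
```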

Session 7: Public-Key Cryptanalysis II, Chair: Arjen Lenstra

Cryptanalysis of the Goldreich-Goldwasser-Halevi Cryptosystem from Crypto '97, Phong Nguyen

This is a lattice cryptosystem based on the hardness of lattice problems. The idea goes back to Ajtai (STOC '96), who showed a relationship between worst case and average case difficulty. A lattice is defined to be all the Z-linear combinations of the rows (or alternatively, columns) of an invertible integer matrix. Multiplication by a unimodular matrix gives another basis for the same lattice. The goal is to find good Z-bases with short vectors.

The closest vector problem (CVP) is hard, even with a good basis (Babai). In practice, the best technique is to compute an LLL or Schnorr reduction, which works best when the "gap" is large. The GGH cryptosystem is a lattice analog of McEliece's system. Using a dimension > 300, choose as public key a "bad" basis and as private key a "very good" basis. The ciphertext is the message plus a perturbation. Previous work by Schnorr and the author broke this system for dimensions up to 150 and 200, respectively. This work took advantage of the fact that GGH leaks the plaintext modulo twice the size of the perturbations, which lets the attacker find a larger gap and solve instances of size up to 350.

The best way to fix GGH is to choose error vectors that are not disclosed modulo any number. Even so, the system would need dimension 400 or more, and the public key would be 1.8 Mbytes, so the system would be somewhat inefficient.

Weakness in Quaternion Signatures, Don Coppersmith

Andrew Odlyzko presented this paper. Several more efficient signature systems than RSA have been proposed, going back to OSS (Ong, Schnorr, Shamir) in 1984, Pollard in 1987, then Cubic OSS, broken by Pollard, Quartic OSS, again broken, and so forth.

OSS uses a large n of unknown factorization, secret key u, and public key k = -u^(-2). The signature of m is (x, y) such that x^2 + k·y^2 = m mod n. This is very efficient and a chosen plaintext does not reveal the key. It was hoped that breaking this would require factoring n, but this was not the case. Pollard and Schnorr showed how to reduce k and m to get a sequence of equations that can be multiplied together and solved to forge signatures without knowing the secret key.

The quaternions H form a non-commutative four-dimensional division algebra over the reals with basis 1, i, j, k. The system works over quaternions with coefficients from Z_n. Satoh and Araki generalized OSS to this algebra, where u^T·k·y = -1 and x^T·x + y^T·k·y = m in H/Z_n. The author first broke this assuming one has the public key and signatures of three random messages. If the three messages are independent, they have a non-zero determinant yielding a low degree polynomial, so the attack goes through as with Pollard-Schnorr. His second attack does not even need the random signatures.

Cryptanalysis of "2R" Schemes, Ye Ding-Feng, Lam Kwok-Yan, Dai Zong-Duo

The 2R-schemes proposed by Patarin and Goubin are PK systems based on the function decomposition problem, which is believed to be intractable. The private key is several invertible mappings, and the public key is their composition. In 1985, Matsumoto and Imai proposed B-schemes based on two secret linear maps. In 1988, they replaced one of these with a quadratic map, but Patarin broke this in 1995.

For the 2R-schemes, the private key is three affine bijections and two quadratic mappings. The public key is the field, dimension, and composition mapping. The authors broke this new system too by reducing it to a linear algebra problem.

Factoring N = prq for Large r, Dan Boneh, Glenn Durfee, Nick Howgrave-Graham

Several proposed cryptosystems have a modulus of the form p^2·q or p^r·q. This work presented a new approach to factoring the latter form based on lattices. In 1996, better methods of factoring p^2·q with elliptic curves were demonstrated. This lattice-based method works best when p and q are about the same size. If r = 2, it is O(N^(1/9)); if r = (log p)^(1/2), it is O(p^(1/sqrt(log p))); and if r = log p, it is polynomial. In all cases, neither p nor q has to be prime.

The method works by guessing the high-order bits of p, forming a lattice, reducing it with LLL, constructing a polynomial from a small lattice vector, and solving it; if successful, one of the roots will be the correction needed to the initial guess. If the lattice vector is sufficiently small, the polynomial will not be too large in an interval around 0, and if it is less than p^2, its root over the integers will be the needed correction.

They succeeded in factoring a 1280-bit N with 80-bit p and r = 15 in 21 hours with a 72-dimensional lattice. A 768-bit N with 96-bit p and r = 7 was also factored in 7 hours with a 60-dimensional lattice.

Rump Session, Chair: Stuart Haber

Prize awards for Hasty Pudding analysis, Richard Schroeppel

Many challenge prizes have been offered in the past for breaking ciphers. He put up ten bottles of champagne, in total, and David Wagner was the first winner for finding equivalent keys. Bart Preneel et al. got the second for their Asiacrypt '99 paper [see next talk].

Belgian remarks on U.S. pudding, Carl D'Halluin, Gert Bijnens, Bart Preneel, Vincent Rijmen

They extended Wagner's observation on Hasty Pudding. For 128-bit keys, there are 2^30 equivalent keys; for 192-bit keys, there are 2^72. It is easy to repair this.

An attack on ISO-9796-1, Don Coppersmith, Shai Halevi, Charanjit Jutla

This is an attack on RSA/Rabin signatures, which extends the result presented in Session 1. The attack has to be modified so the first word, "a," is different, then the pattern "bce" repeats, and finally there is a "bcd" at the end. For a 1024-bit modulus, the offline work is 2^30 precomputations plus 2^15 per signature. On-line, it takes 3000 signatures.

FPGA: A practical tool for cryptanalysis with running examples, F. Koeune, J.-J. Quisquater, R. Sebastien, J.-P. David, T. Gilmont, J.-D. Legat

They announced an FPGA defined in VHDL that does DES encryption in 8 clock cycles. A board of four chips can perform 200,000,000 DES operations per second. For Matsui's attack, 2^45 keys can be searched in 5 days. Only 130 boards are needed to replicate the machine that the EFF built.

Quadruple DES, Paul Kocher (spoof)

DES keys are too short, and 2DES has a meet-in-the-middle attack. Triple DES has a 112-bit effective key but is not exportable. His 4DES (E_k1, D_k1, E_k2, D_k2) should be exportable, and reduced round versions may be quite secure.

Crypto law and export control update, John Gilmore

Dan Bernstein won his appeal in the U.S. Ninth Circuit. Source code was ruled free expression protected by the First Amendment, and export controls were ruled impermissible prior restraint. The majority decision goes further and brands export controls as a government attempt to control a science. It continues to state that we have lost an amazing amount of privacy, a situation which freely available encryption can partially balance. The separate concurrence recommended this case be heard in the Supreme Court.

Meanwhile, SAFE is heading to the House Floor next month.

French crypto law changed drastically towards liberalization. Germany considers strong domestic products without escrow indispensable.

Wassenaar is being pursued vigorously by the U.S. Government. Most countries have not implemented the agreement, but pressure is increasing.

Fast precomputation for discrete logarithm cryptosystems, C.P. Schnorr

Several previous papers, e.g., by Brickell et al. and by Lim and Lee, have given constant-factor speedups of the binary method. Other proposals, e.g., precomputing an array of pairs, have been shown to be flawed. This talk was a proposal to repair such systems and a proof that the previous attack by Stern et al. no longer works.

Issues in tamper resistance, Benjamin Jun

He focused on inexpensive devices like smart cards and tokens. Protocol attacks have exploited RSA padding and MAC construction. Timing attacks and DPA cause leakage. Fault analysis is the third important area. Any one of these can be catastrophic.

Random number generator failure in provably secure encryption schemes, William Whyte, Burt Kaliski

RNGs are usually considered implicitly as secure primitives in the design of a scheme. ElGamal needs an ephemeral key pair, PKCS #1 version 1.5 needs random padding, OAEP-RSA needs a random seed, and ANSI X9.62 DH needs ephemeral keys. In RSA-OAEP, RNG failure reduces entropy and possibly allows either a dictionary attack on the message or Coppersmith's attack. For DH-AES, the adversary can recognize repeated messages.

Arms export: Blazonry for dummies and Oenocryptology, Don Beaver (spoof)

The first part explained the Crypto '99 coat of arms on the cover of the conference handout. The second talk was about the Echelon surveillance system. A FOIA request revealed nothing, but a California winery provided more revealing information.

Fun with cryptography: How not to set a final exam question, G. Agnew

In the graduate-level class at Waterloo, he often poses a new cryptosystem on the final exam. He gave a simple example of such a system and showed that breaking an OTP can be reduced to breaking this example.

Signature schemes based on the strong RSA assumption, Ronald Cramer, Victor Shoup

This is an alternative scheme to be presented later this year. The signer hashes and encrypts and then performs a calculation using a 161-bit prime. The complexity of signing is 1.5 to 2 times RSA, verifying is 2 times DSA. The security proof is a simulation, and in the random oracle model, the standard RSA assumption suffices. The same analogy applies to Cramer and Shoup, 1998, with respect to the DDH and CDH problems.

Construction of universal hash families from algebraic curves over finite fields, Chaoping Xing, Huaxiong Wang, Kwok Yan Lam

They derived a new class of authentication codes. He compared this result with Helleseth and Johansson's and also with Bierbrauer's and claimed better security for the same size parameters.

What is the PGP key of Bill Gates? A practical experiment with key servers, J.-J. Quisquater

Assume one receives a properly PGP-signed message "I like apples" from Bill Gates and wants to check the signature. About 20 PGP key servers all over the Internet return various PGP keys for "Bill Gates."

How to solve a system of equations inside IDEA, E.G. Giessmann, G. Lassmann

For IDEA, multiplication is the most power-consuming step on a smart card. They showed how to observe differences in power usage when sparse factors show up in the multiplications to solve for the whole key.

Keeping secrets using .o files, Steve Meyer

This was a steganography talk. The problem is protecting digital designs. He claimed that certain object files are difficult to reverse engineer.

Computer license plates, Thomas Cusick

The Intel Pentium III processor serial number was announced with the usual jargon about improving security, but the public fury was unanticipated. He likened this event to the need for and introduction of license plates for motor vehicles early in the Twentieth Century.

A probabilistic poly-time framework for protocol analysis, P. Lincoln, J. Mitchell, M. Mitchell, A. Scedrov

Their proposal is to use a language-based approach and to express security in terms of protocol equivalence. The idea is to adapt the SPI calculus to a probabilistic polynomial time calculus. He gave a couple of examples.

Cross-encryption, Rosario Gennaro, Shai Halevi, Tal Rabin

Their model is on-line shopping with a clearinghouse, but the seller does not want to trust the clearinghouse. They showed two solutions based on the idea of distributing the clearinghouse functionality.

New forgeries with Rabin-like cryptosystems, J.-S. Coron, M. Joye, J.-J. Quisquater

One known attack assumes the attacker sees the public key, chooses messages, gets the corresponding signatures, and then chooses another message. They improved this attack for even exponents.

Cracking kryptos (well, almost), Jim Gillogly

Kryptos is a cipher on the wall of the CIA building. It is about 860 characters long, it looks like a substitution at the beginning, and later parts have the statistics of a transposition. The final part looks like a polyalphabetic cipher with eight alphabets. He managed to recover large parts of it.

Introducing the T-class of SOBER stream ciphers, Greg Rose, Philip Hawkes

A family of ciphers with different key sizes (8 to 32 bytes) and internal designs oriented to different size processors, but resembling the original SOBER, now exists. Software speeds up to 100Mbit/sec have been measured.

Studying cycles in RC4, Chris Hall

Ignoring the keystream output, incrementing i can be viewed as rotating a ring, which eventually returns to the original state. He suggested studying these cycles further.

Next-generation mobile phone algorithms, Greg Rose

TIA TR45 defines North American cellular standards. An ad hoc Authentication Group is doing algorithm work for security. Collectively, the output is called IS-41: more information can be found at http://scitech.ctia.org/Security/index.html. Authentication includes key management, keys will be 128 bits, SHA-1 will be the hash function, and privacy algorithms will be air-interface specific. GSM is progressing similarly, but the process is not nearly so open.

The P-problem: a solved instance and its implications, Detlef Huehnlein (partial spoof)

For a mapping from W to P, elements in W are defined to be equivalent if they map to the same P-element. W is written words; P is pronounced sounds. "Secure" and "CQRE," for example, are equivalent. CQRE will take place in Germany in late November.

Player elimination in distributed protocols: Can resilience be for free? Martin Hirt, Ueli Maurer, Bartosz Przydatek

This talk was about the cost of resilience against an active adversary. They showed how to use a new technique, player elimination, to achieve resiliency.

Efficient separable fair contract signing schemes based on standard signatures, Jan Camenisch, Markus Michels

Fair contract signing implies either both parties get signed copies or neither does. Previous schemes without an on-line trusted party all lacked one or more useful properties. Their improvement on prior systems uses ZK proofs and resorts to an off-line trusted party only in the case of a dispute.

Consistency concerns for fair exchange, Paul Syverson

This talk illustrated how definitions of fairness are often inadequate by giving an example where the abort protocol can be run even after the exchange has been successful. The proofs worked because the definitions were flawed.

OEF using a successive extension, Kazumaro Aoki, Fumitaka Hoshino, Tetsutaro Kobayashi

Optimal Extension Fields are of size p^n where p is close to the machine's word size. They compared implementations with both affine and projective representations of elliptic curves in terms of the complexity of inversion.

Tricks for a better efficiency of authentication protocols, Marc Girault

The goal is to get fast public key message authentication in a smart card. He and Stern examined DL schemes with precomputation in 1991 and 1994. This work showed how to store about 50 precomputations in a kilobyte of memory.

Public-key cryptography and password protocols: The multi-user case, Maurizio Boyarsky (presented by Cynthia Dwork)

The problem is to optimize the use of humanly memorized passwords and to make on-line guessing the best attack. Halevi and Krawczyk gave a solution to the one-user case at ACM '98. Boyarsky extended their encrypted challenge and response protocol to multiple users and dynamic adversaries. Meanwhile, Halevi and Krawczyk pointed out that the full version of their paper considers insider attacks as well.

Correlated coins and applications to game theory, Yevgeniy Dodis, Shai Halevi, Tal Rabin

Given a public list of pairs (a_i, b_i), pick i and give Alice a_i and Bob b_i. If, for example, all the a_i and b_i are distinct, both learn i. Any 2-party probabilistic function in which the parties have no input can be simulated with this game. They showed that with this primitive certain games need no mediator and gave cryptographic constructions and a protocol for the primitive.

Complete characterization of security notions for probabilistic private-key encryption, Jonathan Katz, Moti Yung

Definitions of security for secret key systems are lacking, e.g., indistinguishability, non-malleability, and plaintext or ciphertext oracles. Randomization occurs in padding or choosing an IV. They have shown that adaptive and non-adaptive chosen plaintext security are equivalent, among other results.

Non-malleable non-interactive zero knowledge and adaptive chosen-ciphertext security, Amit Sahai

This was a pre-announcement of a paper at FOCS '99. The scheme of Naor and Yung does not work against dynamic adversaries because NIZK is malleable, which he fixed.

A secure e-mail voting scheme with trusted authorities for one occasion, Josef Lukovics

This is a four-phase scheme consisting of registration, voting, authentication, and evaluation. It has secrecy, fairness, and simplicity.

Riffle-shuffle and gray-code as block cipher components, Jayant Shukla

The AES finalists use a small number of primitive constructions. He presented two new operations that are fast and invertible and have potentially good security properties.

A new blind signature scheme, Pascal Paillier

Starting from his Eurocrypt '99 result, which used Benaloh's construction of a group over Z_n × Z_n^*, he now added an efficient method of blinding.

Efficient dynamic traitor tracing, Omer Berkman, Michal Parnas, Jiri Sgall

The threat is that a pirate may rebroadcast the content, so the goal is to locate and disconnect the traitors who give the pirate the content. In each round, the users are partitioned by "colors" until the traitors are all located. The goal is to bound the number of rounds and number of colors simultaneously. Both upper bounds and lower bounds were given.

On full linear (k, n) traceability schemes, K. Kurosawa, M. Burmester, Y. Desmedt

In the same model as the previous talk, the goal is to identify at least one of the traitors. This talk was a modification of their work presented in Eurocrypt '98, which in all cases avoids the attack given by Stinson and Wei.

Andrew Fernandes

He pointed out a potential trapdoor named _NSAKEY in Microsoft's NT 4.0 Service Pack 5 and suggested how to exploit it to avoid the software enforcement of export controls.

The Real-Time Cryptanalysis of A5/2, David Wagner et al.

They reverse engineered and cracked this GSM cipher based on four LFSRs of sizes 17, 19, 22, and 23 bits. A non-linear function of the 17-bit LFSR is used to clock the others. Getting two frames of 114 bits each is enough to break the scheme. Once the 17-bit LFSR is found by exhaustive search, the rest falls out. Note that the attack can be done in real time.

Session 8: Traitor Tracing, Chair: Daniel Bleichenbacher

An Efficient Public Key Traitor Tracing Scheme, Dan Boneh, Matthew Franklin

The model is to force users to pay for content on a per usage basis. There is a single broadcaster. In their system a single public key corresponds to many private keys. Each content provider has a public key and encrypts content before distribution. Consumers have their own decryption keys and their systems enforce payment when they decrypt, which seems superior to having a global decryption key. When a pirate player is found, the individual private key gives some information about the traitor.

The system consists of four algorithms: key generation, encryption, decryption, and tracing. Tracing must anticipate k-party collusions and work as a black-box oracle.

Each public key has m private keys. The system is an ElGamal-like scheme, but there are m vectors of length 2k, which are public and used to produce different representations of the private key. Encryption works in a group of order q. The ciphertext has length expansion by a factor of 2k. The tracing algorithm uses invalid ciphertexts to expose the representation. An important open question is whether safety against arbitrary collusions is achievable.

Dynamic Traitor Tracing, Amos Fiat, Tamir Tassa

This is also a broadcast system with conditional access, but the new twist is dynamic piracy, whereby the pirate rebroadcasts keys or otherwise sensitive data. The static model only considered fixed pirate decoder boxes. In this model, the data are divided into segments and personalized. A fixed number of fingerprints are assumed to be robust and are applied iteratively after piracy is detected to partition the users and to locate the traitors. Dynamic schemes can adapt to different numbers of traitors as the system is running. The overall system is probabilistic and has a small chance of error. Open problems include finding an optimal (polynomial) deterministic scheme and devising a probabilistic scheme with known or oblivious allocation of codewords.

Efficient Methods for Integrating Traceability and Broadcast Encryption, Eli Gafni, Jessica Staddon, Yiqun Lisa Yin

Again, the model is broadcast encryption and traceability in the secret key setting. The goal is to find the right intersection of users' key sets between the two extremes of sending separate transmissions to each user and having a unique key set for each potential set of authorized users. For example, if each user has r keys, we can exclude any set of m users by using a polynomial of degree at most (r–1)/m. At the same time, it should be risky for c users to pool keys and produce a pirate decoder, while at the same time the probability of false accusation should be small. Two solutions were given: broadcasting shares of the message and assigning sets of keys to each user. Two other issues are resiliency against excluded users and scalability as new users join the group.

Session 9: Invited Lecture, The Evolution of Public-Key Cryptography, Martin Hellman

In November 1976, his New Directions paper co-authored with Diffie stated, "We stand today on the brink of a revolution." Public key cryptography played a part in that revolution, which has also had its evolutionary aspects. In 1883, Kerckhoffs stated that the details of the system should be assumed to be public. He and Diffie identified one-way functions as a low-level primitive. John Gill deserves credit for identifying indices, or discrete logs, as a mechanism for creating one-way functions. The Pohlig-Hellman system essentially anticipated RSA but missed the composite modulus trapdoor. Diffie-Hellman actually was invented after Pohlig-Hellman, and Merkle also had the idea of public key distribution. Because we have only a few public key systems, alternatives not using number theory are important.

When DES came out, the key size was an obvious problem, but also other trapdoors were suspected.

Nine years before the quadratic sieve, Richard Schroeppel showed that the continued fraction method was subexponential and of the same asymptotic complexity as the quadratic sieve. He also introduced the idea of sieving and tried to factor F_8, but because the factors of F_8 are of very different sizes, Brent got there first with Pollard's rho algorithm.

Hellman made some comments on GCHQ's claim to have invented public key cryptography in the early 1970s and on his natural inclination to pursue what appears to others as foolish.

Session 10: Differential Power Analysis, Chair: Andrew Odlyzko

Differential Power Analysis, Paul Kocher, Joshua Jaffe, Benjamin Jun

This work involved both engineering and cryptography. The general idea was to measure the power usage of a cryptosystem, e.g., when implemented in a tamper-resistant package like a smart card. Response time, electromagnetic emissions, and physical tampering can be used to mount similar attacks. He showed how the power usage of DES reveals the structure of the cipher. Simple analysis can locate conditional branches, e.g., the difference between square and multiply in modular exponentiation. When the measurements become too noisy, the trick is to use statistical correlations. Applied against DES, the key can be guessed and verified one S-box, i.e., six bits, at a time. Not only the data bits, but also the instructions, timing, register contents, and so forth can be extracted. This allows complete reverse engineering.
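
The statistical step can be simulated end-to-end in a few lines (a self-contained difference-of-means sketch with an arbitrary toy S-box and noise model, not Kocher et al.'s measurement setup):

```python
import random

SBOX = list(range(64)); random.shuffle(SBOX)          # toy 6-bit S-box
KEY = 0x2A                                            # 6-bit secret

def hw(x): return bin(x).count('1')

def trace(pt):               # leak = Hamming weight + Gaussian noise
    return hw(SBOX[pt ^ KEY]) + random.gauss(0, 1)

data = [(pt, trace(pt))
        for pt in (random.randrange(64) for _ in range(3000))]

def score(guess):
    # partition traces by the predicted output bit under this key guess;
    # only the correct guess makes the two means differ noticeably
    hi = [t for pt, t in data if SBOX[pt ^ guess] & 1]
    lo = [t for pt, t in data if not SBOX[pt ^ guess] & 1]
    return abs(sum(hi) / len(hi) - sum(lo) / len(lo))

print("recovered key:", max(range(64), key=score))    # should print 42
```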

To protect against this, the possible approaches are signal reduction, noise generation, leak-tolerant protocols, and other ideas he declined to explain. Many patents on this technology are pending. However, full protection is infeasible today, so many systems are being built with faulty security assumptions.

Towards Sound Approaches to Counteract Power-Analysis Attacks, Suresh Chari, Charanjit Jutla, Josyula R. Rao, Pankaj Rohatgi

Most vendors have been trying to address the DPA attacks described above. Chip cards are typically clock-driven CMOS circuits. Power is consumed when logic states change. Usually a few bits in the overall state determine events and intra-cycle delays. The attacks attempt to distinguish states by separating distributions.

Many ad hoc countermeasures do not work well. Among these are adding clock jitter or spurious clock cycles, reordering operations, adding noise, balancing data accesses, and inserting random delays. More reasonable approaches are randomizing, blinding, and secret sharing. Formal analysis can help with these designs and predict what the statistics will reveal, given a certain number of samples.

Session 11: Signature Schemes, Chair: Daniel Simon

Separability and Efficiency for Generic Group Signature Schemes, Jan Camenisch, Markus Michels

Typically, group members have to go through some public-private key setup, which may affect other users unless the system has the separability property. Separability may be perfect, strong, or weak depending on which keys are separable and full or partial depending on the parties.

Chaum and van Heyst defined group signature schemes. Membership, revocation, and "opening a signature" need to be provided along with the usual operations. Many follow-on schemes have been defined, but they all lack separability except Camenisch and Damgård, 1998, which has partial separability. This work demonstrated the first fully separable scheme. It is built from a DL construction and has a ZK proof of knowledge.

A Forward-Secure Digital Signature Scheme, Mihir Bellare, Sara K. Miner

Using the standard public key framework, the problem is later exposure of the signing key, which allows forging. Secret sharing and tamper resistance are two approaches to protecting the key. If the key exposure is discovered, the key can be revoked, but prior signatures are now suspect. Timestamping is one solution, but it requires another entity. Their solution is for the secret key to evolve from one time period to the next with a one-way function, and to have the keyholder delete obsolete versions. Their construction was proved secure in the random oracle model based on the hardness of factoring. The public key is a Blum integer. The secret key consists of a base key and a list of values that are squared at each time interval. The proof of forward secrecy was first carried out for an identification scheme and then carried over to the signature scheme.
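
The key-evolution step can be sketched directly (toy Blum modulus; squaring mod N is one-way without the factors, which is what makes deleted keys unrecoverable):

```python
p, q = 10007, 10039            # toy primes, both congruent to 3 mod 4
N = p * q                      # Blum integer (real sizes are much larger)

def evolve(sk):
    return pow(sk, 2, N)       # one-way update at each time period

sk = 1234567                   # period-0 secret
history = [sk]
for _ in range(5):             # periods 1..5
    sk = evolve(sk)
    history.append(sk)         # the real scheme deletes old keys!
print(history)
```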

Abuse-free Optimistic Contract Signing, Juan A. Garay, Markus Jakobsson, Philip MacKenzie

Abuse freeness means that neither party can prove to a third party that she has the ability to complete or terminate the protocol. In effect, Alice proves to Bob, "Either A is true or I am Bob." All of the cryptography is based on the DDH problem, and the proofs hold in the random oracle model.

Contract signing, introduced by Blum, is an example of a fair exchange problem; that is, it is "neither or both." The two approaches to contract signing are gradual release and trusted third parties. Their protocol has an off-line third party and is constructed from commitment, convertibility, and non-transferability. It is optimistic, complete, fair, and abuse free; there are two messages in each direction together with abort and resolve protocols, which involve the third party.

Session 12: Zero Knowledge, Chair: Jean-Jacques Quisquater

Can Statistical Zero Knowledge be made Non-Interactive? or On the Relationship of SZK and NISZK, Oded Goldreich, Amit Sahai, Salil Vadhan

Blum, Feldman, and Micali defined non-interactive ZK (NIZK) using a shared random string model. Statistical ZK (SZK) is a clean model for many applications. They showed that (1) BPP ≠ SZK iff BPP ≠ NISZK and (2) if NISZK is closed under complement, then NISZK = SZK.

In 1985, Goldwasser, Micali, and Rackoff defined SZK as an interactive protocol with an unbounded prover and polynomial-bounded verifier, such that the verifier accepts or rejects correctly with high probability, and the verifier can simulate her view of the interaction on her own. In NISZK [BFM88, BDMP91], soundness means the verifier rejects a false statement with high probability no matter what proof the prover sends. For a long time, quadratic residuosity was the only known SZK problem, but [DDPY98] showed that IMAGE DENSITY is complete for NISZK. On the other hand, STATISTICAL DIFFERENCE and ENTROPY DIFFERENCE (ED) are complete for SZK. ENTROPY APPROXIMATION (EA) is in NISZK, so they produced a reduction from ED to EA. Hence, any SZK proof can be made non-interactive. The harder result was showing that EA is complete for NISZK.

On Concurrent Zero-Knowledge with Pre-Processing, Giovanni Di Crescenzo, Rafail Ostrovsky

Classical ZK has a prover and a verifier, completeness, soundness, and the zero knowledge property. In modern networks, there are many provers and verifiers, and many proofs may be occurring at once. The danger is that verifiers can team up and coordinate their questions to learn more than they should. There has been a lot of work on concurrent ZK: timing assumptions, pre-processing, number of rounds. Finally, Kilian, Petrank, and Rackoff showed there is no four-round concurrent ZK. The various parameters are number of rounds, efficiency, trust, and assumptions. Their new result is a three-round challenge-response pre-processing protocol followed by polynomially many three-round proofs with no limits on concurrency.

The key technical problem they solved is that interleaving kills the simulator. Dwork and Sahai solved this by using a trusted center and straight-line simulation. The authors of this work introduced the idea of equivocal commitment, which is a stronger variant of chameleon commitment.

Session 13: Asymmetric Encryption, Chair: Serge Vaudenay

On the Security Properties of OAEP as an All-or-Nothing Transform, Victor Boyko

All-or-nothing transforms (AONTs) are randomized transforms that have no keys and the property that given all of the output, the input can be recovered, but given part of the output, none of the input can be recovered. The original idea was that AONTs could be used to defend against brute force attacks. Also, part of the output of the AONT can be encrypted with a public key system, which is especially useful with short key systems like elliptic curves. Another application is remotely keyed encryption [JSY99], whereby the smart card encrypts part of the output of the AONT. Since AONTs have no keys, they may actually be exportable.

Heuristic constructions were given by [JMP96] and [R97]. In 1998, Stinson showed a system that leaks partial information. The construction in [JSY99] does not generalize.

Bellare and Rogaway introduced OAEP for public key encryption. It consists of adding randomization and performing a single Feistel round. He showed in the random oracle model that OAEP has strong security based on indistinguishability with non-adaptive and adaptive adversaries. The adaptive model is needed to defend against brute force attacks. His proofs derived upper and lower bounds for OAEP as an AONT.
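
The transform itself is short; a compact sketch with SHA-256-based stand-ins for the random oracles G and H (sizes illustrative):

```python
import hashlib, secrets

def xor(a, b): return bytes(x ^ y for x, y in zip(a, b))
def G(r, n):   return hashlib.sha256(b'G' + r).digest()[:n]
def H(s):      return hashlib.sha256(b'H' + s).digest()[:16]

def oaep(m):                       # one Feistel-like round
    r = secrets.token_bytes(16)
    s = xor(m, G(r, len(m)))
    t = xor(r, H(s))
    return s, t

def oaep_inverse(s, t):
    r = xor(t, H(s))
    return xor(s, G(r, len(s)))

m = b'sixteen byte msg'
s, t = oaep(m)
assert oaep_inverse(s, t) == m     # all of (s, t) recovers m
```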

Non-Malleable Encryption: Equivalence between Two Notions, and an Indistinguishability-Based Characterization, Mihir Bellare, Amit Sahai

This is a paper about definitions of security, in particular, privacy under public key encryption. Much more effort goes into designing schemes than defining what they do. Theory, beginning with Goldwasser and Micali, asks, "What is encryption?" One criterion is indistinguishability; another is semantic security. They showed these are equivalent. The formalization of semantic security involves a simulator.

Non-malleability [Dolev, Dwork, and Naor] means that the adversary cannot modify the ciphertext meaningfully. The difficulty is getting a good formalization of what meaningful modification means. One model uses simulation and another uses comparison, which are different, e.g., with respect to chosen ciphertext attack. The authors presented several results, for example, that indistinguishability and non-malleability are equivalent with respect to full adaptive chosen ciphertext attack but not with respect to weaker attacks.

Secure Integration of Asymmetric and Symmetric Encryption Schemes, Eiichiro Fujisaki, Tatsuaki Okamoto

RSA is a one-way trapdoor permutation (OWTP), whereas ElGamal is a more general one-way encryption (OWE) system. To get provable security, OWE and OWTP systems are converted, e.g., OAEP can be applied to a OWTP but not to a OWE system. This is the first problem they addressed. The second is hybrid use of symmetric and asymmetric encryption. Their result converts a OWE encryption system to a chosen ciphertext secure system in the random oracle model. Simultaneously, they created a generic method to construct secure hybrid encryption systems.
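
A toy sketch in the spirit of this conversion, using textbook ElGamal and SHA-256-derived functions as stand-ins (illustrative only, not the paper's exact transform): the asymmetric randomness is derived as H(sigma, m), the message is masked with G(sigma), and decryption re-encrypts to reject malformed ciphertexts.

```python
import hashlib, secrets

p, g = 2**127 - 1, 3                             # toy group parameters
x = secrets.randbelow(p - 1); y = pow(g, x, p)   # key pair

def Hnum(sigma, m):
    d = hashlib.sha256(str(sigma).encode() + b'|' + m).digest()
    return int.from_bytes(d, 'big') % (p - 1)

def G(sigma, n):
    return hashlib.sha256(b'G' + str(sigma).encode()).digest()[:n]

def encrypt(m):
    sigma = secrets.randbelow(p)
    r = Hnum(sigma, m)                           # derandomized coins
    c1, c2 = pow(g, r, p), sigma * pow(y, r, p) % p
    c3 = bytes(a ^ b for a, b in zip(m, G(sigma, len(m))))
    return c1, c2, c3

def decrypt(c1, c2, c3):
    sigma = c2 * pow(c1, p - 1 - x, p) % p       # ElGamal recovery
    m = bytes(a ^ b for a, b in zip(c3, G(sigma, len(c3))))
    if pow(g, Hnum(sigma, m), p) != c1:          # re-encryption check
        raise ValueError('invalid ciphertext')
    return m

assert decrypt(*encrypt(b'hello, Crypto 99')) == b'hello, Crypto 99'
```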

Session 14: Electronic Cash, Chair: Serge Vaudenay

Auditable, Anonymous Electronic Cash, Tomas Sander, Amnon Ta-Shma

In 1982, Chaum introduced blind signatures, which have been used in many systems. They can provide unconditional anonymity, but several problems have been pointed out, e.g., blackmailing and compromising the bank's signing key. The goal is to get the best of both worlds: anonymity and defense against these and other threats. There has been a long sequence of work based on "trustees," "early detection of key compromise," and so forth.

In their new system, the bank keeps a list of outstanding coins, and withdrawal and payment are essentially list membership operations, which allows auditing. What they built, therefore, are blind, auditable membership proofs based on hash trees, all of which can be made public, but the bank must maintain data integrity. The constructions are based on discrete logs and proved secure in the random oracle model. As an example of the efficiency, he showed the algorithm for root updates.
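
The update operation is easy to sketch with a binary hash tree (layout details hypothetical; leaf count a power of two for brevity):

```python
import hashlib

def h(data): return hashlib.sha256(data).digest()

def build_levels(leaves):
    levels = [[h(x) for x in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1])
                       for i in range(0, len(prev), 2)])
    return levels                        # levels[-1][0] is the root

def update_leaf(levels, idx, new_leaf):  # rehash only the affected path
    levels[0][idx] = h(new_leaf)
    for d in range(1, len(levels)):
        idx //= 2
        l, r = levels[d - 1][2 * idx], levels[d - 1][2 * idx + 1]
        levels[d][idx] = h(l + r)

coins = [b'coin-%d' % i for i in range(8)]
levels = build_levels(coins)
update_leaf(levels, 5, b'coin-5-spent')  # log-many hashes, not a rebuild
print(levels[-1][0].hex())
```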

Session 15: Protocols and Broadcasting, Chair: Phillip Rogaway

Oblivious Transfer with Adaptive Queries, Moni Naor, Benny Pinkas

The subject is private information retrieval over an adaptive list of queries, where each query depends on the previous ones. One-out-of-n OT is an inefficient solution. They used an n-dimensional generalization of pseudorandom synthesizers [Naor and Reingold], where sum consistency does not conflict with pseudorandomness. This work gave two adaptive k-out-of-n constructions, one based on DDH and one general one. In the process, they also achieved verifiable oblivious evaluation of a pseudo-random function.

Compressing Cryptographic Resources, Niv Gilboa, Yuval Ishai

Given n players, they defined two-phase protocols. First, the dealer securely distributes "resources" to each player, and then the players communicate over insecure channels. This model can achieve, for example, private communications, private modular addition, or proactive secret sharing. However, long pads may be required for perfectly secure solutions. By using pseudorandom generators, arbitrarily long pseudo-pads can be created. Still, the system has to defend against malicious coalitions. So each player gets a subset of the replicated seeds, which are used in turn to form the pseudo-pads. They solved a combinatorial problem for general linear correlation by finding the patterns for the replication; the solution is based on graph theory and linear algebra.

Coding Constructions for Blacklisting Problems without Computational Assumptions, Ravi Kumar, Sridhar Rajagopalan, Amit Sahai

The model is broadcast with a large number of users, but occasionally a few users need to be excluded (e.g., surprise party announcements). Each user has an ID and decryption mechanism. The trivial solution of different mechanisms for each user is enormously inefficient with respect to communications and user storage. Also, the constructions should use the encryption and decryption functions in a black box model. Their main result is based on algebraic-geometric codes and the communications overhead is independent of the total number of users. It is the first feasible solution, insofar as the storage requirements are poly-logarithmic in the number of users.

An Information Theoretic Analysis of Rooted-Tree Based Secure Multicast Key Distribution Schemes, Radha Poovendran, John S. Baras

Multicast communication has many commercial applications, e.g., stock quotes, news, and sporting events. Security considerations include integrity, confidentiality, routing, failure recovery, and membership additions and deletions. Several methods of member revocation based on a rooted tree have been proposed. The authors used information theory to study and optimize this data structure for the member revocation problem. The optimal average number of keys per member in the tree depends on the entropy of the member revocation event, which should be maximized for the currently proposed strategies, regardless of the revocation probabilities. They analyzed both the compromises and collusions needed to break the system and gave a simple rule for group size.
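
The tree structure they analyze can be sketched briefly (a minimal logical-key-hierarchy layout; the indexing is illustrative):

```python
import secrets

n = 8                                   # members sit at leaves 0..n-1
# complete binary tree in an array: node 1 is the root, leaves are n..2n-1
keys = {node: secrets.token_bytes(16) for node in range(1, 2 * n)}

def path_to_root(member):
    node, path = n + member, []
    while node >= 1:
        path.append(node)
        node //= 2
    return path                         # each member holds these keys

def revoke(member):
    # replace only the ancestor keys on the revoked member's path:
    # about log2(n) keys instead of re-keying the whole group
    for node in path_to_root(member)[1:]:
        keys[node] = secrets.token_bytes(16)
    return path_to_root(member)

print(revoke(5))                        # [13, 6, 3, 1]
```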