5th ACM Conference on Computer and Communications Security (CCS5)

San Francisco, California, Nov 3--5, 1998

The fifth ACM Conference on Computer and Communications Security was held at the Fairmont Hotel in San Francisco, CA from Nov 2 to Nov 5. There were four tutorials on the first day: Cryptography -- Theory and Applications by Dan Boneh (Stanford University), Programming Languages and Security by Martin Abadi (Compaq Systems Research Center) and George Necula (Carnegie Mellon University), Authentication Protocol Verification and Analysis by Jon Millen (SRI International), and Emerging Models in Electronic Commerce by Doug Tygar (UC Berkeley). From Tuesday to Thursday, seventeen papers were presented, along with several panels and invited talks. It was a very successful conference. The notes taken at the conference follow.

[The following writeup gives the invited talks and panel discussion first, followed by the contributed papers --Eds.]

****Keynote**** ``Risks and challenges in computer-communication infrastructures'' *Peter Neumann

Peter Neumann has been in computer science since 1953 -- before most of the audience was even born. He is well known as the moderator of the Risks Forum (http://catless.ncl.ac.uk/Risks/). His home page is at http://www.csl.sri.com/~Neumann.

On November 2, 1988, the Internet worm (a.k.a. the Morris worm) almost brought the entire Internet to a halt. Today, 10 years later, we are still suffering from the same problems.

On the day of the talk, the newspaper reported that there was a sniffer in Stanford University's main mail server which was active for one month, sniffing passwords and emails. Ken Thompson published a paper 20 years ago which treated a similar scenario.

The Y2K problem is a serious threat today. Multics took care of the problem a long time ago by reserving 70 bits for date fields.

For a long time it was believed that xenon was inert and could not react. Today NASA uses xenon ions to propel its latest space probe with unprecedented efficiency.

The problem is that research does not find its way into the commercial world. Today PC operating systems are a disaster and networking code is full of errors; customers have become software testers.

The number of break-ins is steadily increasing as the Internet continues to grow. High school students from Cloverdale, California broke into the Pentagon's computers, yet nothing happened after the incident: people still use constant passwords. And systems fail even without break-ins. Yesterday's vulnerabilities remain today's.

We need a notion of ``generalized dependence'' (?) that enables acceptable operation despite faults, failures, errors, and misbehavior or misuse of underlying mechanisms. Moore, Shannon, and von Neumann showed long ago how to build a reliable system out of unreliable components. Today our systems are unreliable: in 1980 the Arpanet collapsed because a single node failure contaminated the entire network, and a similar flaw caused the 1990 AT&T collapse. In 1998 we had the Frame Relay outage, and we have seen cell phone outages, Internet black holes, etc. Six experts testified that the Internet can be brought down in 30 minutes. The PCCIP report (http://www.pccip.gov and http://www.ciao.gov) shows that all infrastructure is vulnerable.

The conclusion is that we have not learned much in this century; what does the ghost of the future look like? There are two options. The first is to jawbone vendors that do not produce secure software (JavaSoft has done a very good job and is an exception), but this has not been successful. The second is open source software, but the government has chosen proprietary software. Single-vendor solutions are not good; it would be better to take the best code from each vendor. Legacy code implies the ``lowest common denominator'', and proprietary vendors have no interest in interoperating with other systems, which makes it impossible to get away from them.

What we need is robust open source: meaningfully secure, meaningfully reliable, meaningfully robust and meaningfully survivable.

An example of proprietary software deployment is the US Navy cruiser Yorktown (which runs NT): it was dead in the water for 2.5 hours because of a division-by-zero error! What have we learned in all these years?

Windows NT has 48 million lines of code and 7 million lines of test code, but where is the security perimeter? In addition, users download web-ware pulled from arbitrary Internet sites. How can we trust such a system?

How can we assure that the code we download is robust? We should have components that are known to be robust, we need evaluation centers, and we should put a cryptographic enclosure around software to ensure its integrity. We should change software from WORA (write once, run anywhere) to WOVORA (write once, verify once, run anywhere).
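
[A tiny sketch of the ``cryptographic enclosure'' idea: check a downloaded component against a trusted digest before running it. The file name and digest source below are placeholders, and a full WOVORA scheme would of course also need signatures, provenance, and semantic verification --Eds.]

    # Minimal integrity gate before loading downloaded code: verify a digest
    # obtained out of band. Placeholder file name and digest value, purely
    # illustrative of the "verify once before running" step.
    import hashlib
    import sys

    TRUSTED_SHA256 = "0" * 64              # would come from an evaluation center or a signed manifest
    ARTIFACT = "downloaded_component.bin"  # placeholder path

    with open(ARTIFACT, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    if digest != TRUSTED_SHA256:
        sys.exit("integrity check failed: refusing to load the component")
    print("integrity check passed; component may be loaded")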

There is a lot of good research in Java security (e.g. Dan Wallach, George Necula): Will it find its way to the commercial world?

Eric Raymond's ``Halloween Documents'': http://www.tuxedo.org/~esr/halloween.html (leaked internal Microsoft memos that treat the open source movement as a threat). We need to support open source development. To paraphrase JFK: ``Ask not what the computer communications infrastructure can do for you, ask what you can do for the computer communications infrastructure!''

Question: There is little incentive for corporations to provide robustness and reliability. How can we evaluate systems before they become obsolete? Answer: For big systems there is no hope of complete evaluation, and the Orange Book did not handle system composition. The Army and Navy have interests in this, and the MacArthur Foundation has funded Richard Stallman to do research.

Mike Reiter asked: planes, nuclear power plants, etc., use different approaches to security -- how is this problem different? Answer: We need a plurality of approaches.

Other comments:

Shouldn't the first step be to define security, robustness, and so on? One of the biggest problems is that there is no shared conception of security. Isn't the reason that the rate of innovation was so high that robustness could not keep up? Other features are simply considered more important.

***Panel on Anonymity on the Internet***

Paul Syverson was the organizer and chair of the panel. Panelists were Avi Rubin (AT&T Labs - Research), Christina Allen (Electric Communities), Susan Landau (UMass Amherst), Chris Painter (US Dept. of Justice) and David Wagner (UC Berkeley).

Paul started with a brief introduction to anonymity on the Internet. The main concern is medical, financial, and behavioral data about individuals: should this data remain private, and, going further, should it permit any association with an individual? The panelists were asked to address the following questions:

- What are the responsibilities of an ASP (anonymity service provider)?
- Should ASPs be regulated, or even illegal? Should they keep logs?

Avi Rubin was the first speaker. Together with Mike Reiter, he developed Crowds, a system that provides anonymity for web traffic. The infrastructure and technology need to be in place before policy and regulation (technology and policy need to go hand in hand). Anybody can go to a cyber cafe, use Hotmail, set up a phony pre-paid account at an ISP, forge e-mail, spoof IP addresses, etc., to be anonymous. Other methods are mixes (which hide the information and traffic patterns between sender and receiver), onion routing (NRL), Babel, ISDN mixes (PPW (?)), and Rewebber (GW (?)). The Anonymizer, a proxy-based scheme for web anonymity, is another way to be anonymous, but the Anonymizer itself can keep information about its clients. There is also LPWA (GGMM), the Lucent Personalized Web Assistant. Crowds is effective in protecting the privacy of its members because the aggregate of members acts on behalf of any individual member of the crowd. Avi ended his presentation with a comparison of the methods (low tech, mixes, proxies, Crowds) against traffic analysis and collusion attacks.
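
[As a reminder of how Crowds forwards requests, here is a toy simulation of the forwarding rule only; the real system also encrypts along the path, and the forwarding probability below is an arbitrary illustrative value --Eds.]

    # Toy simulation of the Crowds forwarding rule: each member ("jondo")
    # flips a biased coin and either forwards the request to another randomly
    # chosen member or submits it to the web server, so the server (and any
    # single member) cannot tell who originated the request.
    import random

    def crowds_path(members, p_forward=0.75):
        path = [random.choice(members)]          # initiator hands off to a first jondo
        while random.random() < p_forward:
            path.append(random.choice(members))  # keep forwarding inside the crowd
        return path                              # the last jondo submits to the server

    crowd = [f"jondo{i}" for i in range(10)]
    print(" -> ".join(crowds_path(crowd)), "-> web server")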

Christina Allen ``Context-sensitive approaches to managing Internet anonymity''

Christina Allen is from Electric Communities. She said that ``The real question was how to create Internet contexts amenable to the social, commercial, entertainment and the myriad other activities that people seek online, and how to protect these contexts from abuses. One approach is to imagine a wealth of Internet contexts, each of which is designed around the idea of reciprocal credentials.''

Susan Landau: ``The Right to Anonymity in an Internet World''.

Susan Landau is from the University of Massachusetts and co-authored the book ``Privacy on the Line'' with Whitfield Diffie.

Speech, commercial transactions, and browsing are three flavors of anonymity. Americans today live in a world very different from that of the past: it is no longer possible to walk out into a field to hold a private discussion with business partners. Technologies such as miniature tape recorders, video cameras, and communication equipment make anonymity guarantees difficult.

Whatever US policy on anonymity turns out to be, Internet users can use anonymity services located in countries that do not regulate them.

Speech: Since the first days of the republic, anonymity has been fundamental to US political speech, but anonymous political speech was not always protected. The Supreme Court has ruled multiple times in favor of anonymous speech. Anonymity is very important because it allows the diffusion of unpopular ideas; in some cases people have been killed for expressing their opinions. Today the Court relies on the importance of free speech in a democracy: compulsory disclosure may deter free expression.

Commercial transactions: We won't see anonymity in this context. The police need to trace transactions to fight drug money, and the Bank Secrecy Act of 1970 requires banks to keep records of transactions and to release them upon subpoena.

Browsing: Since browsing is similar to reading, and the right to read anonymously has been protected multiple times by the Supreme Court, browsing anonymously should also be protected. Unfortunately, technology threatens the right to read anonymously, since content providers and ISPs can trace what users read. With the continuing replacement of printed text by on-line information, the right to read anonymously could be severely damaged.

Conclusion: Anonymity is new; it only arose once we had large cities, and France still prohibits it. It is not protected as such, except where people have pushed for it. Anonymous speech and anonymous reading (browsing) are appropriate and necessary to the functioning of a democratic society.

Chris Painter

There is currently no official Department of Justice policy on ``Anonymity on the Internet.'' On one hand, law enforcement needs to find out the originators of messages such as racist mail or terrorist threats. On the other hand, there are legitimate reasons for anonymous communication, including the promotion of free speech.

Law enforcement can require ISPs to maintain records for up to 90 days and to disclose that information, provided that certain legal requirements are met.

ASPs can also be held liable for providing anonymity services. But the marketplace will drive these issues: when users want anonymity, they will get it.

David Wagner, ``Policy for online anonymity providers''

Anonymity is not an end goal but a technique for achieving privacy. Anonymity service providers should aim for full disclosure of their privacy policies and procedures. There are two key axes for policy analysis of anonymity services: the first axis is the purpose of the service, ``good for society'' versus ``good for an individual''; the second characterizes services as ``push'' versus ``pull''. Anonymity providers should also consider when and where to apply anonymity, and where in the protocol stack to apply it. Retention policy is also very important: don't keep potentially damaging information around any longer than absolutely necessary. Abuse of anonymity services is always a problem.

General discussion:
- It is trivial to cut big transactions into small transactions with today's technology, so a policy that keeps records only for transactions over 100 dollars no longer works.
- Escrowed keys imply less security.
- The Mondex card provides anonymity.
- Bellovin: More privacy is leaked by individuals who fill in forms (the ``paper trail''); laws prevent that information from being sold.
- Avi: We need seamless anonymity; if the tools are very easy to use, then people will use them.
- Christina: Identity information systems (many user names and passwords).
- Paul: Onion routing processes 10^6 web connections every month, from 20,000 different IP addresses; the demand is present -- the market is not huge, but it exists.
- David Wagner: A product from Zero-Knowledge Systems is coming out soon.
- We make many decisions in designing public systems that affect privacy greatly, but the public does not know how to answer these questions; we need to help them out.
- Today, private information is known to people we don't know. It used to be different: for example, 500 years ago, goods were paid for with sacks of gold.
- How much demand is there for anonymity? Susan: minority groups, women with breast cancer.
- Few people actually need anonymity, but it is important to know that anonymity is available when we really need it.

***Invited Talk: The Development of Public Key Cryptography*** *Martin Hellman

Hellman said that he always wanted to be different from everyone else. Cryptography used to depend on the secrecy of the algorithm, as in the Caesar cipher; a truly secure scheme must be public and depend only on the secrecy of the keys. Cryptography was a branch of information theory, together with compression and error correction. There were two famous papers by Claude Shannon in 1948 and 1949; the cryptography paper was written in 1945 but declassified only in 1949. Information theory owes its birth to security. The random-coding argument makes sense in cryptography -- that is how Shannon arrived at his counterintuitive conclusion that the best error-correcting codes are random ones. Shannon's paper was given to Martin Hellman by Peter Elias at MIT.

In fall 1974 Diffie came to visit Hellman at Stanford, on the recommendation of Alan Konheim at IBM, because Hellman and Diffie had each given a very similar talk at IBM Yorktown. They immediately started discussing the work and enjoyed working together. They wanted to start a cryptographic complexity theory, based on one-way functions as the simplest cryptographic entity. Trapdoors are present in all cryptographic entities (it is like a quiz that is very difficult, but becomes very simple with a hint), so it would be nice to have a trapdoor cipher. They were ``tumbling around in the dark while the muse of cryptography was whispering in their ears.'' Merkle at Berkeley developed the first public key distribution scheme: Merkle's puzzles. Hellman said the Diffie-Hellman system should really be called the Diffie-Hellman-Merkle system. They looked at factoring as a one-way function, but did not see how to build a public-key cryptosystem from it. They also looked at the discrete logarithm problem and came up with the Pohlig-Hellman system, which is based on it. There were times when they were really close to the RSA scheme: ``it's like we were dancing around it ... but we missed it.'' Later, they came up with the Diffie-Hellman key exchange.

Conclusion: the invention was a random walk; they could not see every golden nugget. ``Only a fool would still jump up and down the 100th time when he thinks he finds the solution even though he has failed 99 times before.'' You have to be a fool to do research (otherwise you would stop after people have told you off many times).
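
[For reference, the key exchange the talk led up to, in textbook notation; the parameter names below are the conventional ones, not necessarily those used in the talk --Eds.]

    % Textbook Diffie-Hellman key exchange; p is a large prime, g a generator
    % modulo p, and the exponents a and b are kept secret.
    \begin{align*}
      \text{Alice picks } a \text{ and sends } & A = g^{a} \bmod p, \\
      \text{Bob picks } b \text{ and sends }   & B = g^{b} \bmod p, \\
      \text{both compute }                     & K = B^{a} \equiv A^{b} \equiv g^{ab} \pmod{p}.
    \end{align*}
    % An eavesdropper sees only p, g, A, and B; recovering K is believed to
    % require solving the discrete logarithm (Diffie-Hellman) problem.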

Somebody in the audience asked about ElGamal. ElGamal was Hellman's graduate student; Hellman said he was busy with other things at the time, and ElGamal came up with the algorithm by himself. Mike Reiter asked what the next hard problems are that we could use for the next public key cryptosystems, since quantum computing might break factoring. Hellman answered that the puzzle method, and KDCs in conjunction with public key crypto, might be promising. Had any work been done by the spooks before them? Hellman said he has been told that other people discovered the same things before him, but he thinks there are two universes, the classified and the unclassified; if a piece of work was done first in the classified universe, it should just be a footnote in the unclassified world.

***Invited Talk: Trust in cyberspace? A research roadmap*** *Fred Schneider

Trust in Cyberspace is a new report from a National Research Council committee. More information about the report can be found at http://www2.nas.edu/cstbweb. National Academy Press publishes ``Trust in Cyberspace'' (ISBN 0-309-06558-5), and the book can be ordered online at www.nap.edu/bookstore/enter.cgi.

Systems should work correctly despite environmental disruption, human user and operator errors, hostile attacks, and design and implementation errors. This is a holistic and multidimensional problem. Today's systems are not trustworthy. Use of known techniques would improve the situation, but ``better'' is probably not good enough; we need new research to attack the problem. The report is a research agenda; two major information systems were investigated, the telephone network and the Internet. The public telephone network's trustworthiness is eroding: new services may involve database look-ups, and new equipment is programmable, proprietary, and involves authorization.

The Internet's trustworthiness is evolving: IPsec helps, but continuing sources of vulnerability require new research (routing protocols: stability versus reconfiguration; cooperation in the presence of mutual distrust; scaling is important).

Developing trustworthy NIS (networked information system) software is central. The relevant characteristics are substantial legacy content, agglomeration, and COTS components. We face the usual difficulties, plus two more: information about system internals is unavailable, and analysis of dependencies in large-scale systems is difficult. We need to address assurance (testing, formal verification) and deriving the properties of a whole from the properties of its parts.

New security research is needed. New policies to enforce: availability (including denial of service), integrity, and application-level security. New structures to manage: foreign code, extensible software systems, black-box components (i.e., COTS), and security enhancers (firewalls, etc.). New problems to solve: network-wide authentication and faster authentication/integrity mechanisms.

New security options include fine-grained access control and research on crypto: improving the key-management infrastructure to support revocation, scaling, recovery from key compromise, name space management, and better interfaces for users and system administrators.

Intrusion detection is a cat-and-mouse game. Security models: move vulnerabilities around to where they do the least harm.

An important research direction is how to build trustworthy systems from untrustworthy components. The issues are replication for reliability, diversity for resisting attacks, monitoring for both reliability and security, promising new algorithmic approaches such as self-stabilization and emergent behavior, and the architecture/placement of trustworthiness functionality.

We need to think in the economic and public policy context. Economic factors do play a role, since we need to manage the risk. Security risks are more difficult to identify and quantify than those that arise for reliability. Should we migrate from risk avoidance to risk management? The market isn't working: consumers prefer functionality, nobody can assess trustworthiness. We need to convey product trustworthiness, but no model will be complete.

Questions and comments from the audience:
- Intrusion detection: in addition to signature techniques, statistical models could help.
- Regulation: less and less regulation would be better, for instance having auctions for power.
- How could we create diversity? It is not economically practical.

*****Contributed Papers*****

-- Communication Complexity of Group Key Distribution *Klaus Becker, Uta Wille

A contributory group key distribution system is one that interactively establishes a group key such that (1) no user is required to hold secret information before entering the protocol, and (2) each group member makes an independent contribution to the group key.

Building on the ``gossip problem'' studied in the 1970s, the paper derives theoretical lower bounds on the total number of messages, exchanges, and rounds required by contributory group key distribution systems, with or without broadcasts. The authors then use Diffie-Hellman based schemes to demonstrate that the lower bounds of the theorems can be attained.

Somebody in the audience commented that the Diffie-Hellman based protocols are similar to divide-and-conquer parallel algorithms.
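
[The divide-and-conquer flavor can be seen in the following toy simulation of a tree-style contributory group key agreement; it only illustrates the recursive structure, with insecure toy parameters, and is not the exact protocol from the paper --Eds.]

    # Toy divide-and-conquer contributory group key agreement: each half of
    # the group first agrees on a sub-key, then the two halves run one
    # Diffie-Hellman exchange using their sub-keys as exponents.
    # Tiny parameters, no authentication -- illustration only.
    import secrets

    P = 0xFFFFFFFFFFFFFFC5   # a 64-bit prime: far too small for real use
    G = 5

    def group_key(member_secrets):
        if len(member_secrets) == 1:
            return member_secrets[0]
        mid = len(member_secrets) // 2
        k_left = group_key(member_secrets[:mid])
        k_right = group_key(member_secrets[mid:])
        # In a real run only G^k_left and G^k_right are exchanged; every member
        # can then compute the shared value G^(k_left * k_right) mod P.
        return pow(pow(G, k_left, P), k_right, P)

    members = [secrets.randbelow(P - 2) + 1 for _ in range(8)]
    print(hex(group_key(members)))   # log2(8) = 3 levels of exchanges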

-- Key Management for Encrypted Broadcast *Avishai Wool

In applications such as direct broadcast digital TV networks, transmissions need to be encrypted. A set-top terminal (STT) at the user's end stores the decryption keys and access control data and performs the decryption. The STT is connected to the telephone network and has 4-8 KBytes of memory, which severely limits the complexity of the key distribution algorithm used. The existing schemes reviewed are the bit-vector scheme and the block-by-block scheme. The author proposed two new schemes, the extended-header scheme and the V-space scheme. The extended-header scheme trades off the bandwidth allocated to header transmission against the delay a user incurs when switching channels. The V-space scheme adds no headers, but the mathematical properties of the scheme limit the possible packages; the author demonstrated how to use the natural hierarchy of the programs to get around this limitation (``It's not a bug, it's a feature!''). Some people in the audience commented that the scheme cannot work for very short-term keys and could be vulnerable to collusion attacks.

-- Authenticated Group Key Agreement and Friends *Giuseppe Ateniese, Michael Steiner and Gene Tsudik

The scheme is intended for dynamic peer groups (DPGs), which are usually small and have frequent membership changes. The paper provides a practical and secure authenticated key agreement protocol based on Diffie-Hellman key agreement. Security properties of the protocols, such as perfect forward secrecy and contributory authentication, are proved in the paper.

-- The Design, Implementation and Operation of an Email Pseudonym Server *David Mazieres and M. Frans Kaashoek

Attacks on servers that provide anonymity generally fall into two categories: attempts to expose anonymous users and attempts to silence them. nym.alias.net is a server providing untraceable email aliases and has been running for two years. Based on the experience with nym, the paper enumerates many kinds of abuse of the system and distills several principles for designing, implementing, and operating anonymous servers. Nym uses the anonymous remailer network as a mix-net: it forwards mail received for a nym to its final destination through a series of independently operated remailers, so the full path taken by a message can only be uncovered by compromising multiple remailers. To use nym, one only needs a copy of PGP. Nym is also reliable, thanks to careful software engineering and redundancy. One abuse was that somebody used nym to post child pornography, but the FBI did not shut nym down because of it. Nym has survived many different attacks, such as mail bombs and account flooding, and is still running with 2,000 to 3,000 active accounts.
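
[To illustrate the layered, one-hop-at-a-time structure that makes the path hard to uncover, here is a conceptual sketch of mix-style nested encryption. It assumes the Python ``cryptography'' package; nym itself uses PGP and the existing remailer formats, so this only illustrates the nesting idea --Eds.]

    # Each remailer peels exactly one layer and learns only the next hop, so
    # no single remailer sees both the sender and the final destination.
    # (Hop names and headers are made up for the example.)
    from cryptography.fernet import Fernet

    remailer_keys = [Fernet.generate_key() for _ in range(3)]
    message = b"mail destined for the nym owner's real mailbox"

    # Build the onion from the inside out: the last hop's layer is innermost.
    packet = message
    for hop, key in reversed(list(enumerate(remailer_keys))):
        next_stop = f"remailer {hop + 1}" if hop + 1 < len(remailer_keys) else "final mailbox"
        packet = Fernet(key).encrypt(f"forward to {next_stop}\n".encode() + packet)

    # Each remailer in turn removes one layer and forwards what remains.
    for hop, key in enumerate(remailer_keys):
        packet = Fernet(key).decrypt(packet)
        header, _, packet = packet.partition(b"\n")
        print(f"remailer {hop} sees only: {header.decode()}")

    print(packet.decode())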

-- History-Based Access Control for Mobile Code *Guy Edjlali, Anurag Acharya, Vipin Chaudhary

A history-based access-control mechanism was presented: what a program is allowed to do depends on its own identity and behavior so far, in addition to the currently used discriminators such as the location it was downloaded from or the identity of its author/provider. The paper describes the design and implementation of Deeds, a history-based access-control mechanism for Java. An access control policy could be, for example, that a program is forbidden to open a socket after opening a file, and different policies can be used for different applications such as an editor or a browser. This scheme can potentially expand the set of programs that can be executed without compromising security or ease of use. Some comments from the audience: What is the granularity of the policy, and who defines it? A policy saying that a program that has ever opened a file may never open a socket might be unrealistic. Hashing efficiency could be improved. What about covert channels? One solution is to run only one program at a time, but of course that is not a good solution.
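
[A minimal sketch of the history-based idea itself -- the decision depends on what the program has already done. This is not the Deeds API; the event names and the policy are invented for illustration --Eds.]

    # History-based access control in miniature: record security-relevant
    # events and consult the history when a new request arrives.
    class HistoryPolicy:
        def __init__(self):
            self.history = set()

        def record(self, event):
            self.history.add(event)

        def check(self, request):
            # The example policy from the talk: no network after touching a file.
            if request == "open_socket" and "open_file" in self.history:
                raise PermissionError("socket denied: program already opened a file")

    policy = HistoryPolicy()
    policy.check("open_file")
    policy.record("open_file")       # allowed, and remembered
    try:
        policy.check("open_socket")  # now denied because of the history
    except PermissionError as err:
        print(err)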

-- A Specification of Java Loading and Bytecode Verification *Allen Goldberg

The paper uses data flow analysis to verify type correctness and typing contexts to ensure global type consistency in the presence of a lazy strategy for class loading. It formalizes the JVM bytecode verifier as an instance of a generic data flow architecture. Specware is a system available from the Kestrel Institute that can generate provably correct code from specifications, and the author is using Specware to generate an implementation of the JVM bytecode verifier. The paper also gives a good overview of the Java Virtual Machine and the Java type system.

-- A New Public Key Cryptosystem Based on Higher Residues *David Naccache and Jacques Stern

A new scheme based on the hardness of computing higher residues modulo a composite RSA integer was presented. The scheme has two versions, deterministic and probabilistic. The probabilistic version has a homomorphic encryption property and an expansion rate better than previously proposed schemes. The scheme is slower than RSA but still practical, and the homomorphic property could lead to interesting applications such as watermarking. The authors offered a cash reward for decryption of at least 50% of a ciphertext; the ciphertext challenge is included in the paper.
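
[To make the homomorphic property concrete, here is the generic form such residue-based schemes share, in simplified notation of our own rather than the paper's --Eds.]

    % Additive homomorphism, in the generic simplified form shared by
    % residue-based schemes (our notation, not the paper's):
    \[
       E(m_1) \cdot E(m_2) \bmod N \;=\; E\bigl((m_1 + m_2) \bmod \sigma\bigr),
    \]
    % where N is the public modulus and \sigma the size of the message space.
    % Plaintexts can thus be combined without ever decrypting them, which is
    % what makes applications such as watermarking possible.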

-- An Efficient Non-Interactive Statistical Zero-Knowledge Proof System for Quasi-Safe Prime Products *Rosario Gennaro, Daniele Micciancio and Tal Rabin

An odd prime P = 2q + 1 is called safe if q is prime, and quasi-safe if q is an odd prime power. A number N is a quasi-safe prime product if N = PQ, where P = 2p^n + 1 and Q = 2q^m + 1 are quasi-safe primes and p and q are distinct odd primes. From the abstract of the paper: ``We present the first simple and efficient zero-knowledge proof that an alleged RSA modulus is of the correct form. ... Our proof systems achieve higher security and better efficiency than all previously known ones. ... We demonstrate the applicability of quasi-safe primes by showing how they can be effectively used in the context of RSA based undeniable signatures to enforce the use of keys of a certain format.''
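
[A small numerical illustration of these definitions, with toy numbers of our own --Eds.]

    \[
      23 = 2\cdot 11 + 1 \ (\text{11 is prime, so 23 is safe}),\qquad
      19 = 2\cdot 3^{2} + 1 \ (\text{quasi-safe, but not safe}),
    \]
    \[
      N = 19 \cdot 23 = 437 \ \text{is then a (toy-sized) quasi-safe prime product},
    \]
    % with distinct odd primes p = 3 and q = 11 in the definition above.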

-- Communication-efficient anonymous group identification *Alfredo De Santis, Giovanni Di Crescenzo and Giuseppe Persiano

The paper gives formal definitions of a group identification scheme, an anonymous group identification scheme, and a perfect zero-knowledge group identification scheme. An identification scheme based on the problem of quadratic residuosity modulo Blum integers was presented. Its communication complexity was proven to be O(m+n), where m is the size of the group and n is the security parameter, and the scheme was shown to be perfect zero-knowledge. The paper also shows that the scheme can easily be extended to efficient anonymous identification for groups of at least t users. Paul Syverson commented that the scheme does not prevent collusion, i.e., group members can very easily give the secret to their friends. He referred to previous work on unlinkable serial transactions (http://www.research.att.com/~stubblebine), which can solve this problem.

-- A Security Architecture for Computational Grids *Ian Foster, Carl Kesselman, Gene Tsudik and Steven Tuecke

``The Globus project is developing basic software infrastructure for computations that integrate geographically distributed computational and information resources. Globus concepts are being tested on a global scale by participants in the Globus Ubiquitous Supercomputing Testbed Organization (GUSTO). GUSTO currently spans over forty institutions and includes some of the largest computers in the world. Globus is a joint project of Argonne National Laboratory and the University of Southern California's Information Sciences Institute.'' See http://www.globus.org

Providing security in a global grid computing environment faces several obstacles: the user population and the resource pool are large and dynamic, and different systems have different local security policies and implementations. The talk presented the security requirements, the security policy, and finally the architecture and its implementation. The scheme does not require the use of bulk encryption, for exportability reasons. The whole system is currently up and actively used.

-- Design of a high-performance ATM firewall *Jun Xu and Mukesh Singhal

It is difficult to design an efficient packet-filtering firewall in an ATM network because of the potentially large SAR (segmentation and reassembly) overhead. Previous work, StorageTek's ATLAS, has the limitations that it does not accept IP packets with large IP option fields and that it requires manual configuration to add TCP/IP rules for new PVCs (permanent virtual connections). The paper presented a hardware design for a high-performance switch-based ATM firewall architecture. It introduces a new concept, firewalling quality of service, which achieves a nice tradeoff between performance and security. A method called LCH (last cell hostage) is used to reduce the policy cache miss latency: only the last cell is held when a policy cache miss occurs. One person in the audience asked why we need this when we have IPsec; the short answer is that they serve different purposes, e.g., a firewall can give you centralized access control.

-- A practical secure random bit generator *Elizabeth Shriver

The idea is to use the local disk in a computer to generate random bits; no additional hardware or user interaction is needed. The randomness comes from the rotation of the disk, whose speed is inherently unpredictable due to chaotic air turbulence inside the drive. The actual method of measuring the exact rotation speed turns out to be quite tricky; in short, the first and the last cluster on a track are read to determine the time for one rotation. One big problem remains: how many bits of randomness can we extract? The paper presents a theoretical argument and derives that 5 highly random bits per minute can be generated; in another, less secure mode, 577 bits per minute are generated. More information can be found at http://www.bell-labs.com/user/shriver/random.html

The following questions were asked:
- Are the source input bits independent? Could we hash the random bits to get even more randomness?
- What about other disk drives? The same method is applicable, but other disks have slower rotation speeds and would therefore create fewer random bits in the same time period.
- Pseudo-random generators pass the standard randomness tests, while true random sources sometimes fail them, so why should we trust these tests?
- The real world needs a LOT of random bits. Yes, future work should result in a faster method.
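
[The flavor of the approach -- harvesting entropy from storage timing jitter and then conditioning it -- can be sketched as follows. This portable approximation simply times repeated reads and hashes the jitter; it does not measure rotation times the way the paper does, and its output rate and quality are not those reported above --Eds.]

    # Rough sketch: time many small reads, treat the timing jitter as an
    # entropy source, and conservatively condense it into one digest rather
    # than crediting a precise number of random bits.
    import hashlib
    import os
    import time

    def timing_samples(path, n=4096, chunk=512):
        samples = []
        fd = os.open(path, os.O_RDONLY)
        try:
            for _ in range(n):
                os.lseek(fd, 0, os.SEEK_SET)
                t0 = time.perf_counter_ns()
                os.read(fd, chunk)
                samples.append(time.perf_counter_ns() - t0)
        finally:
            os.close(fd)
        return samples

    def random_block(path):
        deltas = timing_samples(path)
        data = b"".join(d.to_bytes(8, "little") for d in deltas)
        return hashlib.sha256(data).digest()

    print(random_block("/etc/hostname").hex())   # the path is just an example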

-- A probabilistic poly-time framework for protocol analysis *P.Lincoln, J.Mitchell, M.Mitchell and A.Scedrov

The paper develops a framework for analyzing security protocols in which protocol adversaries may be arbitrary probabilistic polynomial-time processes. ``In this framework, protocols are written in a form of process calculus where security may be expressed in terms of observational equivalence, a standard relation from programming language theory that involves quantifying over possible environments that might interact with the protocol. Using an asymptotic notion of probabilistic equivalence, we relate observational equivalence to polynomial-time statistical tests and discuss some example protocols to illustrate the potential of this approach.'' Most current formal protocol analyses are based on a model with two assumptions: perfect cryptography and a nondeterministic adversary. The new framework can be used to explore interactions between protocols and cryptographic primitives, and it admits attacks that the standard model does not allow. The approach builds on the spi calculus proposed by M. Abadi and A. Gordon.

-- Public-key cryptography and password protocols *Shai Halevi and Hugo Krawczyk

Password authentication schemes should resist eavesdropping, replay, man-in-the-middle, and off-line password-guessing attacks. It is shown that the security of password authentication protocols strongly depends on the choice of the public key encryption function; for example, semantic security is not sufficient. (An encryption scheme is semantically secure if, given a ciphertext c and a plaintext p, it is infeasible to determine whether or not c is an encryption of p.) The paper presented a Diffie-Hellman based protocol which provides two-way authentication, authenticated key exchange, resistance to server compromise, and user anonymity. The paper also introduced the notion of public passwords, a digest of the server's public key. Finally, the paper proved a theoretical result showing that the use of public key techniques is unavoidable in password protocols that defend against off-line guessing attacks. A person in the audience asked whether it is possible to design a scheme that authenticates the server's public key during the protocol without user interaction; the speaker said there might be a proof that this is impossible, but he was not sure.
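
[For readers unfamiliar with the threat being defended against, here is a toy demonstration of off-line password guessing against a naive challenge/response exchange that uses no public key techniques. The password, dictionary, and hash choice are made up; this is the attack the paper's protocols are designed to prevent, not a protocol from the paper --Eds.]

    # Naive challenge/response: the response is a hash of password and
    # challenge. An eavesdropper who records one exchange can test password
    # guesses off line, at full speed, without talking to the server.
    import hashlib
    import secrets

    def response(password, challenge):
        return hashlib.sha256(password.encode() + challenge).hexdigest()

    challenge = secrets.token_bytes(16)
    observed = response("tigger2", challenge)        # what the attacker records

    dictionary = ["password", "letmein", "tigger2", "hunter2"]
    for guess in dictionary:
        if response(guess, challenge) == observed:
            print("recovered password:", guess)
            break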

-- Cryptanalysis of Microsoft's Point-to-Point Tunneling Protocol (PPTP) *Bruce Schneier and Mudge

The Point-to-Point Tunneling Protocol (PPTP) is used to secure PPP connections over TCP/IP links. The paper analyzed the NT implementation of PPTP and showed how to break both the challenge/response authentication protocol (MS-CHAP) and the RC4-based encryption protocol (MPPE). The speaker said that being ``buzzword compliant'' is not enough: you have to implement protocols properly. One weakness of the implementation is its use of the LAN Manager hash function, which makes dictionary attacks very easy; many other weak points are shown in the paper. The speaker also pointed out that peer review is necessary for good design and implementation, and Peter Neumann commented that proprietary software suffers from the lack of peer review. A researcher from Microsoft commented that Microsoft is designing new protocols, e.g., the EAP-TLS Internet draft (?).
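
[A back-of-the-envelope look at why the LAN Manager hash is so weak: the hash uppercases the password and splits it into two 7-character halves that are processed independently, so an attacker can search each half separately. The character-set size below is approximate --Eds.]

    % With roughly 69 usable characters after uppercasing, each independent
    % 7-character half admits only
    \[
       69^{7} \approx 7.4 \times 10^{12} \approx 2^{43}
    \]
    % candidates, instead of the $69^{14} \approx 2^{86}$ a full 14-character
    % search would require. A dictionary or brute-force attack therefore
    % scales with $2 \cdot 69^{7}$, not $69^{14}$.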

-- How to Prove Where You Are: Tracking the Location of Customer Equipment *Eran Gabber and Avishai Wool

The speaker pointed out that it is important to monitor the location of customer equipment in the direct broadcast satellite (DBS) industry, because service providers would like to prevent unauthorized movement of a customer's set-top terminal (STT) from a home to a public venue, or across an international border, due to various financial, copyright, and political issues. The paper presented one existing scheme and three new schemes for detecting the movement of an STT using existing (or emerging) communication infrastructure. One scheme uses ANI and CND (caller ID), the second uses GPS, the third uses the cellular phone enhanced 911 (E911) service, and the fourth uses the time difference of arrival of the satellite's broadcast. The paper compared the accuracy, additional features, cost, and vulnerabilities of the four schemes; all four suffer from different attacks, and the problem is still open. Susan Landau pointed out that even though E911 is an emerging infrastructure, it does not follow that it can be used for this purpose; there could be policy restrictions on using E911 due to privacy and other concerns. Doug Tygar pointed out that the fourth scheme suffers from exactly the same attack as the existing CND-based scheme, because the signals can be relayed. Another person noted that STTs are also used in RVs; the speaker answered that there will be an additional policy for STTs in RVs. Yet another person pointed out an easy attack on the fourth scheme: simply changing the altitude of the STT. The speaker agreed.

-- Temporal Sequence Learning and Data Reduction for Anomaly Detection *Terran Lane and Carla E. Brodley

The anomaly detection problem can be formulated as one of learning to characterize the behaviors of an individual, system, or network in terms of temporal sequences of discrete data. The paper uses an approach based on instance-based learning (IBL) techniques. One difficulty is that the underlying data is discrete and unordered, which makes existing techniques such as spectral analysis inapplicable. The paper presented a new greedy clustering technique for data reduction. The scheme was tested on data from 8 people collected over 6 months. A person in the audience commented that large systems have a huge number of users, so much better accuracy and a much lower false identification rate are needed. Another question was whether we can take advantage of shell semantics; the answer was yes, there are many possibilities for gathering more information.
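
[To illustrate the instance-based flavor of the approach, here is a generic sketch of matching new command sequences against stored instances of a user's past behavior. The similarity measure and score are invented for the example; the paper defines its own measure and adds a greedy clustering step for data reduction --Eds.]

    # Instance-based anomaly scoring over command sequences: compare each
    # window of the observed session against the most similar stored window
    # from the user's profile. Invented measure, illustration only.
    def windows(commands, length=5):
        return [tuple(commands[i:i + length])
                for i in range(len(commands) - length + 1)]

    def similarity(a, b):
        return sum(x == y for x, y in zip(a, b))      # positional matches, 0..length

    def anomaly_score(profile, observed, length=5):
        stored = windows(profile, length)
        scores = [max(similarity(w, s) for s in stored) for w in windows(observed, length)]
        return 1.0 - (sum(scores) / len(scores)) / length   # 0 = familiar, 1 = unseen

    user_profile = "ls cd ls vi make ls cd vi make ls".split()
    new_session  = "ls cd vi make ls cd ls vi make ls".split()
    intrusion    = "cat passwd nc chmod wget nc chmod cat nc wget".split()
    print(anomaly_score(user_profile, new_session))   # low: looks like the user
    print(anomaly_score(user_profile, intrusion))     # high: unlike past behavior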