Report on the Internet Society 1997 Symposium on Network and Distributed System Security, San Diego, CA, February 10-11, 1997


by Richard Graveman, Bellcore
(rfg@ctt.bellcore.com)

This fifth annual symposium (the first, in 1993, was sponsored by the PSRG) again brought together researchers, implementors, and users of network and distributed system security technologies on February 10 and 11, 1997, in San Diego. All who participated owe a debt of gratitude to Dave Balenson, whose efforts as General Chair ensured the continuing success of this event.

NDSS can be characterized by a combination of application and infrastructure topics, a focus on current Internet security issues, emphasis on practical and implemented solutions, a mix of technical papers and panel presentations, ample time for discussion of each paper presented, and an entertaining and thought-provoking after-dinner speech at the banquet. NDSS '97 carried on all of these traditions in wonderful style. Major topic areas included Internet infrastructure and routing security, security for the World Wide Web, Java and ActiveX security, cryptographic protocols, public key management, and protection of privacy.

Dave Balenson welcomed everyone and thanked all who contributed: Donald Heath and Martin Burack from ISOC; Cliff Neuman and Matt Bishop, the Program Chairs; Steve Welke, Publications; Tom Hutton, local arrangements; and Torryn Brazell and others, registration. Cliff Neuman said 13 of 47 papers were accepted. Four panels were added. He thanked the Program Committee for many hours' work, as well as all who submitted papers or suggestions.

In addition to the paper Proceedings, Steve Welke put together a CD-ROM with the 1997 and 1996 papers and other useful information. For copies of the Proceedings, contact IEEE Computer Society Press, 10662 Los Vaqueros Circle, P.O. Box 3014, Los Alamitos, CA 90720-1314 USA, telephone +1 714 821 8380, fax +1 714 821 4641, or cs.books@computer.org. Also, speakers' slides will be available at http://www.isoc.org/conferences/ndss97/. This report devotes most of its space to what's not on the CD or in the Proceedings, that is, what was said at the conference.

In keeping with tradition, Steve Kent chaired the first session under the motto "Keeping America's bits safe for democracy."

Nick Ogurtsov presented the first paper, "Experimental Results of Covert Channel Elimination in One-Way Communication Systems," based on research at the University of Arizona over the last year and a half. The Bell-LaPadula security policy only allows data flow from Low to High. Solutions like physical isolation do not even allow desirable Low to High flow, and a one-way fiber link offers no reliability. But even if only ACKs flow from High to Low, covert channels are possible. Solutions running over TCP/IP have included the Store and Forward Protocol (SAFP), the Pump, and the Upwards Channel. SAFP uses a large, trusted buffer, but the buffer can fill up, recreating the covert channel. The Pump also uses a buffer, but hides acknowledgment rates from the High side with a moving average over past rates. It is being implemented in real systems, but is hard to analyze information theoretically. The Upwards Channel uses blind write-up with a buffer, but this is bounded in either data rate precision or reliability. They implemented all of these plus their new system, called the Quantized Pump.

The basic idea is to limit the downward channel information theoretically. The gateway has a buffer manipulated by Low and High trusted processes. The High trusted process can signal the Low one with at most L bits per second. The bits sent down tell the Low side to raise or lower its data rate by some factor. The throughput equals SAFP's, and the buffer size can be shown to grow quadratically in L. If raising means the same, but each consecutive lowering is by twice the previous amount, the buffer grows as L log L. By lowering to zero, growth is linear, but throughput is only 45% of SAFP's. In summary, the Quantized Pump is easy to configure and analyze, has a provable bound on the covert channel, and performs comparably to previous methods.
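The rate-control loop just described can be sketched in a few lines. This is a toy simulation, not the authors' implementation; the adjustment factor and fixed-factor lowering are simplifying assumptions (one of the paper's variants doubles each consecutive decrease).

```python
# Toy simulation of the Quantized Pump's downward control channel.
# The High trusted process sends at most L one-bit signals per second;
# each bit tells the Low side to raise or lower its data rate by a
# fixed factor. The factor and function names are illustrative.

FACTOR = 1.25  # hypothetical rate-adjustment factor

def adjust_rate(rate, control_bit, factor=FACTOR):
    """Apply one downward control bit: 1 = raise, 0 = lower."""
    return rate * factor if control_bit else rate / factor

def simulate(initial_rate, control_bits):
    """Replay control bits; leakage is bounded by len(control_bits) per second."""
    rate = initial_rate
    for bit in control_bits:
        rate = adjust_rate(rate, bit)
    return rate
```

Because the Low side learns nothing except these L bits per second, the covert channel's capacity is bounded by L no matter what the High side does.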

Steve Kent asked about "mass storage" transfer from Low to High: high latency, but good in other respects. This is the "SAFP infinite buffer size" case. Dan Nessett asked about non-military applications involving leakage of extremely sensitive data.

In the second paper, David Martin talked about work he did at Bellcore under the title "Blocking Java Applets at the Firewall." With applets, an outsider can assume the identity of an insider, but applets are supposed to be restricted: they cannot read or write the disk or use the network indiscriminately. They may, however, break the security and be able to run unrestricted Java code or native code. An applet may also be able to get the firewall to help it. The main attack in the paper tricks the firewall by choosing low ftp data channel port numbers. Another example uses html. The rule is that an applet can only open outgoing connections to the server that delivered it. The inside system GETs an applet through its proxy machine with an extra loopback connection at the proxy server. This tricks the inside system about where the applet came from.

Blocking applets can be done by stripping out "applet" tags or detecting the Java class file signature 0xCAFEBABE, but this requires parsing the stream. Applets may, however, be in zipped archives, for instance, so all plug-ins must be simulated at the firewall. Long term, this is a losing battle. General solutions require changes at the workstation: better sandboxes and digital signatures.
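The signature check can be sketched as follows (illustrative only; the function names are mine, and as the talk noted, archives and encrypted streams defeat naive scanning).

```python
# Sketch of class-file detection at a filtering proxy. Every Java
# class file begins with the 4-byte magic number 0xCAFEBABE, so a
# firewall can refuse any response body that starts with it.
# Illustrative only; a real filter must also parse html for applet tags.

JAVA_CLASS_MAGIC = b"\xca\xfe\xba\xbe"

def looks_like_class_file(body: bytes) -> bool:
    """True if the response body starts with the Java class magic."""
    return body[:4] == JAVA_CLASS_MAGIC

def contains_class_magic(stream: bytes) -> bool:
    """Cruder scan for the magic anywhere, e.g. inside multipart bodies."""
    return JAVA_CLASS_MAGIC in stream
```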

Questions came up about products. Applets have appeared in Usenet News and also in e-mail read by Netscape. Scanners that are being implemented have to "unpack the world," and even then cannot cope with end-to-end encryption. In addition, JavaScript and ActiveX deliver the executable in html, so the html needs to be parsed, and looking for 0xCAFEBABE does not work.

Abdelaziz Mounji gave the third talk of the session titled "Continuous Assessment of a Unix Configuration: Integrating Intrusion Detection & Configuration Analysis." Configuration analysis uses predicate logic to check for known vulnerabilities in the file system. Changes can be tracked continuously. Intrusion detection uses analysis of the audit trail with a rule based language RUSSEL. The main work here is to integrate the two approaches, so, for example, an intruder cannot successfully open and close a hole quickly. Detection rules can vary with the results of the configuration analysis and trigger audit trail analyses automatically. An example showed how the two communicate with each other. Performance measurements showed that real-time configuration analysis is practical and that the two systems are useful in combination. A dynamically adaptive system was built, and several extensions are planned. See http://www.info.fundp.ac.be/~cri/DOCS/asax.html.

One question asked, why not just fix the problem? This is difficult and possibly dangerous. Peter Neumann pointed out the need to address not just outside intrusion but also insider misuse.

Session 2 was a panel chaired by Aviel Rubin: Security of Downloadable Executable Content, Past, Present, and Future. Avi noted that Java, JavaScript, and ActiveX are configured into the Netscape and IE browsers, and people use the same machine to keep personal finances or corporate secrets and surf the Web. He then introduced the panelists in turn.

Ed Felten at Princeton has found flaws in Java and other systems. He said that executable content meets a user need; it is dynamic and interesting. Java accepts code from anywhere and tries to run it safely, but it may let hostile code break the sandbox. ActiveX, on the other hand, puts the burden on the user and risks trusting too many programs. They have found several problems, often attributed to complexity and product development pressures. Java security depends on safe typing. Type safety flaws and implementation errors have been fixed, but even though security is improved, no protection against denial of service exists. Type safety depends on language semantics. Gray areas in the definition or API are potential vulnerabilities that strain the limits of formal methods. Dynamic linking is one such problem area. On the horizon, JDK 1.1 will have new functionality that brings along new security concerns: remote invocation and persistent objects; garbage collection and finalization; flexible security mechanisms including digital signatures; complexity of JIT compilation; and general new release bugs.

Li Gong is Java Security Architect at JavaSoft. Java is 500 days old and has 45 million potential users. It is like X11 and C++ rolled into one. Security requires access control on critical resources (files, network connections, and windows). It also helps construct secure applications. The four cornerstones are (1) language safety (type safety, bytecode verifier, classloader); (2) system security infrastructure (protection domain, access control, authorization, delegation, policy); (3) crypto APIs (SHA-1, DES, 3-DES, MD5); and (4) network and Web security protocols (authentication, SSL, SKIP).

Next, Jim Roskind from Netscape explained that Java security is class based, which makes it private, protected, and isolated. It has no direct memory manipulation and no "evil" casting. Still, Java can "lure" one into calling it, and security problems can occur: system.out is not final, so it could be changed; the code base is large (font attack); type name confusion can occur (load it twice and get casting). There are also DNS or other infrastructure problems or implementation flaws. The traditional Java Security Manager is centralized and has a non-extensible base class. The security semantics are separated physically from coding semantics. But class granularity privileging is both too broad and binary. Too many classes become privileged, and the TCB becomes gigantic. Netscape 3.0x has three states rather than two, a reduced TCB, and performance validation. It also has CallerDepth added to check*() calls (semantics exposed to callers), and other "last resort checks." Class signing will establish identity of authors. Release 4.0 will have additional features.

Peter Neumann from SRI said that the past was concerned with hardware and software capability models. Today the hardware and software do not support these items well. The vulnerability list is huge: OS, telecommunications infrastructure, browsers, and people. What must we be able to trust? Everything may be on the trusted path. A digitally signed Trojan Horse is still a Trojan Horse. The hardware and the network do not offer adequate support for security. Different security policies often cannot be combined. The future will have to take much more of a systems view of networking. The tradeoff between detection and prevention must be balanced. Authentication and accountability will be helped with signatures, but there are no easy answers, even with cryptography. Digital signing is not the whole solution. Firewalls are semi-permeable; mechanisms may interact badly.

Several discussion points followed. Peter Neumann said that formal methods help by forcing one to be precise. Hardware encapsulation is no cure all, if the hardware has potential faults. Building trustworthy systems out of untrusted components is a major challenge. Li Gong and Jim Roskind pointed out that Java security is becoming more comprehensive, and plug-ins can also have vulnerabilities. The need for downloadable executables was questioned. The big gain is distributed computation, e.g., time managers. Jeff Schiller noted that signed code can also be abused. Look at the Chaos Computer Club's Quicken attack. Whose fault is it? Steve Kent predicted that sandbox constraints will give way to need to get to the file system. Then an access control system will be needed. But people have been unable to use fine grained access control effectively. An indicator on the browser as to whether Java is enabled was suggested. User interfaces, however, are vulnerable to spoofing, so decorations like the key do not buy that much. Peter Neumann recalled how the TCB turned out to be much bigger than the kernel: printer drivers, etc. All of this reoccurs with Java. Is risk inherent? How do we use our experience?

Session 3 on Protocol Implementation and Analysis was chaired by Christoph Schuba, who remarked that the emphasis is more on analysis than implementation. The first two papers aim at prevention, the third at something that did go wrong.

Stephen H. Brackin of Arca Systems (brackin@arca.com) described "An Interface Specification Language for Automatically Analyzing Cryptographic Protocols." Using an example of a cryptographic protocol failure, he described an analytic tool implemented in a commercial product. Even in a hostile environment, authentication and confidentiality can be achieved with protocols using cryptographic primitives and shared or confirmable secrets. Protocol failure, however, is a weak link in network security. Tatebayashi, Matsuzaki, and Newman published a protocol at Crypto '89 in which 13 errors have since been found. Belief logics formalize reasoning about authentication. His analytic tool takes protocol descriptions in an Interface Specification Language (ISL) and uses a logic derived from Gong-Needham-Yahalom. It can express sending, receiving, belief, freshness, conveyance, shared secrets, possession, recognizability, trustworthiness, not-from-here checks, message extensions, and feasibility constraints. Subgoals are proved step by step, and the unproved subgoals point to potential problems that, in fact, show where messages are not authenticated. Running this tool on commercial protocols also led to improved documentation. He concluded that analysis is worthwhile, even though it will not find all problems.

Dan Nessett asked what expertise is needed to use these tools. Steve Kent asked about denial of service attacks. Steve Bellovin asked what could not be detected, e.g. non-disclosure violations.

Steve Bellovin then presented "Probable Plaintext Cryptanalysis of the IP Security Protocols." Since secret initialization vectors thwart known plaintext attacks against the first block, he speculated about a rumor that systems with secret IVs cannot be exported. Probable message attacks were common, for example in Enigma. Rotor settings were attacked with a two block probable plaintext approach. DES is a strong cipher, but the key size is too small. DES cracking, however, assumes known plaintext, so he examined sources of known plaintext in the protocol headers. IP-ESP-TCP, IP-ESP-UDP, or IP-ESP-IP-TCP are all possible IPSec structures. The header also has a replay detection counter. A single packet attack can guess many bits and feed them to the search engine. Starting the replay counter at zero or one gives away 30-31 bits of probable plaintext (this has since been changed). Version/header and TOS/precedence yield 15-16 bits. Packet length (16 bits) may be given away with TCP ACKs, and 556 is also a common value. Source and destination addresses may yield another 64 bits. By cracking two packets from the same stream, the cost doubles, but the attack is much stronger. Within the same connection, port numbers will match, sequence numbers change slowly, and flags/window/urgent yield 48 more likely bits. Even the replay counter will not change much. So single IP packet cracking has 54-58 bits to go by, double has 127; if it's TCP, the numbers are 88 and 124; for UDP, 28 and 48. Traffic analysis can identify TCP open; packet lengths can reveal port numbers. Timings also give protocol clues (e.g., telnet or multiple flurries of downloaded Web images). The possible defenses are (1) avoiding host to host tunnel mode; (2) using secret internal addresses; (3) using host pair or firewall pair, rather than per connection, keying. As in VJ compression, drop port numbers; or just DON'T USE DES.

Jeff Schiller asked whether one can get the IP stack brand with this approach. Steve Kent asked about attacks on the message text even if the headers are done carefully, the advice not to start replay counters at zero, and whether bad implementations will proliferate. He speculated that header compression may not be worth it. Matt Blaze reiterated that using longer keys is the better answer.

Bryn Dole's paper titled "Misplaced Trust: Kerberos Version 4 Session Keys" reported on work done at COAST. The problem was in the random number generator; keys had only 20 bits of entropy. A SPARC 5 broke them in 25 seconds, but a library of cribs could break them in microseconds. Three things went wrong. The challenge of RNGs was underestimated; the repaired RNG was in the code but never got called; a code review failed to detect that the old RNG was still in use. This was obscured in a #define in a header file. An operational breakdown in process had occurred. The owner of the code could not get reviews done; multiple code trees existed; no regression testing was done. Software trust is complicated: old, mature, open systems; public source code; secure protocols and standards; design by smart people. Kerberos had all of these! Reverse engineering has shown the futility of security by obscurity, while openness allows public scrutiny. But there is no guarantee. Experts may not look. Old software may have bugs. New features, fixes, and maintenance may introduce bugs, and some are never found. Even provably secure protocols must be implemented correctly and used for the designed purpose. Algorithms and protocols are both important. In summary, the importance of random numbers should not be underestimated, and OS or hardware support is desirable. Open design is good but not a guarantee.
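The scale of the weakness can be illustrated with a sketch. The key derivation below is a hypothetical stand-in, not Kerberos v4's actual generator; only the 20-bit search space is taken from the talk.

```python
# Why 20 bits of entropy is fatal: if session keys are derived from a
# seed space of only 2**20 values, an attacker who can recognize the
# right key simply enumerates every seed. The derivation here is a
# stand-in for illustration, not the real Kerberos v4 code.

import hashlib

def key_from_seed(seed: int) -> bytes:
    """Stand-in key derivation: hash the seed to an 8-byte 'DES key'."""
    return hashlib.md5(seed.to_bytes(4, "big")).digest()[:8]

def brute_force(target_key: bytes) -> int:
    """Try all 2**20 seeds; about a million derivations at most."""
    for seed in range(1 << 20):
        if key_from_seed(seed) == target_key:
            return seed
    raise ValueError("key not generated from the 20-bit seed space")
```

With a precomputed library of cribs mapping keys back to seeds, as mentioned in the talk, the search collapses to a table lookup.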

Jeff Schiller pointed out that Kerberos was easy to fix because all random numbers are generated on the security servers. One client based system always picked an all zero key. Commercial software cannot be examined, which has implications for black box testing. Steve Kent responded that closed designs can also be done competently, and that publishing does not guarantee expert review. Others asked how to evaluate commercial software, where the internal quality control processes are not visible. Finally, Matt Blaze pointed out a typo in Section 2.2 of the paper.

Session 4 was a panel chaired by Russ Mundy and titled Security of the Internet Infrastructure. The Internet infrastructure is an interaction of pieces; software from many places; standards built from multiple implementations; IP, routing, name service, and network management. Protocols must support security, software must implement the protocols, and policies and correct usage are also required. What belongs to infrastructure? OSPF, BGP, DNS, ARP, SNMP, IP, and DHCP, yes, but probably not SMTP or telnet. Emerging infrastructure includes IPSec/ISAKMP/Oakley, DNSSEC, and SNMP-NG. But standardization and implementation do not imply use (e.g., MOSS).

SNMP-NG is a long story. SNMP is approaching 10 years old with no security. An advisory team was built from two competing camps (v2* and USEC). The approach is now more modular. Documents will describe the modularity; implementations may or may not follow it. Authentication and privacy, along with timeliness, are the most important issues. The standard working group will be re-chartered, and the documents will be revised prior to the April IETF.

Paul Lambert of Oracle stated that security must address messages, names, routing, time, and system management; it must provide confidentiality, integrity, authentication, non-repudiation, and access control. Key management, PKI, and trust management are needed. IPSec, which leaves outer packet headers unencrypted and allows encryption at hosts, routers, or firewalls, uses key management to create security associations and protect IP datagrams. Before this, only link encryption was available. This is a major advance that has a long history; PLI, IPLI, Blacker, Caneware, and NES were precursors. SDNS was published by NIST as SP3, SP4, and KMP. ISO specified NLSP in the 1990s. Today, ISAKMP, Oakley, ISAKMP-Oakley Resolution, In-line Keying, and the Internet DOI make up the mainstream standards work. He also mentioned SKIP and Photuris, as well as S/Wan and John Gilmore's S/Wan-Linux.

W3C is looking at the semantics of signatures. ActiveX has no policy: signatures are binary. W3C has a metalanguage PICS to describe labels or assertions. Trust management and assertions can help support manageable security. IPSec is a good base; there are too many protocol specific mechanisms.

Olafur Gudmundsson discussed DNSSEC. DNS is strictly hierarchical and loosely synchronized. It relies on caching. Threats are incorrect configuration, data insertion, fake nameservers, stale data, and incorrect TTL behavior in servers. DNSSEC provides cryptographic bindings with SIG and KEY resource records (RR), which add signatures and provide public keys in the process. Today, it secures nameserver to nameserver, but not nameserver to resolver interfaces. A chain of keys is verified. The NXT RR allows one to deny existence authoritatively; zone security depends on the parents' being secure. The NS and A records are called "glue," because they are hints, not authoritative. The .com domain signs its own A and NS and is authoritative. The root signs .com's KEY, however. A total of 754,789 names in .com were signed in 38 hours, a major undertaking. An exportable implementation exists, and the root will be signed. For the time being, unsecured parents may exist, and last hop (resolvers) may trail behind. Dynamic update has been incorporated using an on-line key.
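The chain of keys can be illustrated with a toy verifier. The "signatures" below are symmetric stand-ins so the sketch stays self-contained; real DNSSEC SIG records carry public-key signatures, and the zone names and keys here are hypothetical.

```python
# Toy walk down the DNSSEC trust chain described above: each parent
# zone signs its child's KEY record, and a resolver that trusts the
# root key can verify down to any signed zone. HMAC stands in for a
# real public-key signature to keep the sketch runnable on its own.

import hashlib
import hmac

KEYS = {".": b"root-key", "com.": b"com-key", "example.com.": b"example-key"}

# Parent-signed KEY records, e.g. the root signs .com's KEY.
SIGS = {
    "com.": hmac.new(KEYS["."], KEYS["com."], hashlib.sha1).digest(),
    "example.com.": hmac.new(KEYS["com."], KEYS["example.com."],
                             hashlib.sha1).digest(),
}

def verify_chain(path):
    """Verify each zone's KEY against its parent, starting at the root."""
    parent = "."
    for zone in path:
        expected = hmac.new(KEYS[parent], KEYS[zone], hashlib.sha1).digest()
        if not hmac.compare_digest(SIGS[zone], expected):
            return False
        parent = zone
    return True
```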

Routing usually comes up with no pre-configured information. OSPF link state routing can be secured with signatures. Distance vector protocols can be [partially] secured with mechanisms that secure the messages. DHCP currently has no security and is implemented on low-end machines. The standard security mechanisms have been proposed, but it is difficult to secure something that does not know its own name. Some of these methods are "all or nothing." Legacy systems are becoming less of an issue. Solutions are proposed as needed; they need to be standardized and deployed.

Steve Bellovin said he dialed in on a cell phone and used PPP encryption controls, IPSec to the firewall, and SSH to his host. He read some PGP mail and browsed with SSL. We have a lot of security mechanisms: transmission protection, origin protection. An architecture is needed. MobileIP, IPSec, and firewalls interact. DHCP makes naming the endpoints problematical. SSL does not know much about the Web. Higher layer protocols? SET, MOSS, PGP, MSP, S/MIME, PEM, etc. Where do keys come from, and why do none of them work together? Can Oakley/ISAKMP be used for anything else but IPSec? The Internet is basically bottom up. PGP is simple and installable. SSH and PGP can be installed oneself, without a system administrator. Then there is the international issue. SSH comes from Finland. And crypto does not solve all of the problems; maybe half of them. Bugs in code are most of the rest. Half have been in sendmail. Dan Farmer says he can get into 75% of the machines he looks at by exploiting simple problems like sendmail and a wuftpd race condition.

Dan Nessett asked about address resolution. It's primarily a LAN issue. Steve Bellovin said that the routing infrastructure is totally fragile and that attacks on it could break the Internet badly today. The general quality of ISPs is down with their increase in numbers. One can buy an "ISP in a box." More filtering is needed by the larger ones. Steve Kent asked about interdomain routing as well. Jeff Schiller said the PKI challenge was counter to the Internet culture, according to which some root servers are run by volunteers. PKI showed up with the lawyers. VeriSign says the end user is supposed to use a trusted system (like Windows 95?). PKI breeds monopoly behavior, and the Internet snubs this behavior.

The banquet speaker was Jeff Schiller of MIT, also IETF Security Area Director; he titled his talk "Encryption Key Recovery Considered Harmful." After a few introductory remarks about life at MIT and as an IETF Area Director, he got into the main topic. In 1991, a paragraph was put in several bills that it was the "sense of Congress" that keys should be turned over to the government when appropriate. This proposal showed up about five times without making any progress. Then Clipper arrived in April 1993. It set the tone for where the government was going. (Earlier, in 1987, the NSA-proposed CCEP had been rejected by industry.) Now it's 1997, and several window dressings of Clipper have also come and gone. The latest is key recovery, whereby the appeal to business is, "What if employees lose your keys?" It is an appeal to synergy, but it does not hold up under closer scrutiny. Why do we want to encrypt data? Most data are not too sensitive. Laptops, agreed, have low vapor pressure, so encrypting their hard disks is a good idea. But usually the tradeoff is "would I rather lose this or would I rather have it compromised?" Consider, for instance, the spy manual that says, "The following methods are probably illegal, so check first."

Communications security never needs to be recovered. The government wants these keys, at our expense and at our increased risk. What does this mean for individuals, as opposed to business? Key recovery centers may make a best effort, but what is their real liability? How long are keys escrowed? Forever? Key recovery centers are targets, like the CIA Web site. On paper these are details; in the real world of bits, it is different. TIS proposes a data recovery center. But the first court order may allow law enforcement to seize the entire box, not just one key.

Perfect forward secrecy is different again. The keys are immediately destroyed after use. In 1982, Al Bates of the FBI was asked, doesn't the Mafia know you are listening in? Sort of. Occasionally they would overhear a conversation about a drug deal, and all the agents who showed up would be killed. Bank vaults have timers that keep them from opening before 9 AM as a defense against kidnapping. What about root keys? What is their protection against kidnapping? Could the person in charge subvert key recovery? Yes. Should new certificates only be signed between 9 AM and 5 PM? No one has said. Note that the key recovery center is an extremely weak link.

In the new rules on the export of key recovery systems, the U.S. government wants to have control over the individuals who will run the recovery center. Foreign governments cannot accept this easily. So this leads to bilateral agreements and dual access. Our own government may not be out to get us, but what is the least common denominator across all of these bilateral agreements? It seems as if one may as well not encrypt. The word "balance" is another funny term. "We must balance the needs of ... ." Balance is not part of venetian blind design. We take it for granted today that our thoughts are private. What about 100 years from now? If someone invents a mind reading device, we would likely have laws against using it. Only the police can use it, and only with a court order. Then someone else may invent a helmet that shuts out the device, and law enforcement screams for "balance." We do not need "balance." The same argument can be made for torture.

Key recovery is not about business. It is about government access. The word "escrow" used to be a good word. This is a ruse to confuse the public; it is a crock; it is wrong. And for the companies that have signed up to make some money from this, shame on you!

Session 5 on Routing Security was chaired by Hilarie Orman, who asked: The experts who stood on the shoulders of giants and designed the protocols that guide packets through the Internet surely would not have forgotten to put security in the protocol, would they have?

Karen Sirois presented work done at BBN titled "Securing the Nimrod Routing Architecture." The focus is on protecting against degradation or denial of service. Security requirements, especially availability, were derived from the architecture and potential attacks. The attacks involved modification, rearrangement, replay, delay, or introduction of new messages, as well as taking control of a point in the network. Nimrod is DARPA funded and in the IETF standards process. It uses service specific information and is highly scalable. Nodes and endpoints have attributes; they form a distributed database and are clustered hierarchically; the active elements in a node are agents (e.g., endpoint representatives, forwarding, routing). Nodes produce link state maps locally and use these to generate routes. Data origin authentication and connectionless integrity are the primary requirements. Access control and a weak form of non-repudiation are also needed. Confidentiality is secondary. IPSec ESP with anti-replay in tunnel mode and optional encryption was selected, since it is more efficient and inclusive than AH. Another protocol is used for the shared secrets. Digital signatures were used end to end to provide non-repudiation and access control for multi-point broadcast. RSA, SHA-1, and X.509v3 certificates were used, with DNS for SubjectAlternateNames. Timestamps are used as an anti-replay mechanism for updates and query responses, and hash values within a window are saved. Access control is identity based; caching of specific messages supports weak non-repudiation. Byzantine attacks still pose hard problems, and implementation flaws are a potential vulnerability.

Questions covered clock reliability, formal analysis, certificate management, and management traffic.

Next, Brad Smith presented work on "Securing Distance-Vector Routing Protocols," done at UC Santa Cruz. Messages contain one or more updates with a destination and distance. Routers maintain a shortest path tree. Changes cause the tree to be recomputed, and then new routes are passed to neighbors. Updates can be fabricated; unauthorized nodes can participate; nodes can masquerade or hijack sessions; links can be subverted by an intruder; software in routers can be modified. These attacks can result in "black hole routes" (denial of service, when a zero cost route gets advertised and the network implodes), reconfiguring the logical topology (inaccurate accounting and disclosure of traffic), and routing traffic snooping that discloses path information. The model assumes a PKI exists for routers, information from routers can only be trusted when from direct neighbors, and communications depends only on IP. The countermeasures are message protection and update protection. Message protection works as in RIP v2: a message sequence number plus AH-like security. Update protection uses digital signatures by the originating router, update sequence information, and predecessor network analysis to protect the distance field. Timestamps versus sequence number tradeoffs were considered. Each message has a 128 bit keyed hash and 32 bit sequence number. Each update has these plus a 64 bit predecessor plus 32 bit originating address. Computing time is an additional cost. Subverted routers can still fabricate incident links and delete updates, and any node can snoop routing information. (BGP uses TCP links that can be encrypted, but others use broadcast.) In conclusion, protection from outsiders is relatively straightforward; protection from subverted routers requires sequence and predecessor information as well. The solutions can be applied to many protocols: IDRG, BGP, RIP, RIP-2.
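The message-protection layer can be sketched as follows. The field sizes (a 128-bit keyed hash and 32-bit sequence number) follow the talk's description; HMAC-MD5 and the function names are my assumptions, not the paper's exact construction.

```python
# Sketch of per-message protection for a distance-vector protocol:
# each message carries a 32-bit sequence number and a 128-bit keyed
# hash computed under a key shared with the neighbor. HMAC-MD5 is an
# assumed stand-in for the keyed hash.

import hashlib
import hmac
import struct

def protect(key: bytes, seq: int, body: bytes) -> bytes:
    """Prepend a sequence number and keyed hash to a routing message."""
    header = struct.pack("!I", seq)                           # 32-bit sequence
    tag = hmac.new(key, header + body, hashlib.md5).digest()  # 128-bit tag
    return header + tag + body

def verify(key: bytes, message: bytes, last_seq: int):
    """Check freshness and integrity; return (seq, body) or raise."""
    header, tag, body = message[:4], message[4:20], message[20:]
    seq = struct.unpack("!I", header)[0]
    if seq <= last_seq:
        raise ValueError("replayed or out-of-order message")
    expected = hmac.new(key, header + body, hashlib.md5).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("keyed hash mismatch")
    return seq, body
```

This stops outsiders from fabricating or replaying messages; as the talk noted, it does nothing against a subverted router that holds the key, which is why updates also carry originator signatures and predecessor information.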

Questions addressed key management and the use of the term digital signature for keyed hash message authenticators.

Gene Tsudik presented work done at IBM Zurich titled "Reducing the Cost of Security in Link-State Routing." This is a more abstract approach to Dijkstra's shortest path link state algorithms; it does not look at Ford-Fulkerson distance vector algorithms. Confidentiality is not a requirement, but origin authentication, non-repudiation, integrity, timeliness, and ordering are. The solutions employ PK based digital signatures, one-way hash functions, and hash chain constructs (Lamport). Hash chains work as follows: Alice generates a secret, hashes it n times, and gives the result to Bob. Then she releases the pre-images in reverse order. In many cases, link state updates do not change much. So, they propose an anchored link state update when the hash chain is depleted or the information changes. Otherwise, just the next hash sequence value is released. The next step is to observe that, in general, links are either up or down. So, each node generates N x K x S hashes, where N is the chain length, K the number of links, and S the number of states, typically two. This handles more frequent state changes. Each node can report on all incident links at once at the cost of one hash per link reported on. Hash functions are all that is required, along with good randomness and loose clock synchronization. "Continuous" link state functions and frequent variations may make this approach impractical. Murphy and Badger at NDSS '96 and Perlman's thesis were used as sources.
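The hash chain construct just described can be sketched directly (SHA-1 matches the primitives mentioned in the session; the function names are illustrative).

```python
# Lamport hash chain as described above: Alice hashes a secret n times
# and gives Bob the final value (the anchor). She then releases
# pre-images in reverse order; Bob authenticates each with one hash.

import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha1(x).digest()

def make_chain(secret: bytes, n: int) -> list:
    """Return [secret, h(secret), ..., h^n(secret)]; the last is the anchor."""
    chain = [secret]
    for _ in range(n):
        chain.append(h(chain[-1]))
    return chain

def verify_release(anchor: bytes, preimage: bytes) -> bool:
    """Bob's check: one hash of the released value must equal his anchor."""
    return h(preimage) == anchor
```

After a successful check, Bob adopts the released value as his new anchor, so each routine link-state update costs one hash to verify; when the chain is depleted, a new anchored (signed) update is sent.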

Steve Kent asked about hash chains not being exhausted at the same rate. Another question had to do with changing the set of designated routers. A third asked about running multiple routing protocols.

Session 6, Security for the World Wide Web, was chaired by Win Treese.

First, Brian C. Schimpf of Gradient Technologies presented "Securing Web Access with DCE." Gradient has worked with the Open Group Research Institute to speed the development of secure C/S applications using Web technology. The idea was to use the DCE infrastructure (location independence of servers and security). DCE provides strong (Kerberos based) authentication, data integrity, confidentiality, and a flexible and convenient authorization model. SLP, the secure local proxy, runs under the browser on the desktop. It just passes normal URLs through, but looks up servers when it sees DCE names. Then it gets authorization information (a service ticket) for the user. The http request is then tunneled through authenticated RPC. The DCE-aware server unwraps this and uses the authorization information. A Web toolkit library is available to help build application servers. Identity and group based authorizations are performed at the server by an ACL manager. The ACL model was extended to support sparse ACLs. Attributes are inherited downward dynamically. The target audience for this technology is within an organization and between cooperating organizations, rather than external electronic commerce on the Web.

Steve Kent asked about load sharing: DCE does support this, although somewhat differently. A comparison with SSL 3 was also discussed. The session key granularity is different.

Session 6 continued with a panel discussion. Barbara Fox (bfox@microsoft.com) of Microsoft described Shared Key Authentication for the TLS Protocol. Some essential changes to SSL are being introduced, in particular, shared key or password based authentication with backwards compatibility. Even if weak passwords (PINs) and weak encryption (40 bits) are used, passwords can be protected better. An optional SharedKey message is appended to ClientHello. The proposal is still being modified. This is in the tls-passauth draft by Daniel Simon. SCHANNEL.DLL will be the MS product that uses this.

Fred Avolio talked about Web commerce, both from the client and server sides. Clients have to protect themselves better, e.g., "click here to self-destruct." Web servers on "sacrificial" machines are also a questionable idea.

Asked about the most important issues, Brian Schimpf said management of security and scaling; Barbara Fox listed authentication, digital signatures, and public understanding; Fred Avolio stressed integrity as well; people tend to believe what computers tell them.

What approach is recommended for addressing the mix of authentication methods? The needs for password pass-through, for authentication gateways, and for ways of handling legacy applications were cited.

Steve Kent asked about the ability to issue certificates in the name of an account number and use SSL 3.0. Barbara Fox responded that they want to encourage this. Large installed card bases are still expensive in terms of customer support. It may still cost $10 / certificate.

Dan Nessett asked what is being done about authorization and access control. Brian Schimpf answered that DCE has one model, which also supports a straight SSL interface to it. Netscape 4.0 contains a "capability space."

Except for client and server hello, why have other messages also been added? Barbara Fox said the motivation was to use the Master Shared Secret to maintain exportability. What about Kerberos support? It seems likely to be subsumed by SharedSecret authentication.

A question came up about what the user sees versus what the smart card signs. A lot of user interface work and consumer understanding has not been done yet. People do not really put much emphasis on credit card vulnerabilities, and transparency to the user will be important. User configurability will be demanded. Web purchases are a major convenience internationally. A lot of emphasis is being put on the "wallet" model. The big fear is a loss large enough to undermine confidence. The biggest fraud in credit cards is merchant fraud, and the biggest loss is people who do not pay their bills, so Web commerce amounts more to shifting the risks around.

How will all this security technology impact firewalls? Perimeter defense and end to end encryption are different problems. Firewalls will change somewhat.

Jonathan Trostle of CyberSafe chaired Session 7 on Public Key Management. The first two papers dealt with the X.500 model and enhancements to it. The third paper considered extensions to Kerberos.

Lourdes Lopez of Universidad Politecnica de Madrid presented "Hierarchical Organization of Certification Authorities for Secure Environments." The objectives are generality, openness, and ease of deployment. Version 3 of X.509 has been an important step. They have generalized the model and formally specified the policy. A tool named SecKit implements X.509: generation of keys, sending and receiving secure files (signed and encrypted), and access to the Directory. It runs on X-11 or MS Windows. SecServer generates keys, certificates, and CRLs. It implements an RA and CA. In the experiment, SecServer interfaces both with the X.500 Directory and SecKit users. In the first model, the Communal Security Rules Group implements multiple layers of CAs with different policies defined at the root. This was generalized in the second model, which introduces Group CAs and Subgroup CAs. Certification under different policies or by different CAs is possible. The third model also has policy CAs. Multiple CAs lead to certificate path validation options. Either the lowest common node or the foreign root is chosen.

Andrew Young from University of Salford, U.K., reported on "Trust Models in ICE-TEL." ICE-TEL is a Europe-wide follow-up to the PASSWORD Project. They have 17 partners from 13 countries, software from COST, GMD, ISODE, and SSE, and they want to build and pilot secure WWW, S/MIME, and X.500 applications. Use of a PKI requires trust in third parties who have attested to public keys. Guarantees and liability are important. Who asserts what about whom? What is the policy? Syntactic and semantic checks? The PGP approach is the "trusted introducer." It has low start up cost and complexity but poor scalability. The PEM trust model consists of a single hierarchy with multiple policies, in which the CAs are arranged hierarchically, the PCAs publish a policy, and the IPRA ties the PCAs together. It scales well, but getting started is hard. You cannot just install two copies and go ahead. PGP is user centric; PEM is organization centric. ICE-TEL aims to support diverse domains from individuals to large organizations and to allow for growth and flexibility. Trust between domains is by choice and need not be mutual or transitive. Each domain contains trust points that cross certify each other; trust points advertise a policy; users advertise a path to a trust point. Each user stores the public key of a trusted user and the public key plus policy of a trusted CA. The model accommodates individuals, small companies, large companies, and their interactions. The advantages are scalable deployment, flexible reorganization, explicit use of policy, and support for embedded high security domains.

Questions addressed revocation and path discovery.

John Chung-I Chuang of CMU gave the last talk titled "Distributed Authentication in Kerberos Using Public Key Cryptography." PKDA is a proposed extension to Kerberos Version 5. Public key cryptography can reduce or eliminate the sensitive data in the KDC and distribute the functionality of the ticket granting service. (Consider, for example, the scalability of an on-line banking application with millions of customers.) PKDA is an RFC 1510 extension and builds on X.509 and PKCS. SSL 3.0, pk-init from ISI, and PKDA are all described in Internet Drafts. SSL supports TCP but not UDP; clients and servers exchange certificates; both cache state information and resend this when needed; revocation is not specified. Pk-init supports TCP and UDP and has no client keys in the KDC. PKDA runs at a higher layer than SSL 3.0 and supports UDP. It has end-to-end encryption across proxies and gateways and ticket reusability, which means the client stores the session key and resends the ticket with a fresh authenticator. No three way handshake as in SSL is needed for re-establishment. Compared with pk-init, PKDA is fully distributed (no central KDC), has enhanced privacy, but requires modification of both client and server code, whereas pk-init only requires modification of client code. Clients in PKDA communicate directly with the application server to obtain a certificate. Clients make up the session keys. Delegation in PKDA is a direct extension of Kerberos. The client has to set the "proxiable" flag. If a PKDA client communicates with a server that does not understand PKDA, a local, replicated TGS can be contacted to retrieve a conventional TGT. The benefits of pk-init are then achieved without having to modify server code. They have a working implementation for CMU's NetBill (using DCE RPCs and an enhanced IDL compiler). They have verified the protocols formally. The I-D will be revised and reissued.
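The ticket-reuse idea above (cache the session key, resend the ticket with a fresh authenticator) can be illustrated with a heavily simplified sketch. Everything here is an assumption for illustration: a real Kerberos authenticator is an ASN.1 structure encrypted under the session key, not a MACed timestamp, and the field names are invented.

```python
import hashlib
import hmac
import time

def fresh_authenticator(session_key, client_name):
    """Build a fresh authenticator from the cached session key:
    a current timestamp bound to the client name with a keyed hash.
    (Sketch only; real Kerberos encrypts an ASN.1 authenticator.)"""
    ts = repr(time.time()).encode()
    mac = hmac.new(session_key, client_name + b"|" + ts, hashlib.sha256).digest()
    return (client_name, ts, mac)

def check_authenticator(session_key, auth, max_skew=300.0):
    """Server side: verify the keyed hash and require a recent timestamp,
    so a captured authenticator cannot be replayed later."""
    client_name, ts, mac = auth
    expected = hmac.new(session_key, client_name + b"|" + ts, hashlib.sha256).digest()
    fresh = abs(time.time() - float(ts)) < max_skew
    return hmac.compare_digest(mac, expected) and fresh
```

Because only the timestamp changes between uses, the client can re-establish a secure association with one message instead of a handshake, which is the efficiency argument made for PKDA over SSL-style re-establishment.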

Session 8 was a panel on Web Privacy and Anonymity chaired by Cliff Neuman. The broad issues are that Web browsers, servers, and proxies can learn a lot about users' interests. This information can be hidden by technical means, but servers may also need this information. Social and legal pressure may prevent inappropriate use; acceptable use may be negotiated; and auditing and endorsement may provide assurance.

Jean Camp of Sandia said that privacy means the subject controls information rather than the owner (security). What are you doing on the Web? Communicating, purchasing, retrieving information. Free information can be read anonymously. Lamont vs. Postmaster General was about having to register to get information about communism. The ECPA protects us from observers but not from other participants. In Olmstead, the Court said there was no search: the wires are outside the home. Katz reversed this: there is a reasonable expectation of privacy. In NAACP vs. Alabama, the state wanted the membership list. The Web is about associating and assembling. The Right to Financial Privacy Act limits government. In U.S. vs. Miller, 1976, the Court ruled that a financial transaction is inherently public. Aggregate information poses new problems: cookies, anonymous proxies, and pseudonyms can be used. Anonymous transactions can still be billed effectively. Privacy depends on service provider policy, system configuration, and services. Browsers give away machine and OS information, as well as previous pages, helpers, and our e-mail address. Current debate centers around HR 98, the Consumer Internet Privacy Act; more information about it can be obtained from EPIC (www.epic.org).

Peter Neumann of SRI mentioned the many stories of privacy violations in the Risks Archives, including government and medical records. The Web makes aggregation and inference much easier. Anonymity is important: whistle blowers, violence and hate crime victims. For payments, anonymity is double edged. The risks are also technical. Billing messages can contain covert information. Voting has many interesting aspects. Anonymous systems without accountability will not work. Even with accountability, the infrastructure must be secured.

Gene Tsudik from ISI asked whether anonymity is a blessing or a curse. It is good for commerce, counseling, whistle blowing, voting, free speech, polling, and surveying. What is there to hide? Names, identifiers, account numbers, location, linkability, timing (when do you read e-mail?), and volume. From whom is there to hide? Intended recipients, casual eavesdroppers, professional eavesdroppers, global observers, or an impostor of oneself. Anonymity can apply to the transport (remailers, www.anonymizer, MIXs) or mechanism (e-money). MIXs for synchronous communications are needed. See also the MobileIP work being done at Aachen.

The panel posed several questions:
Relationship of speech to privacy?
What is it possible to achieve simultaneously?
Purloined identities?
The Joe Klein case: identification through writing style?
Next logical step?

Dan Nessett asked for better problem definition. Personal interactions involve an implicit contract. This contract may be misunderstood or explicitly violated. But part of the problem is unequal bargaining power. Banking or getting a driver's license demands your social security number. People may have the desire to browse but be left alone. Unsolicited e-mail and accidental name matches occur. Do we have a model of the problem? Privacy is deep and multi-faceted. Expectations vary from setting to setting. Today, the Internet is a "country you enter voluntarily."

Hilarie Orman distinguished one-time anonymity from repeated identity anonymity (the Unabomber or Deep Throat). How much of this is new and how much is just a new setting for what existed before? People are made responsible for errors others make about them. The electronic world makes collection easier and lower cost. "Deleted" data is not deleted but just marked "deleted." Do we have the implicit right to report on our dealings with other people? For instance, people who do not pay their bills?

Tax avoidance is an issue with financial transactions. Keyword search is easy with AltaVista, and allows, for example, job applicant screening. There are legitimate needs to have a private conversation; also there is a conflict with government interests. Jeff Schiller compared this with the three laws of thermodynamics: You can't win; you can't break even; you can't get out of the game.

I would like again to thank Dave Balenson for the opportunity to provide these notes, and I sincerely hope to see you next February in San Diego.