The 19th IEEE Symposium on Security and Privacy was held in Oakland, CA, May 4 - 6, 1998. Mike Reiter was general chair. There were 116 papers submitted, with 20 accepted. Based on mailing address, papers were submitted from 18 countries, and accepted from 5. The program committee covered 4 countries. My apologies in advance for not knowing the names of some attendees whom I quote below.
The first session was Access Control, chaired by Cynthia Irvine.
The first paper was "Access Control in an Open, Distributed Environment" by Jean Bacon (Cambridge University), Richard Hayton (APM Ltd.), and Ken Moody (Cambridge University). Richard presented. Their work emphasized transfer of privilege with formal rules of delegation and revocation, and an architecture to support them. Their Role Definition Language describes preconditions of role entry. Revocation may be predicated on a number of conditions (delegator wishes, delegator is revoked, preconditions fail, or side conditions such as group membership change). Their architecture equates certificates with role entries. Services manage policy related to service objects and roles, and issue certificates to clients. Certificates are specific to a client. Services build their proof of role entry when they issue certificates, based on other certificates, preconditions, and so on. The proof tree is collapsed on revocation. This architecture makes revocation fast. Graphs can span services. When a communication failure occurs, the records are marked unknown and the state is propagated to the children in the graph. Maintaining the state allows for fast recovery when communications are restored. Communication is checked by a heartbeat that pushes updates down links from one service to another. Stuart Stubblebine suggested a separate freshness policy in addition to the heartbeat. Another questioner was concerned about the scalability of this architecture, particularly in settings such as academia which have predictable peak demand times (beginning of term).
The second paper was "Ensuring Continuity During Dynamic Security Policy Reconfiguration in DTE" by Timothy Fraser and Lee Badger (TIS Labs at NAI). Timothy presented. Their work considered how to safely reconfigure policy. Their problem was that when modules with new policy information were added to their system, they were composing policies, and were not assured of retaining some essential aspects of the original policy. They called that "policy conflict", and decided to address the problem where a new policy causes information to flow in the "wrong" direction. Their policies are broken down into a set of ORed rules. They used their Domain and Type Enforcement work as a basis. For each policy module they generated two sets of graphs; one for information flows, and one for controls. These graphs were logically ORed together, then the transitive closure was taken. Any new flows signaled a conflict. There are restrictions to this approach. It can't model non-tranquil policies such as the Chinese Wall policy, and it can't represent policies based on noninterference. It's also non-commutative; it matters who gets there first.
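The conflict check lends itself to a short sketch. This is a toy illustration (with invented domain names, not the authors' DTE modules) of the approach described above: OR together the flow edges of the base policy and a new module, take the transitive closure, and report any transitive flow the base policy alone never allowed.

```python
# Hypothetical sketch of the policy-conflict check: names are invented.

def closure(edges, nodes):
    """Reachability via Warshall's algorithm over a set of (src, dst) edges."""
    reach = set(edges)
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if (i, k) in reach and (k, j) in reach:
                    reach.add((i, j))
    return reach

def conflicts(base_flows, module_flows, nodes):
    """Transitive flows enabled by composition that the base policy never
    allowed. Flows the module itself directly specifies are not conflicts."""
    before = closure(base_flows, nodes)
    after = closure(base_flows | module_flows, nodes)
    return after - before - module_flows

nodes = {"engineering", "gateway", "internet"}
base = {("engineering", "gateway")}
module = {("gateway", "internet")}
# Composition transitively enables engineering -> internet, a flow in a
# possibly "wrong" direction that the base policy never allowed, so it
# is reported as a conflict.
```

Note the non-commutativity the talk mentioned: which policy counts as "base" determines which flows look new.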
The third paper was "Composing Partially-Specified Systems" by Heather M. Hinton (Ryerson Polytechnic University). At some level, composition requires a complete representation of the system, including all environmental possibilities. This level of completeness is impossible. Thus, implicit assumptions always arise. Her research aims to make implicit assumptions explicit. It involves identifying assumptions (vulnerabilities) and using safe constraints to explicitly represent the good behavior. She builds on the Abadi-Lamport composition principle from '90. When composing, you start with a "total environment" where anything is possible. Then, you disallow certain undesirable environment behavior to get a "partial environment". You then continually refine the environment criteria for the partial environment. Ideally, constraints are physically enforced (such as network topology). If not, you must evaluate the reasonableness of the desired behavior. It was clear that this activity requires some thoughtful consideration. Heather's advice was "You should be using intelligent people and you should be taking advantage of them."
The second session was Java Security, chaired by Martin Abadi.
The first paper of that session was "Secure Execution of Java Applets using a Remote Playground" by Dahlia Malkhi, Michael Reiter, and Aviel Rubin (AT&T Labs - Research). Michael presented. Their approach attempts to retain the benefits of mobile code while minimizing the threat of mobile code's doing damage while accessing local resources. They have implemented a sacrificial machine, called the playground, to help protect organizations from misbehaving Java 1.1 applets. Mobile code is rerouted to the playground. Only I/O is ported to the user's browser. The playground is a second level of defense beyond existing measures (sandbox, server, author, attributes of actual code). A proxy parses returned pages and identifies references to mobile code via applet tags. It changes those applet tags to point to a trusted applet on the user's machine (a graphics server applet). In addition, the proxy modifies all I/O code of the original applet, and ships it to the playground. The playground runs it; I/O is sent to the browser. The playground runs each page's applets in a separate JVM, running under an account with tight restrictions to leverage OS protections. Unfortunately, changing tags is not enough. JavaScript can cause classes to be requested and can dynamically emit such tags in response to user events. You can either rely on the proxy to keep these from being loaded (which requires a pretty sophisticated proxy) or directly disable network class loading in the browser. There may be performance issues for some applets, since they are not running locally. Scalability is limited; replication and load balancing may help. It is also less effective for hostile window opening applets. Digitivity has independently marketed a very similar system. One questioner pointed out that users are not notified when a bomb gets sent to the playground.
The next paper was "Understanding Java Stack Inspection" by Dan S. Wallach and Edward W. Felten (Princeton University). Dan presented. Java stack inspection is concerned with attempting to draw a box around the unrestricted features allowed to untrusted applets, thereby defining "safe" violations of the general policy. Pointers are added to the stack frames to enable privileges. When a privilege is required, you search down the stack frames looking for it. Their work formalized this somewhat ad hoc technique, with an eye to fast implementation. They use BAN logic. The decision procedure proves that the operation is OK in an automated fashion. It has not been proved complete: for some cases, the proof would say X is denied when it's allowed. It is safe and sound, however. They have a finite state system that can examine the current state instead of stack walking and can pass these states as arguments to the functions. This roughly doubles the overhead of a null method call. However, a very smart compiler might specialize the code and get rid of arguments when they're not used. They extend their work to RPC. Authenticated pipes don't work well for mobile code, since you want to use the applet identity, not the host identity. Their work models the channel as "channel says" from the host, communicating the applet's identity. The security-passing style works with RPCs by taking the intersection of privileges of the host and applet.
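The stack walk being formalized can be sketched in a few lines. This is a simplified illustration with invented frame, principal, and policy names, not the authors' formalism; real implementations also differ on what happens when the walk reaches the bottom of the stack.

```python
# Toy model of Java stack inspection; all names are invented.

class Frame:
    def __init__(self, principal, enabled=None):
        self.principal = principal          # code source of this frame
        self.enabled = enabled or set()     # privileges this frame enabled

def check_privilege(stack, target, policy):
    """Walk from the newest frame down. Deny if an unauthorized principal
    is on the stack; allow early at a frame that explicitly enabled the
    privilege (the enablePrivilege pointer mentioned above)."""
    for frame in reversed(stack):           # newest frame is last in the list
        if target not in policy.get(frame.principal, set()):
            return False                    # untrusted code is on the stack
        if target in frame.enabled:
            return True                     # privilege explicitly enabled
    return True                             # everyone on the stack is trusted

policy = {"system": {"read_file", "net"}, "applet": {"net"}}
stack = [Frame("system", enabled={"read_file"}), Frame("applet")]
# The applet's code may use the network but not read files, even though
# the system code beneath it enabled the file privilege.
```

The finite-state version the authors describe effectively precomputes what this walk would find and passes it along as an extra argument, which is where the null-method-call overhead comes from.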
The Cryptography I Session was chaired by Mike Reiter.
The first paper in the session was "Efficient Key Distribution for Slow Computing Devices: Achieving Fast Over the Air Activation for Wireless Systems" by Yair Frankel (CertCo), Chris Carroll and Yiannis Tsiounis (GTE Laboratories). Chris presented. This paper discusses the concerns involved in achieving performance and security in cellular phones. There is a recognition that the process was closed in the past, and that it wasn't using the available scientific research. This paper is part of the process opening up. In the past, keys had to be manually entered into secure mobile phones (all 40 million). A third party potentially knows these secret keys. They want to transition to Over The Air Service Provisioning - the cryptography is there when you walk out of Kmart. In addition, Diffie Hellman (DH) is slow on these 8 bit microprocessors (6 minutes per key exchange, with no authentication). They can get it down to 4 minutes on a 16 bit microprocessor. They want to use certification of service providers (SPs). A CA's public key is given to mobile units at manufacturing time. Then, the SP sends a certificate. The Mobile Unit verifies a Rabin signature on the certificate and sends a Rabin encrypted session key. The SP decrypts the session key and sends an authentication key encrypted with the session key. This protocol is 160 times faster than DH, and is authenticated. Each manufacturer will implement a proprietary random number generator, though they suggest using some pre-loaded randomness and some digits from the user. The major manufacturers are aggressively upgrading to this new protocol but the installed base is still at 75% of the old model. They're not sure how long this scheme will last, but they have plans to evolve.
The next paper was "Efficient and Practical Fair Exchange Protocols with Off-Line TTP" by Feng Bao, Robert Deng (National University of Singapore), and Wenbo Mao (HP Laboratories, Bristol). Wenbo presented. Bao should have presented, but he doesn't have a visa to the USA. Two remote, mutually distrustful opponents are engaging in a "practically fair" protocol if neither can have a clear advantage over the other. Wenbo reviewed some previous work on other types of fair exchange protocols. Bit by bit exchange of identical length messages assumes the same computing capacity for both parties and has a high communication complexity. There are probabilistic and online Trusted Third Party (TTP) models. Their proposed model is a verifiable escrow signature with an off-line TTP. It uses a Certificate of Encrypted Message Being a Signature (CEMBS). It is made universally verifiable with the TTP's public key, Alice's public key and some hashed values. In the protocol, if Bob receives a valid CEMBS, he releases his own signature to Alice, then vice versa. If Bob stops, he has no useful information. If Alice stops, Bob can go to the TTP. This is an improvement in communications efficiency compared to the previous computational and probabilistic models. Use of an off-line TTP lowers the TTP workload to resolving disputes. One party may have to do slightly more by contacting the TTP if abnormal termination occurs.
The next paper was "Asynchronous Protocols for Optimistic Fair Exchange" by N. Asokan, V. Shoup, and M. Waidner (IBM Research, Zurich). N. Asokan presented. This work approaches fairness as a security service that guarantees that participants in electronic commerce are not at a disadvantage if they follow the rules. They are also worried that TTPs in the middle of protocols can become a bottleneck. They assume that the communications channels are not reliable, but that the channel to the TTP is resilient (messages arrive in some finite time). Their requirements are effectiveness (things work if everyone plays), fairness, timely completion (either player can complete the protocol fairly without cooperation) and verifiability of TTP. The optimistic approach optimizes for players being honest, as the most common case. It meets the requirements, though timeliness is not guaranteed. The protocol determines the format of the contract, which is considered invasive. And the TTP is not invisible. In the general case, only weak fairness is achieved, which is useful only if a subsequent all-party dispute is possible. For generatable or revocable items, strong fairness is possible. Techniques to add generatability to a large class of items exist. In this case, they use verifiable encryption (while only one party can open the box, anyone can see the contents). In their work, they can use any kind of encryption. A questioner was concerned about the tradeoffs between privacy and verifiability. What's in the box matches some public description. Another asked about state maintenance for the TTP, which must be maintained as long as you want to use it (forever, or define a timeout).
A panel, "Trust Considerations in PKI Systems", closed the first day. It was moderated by Dale Johnson (MITRE). Panelists were Santosh Chokhani (CygnaCom Solutions), Bob Blakley (IBM), Mack Hicks (Bank of America), and Warwick Ford (VeriSign).
Santosh discussed the parties involved in e-commerce and the scope of policies needed. The certificate policy needs to cover the signing CA, the authenticating RA, the subscriber and the relying party, assurance that anyone exposed to the private key is protecting it and that their systems are capable of that protection. He recommended PKIX as covering these issues. He recommended tying namespaces to CAs to contain a potential rogue CA. He saw the next big area of interest as how to map policies between CAs; what does 90% the same mean?
Mack gave the banking perspective (We're a big bank and we're going to be a bigger bank through mergers). He called SET an over-engineered solution for taking care of not that much risk. Lotus Notes has the largest installation in most banking infrastructures. The trust structure is not well understood by users; they have to carry around a diskette for some reason. SSL with 40 bits is a good start. They don't care for impenetrable, just enormously annoying. However, SSL is still not understood by a wide variety of customers. They have to put up their own icon that says it's secure, and customers still don't understand what it means. They believe the X.500 promise, but they're still waiting. They are only interested in authenticated requests (user id and password or signed request). They are used to referring to their customers by account number only; that to them is "warm and comfy". They want to try out buying experiences with no value. This keeps them from getting lost in liability and risk. "You'd be surprised how long it takes to do something worthless." You don't want to have to ask someone else's bank for information; they won't tell. Banks have been agents of trust since the middle ages and before. CRLs are of limited use at the root level; look what happened to the book for looking up credit cards. The conclusion is that conclusions are yet to be reached. SET is a beautiful design but impossible to explain to anybody, let alone implement. Experimentation is necessary now.
Bob tackled the issue of names and identity. Certificates bind an identifier to a public key. The hard problems arise from a fundamental error: trying to shift the responsibility for identity management from people to the technical infrastructure. Names are not identities, they're identifiers. While Jekyll and Hyde were the same identity, your experience depended on where you sampled. Names can change, or even be used in parallel. Two names can be found to lead to the same person. We can throw the name away and start from scratch, but they don't do that in the real world. In the real world, we annotate (Jill Jones née Bailey). He suggested we use annotation instead of revocation. Identity is something that is managed by each of the users that comes in contact with one of these things, based on their own interactions. People don't want to keep track of all that without assistance. Keys tie sequences of actions to a name. Issues include identity theft and playing identity games. We know how to analyze complicated games from Axelrod's "Evolution of Cooperation".
Warwick addressed a number of technical issues. His number one issue is scalability. There is a need for community standards, along with commitment and enforceability. He does not believe incremental community building (the bottom-up approach) will scale. He is not sure if cross certification between organizations can scale. He believes we need to build a large community first, with a set of rules or standards and ensure that every participant is signed up to those rules, with safeguards to ensure that everyone follows those rules. Hierarchies are the simplest to implement. We need to break away from the notion that it's a power hierarchy. There are problems with CRLs such as timeliness and size. He recommends CRL distribution points where each CRL can cover a subset of the population. The certificate points to which CRL applies. This pointer can be changed in different environments, which is a feature.
The question and answer session touched on e-commerce as a driver for large scale PKIs, the difficulty of training users about security in general and certificates in particular, and how humans cope with name collision problems. Mack pointed out that consumers make very bad security decisions in general. Bob suggested that their decisions are not very bad, since most are still alive and spending money.
Mike Reiter asked about liability. The certificate policy covers some. The CA has to take some responsibility for its actions. It's such a new business, everyone is cautious. There is no history of case law. Mack pointed out that letters of credit flow around the world now. If someone signs one they shouldn't, a bank will be taking a hit. In a PKI, no liability means no value.
Someone asked what you do when someone dies while holding a valid certificate. Bob suggested you annotate it with the fact of death. It may mean no more signatures should be forthcoming (except for elections in Chicago :-).
The first session on Tuesday was on Architectures and chaired by Stuart Stubblebine. Stuart suggested it should be called the architecture vulnerabilities session.
The first paper was "An Automated Approach for Identifying Potential Vulnerabilities in Software" by Anup K. Ghosh, Tom O'Connor, and Gary McGraw (Reliable Software Technologies). Anup presented. They instrument programs with a fault perturbation engine and security policy assertions to discover potential vulnerabilities. This requires some knowledge of the types of vulnerabilities in the past. They try it all over the code, not just in obviously security critical code. This enables dynamic analysis and exercises the code. They corrupt program states to simulate flaws and introduce anomalous behavior. They work with both simple perturbations (strings) and complex ones (buffer overflow, composable classes). Fault injection can be used for identifying potential security critical coding errors and developing coding techniques to minimize them. It only detects the potential for security critical errors, not security holes. They tested their technique with applications such as Web servers. One questioner asked how the security point of view changes this method from the safety-critical arena. Anup stated that they find new classes of faults.
The next paper was "Detecting Disruptive Routers: A Distributed Network Monitoring Approach" by Kirk A. Bradley, Steven Cheung, Biswanath Mukherjee, Ronald A. Olsson, Nick Puketza (University of California at Davis). Nick presented. In this work, every router watches over its neighbors, trying to detect routers that drop or misroute packets. They need to know the router graph to find misbehaving routers. Generally, they count data bytes incoming and outgoing. The WATCHERS protocol exchanges data with all the other routers and they then analyze the data. Routers ask themselves, for each incoming packet, does it make sense that the neighbor sent it to me? If not, they increment the misroute counter for that router. Each router periodically sends all of its counter values to every other router. There is a potential for network congestion; a flooding protocol is used for other router maintenance tasks. Good routers virtually disconnect bad routers. The requirements for this protocol to work are that each router must have one good neighbor, each pair of routers must be connected by one good path, and good routers must be in the majority. Routers may legitimately drop packets, so they allow small differences in flow. There is a potential for false positives or detection failures. Much of the computation could be moved off line to enhance performance. One questioner asked about impersonation. There has to be some authentication. Another noted that an arbitrary attacker can force a link to be dropped by forcing it to expire the packet with a short time to live.
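As a rough illustration of the conservation-of-flow idea behind the byte counting (router names and the tolerance threshold here are invented; the real protocol exchanges and cross-checks per-link counters among all routers):

```python
# Toy conservation-of-flow check: a router whose transit outflow falls
# short of its inflow by more than a small tolerance is flagged. The
# tolerance models legitimate drops, as the paper allows for.

def flag_bad_routers(reports, tolerance=0.05):
    """reports maps router -> (bytes_in, bytes_out) for transit traffic."""
    bad = set()
    for router, (bytes_in, bytes_out) in reports.items():
        if bytes_in and (bytes_in - bytes_out) / bytes_in > tolerance:
            bad.add(router)
    return bad

reports = {"r1": (10_000, 9_900),    # small, legitimate drop rate
           "r2": (10_000, 4_000)}    # dropping most transit traffic
```

In the protocol itself these counters are compared against the neighbors' counts rather than trusted from the router's own report, which is why a majority of good routers is required.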
The next paper was "Timing Attacks Against Trusted Path" by Jonathan T. Trostle (Cisco Systems). He used two timing attacks. His first used timing signatures to determine the length of a password and which characters are upper or lower case. His second attack used information on the key map to get the relationship between characters. He posits that you need at least two out of three of the following countermeasures for these attacks: a good password policy, a modified trusted path implementation, and protocols resistant to off line attacks. He looked at trojan horse attacks that can leak bits from a user password, enabling a brute force attack. User keystrokes have timing signatures. The length of the password is immediately apparent. Upper case characters have a distinct interrupt pattern, so you can get one bit per character. This attack reduced a 10 character password to about 55 bits and an 8 character password to about 44 bits. The second attack looked at how timings were affected by whether the key map is in the memory cache or not (using an X11 server). He typed "rpx" and "rpo" 60 times to unlock his terminal. He found it took 572 microseconds for the x and an average of 565 microseconds for the o. His hypothesis is that o is timing equivalent to r. He figures you can get 19 - 39 bits for an 8 character password. He didn't test on WNT as it doesn't have an accessible microsecond granularity clock. A questioner asked if fuzzy time would help. Jonathan said probably. Jay Lepreau pointed out that on WNT, any program can access the hardware cycle counter, which has 100's of megahertz granularity. Another questioner asked if one time passwords would help. Jonathan thought so.
The next session was Database Security and Biometrics chaired by Cathy Meadows.
The first paper in this session was "Practical Security Policies to Support Timeliness in Secure Real-Time Databases" by Sang Son, Craig Chaney, and Norris Thomlinson (University of Virginia). Sang presented. Timeliness and predictability are most important in real time systems. Security and timeliness can conflict, forcing priority inversion (low priority task keeping a high priority task from running) or a security violation. The canonical example is lock contention between a high priority, high security task and a low priority, low security task. If the lock is given to the high process, there is a potential covert channel. If the lock is given to the low process, that is a priority inversion. They explored different levels of security policy, each of which guarantees certain security aspects. Their goal was to satisfy minimum requirements of both real time and security. 99% is not necessarily better than 85%; it depends on what information needs most to be protected. Their rules covered static and dynamic cases. For example, some deadlines can be missed one or two times, but not the third. They are working towards a tool that can analyze policies and tell the designer the implications of their tradeoffs. They ran simulations to come up with some partially ordered set of MLS-based security policies. They found a significant improvement in performance as more potential covert channels are allowed. Their work is on a distributed database on Legion. Marv Schaefer asked why not just replicate the data? Sang said that there are issues with managing distributed replicated data.
The next paper was "On Enabling Secure Applications Through Off-Line Biometric Identification" by George I. Davida (Univ. of Wisconsin, Milwaukee), Yair Frankel (CertCo), and Brian J. Matt (Sandia National Laboratories). Brian presented. They considered a variety of architectures for where biometric information is stored, and where it is checked. Checking biometrics at the biometric station is simpler and requires fewer secured channels than checking at an authorization server and sending the results to the station. The station compares against the biometric results that are on a card, encrypted and signed. They are concerned about the privacy of the biometric data, which lets them assume it's a strong key (that it's not publicly available). For both the source and verification data, the biometrics are sampled, a majority decoder votes on each bit, then the two are compared with a bounded distance decoder. A signature over the user's name, public attributes, biometric and check digits is on the card. The biometric information can be combined with a PIN to preserve the privacy of the keys that are stored on the card. One questioner who works at a bank said that they had found that people have a real aversion to sticking their face in a machine for an iris scan. Another questioned whether the privacy of biometrics really needed to be protected. Brian said there are issues of user acceptance, health, and genetic heritage concerns.
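A minimal sketch of the decoding steps just described, assuming bitwise majority voting over repeated readings followed by a bounded-distance comparison; the bit lengths and distance bound are invented for illustration, and the real scheme uses proper error-correcting codes.

```python
# Toy majority decoder and bounded-distance check for noisy biometric bits.

def majority_decode(samples):
    """Vote bitwise across repeated noisy readings of the same biometric."""
    n = len(samples)
    return [1 if sum(bits) * 2 > n else 0 for bits in zip(*samples)]

def matches(template, reading, max_distance=1):
    """Bounded-distance comparison between enrolled and presented data."""
    distance = sum(a != b for a, b in zip(template, reading))
    return distance <= max_distance

template = [1, 0, 1, 1]              # enrolled (signed, on the card)
samples = [[1, 0, 1, 0],             # three noisy readings at the station
           [1, 0, 1, 1],
           [1, 1, 1, 1]]
decoded = majority_decode(samples)
```

The point of the construction is that the station never needs the raw readings to leave it; only the decoded value is compared against the signed data on the card.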
The 5-Minute Talks session was chaired by Yair Frankel.
"Experiments with Software Wrappers" by Lee Badger (TIS Labs at Network Associates) touched on imposing interesting policies with wrappers such as intrusion detection, sequencing, time. They think of them as micro firewalls.
"StackGuard 1.1: Stack Smashing Protection for Shared Libraries" by Crispin Cowan, Tim Chen, Calton Pu, Perry Wagle (Oregon Graduate Institute of Science and Technology) and Heather Hinton (Ryerson Polytechnic University) was an update of a Usenix security paper.
"Security Requirements of the Emerging Information Infrastructure: Relationship to a National Study Committee" by Henry M. Gladney (IBM Almaden Research Center) recommended visiting www4.nas.edu/cp.nsf/.
"Security Agility for Dynamic Execution Environments" by Karen A. Oostendrop and Lee Badger (TIS Labs at Network Associates, Inc.) presented a subsystem bolted on to components that maintains security information.
"How to Protect Against Subscription Sharing and Server Profiling" by Stuart Stubblebine (AT&T Labs - Research) (with Paul Syverson, NRL and David Goldschlag, DIVX) dealt with the problems of proving group membership without revealing identity or linking appearances from one transaction to the next (no profiling), and enabling a vendor to deter low volume sharing of credentials and detect and stop high volume sharing and fraud.
"Nested Java Processes: OS Structure for Mobile Code" by Patrick Tullmann and Jay Lepreau (University of Utah) brings their multi-user OS research to active networks.
"Methods for Securely Sharing Kerberos Credentials Among Cooperating Distributed Web-Based Client Components" by W. David Shambroom (GTE Laboratories Incorporated) presented an architecture for a large client population that uses SSL for sending the Kerberos username and password.
"Code Verification as a Tool for Formal Systems Development" by Mark E. Woodcock (NSA) presented an idealized LCF prover that shows the synergy between a faster prover and better language coverage.
"Advanced Security Proxies" by E. John Sebes (TIS Labs at Network Associates) takes the functionality of current proxy based firewalls and puts it on network components for very high speed networks.
The final session on Tuesday was Formal Methods I chaired by John McLean.
The first paper was "Strand Spaces: Why is a Security Protocol Correct?" by F. Javier Thayer Fabrega, Jonathon C. Herzog, and Joshua D. Guttman (MITRE). Joshua presented. This is a new method for proving the correctness of cryptographic protocols that focuses on the combinatorial issues introduced by the protocol itself. It clarifies the protocol goals and allows the application of uniform, straightforward proof methods (a bag of tricks). A strand is one principal's experience of one run of the protocol. A strand space is a collection of strands, which might be very large. Penetrator activities are also strands (typically they are short). There are two kinds of causal relations: sends and receives, and precedence on a strand. Penetrators can concatenate, decompose, repeat, or discard messages. This method deals with what a principal can know directly, what it can reasonably assume (such as freshness assumptions), and what it can infer. Inferences include: the real world contains events which could causally explain what the principal saw (someone must have sent this message I received). A bundle is a causally well-grounded collection of interactions. Some limitations of this approach are that it doesn't have a way of representing type flaw attacks (two nonces misinterpreted as a session key), there is no good way to reason about cryptographic system protocol interactions, and it doesn't reason about cryptographic weaknesses or implementation weaknesses.
The next paper was "On the Formal Definition of Separation-of-Duty Policies and their Composition" by Virgil D. Gligor (University of Maryland), Serban I. Gavrila (VDG, Inc.), and David Ferraiolo (NIST). Virgil presented. He stated that there are significant differences between policy properties and policies themselves. Separation of Duty (SoD) policies define an integrity policy that partitions applications into operations and objects, and assigns users to those partitions. These application-oriented policies are limited in scope and require separate administration. The current solution is to make SoD a special feature of a more global policy, such as Role Based Access Control (RBAC). He discussed attribute properties such as a lattice of secrecy and integrity levels and inheritance of memberships of hierarchy of roles, access management properties such as distribution, revocation, creation and destruction, and access authorization properties such as those found in Bell and LaPadula, Unix and RBAC. You have to have all three types of properties in conjunction. SoD is mostly access management and access authorization properties. Mandatory policies are known to break applications. There are also properties of policies that cannot be enforced in reference monitors (such as availability). Compatibility and administratability, which are desired, are also beyond the scope of a reference monitor. His work clarifies and formalizes previous work by Simon and Zurko that provides a foundational intuition on what separation of duty policies are about. His work shows the relationships between SoD properties and addresses composition issues.
At the Technical Committee Meeting, Charles Pfleeger discussed the options for where future symposiums would be held (after next year's, which is at the Claremont again). He is soliciting electronic suggestions at pfleeger@arca.com and berson@anagram.com.
The first session on Wednesday was Formal Methods II chaired by Heather Hinton.
The first paper was "Complete, Safe Information Flow with Decentralized Labels" by Andrew Myers and Barbara Liskov (MIT). Andrew presented. They have a technique for statically analyzing information flow in programs. It has good performance since it relies on compile time checking. It is an implemented Java extension called JFlow, based on a decentralized label model that expresses the privacy concerns of multiple principals (users, groups of users, and roles). They use a notion of acts-for between two principals that is similar to speaks-for. A label is a set of information flow policies. There is an owner of each policy (considered the source of information) and a set of readers to which information can flow. Every policy has to be obeyed. Assignment (x = y) relabels a value such that for every policy in y, there is a policy in x that is at least as restrictive. Combining values uses the least upper bound of labels. Label checking is an alternate form of type checking. They can handle implicit flows, exceptions, objects, dynamic type tests, label polymorphism and label inference. They consider these safe incremental relabelings: removing a reader, adding a policy, replacing an owner by a superior, and adding a superior reader (a superior of a principal A is one that can act for A, up the principal hierarchy). They also defined formal semantics for their system. The interpretation depends on the principal hierarchy, which can change at runtime. They statically check with partial knowledge and allow relabelings that are safe under all possible dynamic hierarchies. They have proven their relabeling sound and complete. This paper caused the most commotion in the question period, into the next break, and beyond. One questioner asked for a comparison to taint in Perl. Andrew responded that taint is dynamic. One questioner was concerned that this paper did not reference existing work from the security community such as information flow tools (GIPC).
There was some debate over just what the differences were. There was also a comment about the lack of any mention of MLS in the paper. Discussion during the break included the concern that there was no notion of confinement.
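The label model described above lends itself to a short sketch. The class and function names below, the example principals, and the flat acts-for table are my own illustration under my reading of the talk, not the actual JFlow API or the paper's exact rules:

```python
from dataclasses import dataclass

# Toy sketch of a decentralized label model. A label is a set of policies;
# every policy must be obeyed, so the least upper bound is set union.

@dataclass(frozen=True)
class Policy:
    owner: str              # the principal considered the source of the data
    readers: frozenset      # principals to which the information may flow

# Hypothetical acts-for pairs (p, q): principal p may act for principal q.
HIERARCHY = {("manager", "alice")}

def acts_for(p, q):
    """Reflexive acts-for check (transitive closure omitted for brevity)."""
    return p == q or (p, q) in HIERARCHY

def at_least_as_restrictive(k, j):
    """Policy k restricts at least as much as policy j: k's owner can act
    for j's owner, and each of k's readers is (or acts for) one of j's."""
    return acts_for(k.owner, j.owner) and all(
        any(acts_for(r2, r1) for r1 in j.readers) for r2 in k.readers)

def can_relabel(src, dst):
    """Assignment x = y is safe if every policy on y (src) is matched by
    an at-least-as-restrictive policy on x (dst)."""
    return all(any(at_least_as_restrictive(k, j) for k in dst) for j in src)

def join(l1, l2):
    """Least upper bound of two labels: all policies from both apply."""
    return l1 | l2

p = Policy("alice", frozenset({"bob", "carol"}))
stricter = Policy("manager", frozenset({"bob"}))  # superior owner, fewer readers
assert can_relabel({p}, {stricter})      # safe: strictly more restrictive
assert not can_relabel({stricter}, {p})  # unsafe: would add reader carol
```

The two assertions mirror the incremental relabelings from the talk: removing a reader and replacing an owner by a superior are safe, while the reverse direction is not.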
The next paper was "Stack and Queue Integrity on Hostile Platforms" by Prem Devanbu (Univ. of Calif at Davis) and Stuart Stubblebine (AT&T Labs - Research). Stuart presented. This work addressed the integrity of large data structures stored on a hostile device. A memory- and bandwidth-constrained smart card is responsible for checking each value when it comes back. The adversary can perform operations on the data structures; the trusted processor analyzes and certifies the values. For LIFO stacks, the card maintains a signature over the current top of stack, so it can check whether what is returned on a pop is what it expects. In addition, each item consists of the value plus a signature on the next item on the stack. For FIFO queues, the card maintains a signature over everything ever removed from the queue, as well as a signature on the rear of the queue, so that it can check whether the queue is really empty when that is the claim. On a dequeue operation, it validates the next value with a signature over the value and the previous signature. The signatures must be keyed.
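The stack case can be sketched concretely. Below, HMAC-SHA256 stands in for the keyed "signature," and the class and variable names are my own illustration of the chaining idea described above, not the paper's exact construction:

```python
import hmac, hashlib

KEY = b"smart-card-secret"  # the keyed-signature key, known only to the card

def tag(data: bytes) -> bytes:
    """Keyed signature over data (HMAC-SHA256 as a stand-in)."""
    return hmac.new(KEY, data, hashlib.sha256).digest()

class UntrustedStack:
    """Storage held by the hostile host: (value, tag-of-rest-of-stack) pairs."""
    def __init__(self):
        self.items = []
    def push(self, value, t):
        self.items.append((value, t))
    def pop(self):
        return self.items.pop()

class CardStackChecker:
    """Trusted, memory-constrained side: stores only one tag for the top."""
    EMPTY = b"empty"

    def __init__(self, store):
        self.store = store
        self.top_tag = tag(self.EMPTY)

    def push(self, value: bytes):
        # Each stored item carries a tag binding it to the item beneath it.
        self.store.push(value, self.top_tag)
        self.top_tag = tag(value + self.top_tag)

    def pop(self) -> bytes:
        value, next_tag = self.store.pop()
        # Check that the host returned exactly what the card expects on top.
        if not hmac.compare_digest(self.top_tag, tag(value + next_tag)):
            raise ValueError("stack tampered with")
        self.top_tag = next_tag
        return value

store = UntrustedStack()
card = CardStackChecker(store)
card.push(b"a"); card.push(b"b")
assert card.pop() == b"b"
store.items[-1] = (b"evil", store.items[-1][1])  # host tampers with storage
try:
    card.pop()
except ValueError:
    pass  # tampering detected
```

Note how the card's constant-size state (one tag) suffices regardless of stack depth, which is the point of the memory-constrained setting.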
The final session was Cryptography II, chaired by Michael Waidner.
The first paper was "Necessity and Realization of Universally Verifiable Secret Sharing" by Wenbo Mao (HP Laboratories, Bristol). This work is related to escrowed cryptographic systems. The threat model is that Alice and Bob are potentially bad guys, whereas the Eavesdropper is the good guy! The requirements are compliance and fairness. The system uses multiple good guys to prevent some of them from becoming bad through unlawful privacy intrusion or destruction of the recovery material. There are two-party protocols to streamline the secret sharing with many good guys. EES (Clipper) has poor secret sharing: you must get information from two out of two of the sharers, plus the manufacturer can know the secret. More recent work adds fairness, including partial key escrow (which imposes a non-trivial cost on searching for the missing portion). Previous fair schemes are all multi-party protocols, and Alice has little freedom to choose her shareholders. Reducing the number of shares required will increase reliability but decrease fairness. The honesty of a majority of shareholders can be a dangerous assumption. Also, it is impractical to use a large number of shareholders, yet unfair or dangerous to use a small number of them (5 seems too small). If a secret sharing can be verified universally (by anybody), then a multi-party protocol can be streamlined into a two-party one with guaranteed correctness in secret sharing. Then a "large" number of shareholders can really mean large. If Alice's public key is based on the discrete log problem, you can encrypt her private key under a third party's public key with a proof of correctness of the encryption. The verifier can be anybody, using the third party's public key. Their work uses bit-by-bit encryption, but more efficient block encryption methods exist.
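For readers unfamiliar with the threshold-sharing tradeoff discussed above, here is minimal Shamir k-of-n secret sharing over a prime field. This is the standard textbook scheme, not Mao's universally verifiable construction; the prime, secret, and function names are my own example values:

```python
import random

P = 2**127 - 1  # a Mersenne prime, used as the field modulus

def share(secret, k, n):
    """Split secret into n shares; any k of them recover it."""
    # Random degree-(k-1) polynomial with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the polynomial at 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(31337, k=3, n=5)
assert reconstruct(shares[:3]) == 31337  # any 3 of the 5 shares suffice
assert reconstruct(shares[2:]) == 31337
```

Lowering k makes recovery more reliable but means fewer colluding shareholders can break fairness, which is exactly the tension the paper's universal verifiability is meant to relieve by making large n practical.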
The last paper was "Towards Mobile Cryptography" by Tomas Sander and Christian F. Tschudin (Intl. Computer Science Inst., Berkeley, CA). Tomas presented. This work emphasized protection of mobile code against tampering by the underlying execution environment (code and execution integrity), against disclosure of the algorithm to the host, and against disclosure of the user's private key to the host (even if it signs something while executing there). They have undertaken some first steps in Executing Encrypted Functions (EEF), Homomorphic Encryption Schemes (HES), and Undetachable Signatures. Protecting mobile code is like trying to protect a well-meaning Middle American with 100-dollar bills hanging out of his back pocket who is walking around a bad neighborhood in Rio de Janeiro. They want to execute encrypted programs directly. The sender must be able to decrypt the results to extract the return value of running the encrypted function. The specific case they study is polynomials on rings Z/nZ. In addition, they look to glue the signature routine to the function that generates the output to be signed, to keep an adversary from taking an encrypted signature function and using it to sign arbitrary things. Their technique is function composition: they compose the signing routine and the function generating the output to sign so that the signing routine remains secret. If we could find encryption schemes "compatible" with + and *, general programs could be encrypted! The bad news is that whether these exist has been a long-standing open problem in cryptography. Tomas stated that we need a sound theoretical foundation for mobile code security, not just beliefs [I would offer the caveat "except in panels!", as facts rarely make for an entertaining panel :-)]. Their positive results for polynomials and rational functions are only the first steps.
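The EEF idea for polynomials can be illustrated with a deliberately weak toy: additive one-time masking of the coefficients, which is "compatible with +" in the sense above. This is my own sketch of the workflow (originator encrypts, hostile host evaluates blindly, originator decrypts), not the paper's actual scheme, and the masking offers no real security if reused:

```python
import random

N = 2**61 - 1  # toy modulus; all arithmetic is in Z/NZ

def encrypt_poly(coeffs):
    """Originator masks each coefficient a_i as c_i = (a_i + k_i) mod N."""
    masks = [random.randrange(N) for _ in coeffs]
    enc = [(a + k) % N for a, k in zip(coeffs, masks)]
    return enc, masks

def host_eval(enc_coeffs, x):
    """Run by the untrusted host: evaluate the encrypted polynomial at its
    own input x, without ever seeing the plaintext coefficients."""
    return sum(c * pow(x, i, N) for i, c in enumerate(enc_coeffs)) % N

def decrypt_result(enc_value, masks, x):
    """Run by the originator once (x, enc_value) comes back: because the
    masks are additive, subtracting sum(k_i * x^i) yields P(x)."""
    mask_value = sum(k * pow(x, i, N) for i, k in enumerate(masks)) % N
    return (enc_value - mask_value) % N

coeffs = [5, 0, 3]            # P(x) = 5 + 3x^2, hidden from the host
enc, masks = encrypt_poly(coeffs)
x = 7                         # the host's input
assert decrypt_result(host_eval(enc, x), masks, x) == (5 + 3 * 7 * 7) % N
```

A scheme homomorphic in both + and * would extend this trick beyond polynomial evaluation to general programs, which is exactly the open problem the talk highlighted.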
The final announcements of the conference were that Jon Millen will be registration chair next year, and Mike Reiter will be junior program chair.