This year's CSFW took place in Rockport, MA, from the 9th to the 11th of June. As in 1997, the workshop was held at the Ralph Waldo Emerson Inn in Pigeon Cove, a more than 100-year-old building on the Atlantic coast. The house has a nice garden, which was the scene of the croquet tournament that is organized among the workshop participants every year.
The General Chair of the workshop was Jonathan Millen, the Program Chair was Simon Foley, and the Publications Chair was Joshua Guttman. The workshop had about 45 participants. The technical program contained 17 presentations organized into 7 sessions, and 2 panels.
Session 1: Distributed Services
Chair: Li Gong
Weakly Secret Bit Commitment: Applications to Lotteries and Fair Exchange Paul Syverson
Paul also gave a subtitle for his talk: 'How I learned to stop worrying and love exportable cryptography'. He argued that weak cryptography is not only acceptable in certain situations but may even be desirable. He illustrated this with two example applications: lottery games and fair exchange protocols. In the lottery games that he proposed, the result of the lottery is determined by a set of public inputs from the bettors, but no one (not even the lottery organizer) can control the outcome or determine what it is until after the game closes. This is achieved by the application of a weakly secret bit commitment (WSBC) function, which keeps the outcome secret in a way that is breakable after a predictable amount of time and/or computation. The second application of WSBC proposed by Paul was a variant of fair exchange that requires no trusted third party at all. It accomplishes this by a short sequence of messages that combine a diminishing disincentive to cheat with either an increasing ability to complete the exchange or evidence that cheating has occurred. Thus, self-interested parties would prefer completing the protocol to cheating. For this reason, Paul called this type of protocol 'rational' exchange.
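To make the WSBC idea concrete, here is a minimal sketch in Python (my own illustration, not Paul's construction): a hash-based bit commitment whose key is drawn from a deliberately small space, so that anyone can force the commitment open with a predictable amount of brute-force work.

    import hashlib
    import secrets

    KEY_BITS = 20  # deliberately weak: forcing the commitment open
                   # costs about 2**20 hash evaluations

    def commit(bit):
        """Commit to a bit under a short random key; the committer keeps the key."""
        key = secrets.randbelow(2 ** KEY_BITS)
        digest = hashlib.sha256(key.to_bytes(4, "big") + bytes([bit])).digest()
        return digest, key

    def force_open(commitment):
        """Recover the committed bit without the key by exhaustive search.
        The work factor is predictable, which is the point of a WSBC."""
        for key in range(2 ** KEY_BITS):
            for bit in (0, 1):
                d = hashlib.sha256(key.to_bytes(4, "big") + bytes([bit])).digest()
                if d == commitment:
                    return bit
        raise ValueError("not a valid commitment")

In a lottery along these lines, the inputs would be committed in this way before the game closes; the outcome stays secret until then, yet anyone can recover it afterwards, even if a party refuses to open its commitment.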
On the Structure of Delegation Networks Tuomas Aura
Tuomas presented a formal model for SPKI-like, key-oriented delegation systems. After having defined what a delegation certificate is, he introduced the notion of delegation networks. The delegation network is a graph that represents how keys delegate access rights to other keys through delegation certificates. A key idea was that an issuer key can delegate access rights to more than one (i.e., a set of) subject keys, which must then cooperate to use the authority. Then, he presented a formal semantics for delegation networks by defining how the basic question 'Does key k1 delegate authorization for operation o to another key k2?' can be answered for a given delegation network. He also proved that certificate reduction, in which certificates are merged into one, is a sound and complete decision technique. Soundness and completeness here mean that the reduced network (or the single certificate to which the network may finally be reduced) represents the same authorizations as the original one. Tuomas concluded that the presented model can be used as a basis for developing algorithms for managing certificate databases.
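A minimal sketch of the reduction step (my own encoding, not the paper's notation; multi-key subject sets are ignored for brevity): treat a certificate as (issuer, subject, rights, can-delegate) and merge two chained certificates by intersecting their rights.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Cert:
        issuer: str          # key granting the authority
        subject: str         # key receiving it (a single key, for simplicity)
        rights: frozenset    # set of authorized operations
        can_delegate: bool   # may the subject pass the rights on?

    def reduce_certs(c1: Cert, c2: Cert) -> Cert:
        """Merge two certificates along a delegation edge: if c1's subject
        issued c2 and c1 permits delegation, the pair authorizes exactly
        what a single certificate with the intersected rights would."""
        if c1.subject != c2.issuer or not c1.can_delegate:
            raise ValueError("certificates do not chain")
        return Cert(c1.issuer, c2.subject,
                    c1.rights & c2.rights, c2.can_delegate)

Repeating this step answers the basic question above: k1 delegates operation o to k2 when the network reduces to a certificate from k1 to k2 whose rights include o.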
Two Facets of Authentication Martin Abadi
The main idea of Martin's talk was that authentication can have two goals: assigning responsibility and/or giving credit. He explained the concepts of responsibility and credit through example protocols, which were adequate for applications that require responsibility but not necessarily for applications that require credit. He said that it is, therefore, prudent to state whether a newly designed protocol is intended to establish responsibility, credit, or both. Similarly, when an authentication protocol is analyzed, it is prudent to clarify whether its properties are sufficient for establishing responsibility or credit. He concluded that responsibility is essential, because it is crucial for access control, but there does not seem to be a consensus on whether an authentication protocol should also establish credit or not. Establishing credit, however, can make the protocol more robust and clear, and can help avoid confusion. Somebody asked whether there is any protocol that establishes credit but no responsibility. Is there any?
Session 2: Noninterference
Chair: Stewart Lee
Probabilistic Noninterference in a Concurrent Language Dennis Volpano and Geoffrey Smith
Dennis started his talk with an example multi-threaded program. The program was possibilistically noninterfering (i.e., changing the initial values of the high (private) variables does not change the sets of possible final values of the low (public) variables). He showed, however, that if thread scheduling is probabilistic then there is a probabilistic interference (i.e., changing the initial values of the high variables changes the probability distribution of the final values of the low variables). Then he presented their solution to the problem. He proposed that conditionals with high guards be executed atomically. This is accomplished by wrapping the conditional in a new command, called 'protect', that guarantees atomic execution. He proved that such well-typed programs satisfy the probabilistic noninterference property. He noted that although their result rules out all covert flows from high variables to low variables with respect to internal program behavior, covert channels are still possible if external observation of the running program is allowed. He emphasized that the application that they had in mind was mobile-code. A key assumption in the mobile-code framework is that any attempt to compromise privacy must arise from within the mobile-code, which is internal to the system. The system can control what the mobile-code can do and what it can observe.
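A toy simulation (my own example, not the one from the talk) makes the distinction visible: both final values of the low variable l are possible whatever the high input h is, so the program is possibilistically noninterfering, yet their probabilities under a uniform scheduler depend on h.

    import random

    def run_once(h):
        """Two threads under a uniform random scheduler. Thread A pads its
        execution when h is set, then assigns l = 1; thread B assigns l = 0."""
        a = ["skip"] * (50 if h else 0) + ["l=1"]
        b = ["l=0"]
        l = None
        while a or b:
            thread = random.choice([t for t in (a, b) if t])
            step = thread.pop(0)
            if step == "l=1":
                l = 1
            elif step == "l=0":
                l = 0
        return l

    def prob_l_is_1(h, trials=10_000):
        return sum(run_once(h) for _ in range(trials)) / trials

    # prob_l_is_1(0) is about 0.5, while prob_l_is_1(1) is nearly 1.0:
    # the distribution of the low output leaks the high input.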
Partial Model Checking and Theorem Proving for Ensuring Security Properties Fabio Martinelli
In his paper, Fabio addressed the problem of the decidability of a property called Bisimulation Non Deducibility on Compositions (BNDC). In short, a system is BNDC if interaction with high level users does not change the appearance of the system to the low level users. Thus, no information at all can be deduced by low level users. He proved the decidability of BNDC for finite state systems and presented a tool for ensuring such security properties. This tool can also be used to check functional properties of the system automatically. The tool is built over mudiv, which is a program for checking properties of finite state systems, and a proof assistant developed in Coq for the modal mu-calculus.
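For reference, the property can be stated roughly as follows (in the style of Focardi and Gorrieri's Security Process Algebra; the paper's exact notation may differ):

    E ∈ BNDC  ⟺  ∀Π ∈ E_H :  E \ Act_H  ≈_B  (E | Π) \ Act_H

where Π ranges over high-level processes, \ Act_H removes the high-level actions from the low-level view, and ≈_B is (weak) bisimulation equivalence.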
Session 3: Protocol Verification
Chair: Catherine Meadows
Formal Analysis of a Non-repudiation Protocol Steve Schneider
Steve presented a CSP based formal analysis of the Zhou and Gollmann non-repudiation protocol. Non-repudiation protocols are used to enable agents to send and receive messages, and provide them each with evidence of the message transmission. The Zhou and Gollmann protocol also aims to provide fairness (i.e., no party should be able to reach a point where they have the evidence that they require without the other party also having the required evidence). The CSP modeling revealed an unusual security property required in this protocol: most security properties of such protocols are safety properties, while in this protocol non-repudiation also involves liveness. Steve modeled the protocol from the point of view of the judge who would arbitrate in case of a dispute. The modeling of the protocol required a different approach than the standard ones taken to, for instance, authentication protocols: here, agents require protection from each other, rather than from an external hostile agent. The CSP modeling allowed Steve to formally state the specification claimed for the protocol, and to verify it. It was also helpful in clarifying issues concerning the protocol and its context.
Honest Ideals on Strand Spaces Javier Thayer, Jonathan Herzog, and Joshua Guttman
The paper was presented by Javier. He introduced the notion of ideals. An ideal is a set of messages closed under encryption and invariant under composition with arbitrary messages. He used this new notion to state and prove general facts about the power of the penetrator, who wants to subvert the goals of a security protocol. In particular, he proved that if a legitimate protocol participant can never utter any message in an ideal I, then a penetrator can never utter any message in I either. He applied the method to analyze the Otway-Rees protocol, and to explain what this protocol can achieve and what its limitations are.
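In symbols (as I noted them; the paper's notation may differ): for a set k of keys, a k-ideal of the algebra A of messages is a set I ⊆ A such that, for all h ∈ I, g ∈ A, and K ∈ k,

    h·g ∈ I,    g·h ∈ I,    and    {h}_K ∈ I,

that is, I is invariant under concatenation with arbitrary messages and closed under encryption with keys in k.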
Panel 1: Varieties of Authentication Moderators: Roberto Gorrieri and Paul Syverson Panel Members: Martin Abadi, Ricardo Focardi, Gavin Lowe, Catherine Meadows, and Dieter Gollmann
The aim of the panel was to encourage a discussion on the current attempts to give formal definitions of authentication. Despite the widely accepted intuitive meaning of authentication, formal definitions are rather rare and not widely agreed upon. This is sometimes due to the lack of a formal model on which the problem can be defined and with respect to which a formal definition can be given.
Martin emphasized the need for higher level guarantees of authentication and authorization. He said that when people talk about authentication, they do not really care whether the authentication protocol used has certain lower-level properties that are concerned with messages and keys. They do not even want to hear about keys. What they need is a higher level guarantee. The question is what this higher-level guarantee is and how we can achieve it by guaranteeing lower-level properties. He mentioned his work on the join calculus and the s-join calculus.
Ricardo raised the issue of exploiting off-line information (information that is not visible to the parties of the protocol but is available before the execution to those who verify the correctness of the protocol) in the verification of the protocol and defining authentication in terms of it. As an example, he suggested considering the locations of protocol participants as off-line information. Then, authenticating an entity means mapping an entity name to its address, and an attack against a protocol can be discovered by obtaining incorrect mappings of names to addresses. He raised the question of whether it is possible to exploit other kinds of off-line information for authentication.
Dieter suggested that, to avoid confusion, we should agree on when to use the term authentication, rather than trying to find its intrinsic meaning. He pointed out that the verification of the identity of an entity and the verification of the origin of a message or access request are all termed authentication.
Gavin's position was that authentication is about what an agent can deduce about the state of other agents (e.g., about its partners in the protocol). This can be something like: the other party has recently sent a message, run the protocol, agreed upon some details (e.g., a session key), etc. He emphasized the difference between authentication and secrecy: the latter is about what an agent can deduce about the state of malicious third parties.
Cathy focused on the issue of authentication in the Internet era. She pointed out that with the widespread use of the Internet and related technologies such as mobile-code, it becomes more likely that a single entity may be authenticated in more than one way and this raises some interesting questions. She talked about her experiences with the Internet Key Exchange (IKE) protocol that she had analyzed.
Although the aim of the panel was to give formal definitions of authentication, no one gave any. Finally, Michael Merritt put up a slide with one line written on it:
∀t [ (t ⊢ A says m) → (I(t) ⊨ A says m) ]
His purpose, as he said, was to create more confusion. According to this formula, authentication means that for every trace (execution?) t of the system, if 'A says m' can be derived from the trace t, then 'A says m' really holds in some interpretation of the trace. What remains, of course, is to define what traces are, what the interpretation function does, and what 'A says m' means exactly. Still, this was more specific than the panel's previous discussion.
Session 4: Protocol Model Checking
Chair: Jonathan Millen
Proving Security Properties with Model Checkers by Data Independence Techniques A. W. Roscoe
In his presentation, Bill addressed the general problem of model checking approaches to the verification of security protocols. Namely, model checkers can only prove that a small instance of the system, usually restricted by the finiteness of some set of resources such as keys and nonces, is free of attacks. In general, this does not prove that the protocol is error free. He showed how techniques borrowed from data independence can be used to achieve the illusion that an infinite number of different keys and nonces are available, even though the actual types used for these things remain finite. This result can be used to create models of protocols for which it is sufficient to run the model checker on a finite system in order to prove that the protocol is secure from attacks. An important limitation of the result is that it does not allow protocol participants to run more than one instance of the protocol at the same time.
Towards a Completeness Result for Model Checking of Security Protocols (Extended Abstract) Gavin Lowe
Gavin addressed the same problem as Bill in his talk: if no attack is found by the model checker, this means only that there is no attack on a particular small system; there may still be an attack on some larger system. He presented sufficient conditions on the protocol and its environment such that if there is no attack on a particular small system leading to a breach of secrecy, then there is no attack on any larger system leading to a breach of secrecy. The small system that is sufficient to check is simply the protocol with one honest agent for each role, each running one instance of the protocol at a time. The conditions that have to be satisfied by the protocol are the following: encrypted components should be textually distinct, identities should be inferable from encrypted messages, and the protocol should not use any temporary secrets. He also made some model-based assumptions such as perfect cryptography. His results can be used to verify secrecy properties, but he is considering extending the approach to authentication as well.
Efficient Finite-State Analysis for Large Security Protocols Vitaly Shmatikov and Ulrich Stern
The paper was presented by Vitaly. He addressed the problem of the large number of reachable states in finite-state modeling and analysis of security protocols. He proposed two techniques that reduce the number of reachable states and thus allow the analysis of larger protocols. The main idea of the techniques was that if we have a state s, then those states where the honest protocol participants have the same knowledge as in state s and the intruder has less knowledge than in state s can be safely removed from the state-graph. The first technique was to let the intruder always intercept messages sent by the honest participants. The second technique was to prevent the intruder from sending messages if at least one of the honest participants is able to send a message. Intuitively, both techniques increase the intruder's knowledge. He proved that the proposed state reduction techniques are sound (i.e., each protocol error that would have been discovered in the original state-graph will be discovered in the reduced graph). The techniques have been implemented in the Murphi verifier, and reduced the execution time significantly. Vitaly and Ulrich proposed another technique as well that does not reduce the number of states but speeds up the evaluation of the transition rules in Murphi, and hence reduces the execution time.
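The pruning criterion behind both techniques can be pictured with a small sketch (a hypothetical encoding of states, not the actual Murphi implementation): a state is redundant if an already-explored state has the same honest-agent part and at least as much intruder knowledge.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class State:
        honest: frozenset    # knowledge/configuration of the honest agents
        intruder: frozenset  # messages known to the intruder

    def redundant(s, explored):
        """True if s is dominated by an explored state: same honest part,
        intruder knows at least as much. Any attack reachable from s is
        then also reachable from the dominating state."""
        return any(t.honest == s.honest and t.intruder >= s.intruder
                   for t in explored)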
Session 5: Composition
Chair: Roberto Gorrieri
Composing Secure Systems that have Emergent Properties Aris Zakinthinos and Stewart Lee
Aris presented the paper, which investigates the existence of emergent properties. An emergent property is one that is not satisfied by all the constituent components of a system, but is satisfied by the overall system. He presented conditions on properties that allow one to infer whether a given property can emerge. He also gave a sufficient condition for the composition of two systems to yield a system that satisfies a stable property.
Merging Security Policies: Analysis of a Practical Example Frederic Cuppens, Laurence Cholvy, Claire Saurel, and Jerome Carrere
The paper was presented by Claire. She introduced a general approach to analyzing situations wherein several possibly conflicting security policies apply. She proposed a language to specify security policies and a method to detect conflicts by using the technique of consequence finding. For resolving conflicts, she proposed the approach of defining strategies. A strategy defines an order of preference between rules. She introduced the notion of a 'good' strategy, which enables resolution of all conflicts in a policy. Finally, she showed how strategies can be used to derive the actual norms in a merged security policy with possibly conflicting rules. The approach was illustrated through an example (an organization that has to deal with classified documents, and the problem of downgrading their classification level when they become obsolete).
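A minimal sketch of conflict resolution by a strategy (my own encoding, not the paper's specification language): when rules with contradictory effects apply to the same case, the strategy's preference order decides which one prevails.

    # Hypothetical encoding: a rule is (name, effect), effect in {"permit", "forbid"}.
    def resolve(applicable, strategy):
        """Return the effect of the most-preferred applicable rule.
        `strategy` lists rule names in decreasing order of preference;
        a 'good' strategy ranks enough rules to settle every conflict."""
        effects = dict(applicable)
        for name in strategy:
            if name in effects:
                return effects[name]
        raise ValueError("conflict not covered by the strategy")

    # Example: a classified document that has become obsolete.
    rules = [("protect-classified", "forbid"), ("downgrade-obsolete", "permit")]
    print(resolve(rules, ["downgrade-obsolete", "protect-classified"]))  # permit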
Session 6: Protocol Logics
Chair: Gavin Lowe
Evaluating and Improving Protocol Analysis by Automatic Proof Stephen Brackin
Stephen summarized the results of Automatic Authentication Protocol Analyzer (AAPA) analyses of 52 security protocols. The AAPA found no problem in 16 that have known problems, and found problems in 3 that were not previously identified as having problems. He explained why the AAPA missed attacks: most of these types of failure would occur for belief logics in general, and some for all types of protocol analysis tools; only one arises from an AAPA-specific error. Stephen is working on a new version of the tool that overcomes many of the problems of the current version.
A Simple Logic for Authentication Protocol Design Levente Buttyan, Sebastian Staamann, and Uwe Wilhelm
The paper was presented by Levi. He proposed a BAN-like logic in which encryption and keys are replaced by channels with various access restrictions. This abstraction makes the logic compact, because the cryptographic operations that are usually used in authentication protocols can all be described with a uniform notation. Furthermore, channels enable thinking about protocols at a high abstraction level without being concerned with implementation details. He suggested that the logic should be used for constructing protocols in the first place, rather than for verifying existing ones. For this reason, he proposed synthetic rules derived from the logic that can be used by designers to construct protocols in a systematic way. He illustrated the use of the synthetic rules by re-designing the Woo and Lam authentication protocol, which has known flaws.
Panel 2: The Security Impact of Open/Distributed Computing Technologies Moderator: Peter Ryan Panel Members: Dieter Gollmann, Li Gong, Gunter Karjoth, and Gene Tsudik
The panel was concerned with the question: to what extent do the tools and techniques developed to design and evaluate secure systems apply to radically new technologies such as Java, CORBA, ActiveX, and mobile-code? The aim was to identify the key problems and to propose approaches to addressing them.
Dieter suggested that the challenge is to provide solid foundations for the security services in open/distributed systems. These services can guarantee security if they are properly implemented and cannot be bypassed. Furthermore, the security relevant information that the services need to function resides in the lower layers of the architecture. Thus, if there is no sufficient protection at the lower layers, 'the foundation of the fortress will crumble'. He also pointed out that standardization of policies is an important issue. However, to make standards, we need experience with implementations. This seemed contradictory, but Paul Syverson noted that a similar approach exists today, namely the Internet standardization process in the IETF.
Li's position was that the new trends present new opportunities for programmers by making programming tasks appear easier than they are, but pose significant challenges for security professionals, who have to ensure that software applications provide adequate levels of security. The problem is that today's software developers lack the knowledge and tools to produce secure applications, while their output is ever more critical for society. Li suggested that this problem can be addressed by technical and non-technical approaches such as building security features into the platform, providing security primitives and design patterns for programmers, and educating and influencing industry.
Gunter's opinion was that the main characteristics of open/distributed systems that make securing them hard are complexity, large scale distribution, the inclusion of legacy code and COTS products, mobility, and flexibility. Security is not an integral part of distributed computing technology, and integrating it is not trivial. He mentioned CORBA and SNMP as examples: security was not a primary issue in the design of these systems, and although a security specification for CORBA is now available, the attempt to secure SNMP failed. He pointed out that security and software engineering should converge if we want open distributed computing technologies to be securely applicable on a wide scale.
Gene did not seem to agree with the previous speakers of the panel. He said that new technologies do not always mean new security issues. He argued that papers about secure electronic payment systems were published in the early 80's; research on mobile-code and its security implications was done in the 80's (e.g., Cerf's knowbots); and the problem of protecting a host from a malicious mobile agent is very similar to the problem of viruses, which were studied in the 80's and shown to be an intractable problem. Little has been done to address the issue of protecting the mobile agent from malicious hosts; however, this issue arises only for itinerant (multi-hop) agents, the utility of which is somewhat in doubt today. Gene finished his talk with a list of issues that he believes are hot research topics today: intrusion prevention; new access control dimensions (location based access control, groups as first class objects); efficient credential revocation; signature translation, notarization, and renewal; and group security services (e.g., group signatures).
Obviously, not everybody agreed with Gene. Virgil Gligor noted that preventing denial of service attacks in the new environment really is an issue, and existing techniques do not work because of the variability of the user population. Paul Syverson added that the large scale also makes existing techniques unusable. The discussion ended with the consensus that the problems are not new, but the existing solutions might not work in the new environment.
Session 7: Database and Intrusion Detection
Chair: Robert Morris
A Fair Locking Protocol for Multilevel Secure Databases Sushil Jajodia, Luigi Mancini, and Sanjeev Setia
Sushil presented the paper, in which they addressed the unfairness of existing concurrency control algorithms that are free from covert channels. Most of these algorithms prevent covert timing channels by ensuring that transactions at lower security levels are never delayed by the actions of a transaction at a higher security level. Thus, low transactions have higher priority than high transactions, which can lead to poor performance and even starvation of high transactions. They analyzed the performance of the secure version of two phase locking, which provided insight into the conditions under which this algorithm gives poor performance to high transactions. Based on this analysis, Sushil proposed three new versions of the secure two phase locking algorithm that mitigate the performance problems. The approach that he took is based on restricting the number of low transactions executing in the system.
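The idea can be reduced to simple admission control (an illustrative sketch, not the paper's algorithms): a static bound on the number of concurrently executing low transactions limits how badly they can starve high transactions, and since the bound does not depend on high activity, low transactions are still never delayed by high ones.

    import threading

    MAX_LOW = 4  # static bound, chosen independently of any high activity
    low_slots = threading.BoundedSemaphore(MAX_LOW)

    def run_low_transaction(body):
        """Admit at most MAX_LOW concurrent low transactions. Because the
        bound is fixed in advance, admission never depends on (and thus
        never signals) the behavior of high transactions."""
        with low_slots:
            body()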
Data Level Inference Detection - A Rule Based Approach Raymond W. Yip and Karl Levitt
The paper was presented by Raymond. He argued that using only functional dependencies for inference detection in databases is inadequate, and that the data in the database should also be considered. He presented their data level inference detection system, which is based on five inference rules: 'subsume', 'unique characteristic', 'overlapping', 'complementary', and 'functional dependency'. Users can apply these inference rules any number of times, and in any order, to infer data. These rules are sound but might not be complete. The proposed system is open, so whenever a new inference rule is discovered it can easily be integrated into the system. Raymond also reported on a prototype that he implemented in Perl. Performance studies show that it takes a few seconds (on a Sun SPARC 20) to process a query that returns hundreds of records from a database of 10000 records; hence, the system is better suited to off-line use.
Abstraction-Based Misuse Detection: High-Level Specifications and Adaptable Strategies Jia-Ling Lin, Sean Wang, and Sushil Jajodia
The presentation was given by Jia-Ling. She introduced a high-level language for describing abstract misuse signatures (MuSigs). Due to the high abstraction level, a MuSig can represent complex intrusions in a simple form. She also introduced the notion of abstract views. An abstract view specifies abstract events derived from the events in the audit trail. The designer of the misuse detection system is responsible for identifying the abstract views and providing programs, called system directives, to support the views. Jia-Ling proposed ways to translate MuSigs into monitoring programs with the help of the system directives. The behavior of the monitoring program can be changed by the system manager by changing the set of available system directives.
The business meeting was held one day earlier (Wednesday) than indicated in the program. The topic was the venue of the next workshop. Stewart Lee proposed Cambridge, and Roberto Gorrieri proposed Bologna as the new possibility. After some rounds of voting, Bologna emerged as the participants' preferred choice.
By the end of the workshop, the regular croquet tournament had also been completed. The winners were: Tuomas Aura (1st), Michael Merritt (2nd), and Bill Roscoe (3rd).