1997 Symposium on Security and Privacy

Joshua D. Guttman

The MITRE Corporation
guttman@mitre.org

This year's IEEE Symposium on Security and Privacy contained 20 papers distributed in six sessions. No session had a particularly narrow technical focus, and between them, the papers touched on a broad range of topics. Some years the symposium seems to have an informal center of interest, but this year it had a fairly balanced spread across the field.

I will group the papers into rough subject-oriented clusters, even though they do not neatly match the division of the symposium into sessions. These clusters are:

  1. Authentication, Cryptography, and Cryptographic Protocols (4 papers)
  2. Network Security Issues (4 papers)
  3. Database Security Issues (3 papers)
  4. Intrusion Detection and Prevention (3 papers)
  5. Security Theory (2 papers)
  6. Security Policies and Authorization (4 papers)

I will discuss them in this (arbitrary) order. In some cases I have more to say than in others, and in some cases I have opinions that I will mention after giving a brief summary.

This summary also has an Index of Titles and an Index of Authors.

All opinions expressed here are purely personal positions, and no organization should be construed as endorsing them, in particular, not my employer, nor any agency of the US Government, nor the IEEE Cipher.

1. Authentication, Cryptography, and Cryptographic Protocols

Toward Acceptable Metrics of Authentication

Michael K. Reiter and Stuart G. Stubblebine (AT&T Labs--Research)

In using public-key cryptography, a primary problem is to determine reliably when a person owns a public key. A variety of schemes now exist for allowing authorities to certify bindings between people (or other entities) and their keys. However, one needs to decide which authorities to trust, particularly in cases where one authority may certify a binding for another authority, who in turn helps to certify a target binding. Clearly one wants to view this process as generating a directed graph of bindings leading from oneself to the target binding. An authentication metric, as defined by the authors, is a recipe for annotating such graphs with whatever supplementary information may be relevant, and deciding when the resulting annotated graph yields good enough evidence for a target binding.

The authors criticize a number of earlier authentication metrics, and demonstrate that some have highly counterintuitive properties. The authors do not exempt a scheme they proposed themselves, which appeared barely a month before the symposium. Although major aspects of the paper are matters of judgment, which can appear subjective, this even-handed criticism of the authors' own recent work will prevent anyone from thinking that the paper merely records their own opinions or prejudices.

The authors develop a sequence of plausible design principles for authentication metrics, and illustrate where the previous proposals fall short. The principles require that parameters to the models should have clear meaning and should be determinable independently of the output of the metric itself; that the metric should take into account as much relevant information as possible, and not hide relevant decisions; that the metric should be resilient to manipulation by misbehaving entities and provide meaningful results from partial information. They also require that the output of the metric should be intuitively understandable.

The authors use these principles to motivate a new metric. This metric is based on the idea that when an authority certifies a binding, it should provide an amount of insurance that the binding is correct. This insurance may be regarded as an attribute of each directed edge in the graph.

Then a natural property of the graph is the minimum capacity cut set in the graph. A cut set is a set of edges that if removed from the graph leaves no path from the source to the target binding. A cut set has minimum capacity if the sum of the values on all of its edges is less than or equal to the sum on any other cut set.
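
To make the metric concrete: its value is simply the maximum flow from oneself to the target binding, taking the insurance amounts as edge capacities. The following sketch is my own illustration, not the authors' code; the graph, the node names, and the dollar figures are invented, and a small Edmonds-Karp max-flow routine stands in for whatever computation a real implementation would use.

    from collections import deque, defaultdict

    def min_cut_value(capacity, source, target):
        """Value of a minimum cut equals the maximum flow (max-flow/min-cut theorem).

        capacity: dict mapping node -> {neighbor: insurance amount on that edge}.
        """
        residual = defaultdict(lambda: defaultdict(int))
        for u, edges in capacity.items():
            for v, c in edges.items():
                residual[u][v] += c
                residual[v][u] += 0        # make sure the reverse edge exists
        flow = 0
        while True:
            # Breadth-first search for an augmenting path with spare capacity.
            parent = {source: None}
            queue = deque([source])
            while queue and target not in parent:
                u = queue.popleft()
                for v, c in residual[u].items():
                    if c > 0 and v not in parent:
                        parent[v] = u
                        queue.append(v)
            if target not in parent:
                return flow                # no augmenting path left: flow == min cut
            # Find the bottleneck along the path found, then push that much flow.
            path, v = [], target
            while parent[v] is not None:
                path.append((parent[v], v))
                v = parent[v]
            bottleneck = min(residual[u][v] for u, v in path)
            for u, v in path:
                residual[u][v] -= bottleneck
                residual[v][u] += bottleneck
            flow += bottleneck

    # Hypothetical certification graph; edge weights are insurance amounts in dollars.
    graph = {
        "me":  {"CA1": 10000, "CA2": 5000},
        "CA1": {"target": 8000},
        "CA2": {"target": 5000},
    }
    print(min_cut_value(graph, "me", "target"))   # prints 13000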

This metric has the advantage that its meaning is extremely clear in commercial terms. It is a lower bound on the amount of money to which one will be entitled if the target binding is found to be erroneous and one has to collect the insurance.

This paper combines a searching analysis of previous work (including the authors' own) with thoughtful general principles to guide future work and a particular, well-motivated proposal. It also seems likely that the same principles will apply to trust evaluation problems other than validating certificates.

Automated Analysis of Cryptographic Protocols

J. Mitchell, M. Mitchell, and U. Stern (Stanford University)

Cryptographic protocols are (generally short) sequences of messages involving cryptographic operators such as secret or public key encryption or signature schemes. Cryptographic protocols are used to authenticate one party to another in a network; to agree on a key; or to communicate some other value that will serve as a shared secret. Despite their apparent simplicity, cryptographic protocols are remarkably slippery, and published protocols may be considered sound for a decade or more before disastrous flaws are discovered.

Several analysis approaches are available for avoiding this problem, one of which is to model the protocol and its attacker as a finite state system, and to systematically explore the states in the system to discover whether flaws of various kinds arise in accessible states. The authors apply a tool called Murphi to a number of example protocols with encouraging results. This line of work is conceptually quite similar to other on-going efforts, although the Murphi tool is claimed to be more efficient than its competitors.

Deniable Password Snatching: On the Possibility of Evasive Electronic Espionage

A. Young and M. Yung (Columbia University)

Suppose that you could install a program on a shared computer that would monitor keystrokes. Obviously, you could collect other people's passwords and later impersonate them on the system. Could you also do so without disclosing the fact that you were collecting this information? Well, yes, this paper argues that, using public key cryptography, you can, and it provides a particular scheme that accomplishes this goal. Public key cryptography allows one to store the passwords to disk without anything on disk or in the recording program disclosing that these data are in fact encrypted passwords.

The paper, like the same authors' paper last year on cryptovirology -- combining cryptography with software viruses to provide a new mechanism for extortion attacks -- is written with verve and a sense of mischief. However, someone familiar with what can be done with modern cryptography would have been likely to predict that the authors' conclusion would be true, and that various schemes would suffice to achieve their goal.

Number Theoretic Attacks On Secure Password Schemes

Sarvar Patel (Bellcore)

In this paper, the author explores a number of attacks on various forms of the Encrypted Key Exchange (EKE) protocol invented by Bellovin and Merritt. EKE is intended to allow parties to agree on a key or some similar secret, assuming that they already share a secret password P, which is used to encrypt an initial value e. This raises a special problem because passwords are much more predictable than other sorts of secrets. As a consequence, password-based security is often vulnerable to a dictionary attack, in which the attacker tries each member of a large collection (such as a dictionary) as the key to decrypt an observed message P(e), looking for a recognizably correct result. EKE is designed to use the password only in contexts where the attacker cannot recognize whether the result of a trial decryption is correct, for instance because any number e is a possible plaintext value.

However, Patel shows that this is not sufficient, and that many variants of EKE will succumb to a more active sort of dictionary attack. This sort of attack arises because the responder in the various forms of EKE returns a value of the form P(f(e)) when presented with P(e), where f preserves some number-theoretic properties. Thus, an attacker may send a value x and receive in response a value x', which the responder computes by decrypting x using P, applying f, and encrypting the result with P. Now the attacker can consider all the passwords in the dictionary, decrypting x and x' under each and checking whether the results have the number-theoretic relation preserved by f. Because statistically only a fixed proportion of the candidate passwords will happen to satisfy the test for a particular value x, each exchange shrinks the set of surviving passwords geometrically, so the number of exchanges needed to isolate the password grows only logarithmically with the size of the dictionary. Hence, a relatively small number of trials may suffice to identify a unique password. Some forms of EKE can be repaired so that this form of attack no longer succeeds.
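
The arithmetic is worth spelling out. If a fraction p of the wrong passwords survives each probe's number-theoretic test (p = 1/2, say, for a residue-style test), then after k active exchanges about |D| * p^k candidates remain, so roughly log(|D|) / log(1/p) exchanges isolate the password. A back-of-the-envelope check in Python, with invented figures, matches the "couple of dozen" estimate mentioned below:

    import math

    dictionary_size = 1_000_000   # candidate passwords the attacker will try offline
    survival_rate = 0.5           # fraction of wrong passwords that pass one probe's test

    # Candidates expected to survive after each active exchange with the victim.
    survivors = dictionary_size
    probes = 0
    while survivors > 1:
        survivors *= survival_rate
        probes += 1
    print(probes)   # 20 exchanges for a million-word dictionary

    # The same bound in closed form.
    print(math.ceil(math.log(dictionary_size) / math.log(1 / survival_rate)))   # 20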

It was pointed out by a member of the audience that a special aim of the EKE group of protocols was to prevent off-line attacks, and to force any attack to involve active responses by the victim. Patel's attacks are not off-line, so they undermine the claims of EKE less than might be thought. However, Patel emphasized that the number of exchanges in which the attacker needs the cooperation of the victim is quite small, in realistic cases on the order of a couple of dozen. Hence the attacks are practically quite difficult to defend against.

The paper exemplifies the importance of combining protocol analysis with number-theoretic reasoning; keeping this connection in mind should provide many fascinating headaches for the designers of security protocols.

2. Network Security Issues

Filtering Postures: Local Enforcement for Global Policies

Joshua D. Guttman (MITRE)

In this paper, the author (also the present reviewer) describes a simple language for expressing global access control policies. These policies are designed to be of a kind that filtering routers are capable of enforcing. An algorithm is then introduced to determine what role the individual routers in a (possibly complex) network topology should play in order jointly to enforce a global policy. A related algorithm determines whether a proposed division of access checking among the routers will correctly enforce a global policy, and if not, exactly what violations may arise.

Analysis of a Denial of Service Attack on TCP

Christoph L. Schuba, Ivan V. Krsul, Markus G. Kuhn, Eugene H. Spafford, Aurobindo Sundaram and Diego Zamboni (Purdue University)

A recently observed attack on Internet Service Providers (ISPs) is the so-called SYN-flooding attack. This attack exploits a characteristic of the TCP protocol for setting up connections, namely the three-way handshake.

When a source wants to set up a connection with a destination, it sends a datagram with just the SYN bit in the TCP header set; the destination responds with a datagram in which both the SYN and ACK bits are set. Normally, the source then responds with a datagram in which just the ACK bit is set. This datagram uses a sequence number following that given in the destination's SYN-ACK datagram. To check that an ACK datagram belongs to a handshake in progress, the destination machine must maintain state for the incomplete ("half-open") connection. The SYN-flooding attack exploits this by sending a large number of SYN datagrams, but no ACK datagrams, so that the destination machine must maintain state for a large number of half-open connections. Since the timeout period for ACK datagrams is long (e.g. a minute), it is relatively easy to exhaust the memory the destination host has available for new connections. Legitimate users are then unable to establish connections to obtain service from the host under attack.

The authors describe several potential approaches to defending against SYN-flooding. One could tinker with configuration parameters, or buy more memory so that more half-open connections can be stored. One could try to protect oneself via a firewall. The firewall could be a relay, transferring datagrams from a TCP connection from the external source to another connection with the internal destination. Alternately, the firewall could be a semi-transparent gateway, generating the ACK datagram to complete the three-way handshake, and sending a RESET datagram if a real ACK datagram from the source is not observed.

But the authors' main proposal is a form of active monitoring called synkill. In this approach, a workstation running synkill monitors the network for SYN packets. If a SYN packet has a source address that is impossible or considered suspicious based on past behavior, then the synkill workstation immediately emits a RESET for this connection, allowing the destination to free its resources. Otherwise, it emits an ACK packet so that the connection moves from the half-open state to the fully open state. If a real ACK is not observed, synkill will later send a RESET. The synkill program also monitors which addresses complete their connection requests normally, so that it learns which source addresses to treat with well-earned suspicion.
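
The decision logic is easy to picture. Here is a toy sketch in the spirit of synkill; the state names, the timeout, and the interface are my invention, not the Purdue implementation.

    # Toy classifier in the spirit of synkill. Source addresses migrate between
    # states based on observed behaviour; the states and timeout are illustrative.
    BAD, NEW, GOOD = "bad", "new", "good"

    class SynMonitor:
        def __init__(self, ack_timeout=10.0):
            self.state = {}                 # source address -> BAD / NEW / GOOD
            self.pending = {}               # (src, dst, port) -> time the SYN was seen
            self.ack_timeout = ack_timeout

        def on_syn(self, src, dst, port, now, send_rst, send_ack):
            if self.state.get(src, NEW) == BAD:
                send_rst(src, dst, port)    # free the victim's half-open slot at once
                return
            # Otherwise complete the handshake on the client's behalf and watch
            # for the genuine ACK to arrive within the timeout.
            send_ack(src, dst, port)
            self.pending[(src, dst, port)] = now

        def on_ack(self, src, dst, port):
            # A genuine ACK arrived: remember that this source behaves well.
            if self.pending.pop((src, dst, port), None) is not None:
                self.state[src] = GOOD

        def on_tick(self, now, send_rst):
            # Expire connections whose real ACK never showed up.
            for (src, dst, port), seen in list(self.pending.items()):
                if now - seen > self.ack_timeout:
                    del self.pending[(src, dst, port)]
                    self.state[src] = BAD
                    send_rst(src, dst, port)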

Anonymous Connections and Onion Routing

Paul F. Syverson, David M. Goldschlag and Michael G. Reed (Naval Research Laboratory)

In this paper, the authors give a more complete account than previous papers of their "onion routing" mechanism. Onion routing is intended to provide anonymous TCP connections. The intent is to ensure that neither eavesdropping nor traffic analysis will disclose who is connected to whom, nor what they are saying. Since the onion routing network offers a mechanism for making secure socket connections, this mechanism will support many different applications.

The essence of the idea is to have a set of devices called onion routers available on the Internet. Onion routers maintain long-term connections between themselves. These connections are used to transfer data for users or applications. The goal is to ensure that user data cannot be traced through the network, whether by passive monitoring or even by compromising a small number of onion routers. A particular user connection will traverse a number of different onion routers (currently five), in a pattern determined at the time that the connection is established. By contrast, the underlying IP datagrams may traverse many more IP routers in the course of getting onion-routed data through the network. The first onion router in an onion connection serves as a proxy for the initiator of the onion connection; the last serves as a proxy for the recipient.

When user data is transferred through an onion connection, the data is first tagged with the address of the intended recipient and then encrypted using a key that will be shared only with the last onion router. This object is then tagged with the address of the last onion router and encrypted using a key that will be shared only with the second-to-last onion router. This layered process proceeds backwards through the intended onion route. Thus, only the initiator's proxy needs to know the whole route. The successive routers will know only to which router they forward a cell. Because keys need to be established for each new connection, a special object called an onion must be passed down the path that the connection will take. The routers along this path extract their keying information from this onion. RSA is used to allow the layers of the onion to be shared only with the intended router.
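
The layering itself can be pictured in a few lines of Python. The sketch below is my own illustration, not the NRL implementation: the three router names are invented, a "|" delimiter stands in for the real cell format, the Fernet cipher stands in for the routers' per-connection symmetric ciphers, and the RSA-protected onion that actually distributes the keys is omitted.

    from cryptography.fernet import Fernet

    # One symmetric key per router on the chosen route, assumed to be shared only
    # between the initiator's proxy and that router (key establishment is omitted).
    route = ["routerA", "routerB", "routerC"]          # hypothetical three-hop route
    keys = {name: Fernet(Fernet.generate_key()) for name in route}

    def wrap(payload: bytes, recipient: bytes) -> bytes:
        """Layer the payload: each router's layer names the next hop to forward to."""
        next_hop, data = recipient, payload
        for name in reversed(route):
            data = keys[name].encrypt(next_hop + b"|" + data)
            next_hop = name.encode()
        return data

    def peel(name: str, cell: bytes):
        """What router `name` does on receipt: strip one layer, learn only the next hop."""
        next_hop, _, inner = keys[name].decrypt(cell).partition(b"|")
        return next_hop, inner

    cell = wrap(b"GET / HTTP/1.0", b"www.example.org:80")
    for name in route:                                  # simulate the hops
        next_hop, cell = peel(name, cell)
        print(name, "forwards to", next_hop.decode())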

An MBone Proxy for a Firewall Toolkit

Kelly Djahandari and Dan Sterne (Trusted Information Systems)

The MBone is a network of multicast routers in the Internet; it is used to support multimedia broadcasts and conferencing using the IP multicast mechanism.

However, MBone traffic can cause security problems, and therefore most firewalls do not pass MBone datagrams. A multicast group is determined by an IP address in a particular reserved range of IP addresses. When a host joins a multicast group, the MBone arranges to forward all datagrams with that destination address (call it x) to the host. The key security problem with the MBone is that these datagrams may be directed to any port number. Suppose the host offers a service such as NFS. If a firewall allows MBone datagrams to enter, an attacker could synthesize datagrams addressed to group x with the NFS port as their destination port. When these datagrams were delivered to the host, they could exploit well-known NFS vulnerabilities.

The authors describe an MBone proxy that operates on a firewall machine. This proxy will receive requests from internal hosts to join MBone groups. When the proxy receives such a request, it arranges with the MBone to join the multicast group. Whenever a datagram arrives for that group, the proxy unicasts it to the requesting internal host. The internal host and the proxy must agree on a single port to be used for these unicast packets. If several internal hosts request participation in the same MBone group, then the proxy will unicast separate UDP datagrams to each.
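
The relay half of such a proxy amounts to "join the group externally, copy each datagram inward as unicast." Here is a minimal Python sketch of just that step, with an invented group address and a single internal subscriber; the real TIS proxy also handles subscriber registration, multiple internal hosts, and teardown.

    import socket
    import struct

    GROUP, GROUP_PORT = "224.2.0.1", 9875              # hypothetical MBone session
    INTERNAL_HOST, INTERNAL_PORT = "10.1.2.3", 4000    # agreed unicast port inside

    # Join the multicast group on the firewall's external side.
    recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    recv_sock.bind(("", GROUP_PORT))
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    recv_sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    # Relay each multicast datagram to the internal subscriber as plain unicast UDP,
    # so no multicast datagram is ever delivered through the firewall itself.
    send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        data, _ = recv_sock.recvfrom(65535)
        send_sock.sendto(data, (INTERNAL_HOST, INTERNAL_PORT))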

In this way, MBone services can be made available to internal hosts, without MBone datagrams needing to be delivered to them. An application wrapper is needed to support this service; the wrapper handles the initial set-up conversation with the proxy before spawning the standard (unaltered) MBone application software. When the MBone application exits, it presumably informs the proxy so that the proxy can tear down its membership in this group, in case this is the last internal user to exit the conference.

3. Database Security Issues

The Design and Implementation of a Multilevel Secure Log Manager

Vikram R. Pesati, Thomas F. Keefe and Shankar Pal (Penn State University)

This remarkably detailed paper studies how to provide a log manager for multilevel secure database management systems. The essential problem is that a database log manager has very stringent performance requirements because the changes written by any transaction must be logged before it commits. The log data is needed in case of system failure. As a consequence, the authors want to adopt strategies for controlling the way that the log records are flushed onto disk, which gives the paper a distinctively low-level focus. It is hard to evaluate the significance of the algorithms for flushing records to disk, and the timing data that are presented, without a clearer conception of how systems incorporating this technology will realistically be used.

This paper has the most complicated diagram in the proceedings, a Petri net with 27 places, 14 transitions, and 58 arrows between places and transitions. I particularly enjoyed one delicate touch; 32 additional arrows with slightly differently shaped arrowheads have been added to the diagram to connect labels to the places they describe.

Catalytic Inference Analysis: Detecting Inference Threats due to Knowledge Discovery

John Hale and Sujeet Shenoi (University of Tulsa)

This paper concerns the database inference problem. The authors point out that many things may be inferred from the knowledge actually contained in databases, using common knowledge as supplementary premises. Previous researchers (Su and Ozsoyoglu) cited by the authors show that disclosures due to inference in multilevel databases are common, and that discovering how to eliminate the disclosures is an NP-complete problem.

The present paper differs in two main ways if I understand it correctly. First, it applies the terminology of fuzzy logic to the problem, presumably with the intention of modeling the fact that in many cases, the common knowledge that serves as a supplementary premise is not a universal implication, but a correlation with high frequency. Second, it offers a particular algorithm for detecting inference compromises. I assume that repairing these compromises is algorithmically at least as hard as it would have been if the supplementary premises had been non-fuzzy.

My impression is that most readers will find the terminology of fuzzy sets (and the rhetoric of fuzzy sets) to be an obstacle to understanding. Probably the same or similar points could have been made in a more widely understandable form had they been expressed using traditional probabilistic inference.

Surviving Information Warfare Attacks on Databases

Paul Ammann and Sushil Jajodia (George Mason University); Sushil Jajodia, Catherine D. McCollum and Barbara T. Blaustein (MITRE)

This paper, three of the authors of which are colleagues of the present reviewer, consists of three somewhat contrasting portions. The first presents a polished introduction to the issues of information warfare attacks on databases, along with some approaches to preventing, detecting, and containing such attacks, or recovering from them.

The second portion describes an approach to implementing "damage markings" for maintaining the usefulness of a database despite some damage from an information warfare attack. Some such mechanism is needed because it may be operationally impossible to do without the database, even if it is known that some portions of it have sustained an attack. One goal is to sequester data that is irreparably damaged. A second goal is to allow some uses of data that is damaged but that can still play a useful role. Data may also be partially repaired, meaning that the values it contains may be incorrect, but they are approximately right. These sorts of data are called red (irreparably damaged), off-red (damaged but useful), and off-green (approximately correct); undamaged data is called green. These colors are treated as damage markings, and are assumed to be maintained by the database management system as metadata.

The paper then describes a weakened notion of database consistency that may be applied to databases in the face of some damaged data, applying effectively to the green and off-green portions of the database.

The notion of weak consistency then serves as a springboard from which the authors define when a transaction preserves weak consistency. The authors also describe a particular protocol for manipulating the damage markings, which they call the Normal Transaction Access Protocol, and demonstrate that this protocol preserves weak consistency. In addition to normal transactions, the authors also introduce a notion of countermeasure transactions; these are transactions that examine the damage markings in a database and try to detect damage that may have been done, or to restore the database to a more stable state.
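
To make the markings concrete, here is a toy access check. It is emphatically not the paper's Normal Transaction Access Protocol: each item simply carries one of the four colors as metadata, and the (invented) screening rules decide which transactions may read it.

    from enum import Enum

    class Marking(Enum):
        GREEN = "undamaged"
        OFF_GREEN = "partially repaired, approximately correct"
        OFF_RED = "damaged but still usable for some purposes"
        RED = "irreparably damaged"

    # Invented screening rules: which markings each kind of transaction may read.
    READABLE = {
        "normal":         {Marking.GREEN, Marking.OFF_GREEN},
        "degraded-mode":  {Marking.GREEN, Marking.OFF_GREEN, Marking.OFF_RED},
        "countermeasure": set(Marking),   # countermeasure transactions inspect everything
    }

    def may_read(kind, item, markings):
        """True if a transaction of the given kind may read the item."""
        return markings[item] in READABLE[kind]

    markings = {"balance": Marking.GREEN, "audit_log": Marking.RED}
    print(may_read("normal", "balance", markings))            # True
    print(may_read("normal", "audit_log", markings))          # False: sequestered
    print(may_read("countermeasure", "audit_log", markings))  # True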

The third portion describes a separate algorithm for making snapshots of a database; this is potentially useful in deciding how to recover from an information warfare attack, but seems only loosely integrated with the remainder of the paper.

The paper initiates a new area, concerning systematic ways to analyze the effects of information warfare attacks and to recover from the attacks. We are likely to see many papers in this area in the coming years.

Nevertheless, I think that some of the results in this paper should be regarded as tentative or preliminary. For instance, the paper's Normal Transaction Access Protocol is not the only candidate that could be imagined for this role. Many readers will be surprised that it does not treat the damage markings in a way consistent with the Biba integrity model, ensuring that information does not flow from more damaged into less damaged objects. Thus, we will need a sharper understanding of exactly what security (or integrity) goals the paper's protocol is serving. The paper's protocol also appears to have some counterintuitive consequences as to which transactions would be consistency-preserving.

4. Intrusion Detection and Prevention

How to Systematically Classify Computer Security Intrusions

Ulf Lindqvist and Erland Jonsson (Chalmers University of Technology, Sweden)

The authors present a taxonomy of intrusions; it elaborates on a taxonomy proposed by Neumann and Parker in 1989. Neumann and Parker distinguished external misuse, hardware misuse, masquerading, bypassing intended controls, active misuse, passive misuse, etc. Lindqvist and Jonsson further subdivide by intrusion technique (e.g. password capture or spoofing a privileged program) and intrusion results (e.g. disclosure, or unauthorized service, or erroneous output to authorized users). They illustrate their taxonomy by classifying the intrusions perpetrated by the students in an undergraduate course on computer security.

Lindqvist and Jonsson begin their article by citing their countryman Linnaeus on the value of classification. Linnaeus's classification in botany has largely stood the test of time, presumably because it concentrates narrowly on a single botanically basic aspect of plants, namely the anatomy of the flowers and fruits, which determines how they reproduce. It seems to me that for a taxonomy in intrusion detection to play a corresponding scientific role, it must also seize on a narrow but central aspect of the subject. The Neumann/Parker classification and the authors' extensions to it seem not yet to have the uniformity that a robust, explanatory taxonomy should. A further criterion of success for a proposed taxonomy, I think, will be whether it leads to uniform answers to questions such as how to detect attacks in a particular taxonomic category, how to prevent them, and how to respond to them.

Execution Monitoring of Security-Critical Programs in a Distributed System: A Specification-based Approach

Calvin Ko (Trusted Information Systems), Manfred Ruschitzka and Karl Levitt (University of California Davis)

This paper appears to summarize the Ph.D. thesis of the first author. It develops an approach called specification-based detection.

This approach is based on the observation that many attacks proceed by abusing a known collection of privileged programs. These privileged programs, which under Unix generally execute as the superuser root, are known to have various flaws. When fed mischievously chosen data or run in odd combinations or at the wrong moment, they allow the attacker to capture some unintended privilege. Specification-based detection is based on the idea that, although it may not be easy to identify or fix all such flaws, it is frequently easy to characterize the kinds of actions that the program is supposed to engage in, and to constrain the order in which they can occur. An attack can (apparently) occur through such a program only when it behaves in a way that contravenes the specification of its normal behavior.

The paper describes a language in which these specifications may be written, and a system to monitor program execution to detect whether the process is staying within its specification.
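
The flavor of such a monitor can be conveyed with a much simpler formalism than parallel execution grammars. The sketch below is my own illustration, not the authors' specification language: it checks a privileged program's trace of operations against a whitelist of permitted operations plus a single ordering constraint.

    # Toy specification-based monitor: a privileged print spooler, say, may only
    # perform these operations, and must drop privileges before executing anything.
    ALLOWED = {"open_spool_file", "read_config", "drop_privileges",
               "exec_filter", "write_log"}
    MUST_PRECEDE = [("drop_privileges", "exec_filter")]   # (a, b): a must occur before any b

    def violates_spec(trace):
        """Return a description of the first violation in the trace, or None."""
        seen = set()
        for op in trace:
            if op not in ALLOWED:
                return f"operation {op!r} is outside the program's specification"
            for before, after in MUST_PRECEDE:
                if op == after and before not in seen:
                    return f"{after!r} occurred before {before!r}"
            seen.add(op)
        return None

    good = ["read_config", "open_spool_file", "drop_privileges", "exec_filter", "write_log"]
    bad  = ["read_config", "exec_filter", "open_spool_file"]
    print(violates_spec(good))   # None
    print(violates_spec(bad))    # 'exec_filter' occurred before 'drop_privileges'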

The paper's terminology caused this reader some trouble. It is also difficult to understand the notion of parallel execution grammars from the presentation given in Section 4. These grammars are apparently intended to be analogous to context-free grammars, but they appear (as far as I can tell) to form a Turing-complete notation. Perhaps it would be more understandable -- and equally good as a basis for the execution monitor -- to use an extension of a process algebra such as CSP or CCS, which are well-understood notations suited to specifying sets of traces.

A Secure and Reliable Bootstrap Architecture

William A. Arbaugh, David J. Farber and Jonathan M. Smith (University of Pennsylvania)

Many systems have a security gap at the very start, when the system is bootstrapping. The hardware that initiates the bootstrap typically cannot ascertain whether the software to which it passes control is the right software or not. In operating systems (for instance, the usual PC operating systems) that do not have strong access control safeguards, a virus or other attack can modify the system itself, so that after the next bootstrap, any other security mechanisms may be disabled. The authors point out a second problem with ensuring a secure bootstrap process: One would like a secure bootstrap process to pass control from one level of abstraction to the next, validating each level first. But in fact the standard PC boot process has a star-like structure, in which the system BIOS passes control to expansion ROMs, which pass control back to the system BIOS.

The authors' approach to the secure bootstrap problem is to break the boot process into a succession of stages. Each stage verifies a cryptographic checksum on the software implementing the next stage before passing control to it. As a consequence, if the hardware that implements the first stage has not been compromised (for example, by an attacker physically replacing the motherboard), then a cryptographic guarantee ensures that the final running operating system is the expected one. The authors describe a mechanism in which a PROM card supplements a portion of the PC's BIOS to start the bootstrap process securely. They have made small changes to the bootstrap logic to allow a more linear flow of control than the standard one. In case of integrity failures, the PROM card supports a secure bootstrap from a remote networked server.
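
The chain of verification can be summarized in a few lines. The sketch below is a simplification of the idea, not the authors' implementation: it checks a SHA-256 digest of each stage against a value the previous stage is assumed to hold securely, and treats any mismatch as a signal to recover from the trusted server.

    import hashlib

    def digest(image: bytes) -> str:
        return hashlib.sha256(image).hexdigest()

    # Hypothetical stage images; in a real PC these are the boot block, boot loader,
    # and operating system kernel, and the first check is rooted in the PROM card.
    stages = [
        ("boot_block",  b"...boot block image..."),
        ("boot_loader", b"...boot loader image..."),
        ("kernel",      b"...operating system image..."),
    ]

    # Expected digests, assumed to be held where the previous stage (ultimately the
    # trusted PROM) can read them but an attacker cannot silently rewrite them.
    expected = {name: digest(image) for name, image in stages}

    def secure_boot(boot_stages, expected_digests):
        for name, image in boot_stages:
            if digest(image) != expected_digests[name]:
                raise RuntimeError(f"integrity failure in {name}: recover from trusted server")
            print(f"verified {name}; passing control to it")

    secure_boot(stages, expected)                       # all three stages verify
    try:
        secure_boot([("kernel", b"...tampered kernel image...")], expected)
    except RuntimeError as err:
        print(err)                                      # integrity failure in kernel: ...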

It seems to me that assuring the bootstrap process is an important practical issue, and the authors have done a service in developing an architecture to approach the problem.

5. Security Theory

Secure Software Architectures

Mark Moriconi, Xiaolei Qian, R. A. Riemenschneider (SRI) and Li Gong (JavaSoft)

In this paper, the authors apply their work on formally specifying software architectures to the case of software security architectures. The architecture specification method that the authors have developed, and embodied in a specification language named SADL, is intended to formalize the common practice of representing an architecture through box and arrow diagrams. In these diagrams, major components (at some level of abstraction) are displayed as boxes, and the paths of communication among them are represented as arrows. Such diagrams are intended as a compact reminder of what can affect what, and they are often the basis for elaborate annotations of the kinds of data that may be communicated, the style of communication (RPC, messages, pipes, etc.), and the semantics of the components when necessary. In SADL, these same kinds of information are represented logically, using abstract data types to characterize the data involved and logical axioms to characterize the communication paths and the semantics of the components.

In previous papers, the authors have presented examples (not related to security) and have developed a method based on faithful theory interpretations to show that one architecture correctly refines another.

An important ingredient in the approach appears to be that the theories involved are semantically rather shallow. That is, this work is distinguished from more traditional formal specification and verified refinement by its emphasis on the box-and-arrow level, whereas much previous work focuses more closely on the behavioral properties of the components. I believe that the authors consider this semantic shallowness an advantage, because it will be easier to collect the information needed to construct the specification and easier to read and understand the specification afterwards. Thus, these specifications seem potential candidates for guiding the design and maintenance of systems; they may succeed in a way that might be impossible for more detailed specifications.

On the other hand, the limitation to the box-and-arrow level of description also exacts a price. Although this paper was presented in a session called "Security Theory," it does not contain any new theoretical content about security; instead, it applies axiomatic theories to describing security. The examples in the paper use a very traditional access control model of multi-level security, and a reader will wonder whether this decision is forced by the fact that subtler or more informative models of security may require a more detailed specification of the components' behavior in order to draw any system-level conclusions about security.

The presentation in the paper would be easier to follow if more detail were available, particularly in Section 6, "Relating the Secure Distributed Transaction Processing Architectures."

A General Theory of Security Properties and Secure Composition

A. Zakinthinos and E.S. Lee (Cambridge University, U.K.)

This ambitious paper sets out to characterize the set of all "possibilistic" security properties, and to identify the most permissive member of the set that "does not allow any information to flow from high level users to low level users." A possibilistic security property is one that does not refer to the probability that a system will undergo a particular event, but only the possibility (or impossibility) of its doing so.

The key idea in the paper is to characterize the "type" of property that can be a security property. Suppose that the behavior of a system is represented as a trace, which is to say a sequence of events, and each event has a recognizable security level (either "high" or "low"). Then the system itself may be characterized by its set of traces, that is, all the sequences of events in which the system could possibly engage. Two traces are low-level equivalent if they reduce to the same sequence when all the high events are omitted. The low-level equivalent set of a trace (relative to a system) is the set of all traces (of that system) that are low-level equivalent to it. The authors state that a property P of systems is a security property just in case there is a property Q of sets of traces such that a system has property P just in case the low-level equivalent set of every trace (of that system) has Q (Definition 9). Clearly various familiar security properties such as non-interference can be brought to this form.
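
In symbols (my paraphrase of the definitions as I read them), writing T for the system's trace set and \tau|_L for the subsequence of low events in \tau:

    \tau \approx_L \tau' \;\iff\; \tau|_L = \tau'|_L,
    \qquad
    \mathrm{LLES}_T(\tau) \;=\; \{\, \tau' \in T \;:\; \tau' \approx_L \tau \,\}

    P \text{ is a security property } \iff\;
    \exists Q.\; \forall T.\; \bigl[\, P(T) \;\iff\; \forall \tau \in T.\; Q(\mathrm{LLES}_T(\tau)) \,\bigr]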

The paper then identifies a particular "perfect security property," claiming that it is the weakest security property that prevents downward information flow.

However, the paper relies on an "event system" model that considers only traces and makes a static distinction between inputs and outputs. In this it follows a tradition dating back to McCullough's papers of a decade ago, rather than using the more expressive frameworks offered by process algebras such as CCS and CSP.

The definition of security property that the authors propose (definition 9, stated as a biconditional), may be a necessary condition for a security property, but is clearly not sufficient. Consider for example the stipulation that the set of low-level equivalent traces should have cardinality greater than 17, which is of the required form but does not seem to be a security property.

Moreover, there is no definition of "information flow from high to low," so that a reader is not quite sure exactly what theorems 1 and 2 are intended to mean. The proofs use puzzling notions such as an event being "only dependent on high level events" or "dependent on low level events." This seems too unreliable a vocabulary for formal proofs.

The bibliography misses a number of important papers. The CCS-based work of Gorrieri and Focardi is mentioned but hardly evaluated. The paper would have been stronger if previous efforts had been more carefully correlated with the authors' point of view. The paper does however make a contribution to sorting out the adequacy of McLean's Selective Interleaving Functions as a way to define the class of security properties.

It seems to me that some of the material in this paper was already known by 1990. For instance, the fact that security properties are determined by certain characteristics of the low-level equivalent set was appreciated, although there doesn't seem to have been a term invented for it. Sutherland's "hidden" and "view" operators were close to this. Some other material is new, but seems to need a more rigorous treatment.

The paper would also be strengthened by a richer explanation of the authors' modeling decisions. Why are event systems the right model? Why are possibilistic security properties the right properties? What inner connection is there between the proposed "perfect security property" and the notion of "flow from high to low"?

6. Security Policies and Authorization

An Authorization Scheme for Distributed Object Systems

V. Nicomette and Y. Deswarte (LAAS-CNRS & INRIA, France)

There are three main ingredients to this paper. First, the authors summarize an architecture for secure distributed systems in which authorization decisions are divided according to locality. When an access involves only local subjects and objects, the security kernel of the local machine may decide whether the access is permitted. For access that involves multiple hosts, a centralized authorization server must be consulted. Previous work of the authors and their colleagues has investigated how to make the centralized authorization server intrusion resistant and fault tolerant.

Second, an approach to "access right management" is proposed in which the right to execute a method is governed by symbolic rights. These symbolic rights may govern the individual object involved in the method invocation, or else some group or class of which the object is an instance. This allows a multi-way check on access depending on the method, the calling subject, and all of the objects given as parameters. A supplementary distinctive ingredient is the concept of a voucher. A voucher is a right that allows one entity e to request a second entity e' to invoke a particular method on particular resources on its behalf. The authors comment that this improves a previous notion of proxy in that a voucher may be granted to e for delegation to e' even when neither e nor e' on its own would have the authority to take the action. Thus, the voucher notion may be used to give a tighter implementation of the least privilege principle.

Third, this approach to access control is instantiated to define a multi-level security policy suited to object-oriented systems. The authors claim that it prevents information flow, while permitting a wider range of actions than a mechanical application of the Bell-LaPadula principles.

The authors cite previous, more specialized papers of theirs describing each of the three main ingredients in more detail. This paper is distinctive in that it conveys the sort of system that is possible if the three ingredients are all present.

A Logical Language for Expressing Authorizations

Sushil Jajodia (George Mason University), Pierangela Samarati (Università di Milano) and V. S. Subrahmanian (University of Maryland)

In this paper the authors develop a notation, based on Horn clauses, for reasoning about authorization. They claim that it allows a variety of different access control policies to be represented, which seems to be true. They also claim that all existing access control systems support only a single access control policy, which seems to be false; the Nicomette/Deswarte paper just described is the most convenient counterexample, though not the only one.

In this framework, a system security officer stipulates positive or negative authorization facts for users, groups, and roles using a predicate "cando" (pronounced "can do"). A second predicate "dercando" (pronounced "derived can do") is inductively defined using "cando," group membership, etc. Conflicts may arise, in the sense that the same user receives both a positive and a negative authorization for the same action on the same object. Therefore resolution rules are used; these derive conclusions about authorizations, expressed using a new predicate "do," on the basis of literals involving the previous predicates.

The reader should be careful not to assume that there will be no conflicts involving "do." Both a positive and a negative authorization expressed in terms of "do" may be derivable for a given user, object, and action. I believe the authors' point is that while one must expect that conflicts in "derived can do" facts will occur, a good set of resolution rules should have the property that conflicts are not transmitted to the "do" predicate. Integrity rules that allow an "error" predicate to be inferred are also available, and are intended to allow a compact statement of global properties of the other rules, for instance that there are no conflicts in the "do" predicate.
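
A toy rendering of the three layers may help fix ideas. The facts, the group, and the particular resolution rule (an explicit denial overrides a group-derived grant) are my invention for illustration, not the paper's syntax.

    # Toy evaluation of the three-layer scheme: explicit "cando" facts, derived
    # "dercando" facts through group membership, and conflict resolution into "do".
    member = {("alice", "Employees"), ("bob", "Employees")}
    cando = {("file1", "Employees", "+read"), ("file1", "bob", "-read")}

    def dercando(obj, user, signed_action):
        """A user inherits any authorization granted directly or to a group it belongs to."""
        if (obj, user, signed_action) in cando:
            return True
        return any((obj, g, signed_action) in cando
                   for u, g in member if u == user)

    def do(obj, user, action):
        """Resolution rule: permit only if a positive authorization is derivable
        and no negative authorization is (denials take precedence)."""
        return dercando(obj, user, "+" + action) and not dercando(obj, user, "-" + action)

    print(do("file1", "alice", "read"))   # True: inherited from Employees
    print(do("file1", "bob", "read"))     # False: explicit denial overrides the group grant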

The authors illustrate their framework by showing rules that express various popular access control policies, such as separation of duty policies.

A natural question concerns the computational complexity of evaluating the various kinds of atomic formulas. The authors refer to a different paper, published in a data management conference, stating "This access control checking can be performed in linear time w.r.t. the number of rules" in an authorization specification. The cited paper says only that this result "follows from well-known results on the data complexity of stratified datalog programs." Claims of this kind seem apt to be fragile though, as adding a little extra notation to the specification language may have drastic consequences for a checking algorithm.

Providing Flexibility in Information Flow Control for Object-Oriented Systems

Elena Ferrari, Pierangela Samarati and Elisa Bertino (Università di Milano) and Sushil Jajodia (George Mason University)

Information flow security policies are more robust than policies based on discretionary access control. However, they are also more stringent: When information flow policies are straightforwardly implemented, they prohibit activities that do not compromise semantically unreleasable information. Naturally, an information flow policy cannot simply stipulate that an activity should be permitted if the semantics of information it discloses is releasable. This paper is one in a group by the authors and others that look for defensible ways to loosen information flow policies without disclosing sensitive information.

It uses two main ingredients. One is to have a finer appraisal of when information can flow from one action to another. In object-oriented systems, one method invocation may precede another without having the ability to pass information to the latter, if the former is an asynchronous invocation, and the process does not block to receive the reply message until after the call to the latter.

The second main ingredient is the idea of a waiver. A waiver may be attached to a method in a particular object to indicate that the information flow policy is to be waived for this invocation. The waiver may be a reply waiver, which stipulates that information may be returned to the caller even though the information flow policy would not otherwise have permitted it. Or a waiver may be an invoke waiver, indicating that potential information flow from the caller (through the parameters to the method, or the fact of the call) should be ignored, so that the object invoked may still write to low classification objects. Waivers are apparently a fine-granularity way to specify that a particular method is trusted in a certain way, because the system designer understands that its semantics are not in fact disclosing sensitive information.

The paper also presents a message filter algorithm that uses these ideas to provide a more flexible reference monitor than traditional information flow policies would support.

Analyzing Consistency of Security Policies

Laurence Cholvy and Frederic Cuppens (ONERA CERT, France)

In this paper the authors consider deontic logic as a method for specifying security policies, following previous work of their own and of Glasgow and MacEwen. A deontic logic is a logic in which an "obligatory" operator may be applied to a sentence. For instance, if "Guttman reads file f" is a sentence of the language, then "It is obligatory that Guttman read file f" is also a sentence of the language. Permission (etc.) may also be expressed; "It is permissible that Guttman read file f" is just "It is not obligatory that Guttman not read file f". A security policy on this view is a set of sentences about what users (or user roles) are permitted, forbidden, or obligated to take what actions. The authors call such a policy a "regulation."
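
In the usual notation, with O for "obligatory", P for "permitted", and F for "forbidden", the definitions just paraphrased are:

    P\varphi \;\equiv\; \neg O \neg\varphi,
    \qquad
    F\varphi \;\equiv\; O \neg\varphi

    \text{normative conflict:}\quad
    P\varphi \wedge F\varphi
    \quad\text{or}\quad
    O\varphi_1 \wedge O\varphi_2 \text{ with } \varphi_1, \varphi_2 \text{ jointly unsatisfiable}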

The main problem that the authors address is the problem of normative conflicts. A normative conflict arises when someone is both permitted and forbidden to take the same action, or when someone faces a "moral dilemma" because they are obligated to do incompatible actions. The method that the authors recommend for checking for normative conflicts is to translate the regulation from the deontic logic into predicate logic with an "obligatory" predicate, rather than sentential operator. Presumably the predicate is a property of events.

The authors claim that a conflict exists in the deontic logic if and only if, roughly speaking, the translation of the regulation has new consequences in which the "obligation" predicate does not occur. In essence, there are normative conflicts if the evaluative statements would have purely factual logical consequences. This is an interesting application of the so-called fact/value dichotomy, according to which moral evaluation should go beyond everything contained in the factual details of how the world stands.

One would guess that this property (having new purely factual logical consequences) would be undecidable, since we are working here in the context of full predicate logic. The authors do not explain whether this is the case.

Acknowledgments

I am grateful to a number of friends and colleagues without whose advice this summary would have contained even more errors, irritating quirks, and unfounded opinions than it does now; they include Dale Johnson, Catherine McCollum, John McLean, Marion Michaud, Jonathan Millen, Jeffrey Picciotto, Peter Ryan, and John Vasak. My stubbornness is undoubtedly responsible for the ones that still remain.

Index of Titles

Toward Acceptable Metrics of Authentication
Automated Analysis of Cryptographic Protocols
Deniable Password Snatching: On the Possibility of Evasive Electronic Espionage
Number Theoretic Attacks On Secure Password Schemes
Filtering Postures: Local Enforcement for Global Policies
Analysis of a Denial of Service Attack on TCP
Anonymous Connections and Onion Routing
An MBone Proxy for a Firewall Toolkit
The Design and Implementation of a Multilevel Secure Log Manager
Catalytic Inference Analysis: Detecting Inference Threats due to Knowledge Discovery
Surviving Information Warfare Attacks on Databases
How to Systematically Classify Computer Security Intrusions
Execution Monitoring of Security-Critical Programs in a Distributed System: A Specification-based Approach
A Secure and Reliable Bootstrap Architecture
Secure Software Architectures
A General Theory of Security Properties and Secure Composition
An Authorization Scheme for Distributed Object Systems
A Logical Language for Expressing Authorizations
Providing Flexibility in Information Flow Control for Object-Oriented Systems
Analyzing Consistency of Security Policies

Index of Authors


Ammann
Arbaugh
Bertino
Blaustein
Cholvy
Cuppens
Deswarte
Djahandari
Farber
Ferrari
Goldschlag
Gong
Guttman
Hale
Jajodia
Jajodia, with Ammann
Jajodia, with Ferrari
Jonsson
Keefe
Ko
Krsul
Kuhn
Lee
Levitt
Lindqvist
McCollum
Mitchell
Moriconi
Nicomette
Pal
Patel
Pesati
Qian
Reed
Reiter
Riemenschneider
Ruschitzka
Samarati, with Jajodia
Samarati, with Ferrari
Schuba
Shenoi
Smith
Spafford
Stern, with Mitchell
Sterne, with Djahandari
Stubblebine
Subrahmanian
Sundaram
Syverson
Young
Yung
Zakinthinos
Zamboni