Twelfth IEEE Computer Security Foundations Workshop (CSFW12)
Mordano, Italy
June 28-30, 1999

by Scott Stoller stoller@cs.indiana.edu

Workshop Program with hyperlinks to presented papers

The 12th IEEE CSFW was held in the delightful Hotel Panazza, in Mordano,
Italy, a small town near Imola.  The opening comments by Roberto Gorrieri
(general chair) and Paul Syverson (program chair) revealed that this was
the largest CSFW ever.  There were 50 attendees, with 25 from the USA and
12 from the UK.  Ten more people were interested in attending but had to
be turned away.  The 13th CSFW will be held in Cambridge, England.
Details will be available from http://www.csl.sri.com/csfw/ .

A Formal Framework and Evaluation Method for Network Denial of Service
Cathy Meadows

In network denial of service attacks on protocols for establishing
authenticated communication channels, the identity of the attacker is
generally unknown, because authentication has not been completed.  A
design technique for making protocols more resistant to such attacks is
the use of a sequence of authentication mechanisms, arranged in order of
increasing cost (to both parties) and security.  Thus, an attacker must be
willing to complete the earlier stages of the protocol before it can force
a system to expend resources running the later stages of the protocol.
Cathy proposed a framework for analyzing such protocols.  The framework is
based on Gong and Syverson's model of fail-stop protocols.  The NRL
protocol analyzer was used to formally verify some properties of such
protocols.
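
The cost-ordering idea lends itself to a small illustration.  The
following Python sketch uses hypothetical stage names and costs (not
from the paper) to show why running stages in order of increasing cost
limits the work an unauthenticated attacker can impose:

    # Hypothetical stages, ordered by increasing cost to the responder.
    STAGES = [
        ("cookie_exchange", 1),      # cheap, stateless round trip
        ("weak_authenticator", 10),  # moderate, e.g. a keyed-hash check
        ("signature_check", 1000),   # expensive public-key operation
    ]

    def respond(request, run_stage):
        # Stop at the first failed stage, so an attacker that has not
        # completed stage i never makes us pay for stage i+1.
        spent = 0
        for name, cost in STAGES:
            spent += cost
            if not run_stage(name, request):
                return False, spent  # only the cheaper prefix was wasted
        return True, spent

    # An attacker that cannot even answer the cookie exchange costs us
    # just 1 unit of work:
    print(respond({}, lambda stage, request: False))  # -> (False, 1)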

I/O Automaton Models and Proofs of Shared-Key Communications Systems
Nancy Lynch

The emphasis is on modular reasoning using standard proof techniques,
specifically, invariant assertions and simulation relations.  To
illustrate the approach, I/O-automaton models of private communication
channels, key distribution, shared-key cryptography, base-exponent
cryptography, and an eavesdropper were developed, and a modular proof was
given to show that a straightforward cryptographic protocol implements
private communication channels in the presence of an eavesdropper.  The
modular proof requires finding, for each subprotocol, an invariant
characterizing the set of messages that that subprotocol can tolerate
being revealed to the eavesdropper.  A similar proof involving an active
attacker is almost finished.  More ambitious future work aims at modular
probabilistic reasoning; the central problem there is to show that a small
probability of a successful attack on a protocol does not blow up when the
protocol is composed with other protocols.  John Mitchell pointed out that
the invariants sometimes need to be time-dependent; for example, it might
be OK to reveal a particular message after (but not before) a certain
point in the protocol.  Cathy Meadows asked whether this approach can be
used to show that similar subprotocols in a protocol suite don't interfere
with each other.  Nancy replied that it can.
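
A loose illustration of the assertional half of the approach (a generic
reachability check, not the paper's I/O-automaton models or its
simulation-relation proofs): an invariant can be verified by exploring
the reachable states and testing it in each one.

    # Exhaustive invariant checking over a small transition system.
    def check_invariant(initial_states, step, invariant):
        """step(s) yields successor states; returns a violating reachable
        state, or None if the invariant holds in every reachable state."""
        seen, frontier = set(), list(initial_states)
        while frontier:
            s = frontier.pop()
            if s in seen:
                continue
            seen.add(s)
            if not invariant(s):
                return s
            frontier.extend(step(s))
        return None

    # Toy use: a counter that steps from 0 up to 3 never exceeds 3.
    print(check_invariant([0], lambda s: [s + 1] if s < 3 else [],
                          lambda s: s <= 3))  # -> None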

Safe Simplifying Transformations for Security Protocols
Gavin Lowe (speaker) and Mei Lin Hui

The goal is to reduce the cost of model checking by first applying some
safe simplifying transformations to the protocol.  In this context, "safe"
means: if the original protocol is insecure, then so is the transformed
protocol.  Thus, if the model checker does not find an attack on the
simplified protocol, then the original protocol is secure.  If the model
checker does find an attack on the simplified protocol, then the user must
determine by inspection whether the original protocol is vulnerable to an
analogous attack, or whether the attack is an artifact of the
simplifications.  They applied this approach while analyzing the Cybercash
and SET protocols.  They assume that cryptographic keys are atomic.
Someone asked whether this assumption is necessary.  Gavin was uncertain
whether structured keys could easily be accommodated.  Someone asked
whether the simplifying transformations are a special case of forward
simulations.  Gavin said they probably are.
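
Restating the definition of safety in symbols (the second implication
is just the contrapositive of the first):

    $$ \mathit{insecure}(P) \Rightarrow \mathit{insecure}(T(P))
       \quad\Longleftrightarrow\quad
       \mathit{secure}(T(P)) \Rightarrow \mathit{secure}(P) $$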

Decision Procedures for the Analysis of Cryptographic Protocols by Logics
of Belief
David Monniaux

Decision procedures for BAN-like logics of belief are not new.  Mao and
Boyd (IEEE CSFW, 1993) describe a tableau-based procedure.  Kindred and
Wing (USENIX Workshop on Electronic Commerce, 1996) describe a decision
procedure that uses forward chaining and backward chaining.  David's
approach uses only forward chaining. Some automatic transformations of the
proof system are used to make all rules suitable for forward chaining.
David considered trying to use only backward chaining, but that seemed
more difficult.  Cathy Meadows asked how decidability of such proof
systems relates to undecidability results such as that in the paper by
Cervesato et al. (described next).  David replied that the undecidability
results apply to semantic models of authentication protocols, and there is
no good such semantics for belief logics.
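
For readers unfamiliar with the technique, here is a generic
forward-chaining saturation loop in Python (a sketch of the general
method only; David's procedure first transforms the proof system so
that every rule is usable in this style, and the atoms below are
hypothetical):

    def forward_chain(facts, rules):
        """rules: list of (premises, conclusion) pairs over ground facts.
        Repeatedly fire enabled rules until no new fact is derived."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if set(premises) <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    # Toy BAN-flavored rule: if A sees a message encrypted under a key
    # that A believes is good, then A believes B once said the message.
    rules = [(("A sees {X}k", "A believes goodkey(k)"),
              "A believes B said X")]
    print(forward_chain({"A sees {X}k", "A believes goodkey(k)"}, rules))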

A Meta-Notation for Protocol Analysis
Iliano Cervesato, Nancy Durgin, Pat Lincoln, John Mitchell (speaker), and
Andre Scedrov

The goal is to find the simplest framework that captures the essential
aspects of the Dolev-Yao model of cryptographic protocols.  The Dolev-Yao
model implies certain assumptions about the adversary, e.g., the adversary
cannot gain partial knowledge of a value or perform statistical tests.
The framework is designed to facilitate meta-theory.  It is also hoped
that other researchers will adopt the framework, to help unify work on
analysis of cryptographic protocols.  The framework is based on multiset
rewriting.  Generation of fresh values (e.g., nonces) is modeled by
existential quantification.  Using this framework, they studied how
various restrictions on protocols affect the complexity of verification.
If the depth of encryptions and the number of fields per message are
bounded, and the number of protocol instances and the number of generated
nonces are unbounded, then the verification problem is undecidable.  If
either one of the latter two parameters is bounded, then the verification
problem is decidable.  There is a typo in the paper: "PSPACE-complete"
should be changed to "DEXP-time complete".
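
A minimal Python sketch of the multiset-rewriting idea (an illustrative
encoding, not the paper's notation): a rule consumes its left-hand
facts, adds its right-hand facts, and may mint fresh names, the
analogue of the existential quantification used to model nonces.

    import itertools

    counter = itertools.count()

    def fresh():
        # Minting a never-used name models existential quantification.
        return f"n{next(counter)}"

    def apply_rule(state, lhs, make_rhs):
        new = list(state)
        for fact in lhs:
            new.remove(fact)  # raises ValueError if the rule is not enabled
        return new + make_rhs()

    # Initiator step of a toy protocol: consume the role-state fact and
    # emit a network message carrying a freshly generated nonce.
    def initiator_step():
        n = fresh()
        return [f"A1(A,B,{n})", f"net(enc({n},kAB))"]

    print(apply_rule(["A0(A,B)"], ["A0(A,B)"], initiator_step))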

Mixed Strand Spaces
F. Javier Thayer Fabrega, Jonathan Herzog, and Joshua Guttman (speaker)

Principals sometimes use multiple protocols with similar message formats
and sometimes share keying material among them.  This can cause problems.
For example, in the Neumann-Stubblebine protocol, the re-authentication
protocol introduces a violation of agreement in the authentication
protocol.  Such problems motivate the study of techniques for proving that
different (sub)protocols do not interact in harmful ways.  "Mixing
conditions" are proposed for this purpose.  Mixing conditions are
described here in terms of the strand space model, though the idea is
fairly model-independent.  Mixing conditions are related to the invariants
used in Nancy Lynch's approach (described above).  Mixing conditions for a
protocol P are derived from the correctness proof of P in isolation.  If
the other protocols used with P satisfy the mixing conditions, then they
do not cause P to violate its requirements.  Someone asked whether their
work considers type-confusion attacks.  Joshua said no, but extending it
to do so might be possible.  Someone asked whether their approach can be
used when the correctness requirements include non-repudiation; this might
be tricky, because the strand space model distinguishes regular and
penetrator strands, and with non-repudiation, this distinction might be
blurred.  Josh thinks that this is not a problem, because non-repudiation
can be expressed as a property of the form "if a bundle contains a strand
of the form X, then it also contains a strand of the form Y" (e.g., X
involves receipt of evidence that some action occurred, and Y contains
such an action), and their approach handles properties of this form.
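
Schematically, the property form Josh described is:

    $$ \forall \text{ bundles } B:\;
       (\exists\, \text{strand } s \in B \text{ of form } X) \Rightarrow
       (\exists\, \text{strand } t \in B \text{ of form } Y) $$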

Honest Functions and their Application to the Analysis of Cryptographic
Protocols
Al Maneki

The theory of strand spaces is extended to encompass a broader class of
cryptographic protocols.  A class of functions, called honest functions, is
defined; this class includes exclusive-or and exponentiation of integers in
a prime number field.  Informally, a function f is honest if either (1) f
is 1-to-1 and its inverse is not easily computed, or (2) for each
y=f(x_1,...,x_n), y has so many pre-images under f that exhaustive search
of them is infeasible.  Some general theorems are given to facilitate
reasoning in the strand space model about protocols that use honest
functions.  The approach is illustrated by an analysis of Roscoe's version
of the TMN protocol.
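
Clause (2) of the informal definition can be made concrete for
exclusive-or; the following Python snippet (with a deliberately tiny n,
just to make the count visible) shows that every n-bit output has 2^n
pre-image pairs:

    n = 8
    y = 0x5A
    preimages = [(x1, x1 ^ y) for x1 in range(2 ** n)]  # x2 = x1 XOR y
    assert all(x1 ^ x2 == y for x1, x2 in preimages)
    print(len(preimages))  # 256 == 2**n pairs for this single output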

Panel: Formalization and Proof of Secrecy Properties
Dennis Volpano (chair), Cathy Meadows, Jon Millen, Martin Abadi, and
Riccardo Focardi

The first day ended with a panel discussion.  Each panelist gave a short
presentation before the free-for-all discussion.

Dennis Volpano emphasized the importance of considering secrecy of
bindings (e.g., of a variable to a value), as opposed to secrecy of values
(e.g., private keys).  He distinguished three notions of secrecy:

 Secrecy as a safety property.  This notion is used in the Dolev-Yao
 model and its descendants.  It is typically used for expressing
 secrecy of values.  This notion facilitates automated reasoning
 but does not consider indirect information flow.

 Unconditional secrecy (non-interference).  This approach considers
 indirect information flow and in some cases can handle probabilistic
 violations of secrecy, but it does not handle some forms of
 cryptography (e.g., public-key crypto).

 Conditional secrecy.  In this approach, one shows that obtaining secret
 information is as difficult as breaking the underlying cryptosystem.

Cathy Meadows characterized the Dolev-Yao approach to secrecy as shallow
and wide, because the same approach works for a wide variety of
properties.  In contrast, models of information flow are narrow and deep.
Access control is also shallow and wide.  Bell and La Padula tried to
integrate access control and information flow. That effort foundered on
downgrading.  This led to notions of intransitive non-interference, and so
on.  That line of work might be relevant to protocol analysis; for
example, it might be possible to analyze subliminal channels in
authentication protocols.

Jon Millen discussed operating system secrecy vs. protocol secrecy.
He expressed an analogy between them by drawing two pyramids:

    /  covert channels    \             /      cryptanalysis           \
   /  probabilistic models \           /    probabilistic models        \
  /    non-interference     \         / model checking, inductive proofs \
 /      access control       \       /          belief logics             \

Non-interference is suitable for proving absence of causal flow of a bit
or bits.  In protocol analysis, one deals with secrecy of bitstrings large
enough to be unguessable, so if those bits appear in two places, causal
flow between those places is assumed.  This model is attractive because it
is simpler, but is this model always adequate for protocol analysis?  No: a
large bitstring might be compromised through repeated attacks that each
obtain partial information.  Such attacks are not considered in
Dolev-Yao-like models.

Martin Abadi advocated expressing secrecy in terms of observational
equivalence, as done in the Spi calculus.  The Dolev-Yao notion of secrecy
is good for expressing secrecy of unguessable values but is not good at
expressing secrecy of guessable values (e.g., Booleans).  This is a major
limitation, because application data, which is what really matters,
usually contains guessable values.  (One could argue that keeping
application data secret is easy if cryptographic keys are kept secret, but
it would be nice to prove this.)  Secrecy of both kinds of values can be
expressed using observational equivalence.

Riccardo Focardi described a non-interference-based approach to protocol
analysis.  The basic idea is to show that the intruder cannot interfere
with the protocol.  This approach allows general results about
non-interference to be used, e.g., results that justify considering only
the most powerful intruder.

During the free-for-all discussion, John McLean suggested that
non-interference is not applied more often to analysis of cryptographic
protocols because: (1) we still don't understand the Dolev-Yao model as
well as we understand access control, (2) cryptanalytic attacks are
probably more important than the information leaks that can be analyzed
using non-interference, (3) the theory of non-interference does not handle
some forms of cryptography, (4) reasoning about non-interference is
difficult, and reasoning about probabilistic non-interference is even more
difficult.

Martin Abadi said that, if he were deciding how to allocate resources to
be spent on verification of a protocol, he would formally state and
informally prove the secrecy properties, but he would not spend money on
formal verification of them.  He would spend that money on formal
verification of authenticity properties.

Dennis Volpano pointed out that implementations might have information
leaks that are not evident in the high-level protocol descriptions
commonly used in the protocol analysis community.  For example, the
possibility of timing attacks on a protocol (or on the underlying
cryptosystem) is usually not considered.  Jon Millen noted that such
analysis would occur at the top level of his pyramid.

Authentication via Localized Names
Chiara Bodei, Pierpaolo Degano (speaker), Riccardo Focardi, and Corrado Priami

The goal is to provide a high-level characterization of message
authentication and to study how encryption is used to achieve message
authentication.  The formal framework is a variant of the Spi-calculus in
which each sequential process has its own namespace.

A Logic for SDSI's Linked Local Name Spaces
Joseph Y. Halpern and Ron van der Meyden (speaker)

Rivest and Lampson's Simple Distributed Security Infrastructure (SDSI)
incorporates linked local namespaces.  For example, suppose that in my
(local) namespace, "Joe" refers to Joe Halpern (i.e., is bound to Joe
Halpern's public key).  Then, in my namespace, "Joe's colleagues" refers
to the binding of "colleagues" in Joe Halpern's namespace.  Bindings are
created by issuing certificates.  The authors give (1) a semantic
characterization of SDSI's name resolution algorithm, (2) a sound and
complete axiomatization for proving validity of formulas with respect to
that semantics, and (3) a translation of their proof system into Datalog
that yields a polynomial-time implementation of the SDSI name resolution
algorithm.  Martin Abadi had previously proposed a logic for SDSI's local
namespaces; however, Abadi's axioms do not correspond precisely to
SDSI's name resolution algorithm.
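
A toy Python rendering of linked-name resolution (this only mimics the
idea; the paper's implementation goes through a Datalog translation,
and the keys and names below are hypothetical):

    # A certificate binds a local name in some principal's namespace
    # (identified by the principal's key) to a key.
    certs = {
        ("K_me", "Joe"): "K_joe",          # my "Joe" is Joe Halpern's key
        ("K_joe", "colleagues"): "K_ron",  # Joe's "colleagues" binding
    }

    def resolve(namespace_key, compound_name):
        """Resolve a compound name by following one local-name binding
        at a time, moving into the namespace of each key reached."""
        key = namespace_key
        for local_name in compound_name:
            key = certs[(key, local_name)]  # KeyError if no certificate
        return key

    print(resolve("K_me", ["Joe", "colleagues"]))  # -> K_ron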

[I missed the session on Interaction and Composition, which contained the
next three talks, so I have included summaries based on the papers' abstracts.]

Trusted System Construction
Colin O'Halloran

An important trend is construction of trusted systems from COTS components
by encapsulating each component in an appropriate software wrapper.
Achieving security without compromising the operational effectiveness of
the system is difficult.  This work addresses the problem of analyzing such
systems to demonstrate that the system is both secure and usable.  The
proposed approach is illustrated using the X windowing system.

Secure Composition of Insecure Components
Peter Sewell and Jan Vitek

Many contemporary software systems consist of numerous components that
interact in intricate ways.  Some components are only partially trusted.
To ensure security properties of the entire system, components are
executed in the context of wrappers that provide fine-grained control over
interactions between components.  The authors propose a formal model
based on the pi-calculus that is suitable for analyzing such systems.

Security Function Interactions
Pierre Bieber

This paper uses a compositional framework to model security architectures
involving heterogeneous and distributed security functions.  The goal is
to assist the ITSEC evaluation of suitability, binding and vulnerability
of a set of security functions.  The author proposes constraints that
security functions should guarantee in order to interact consistently and
securely with other functions.  These notions are illustrated by studying
the interactions of various components of a secure LAN.

A Logic-based Knowledge Representation for Authorization with Delegation
Ninghui Li, Joan Feigenbaum (speaker), and Benjamin Grosof

Trust management systems integrate authentication and access control.
Whether this is a good idea is still an open question, according to Joan.
Trust management systems, such as KeyNote, are starting to be deployed in
commercial systems, but more experience is needed before the benefits of
trust management systems can be properly evaluated.  Delegation Logic (DL)
is proposed to provide a more formal and less ad-hoc foundation for trust
management systems.  DL can be used to determine whether a request
supported by certain credentials should be approved under a certain
authorization policy.  Logic-programming techniques can be used to
automate such authorization decisions.  DL treats delegation depth
explicitly.  This enables expression of policies such as: I give you
permission to delegate permission to perform some operation X to anyone
you trust, but those principals cannot delegate permission to perform X to
other principals.  Whether it is useful for a trust management system to
support bounded delegation depth greater than one is still an open
question, according to Joan.  DL supports a general notion of delegation
to complex principals; this notion can be used to represent k-out-of-n
threshold schemes.  DL is based on general ideas from logic programming,
so it can cleanly support non-monotonicity (useful for expressing
revocation), negation, and prioritized conflict resolution.  Joan
announced that there is a typo in the paper: on p. 169, column 1, in the
numbered list of items, a fifth item was accidentally omitted:
  5. Every ground clause that has the form: 
       $$ delegates(A, pred, [params], *, A, 1).$$
     Intuitively, this represents the fact that every principal delegates
     every predicate to itself with depth *.
A revised version of the full paper is available from 
http://www.research.ibm.com/people/g/grosof/papers.html .
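
As a sketch of how bounded delegation depth can be enforced (a
hypothetical Python encoding, not DL's clause syntax, which appears
above):

    # Each delegation carries a depth: 1 means "use but do not
    # re-delegate", 2 allows one further re-delegation, and so on.
    delegations = {
        "me": [("you", 2)],      # you may re-delegate once
        "you": [("carol", 1)],   # carol may not re-delegate
    }

    def authorized(root, requester):
        """Search for a delegation chain from root to requester.  Each
        edge consumes one unit of depth and caps the remaining budget,
        so the search terminates even on cyclic delegation graphs."""
        frontier = [(root, float("inf"))]
        while frontier:
            principal, budget = frontier.pop()
            if principal == requester:
                return True
            if budget < 1:
                continue  # this chain may not be extended further
            for delegatee, depth in delegations.get(principal, []):
                frontier.append((delegatee, min(budget, depth) - 1))
        return False

    print(authorized("me", "carol"))  # True: me -> you -> carol fits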

A Logical Framework for Reasoning on Data Access Control Policies
Elisa Bertino, Francesco Buccafurri (speaker), Elena Ferrari, and Pasquale Rullo

A logic formalism is proposed for expressing access control policies,
e.g., for database security.  Authorizations are represented by logic
rules.  Two forms of negation are distinguished: strong negation, for
expressing prohibition (e.g., "Bill is forbidden from accessing that
table"), and weak negation, for expressing absence of explicit positive
authorization (e.g., "Bill has not explicitly been given permission to
access that table").  Priorities can be used to help determine which rules
prevail in case of conflicts.  Various possible semantics for authorization
rules are given, based on semantics of logic programs.
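
A small Python sketch of the two negations (illustrative only, not the
paper's formalism):

    granted = {("alice", "table1")}
    forbidden = {("bill", "table1")}     # strong negation: explicit denial

    def may_access(user, obj):
        if (user, obj) in forbidden:     # prohibition always wins
            return False
        # Weak negation (negation as failure): no derivable positive
        # authorization means access is denied.
        return (user, obj) in granted

    print(may_access("bill", "table1"))   # False: explicitly forbidden
    print(may_access("carol", "table1"))  # False: merely never permitted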

Athena: a New Efficient Automatic Checker for Security Protocol Analysis
Dawn Xiaodong Song

Athena is a tool for automated analysis of security protocols.  To verify
a property, Athena starts from appropriate states of interest (e.g.,
states containing a violation of a secrecy requirement) and performs a
backward search to determine whether (and, if so, how) the system could
have reached such a state.  For efficiency, Athena allows states to
contain variables; thus, a single "symbolic" state represents a set of
concrete states (namely, the states that can be obtained by replacing the
variables with type-correct values).  In the two aspects just described,
Athena is similar to Cathy Meadows' NRL Protocol Analyzer.  A significant
difference is that Athena is based on the strand space model, and during
the backward search, Athena tries to add an entire strand to the state in
a single step.  Thus, Athena can avoid exploring interleavings of
independent events in different strands.  This can drastically reduce the
number of explored states.  Athena embodies a semi-decision procedure; in
other words, Athena may diverge.  This is unavoidable, given
undecidability results such as those in the paper by Cervesato et
al. (described above).  John Mitchell mentioned that a newer version of
the Murphi-based security protocol analyzer by Mitchell, Stern, et
al. explores about 500 states when analyzing the Needham-Schroeder-Lowe
protocol; the figure of about 500,000 states that appears in Table 1 of
this paper is out of date.  John said that the most effective optimization
they incorporated is forcing alternation between transitions of the
intruder and transitions of honest principals.  He asked whether Athena
exploits a similar optimization.  Dawn replied that it is hard to compare
the optimizations used in the two systems.  Cathy Meadows suggested that
it would be interesting to compare Athena's verification of the
Needham-Schroeder-Lowe protocol with her analysis of that protocol using
the NRL Protocol Analyzer, as described in an ESORICS'96 paper.
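
The overall search strategy can be sketched as a generic backward
search (Athena's symbolic states with variables and its strand-at-a-time
successor computation are elided here):

    def backward_search(goal, predecessors, is_initial, limit=10000):
        """Return a path showing the bad goal state is reachable, or None
        if no such path is found within the exploration limit.  The limit
        is needed because the procedure may diverge in general."""
        frontier = [[goal]]
        seen = set()
        while frontier and limit > 0:
            limit -= 1
            path = frontier.pop()
            state = path[-1]
            if is_initial(state):
                return list(reversed(path))   # the attack, reconstructed
            if state in seen:
                continue
            seen.add(state)
            for pred in predecessors(state):  # e.g., add a whole strand
                frontier.append(path + [pred])
        return None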

CVS: A Compiler for the Analysis of Cryptographic Protocols
Antonio Durante, Riccardo Focardi, and Roberto Gorrieri (speaker)

CVS automatically translates protocols described in VSP, a domain-specific
high-level language, into SPA (Security Process Algebra), a CCS-like
process algebra.  The resulting SPA description of the protocol can be
given to the CoSeC tool for verification.  (Gavin Lowe developed a similar
tool, called Casper, that translates protocols into CSP.)  The CoSeC
analysis is based on non-interference: it attempts to show that the
intruder cannot influence the observable behavior of honest participants
in the protocol.  If this attempt fails, the user is shown traces in which
the honest participants' behavior is influenced by the intruder.  The user
decides by inspection whether those traces represent violations of the
protocol's correctness requirements; they are working to automate this
step.  Using these tools, they found a simple and apparently new attack on
the Woo-Lam one-way authentication protocol.  Martin Abadi mentioned that
it is not clear what that protocol is intended to achieve, hence it is not
clear what should be considered an attack.  Cathy Meadows suggested that
they consider using Millen's language CAPSL (instead of VSP) as an
abstract notation for protocols.

Process Algebra and Non-Interference
Peter Ryan (speaker) and Steve Schneider

The information security community has had difficulty giving a precise
definition of non-interference, especially for non-deterministic systems.
The authors show that non-interference is closely related to equivalence
of systems, which is a central concept in the theory of process algebras.
Different notions of equivalence lead naturally to different notions of
non-interference.  Thus, the lack of consensus on the definition of
non-interference is closely related to the multiplicity of equivalences in
the process-algebra literature.  The authors demonstrate a correspondence
between some definitions of non-interference and some notions of process
equivalence.  A benefit of understanding this relationship is that results
from the process-algebra community can be exploited, e.g., results
regarding composition and unwinding.  A process-algebraic proof of
completeness of a previously known set of rules for unwinding is given;
the process-algebraic proof is cleaner and gives more insight than the
original proof.  Some ideas are described for generalizing
non-interference to handle partial and conditional information flow.  John
McLean said that the proposed definition of non-interference seems quite
natural, and that this was surprising, because the definition is also
compositional, and he usually takes compositionality as a sign that a
definition does not capture the meaning of non-interference.  Peter
replied that they could discuss this point in more detail off-line, but in
the meantime, John might feel better knowing that their notion of
non-interference is not preserved by refinement.
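
Schematically, an equivalence-based definition of non-interference has
the following shape (a sketch only, not necessarily the authors' exact
formulation):

    $$ P \text{ satisfies } \mathit{NI}_{\approx}
       \iff \forall\, \mathit{Att}_H :\;
       (P \parallel \mathit{Att}_H) \setminus H \;\approx\; P \setminus H $$

where H is the set of high-level events, Att_H ranges over high-level
processes, and different choices of the equivalence relation (traces,
failures, bisimulation, ...) yield different notions of
non-interference.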

What is Intransitive Noninterference?
A. W. Roscoe (speaker) and M. H. Goldsmith

Intransitive non-interference refers to information flow properties
required of systems containing downgraders, in which it may be legitimate
for information to flow indirectly between two users but not directly.
The concept is illustrated by a secure file system, with a process that
can downgrade files from high-security to low-security.  Using this
example, they show that the standard definition of intransitive
non-interference in terms of a purge function is undesirably permissive:
it allows the effects of "too many" high-level actions to become visible
in the low-level view after a downgrading action.  As a remedy, a
definition based on determinism is proposed.  Roughly, the
determinism-based notion of non-interference is as follows: high-level
actions are turned into non-deterministic choices, and if the low-level
view is deterministic, then there is no information flow from the high
level to the low level.  This definition can be extended to yield a
definition of intransitive non-interference that is not overly permissive.
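
In CSP-style notation, the determinism-based condition described above
can be sketched as (assuming the usual lazy-abstraction reading, in
which Chaos_H nondeterministically accepts or refuses every high-level
event):

    $$ P \text{ is non-interfering if }
       (P \parallel_{H} \mathit{Chaos}_H) \setminus H
       \text{ is deterministic} $$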