IEEE Symposium on Security and Privacy
Reviewed by Hilarie Orman and Richard Schroeppel
May 15-18, 2000
Oakland, CA, USA
August 4, 2000
Hilarie Orman
Richard Schroeppel
S&P was back again at the Claremont for its 21st year after nearly
moving to Portland. Current plans call for the Claremont to remain
the conference venue for at least another few years.
The following notes are the personal observations of two attendees;
they are submitted here, not as accurate transcripts nor as definitive
reports, but only as personal views and remembrances of the lively
debates, the audience interactions with the speakers, and as a
stimulant to those who were not at the conference to obtain the
proceedings and read the full papers.
The proceedings booklet has a blue and white cover; this may
be an attractive addition to your bookcase, exactly matching
the 1999 proceedings and complementing the prior array of solid
hues.
Reporters' editorial re citations:
Of the 18 papers presented, 12 used at least one citation to
a document on the Internet. No paper cited a reference prior
to 1972, and 5 papers cited no references prior to 1990. At least
one paper cited IETF Internet drafts, which are required to carry
a disclaimer noting that they should never be cited (this is because
the IETF keeps no copies of drafts that are more than 6 months
old). Given the popularity of Internet citations, it seems that
the S&P conference is failing the research community in an important
way by not keeping on-line copies of its own conference papers.
Access Control Meets Public Key Infrastructure, Or:
Assigning Roles to Strangers
Presenter: Dalit Naor
Building a large enough set of trust relationships to carry on
complex activities without extensive administration cost is
a problem facing electronic commerce today. A "grass roots"
role-based approach fits well with an emerging public key
infrastructure.
Trust Establishment (TE) uses a Trust Policy Language to mediate access to objects based on the subject's roles. The policy language
describes how to use information, particularly certificates, to deduce
the set of roles for a subject. The system can make use of attribute
certificates and can actively collect the data from which roles are
deduced.
This deductive system uses an XML expression of rules and XML
certificate encodings; it also has an LDAP interface for certificate
retrieval.
The illustrative example shows a doctor at one hospital accessing a
database at another hospital, using rules such as:
* A hospital certified by two trusted hospitals is trusted.
* A doctor's specialty must match the type of database
  (e.g., cardiologist to cardiology).
An implementation note, based on the experience of building this in
Java for a web server, is that the usual trust policy for SSL must
be modified for this sort of purpose; certificates must be accepted
even if the chain of trust for the certificate authority (CA) is not
yet established.
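(ed. note: the paper does not include code; the following is our own
minimal Java sketch of what such a relaxed SSL trust policy might look
like, using the standard javax.net.ssl.X509TrustManager callback. All
class and method names below are ours, not the authors'.)

    import java.security.cert.X509Certificate;
    import java.util.ArrayList;
    import java.util.List;
    import javax.net.ssl.X509TrustManager;

    // Accept peer certificates even when no chain to a known CA has been
    // established yet; record each chain so that the role-deduction
    // engine can evaluate it against the trust policy afterwards.
    public class CollectingTrustManager implements X509TrustManager {
        private final List<X509Certificate[]> collected = new ArrayList<>();

        public void checkClientTrusted(X509Certificate[] chain, String authType) {
            collected.add(chain);  // defer the decision to the policy engine
        }

        public void checkServerTrusted(X509Certificate[] chain, String authType) {
            collected.add(chain);
        }

        public X509Certificate[] getAcceptedIssuers() {
            return new X509Certificate[0];  // no CAs are trusted a priori
        }

        public List<X509Certificate[]> collectedChains() {
            return collected;
        }
    }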
Questions
Audun Josang, Australia, noted that trust is not binary in the real
world and asked if there were ways to express degrees of trust. The
answer was that trust is in groups and roles. Roger Needham suggested
that in the real world sometimes trust cannot be determined from the
available information, or that trust depends on the strength of trust
in the opinion of a third party.
Steve Kent asked if the system had a business-to-business
focus, to which the presenter answered yes. Kent went on to say that
the hospital example was flawed because certification is a much more
diverse problem; there can be "islands of trust" that have no
cross-certification available.
Paper 2: Security Infrastructure for Distributed Java Applications
Presenter: Dirk Balfanz, Princeton
The experience of implementing the Placeless Documents System led to
this paper about building a security infrastructure for distributed
Java applications.
The first milestone of the project was to implement SDSI/SPKI in Java.
The next goal was to implement an access control logic.
The logics ABLP and FAF can describe SDSI/SPKI, but because these
logics are not decidable, their proof rules are not desirable ones to
use for building an access control mechanism.
The logic used in the document system defines an access control logic
of about four inference rules with a Java-friendly expression.
Additional inference rules permit more delegation, such as
"secondary delegation" - delegate to Alice the right to delegate
read permission to this object.
There were implementation challenges resulting from using RMI over
SSL. For example, RMI will download any SSL code, which is Very Bad
for integrity of authentication. Another problem is that the RMI
server needs to know the identity of the call initiator, but this
information, available from SSL, is normally lost to the upper RMI
layers.
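(ed. note: a minimal sketch of the standard remedy, not the authors'
code: install fixed, locally loaded SSL socket factories so that no
socket-factory code is ever downloaded from a peer. The javax.rmi.ssl
classes shown postdate the paper.)

    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import javax.rmi.ssl.SslRMIClientSocketFactory;
    import javax.rmi.ssl.SslRMIServerSocketFactory;

    // Export an RMI registry over SSL using socket factories installed
    // locally, so establishing trust never involves code download.
    public class SecureExport {
        public static void main(String[] args) throws Exception {
            SslRMIClientSocketFactory csf = new SslRMIClientSocketFactory();
            SslRMIServerSocketFactory ssf = new SslRMIServerSocketFactory();
            Registry reg = LocateRegistry.createRegistry(2020, csf, ssf);
            // Remote objects bound here should be exported with the same
            // factories; the server can then recover the caller's identity
            // from the SSL session on the server-socket side.
            System.out.println("registry up: " + reg);
        }
    }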
Other lessons learned from the Placeless experience concerned the
logic. There are some undesirable results re using "bare keys" because
the "name" is global. Some of early versions of the inference rules
gave surprising results, and the presenter felt that there was no good
way to come up with rules; it is basically trial and error.
Questions
Fred Cohen: Congratulations on finding flaws. Please comment on the
lack of resiliency in systems built from modules that rely on other
modules, etc.
John McLean: A problem with catching problems as you find them is that you
are never sure when you have got them all. Is independent formalization
useful?
Answer: redelegation is part of the reality of the world. Experience
and trial and error is required to build trust in a system.
Answer: The proof is done with respect to a standard logic; the
problem is that semantics are not intuitive, not any more than the
logic itself.
Paper 3: A Practically Implementable and Tractable Delegation Logic
Delegation logic uses third-party information for authorization.
There are several variants of logic programming for trust management,
with deduction engines in Java and Prolog. This one is called D1LP.
Some interesting features of the logic include specification of
delegation depth and threshold specifications. The latter can be
static k-of-n thresholds or weighted thresholds or dynamic thresholds.
The dynamic version allows the set of principals for the thresholding
to be determined by a predicate that is evaluated at the same time as
the threshold rule; in this way the set of authorized principals can
change over time while the logic rules remain constant.
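(ed. note: to make the threshold feature concrete, here is our own toy
Java sketch of static k-of-n and weighted threshold checks; it is not
D1LP syntax, and all names are invented.)

    import java.util.Map;
    import java.util.Set;

    public class Thresholds {
        // Static k-of-n: at least k of the named principals must assert
        // the statement in question.
        static boolean kOfN(int k, Set<String> named, Set<String> asserting) {
            return named.stream().filter(asserting::contains).count() >= k;
        }

        // Weighted: the total weight of the asserting principals must
        // reach the stated bound.
        static boolean weighted(Map<String, Integer> weights, int bound,
                                Set<String> asserting) {
            return asserting.stream()
                    .mapToInt(p -> weights.getOrDefault(p, 0))
                    .sum() >= bound;
        }
    }

A dynamic threshold would recompute the named set from a predicate each
time the rule is evaluated, as described above.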
One of the examples of specification of a delegation scheme shows a
medical records access scheme for physicians and hospitals.
Another, more complicated example shows how one person can delegate delegation rights to a second person while allowing a
third person to define the set of principals to whom the second
person can delegate. This involves the creation of a "dummy
principal" represented by a public key.
The original semantics are intractable, because delegation queries
involving dynamic threshold schemes cannot be resolved; delegation
chains can be exponential.
Tractability is established by introducing restrictions on delegations
to principals. This reduces the complexity to O(N^4 * D^2), where N is
the number of principals and D is the maximum delegation depth.
Plans for this system include implementation, an upgrade to
a different version of the logic (D2LP), study of nonmonotonic expressions,
the addition of temporal information, and a GUI.
Questions:
Paper 1: Practical Techniques for Searches on Encrypted Data
This novel work is for use in a scenario where Alice has encrypted
a document and handed the ciphertext over to Bob. She would like to
ask Bob to perform word searches in the encrypted document, but she
does not want to reveal the document plaintext to him. In the most secure
version of the problem, Alice will not even tell Bob what word she
is searching for (although she will tell him a function of the word).
A simple solution involves encrypting the files using ECB and using
the encrypted search terms as search keys. This is undesirable
because ECB is subject to dictionary attacks (which is exactly why
searches work).
The method proposed in the paper encrypts the data using a modified
cipherstream. In the simplest version of the scheme, Alice encrypts
the file using cipherstream blocks that are formed from the
concatenation of two pieces: a pseudorandomly generated running
cipherstream value and a one-way function of a key and the PRG value.
If she needs to search for a word W, she tells Bob the value of W and
the key for each cipherstream block where the word might occur. Bob
can xor the word into the encrypted block, obtain the cipherstream,
and use the key to validate the two cipherstream block parts. If the
word actually occurred at that point, the validation will succeed;
otherwise, it will usually fail. False positives are not security
failings, as they reveal no extra information.
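(ed. note: our educational Java sketch of the basic scheme, not the
authors' code. A block ciphertext is C = W xor (S || f_k(S)); revealing
W and the block key k lets Bob test one position without learning S.
The sizes and the keyed one-way function below are stand-ins.)

    import java.security.MessageDigest;
    import java.util.Arrays;

    public class SearchableBlock {
        static final int S_LEN = 8, F_LEN = 8;  // toy sizes; W is 16 bytes

        // f_k(S): a truncated hash of key || S stands in for a keyed
        // one-way function.
        static byte[] f(byte[] key, byte[] s) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(key); md.update(s);
            return Arrays.copyOf(md.digest(), F_LEN);
        }

        static byte[] xor(byte[] a, byte[] b) {
            byte[] out = new byte[a.length];
            for (int i = 0; i < a.length; i++) out[i] = (byte) (a[i] ^ b[i]);
            return out;
        }

        // Alice: encrypt one word W at a position with stream value S.
        static byte[] encrypt(byte[] w, byte[] s, byte[] key) throws Exception {
            byte[] pad = new byte[S_LEN + F_LEN];
            System.arraycopy(s, 0, pad, 0, S_LEN);
            System.arraycopy(f(key, s), 0, pad, S_LEN, F_LEN);
            return xor(w, pad);
        }

        // Bob: given W and the block key, test whether W occurs here.
        static boolean matches(byte[] c, byte[] w, byte[] key) throws Exception {
            byte[] pad = xor(c, w);                     // candidate S || f_k(S)
            byte[] s = Arrays.copyOfRange(pad, 0, S_LEN);
            byte[] tag = Arrays.copyOfRange(pad, S_LEN, S_LEN + F_LEN);
            return Arrays.equals(tag, f(key, s));       // succeeds iff W was here
        }
    }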
The first variation of the simple scheme makes each block key a
one-way function of the plaintext. This lets Bob search for a word without
revealing anything about the blocks that do not contain the word.
Alice can use encrypted terms for searching if she first encrypts the
document using an ECB scheme and then applies the cipherstream
encryption. To search for a word, she supplies its ECB encryption to
Bob, along with the keys for the cipherstream blocks of interest, as
above. This suffers from a minor problem: Alice cannot decrypt the
version of the document that she gave to Bob because the cipherstream
depends on the ECB encipherment. In order to disentangle the two,
Alice need only make a slight change in the encryption method that is
applied to the document. She must base the second cipherstream block
part on a substring of the ECB value (instead of the running cipherstream).
Other enhancements allow restriction of searching to "at least one
occurrence", "at least N", or "at most N".
The scheme has provable secrecy and requires only a single master key.
The most important open question about the scheme is what other kinds
of functions can be performed on encrypted data.
Questions:
Steve Kent: The requirement for a completely specified search is an
important issue. This means that the method cannot cope with
overlapping ciphertext; the search terms must be matched to the
blocksize of the cipher.
Paper 2: Efficient Authentication and Signing of Multicast Streams
on Noisy Channels
Two protocols, TESLA and EMSS, provide solutions to the difficult
problem of associating a verifiable identity with each message in a
multicast data stream.
The presentation of TESLA, Timed Efficient Stream Loss-tolerant Authentication,
builds successively on 5 pieces to achieve security properties.
It relies on delayed authentication and loose time synchronization between
a sender and the multicast receivers. A message authentication code
(MAC) is tied to each message; the MAC is based on a secret key. The
key for the i'th message is revealed in message i+1; in this way each
message can be used to authenticate the previous message.
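(ed. note: a toy Java sketch of the delayed-authentication idea as we
understood it, not the protocol specification; the packet layout and
names are ours.)

    import java.util.Arrays;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class TeslaSketch {
        // Packet i as sent: payload, MAC(k_i, payload), and k_{i-1},
        // the key disclosed one message late.
        record Packet(byte[] payload, byte[] tag, byte[] revealedPrevKey) {}

        static byte[] mac(byte[] key, byte[] msg) throws Exception {
            Mac h = Mac.getInstance("HmacSHA256");
            h.init(new SecretKeySpec(key, "HmacSHA256"));
            return h.doFinal(msg);
        }

        // Receiver: buffer packet i-1, then authenticate it when packet i
        // arrives carrying k_{i-1}.
        static boolean verifyPrevious(Packet prev, Packet current) throws Exception {
            byte[] expected = mac(current.revealedPrevKey(), prev.payload());
            return Arrays.equals(expected, prev.tag());
        }
    }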
The method is efficient computationally if the MAC is efficient, but
a lossy multicast environment introduces problems because a message
cannot be sent until all receivers have seen the previous message;
providing this guarantee implies that the basic transport protocol
is reliable, but this violates the basic assumptions of the system.
TESLA solves this by using time intervals to determine when the
MAC key gets changed and by delaying key disclosure for several
intervals. The scheme can accommodate dynamic packet rates within
certain bounds.
If the receivers have widely differing roundtrip latencies, the
sender can use multiple time intervals (with different keys). The
nearby receivers can use short intervals, thus validating messages
without incurring latency, and the far away receivers can use the
longer intervals, thus avoiding the need to drop messages due to
late arrival and key expiration.
The EMSS (Efficient Multi-chained Stream Signature) protocol
addresses the more difficult problem of non-repudiation. This
requires a public key signature algorithm, but a signature on
each message would be computationally onerous and would also
increase the size of each message by at least hundreds, if not
thousands, of bits. The basis of EMSS is a signature over
the hashes of several packets. This amortizes computational
cost and eliminates concerns over packet loss.
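(ed. note: our illustration of the amortization idea only; the real
EMSS chains hashes across packets for loss tolerance, which this Java
sketch omits, and the names are invented.)

    import java.security.MessageDigest;
    import java.security.PrivateKey;
    import java.security.Signature;
    import java.util.List;

    public class EmssSketch {
        // One public-key signature covers the hashes of a whole group of
        // packets, so the per-packet cost is a hash, not a signature.
        static byte[] signGroup(PrivateKey sk, List<byte[]> packets)
                throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            Signature sig = Signature.getInstance("SHA256withRSA");
            sig.initSign(sk);
            for (byte[] p : packets) sig.update(md.digest(p)); // hash each packet
            return sig.sign();                                  // one signature
        }
    }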
Experiments with EMSS indicated that the overhead can be brought down
to 40 bytes per packet under realistic scenarios. However, the
problem of selecting the messages that go into a signature
group is harder to solve. If the losses are correlated, the
grouping should not be correlated to the loss pattern. This
means that groupings should have some randomness to them.
TESLA is being considered as part of the IETF research work
on secure multicast schemes.
Q: (ed. inaudible question re EMSS and non-repudiation)
The debate rules limited heckling to contradictions and absurdities.
Each heckler was permitted one comment per speaker time slot, with a
four-word limit.
Proponents:
Reiter: Anonymity Services - Do They Work?
Caveat user: Java applets leave a channel back to the origin. An
attacker might have enough resources to break the underlying
cryptography. One must always look for bugs in the implementation. The
services do raise the bar for surveillance. Anonymous services are
abused to stir up trouble, and this leads to shutting them down. Law
enforcement has the authority to open up the service for search (in
the US). Forward secrecy systems are "largely immune" to law
enforcement searches.
Rebuttal to Reiter
Needham: Steganographic File Systems
Rebuttal to Needham
Felten: Real world ("meat space") requires computers; there are
threats that are independent of computers. Data mining exists because
of computer technology. Our everyday interactions give up a lot of information.
Cohen: Sufficient motive and resources will overcome any protection. There
are steganography discovery software programs. Keyboard recording will get
any keys used for stego. The means to get information can be the means to
disclose it.
Dave Marvit: Email Policy Systems
Rebuttal to Marvit:
Cohen: The threat model is not realistic. It only defeats law
enforcement; they are not the threat. The system represents a
fundamentally foolish attitude. In practice, one can always find
deleted material. Cryptographic key generation and storage of the keys can
expose them later. The trusted party might give over the keys anyway.
Stefan Brands: Privacy and Public Key Infrastructure
A recommendation is to use blinding on the certificates to hide the
identity from the issuer. This allows the principal to disclose individual
attributes to observers one at a time, as needed, under the
principal's control. Privacy is not anonymity; it is merely the
controlled and
willful disclosure of information.
Rebuttal to Brands
Felten: This is a good description of the real world. There have been
recent increases in the number of times you are asked for ID.
Mathematical impossibility is beside the point; practical problems are
more important. Felten has had his own encrypted email subpoenaed.
Cohen: There might not be any long-term unbreakable cryptographic
system. Zero knowledge doesn't say anything about source integrity,
which is important to privacy. Lies can violate privacy. Driver's
licenses can be forged. Mass privacy happens because it is too hard
to search everything in the whole world.
Ross Anderson: Privacy Technology Lessons From Healthcare
Secretary Shalala is working on legislation that gathers health information.
NIST is proposing a security model that is the Orange Book warmed over.
The UK did the same thing in 1994-1995, and it broke spectacularly. HIV
prescription is problematic. Should all providers have access to
all patient information?
Anderson recommended using a compartmentalized approach in 1996. It
controls information sharing as in paper records. It uses an
ACL-based model. Does it work? There is a system running in 3 UK
hospitals. It went to a role-based model. Nurses can see records of
their own patients, good for 90 days. The system uses capabilities in
the form of certificates and smart cards.
One hard problem is dealing with research use of information. Drug
representatives are interested in deducing patients from doctors'
records of what they prescribed. Sanitization of data is highly
application specific; it involves removing identities while leaving
enough information for research purposes.
Large databases are assets and will not be given up easily.
Privacy is about power. Government definition of privacy protects Tony Blair.
Rebuttal to Anderson
Schaefer: Denning showed that given enough queries you can defeat
de-identification. Health providers may need to know all diseases of
the patient. You cannot hide too much from the doctors without
impairing their ability to treat the patient.
The Opposition
Marv Schaefer (Dave Bailey was slated to substitute for
Marv, but Bailey was at Los Alamos where wildfire threatened the facility).
Schaefer notes that the "fire started as a control burn; the security
measure incurred the disaster it was supposed to prevent. Not all
actions will lead to the goal. White hats can destroy what they are
trying to preserve."
Technologies are often applied to a problem without deep understanding
of the goal. With respect to privacy, out of band channels always
exist. Humans will always make mistakes, no matter what the designers
intended.
Reply to Schaefer
Q&A
Q: Isn't it true that problems don't come from personal info being
captured and put on web sites, but from identity theft?
Q: Is there any way to put the genie back in the bottle, given that
Internet detectives can find out so much about you?
Q, Paul Karger: US government about 25 years ago had embarrassing tape
erasure that compromised privacy. White House email backup problems
were relevant to Privacy Act.
Kent: Reading email while disconnected renders the disappearing email solution
ineffective or undesirable (because a copy of the email must be kept
on the disconnected machine). Financial transactions can result in loss of
identity. A credit card number leaves a noticeable trail of information.
Paper 1: Searching for a Solution: Engineering Tradeoffs and Evolution of
Security Protocols
An innovative approach to the design of new authentication protocols
is to use genetic programming in conjunction with BAN logic to
discover a message sequence that achieves the security goals. This paper
reports on experiments using hill climbing with an objective function
to find sequences of belief states that constitute a protocol that is
correct and efficient.
The protocols are first encoded into bitstrings for manipulation by
the genetic algorithms. Multiple participants in the protocol are
allowed. The algorithm generates bitstrings, and the results are
interpreted as though the generator had selected message components
(beliefs) and receivers; the system then updates the belief states of
the sender and receiver to add derived beliefs, and checks for goal
satisfaction. At the end of the protocol, the sender and receiver
should be in a state where they agree on a session key that is "good"
(derived appropriately). The method generates protocols that are
honest by construction.
The genetic algorithms require the ability to evaluate a protocol in
order to prune out unpromising sequences. The paper describes the
experiments with initial and refined evaluation functions, noting the
successes and difficulties of pruning out useless protocols early
while still having enough latitude to discover solution protocols.
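(ed. note: the paper's encoding is richer than this; the following is
our generic Java skeleton of the search loop, with an invented
Evaluator interface standing in for the paper's evaluation functions.)

    import java.util.Random;

    public class ProtocolSearch {
        interface Evaluator { double score(boolean[] candidate); }

        // Hill climbing over protocol bitstrings: flip one bit at a time
        // and keep changes that improve the evaluation score.
        static boolean[] hillClimb(boolean[] start, Evaluator eval,
                                   int steps, long seed) {
            Random rnd = new Random(seed);
            boolean[] best = start.clone();
            double bestScore = eval.score(best);
            for (int i = 0; i < steps; i++) {
                boolean[] next = best.clone();
                next[rnd.nextInt(next.length)] ^= true;   // mutate one bit
                double s = eval.score(next);
                if (s > bestScore) { best = next; bestScore = s; }
            }
            return best;
        }
    }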
The speaker noted that these were experiments and that the methods
in the paper were not definitive of best practice in this novel
area.
Paper 2: Authentication Tests
For a given protocol, it is desirable to determine which
authentication and secrecy goals are achieved by the protocol. The
method presented in this paper does this using syntactic matching to
find "regular transforming edges" in strand spaces. This is a way of
tracking protocol information, such as nonces, between honest
participants. A regular transforming edge represents information that
is received and later sent as part of the protocol.
The analysis method is useful for showing whether or not a protocol achieves
its authentication goals, but it has the added advantage of indicating
where there is a possible abuse of the protocol by dishonest parties.
Weaknesses in a protocol can arise from having too many transforming edges
(reflection attacks, etc.) or by having free variables on transforming
edges. The pattern matching technique on edges shows which edges can
be used to compromise the protocol security.
The methods use the notion of transforming a value into an altered
form. The basic security question is whether or not the
transformation was done by a "regular" participant or a penetrator.
A penetrator might use a regular participant as a dupe.
The examples in the paper illustrate applications of the analysis
to Needham-Schroeder-Lowe, Otway-Rees, Neuman-Stubblebine, and Woo-Lam
(which can be shown to contain an exploitable transforming edge).
Paper 3: Protocol Independent Secrecy
Protocols for key exchange require that the key remain secret. This
paper describes how to simplify formal proofs of secrecy by using a
protocol dependent proof part and a protocol independent proof part
that does not require induction.
The terminology used for the proof method is worth remembering, even
if the formal methods using ideals and co-ideals fade: spells are a
book of secrets (session keys), cabals are agents, and the secrets of
a spell are the book and the long-term secrets of the cabal.
A trace is a sequence of legitimate messages, spells, and fake messages.
An intruder can only construct messages based on available information
from the protocol run.
The proof method relies on introducing "spell events" as artificial
protocol events; these denote the transmission of a secret to the
legitimate
participants. Correct protocols will not be able to transmit these
secrets to intruders, no matter what sequence of messages the intruder
can generate. By appropriately defining traces with and without
intruder messages, it is possible to use only first-order logic to
show that the protocol dependent parts are safe from intruders.
Examples done by hand in the paper are Otway-Rees and the modified
Needham-Schroeder. Proofs are done by case analysis on the messages in
the protocol.
For the Proposition
Eric Raymond:
Why rely on the key rather than on secrecy of the algorithm? Because
it is hard to protect the many bits that comprise the algorithm, but
the key is only a few bits and thus easier to protect. Open source has
the same property - the "many eyeballs"
effect. Developers change behavior based on presence of open source.
Internet software has always been largely open.
Malicious bugs in open software are usually caught and fixed in
hours.
Brian Witten:
Against the Proposition
Microsoft makes bad software, Microsoft is closed source, therefore closed
source is bad. Disentangle it - Microsoft engineers can examine their
own source code, but it doesn't make it better.
Java: if we keep fixing holes, then eventually we will fix them all.
Each new JDK introduces new vulnerabilities. Penetrate and patch is
not optimal.
Many eyeballs. The wu-ftpd had a bug that was undetected for many years,
only to be exploited in DDOS. RST had noted it in scans. There are tools
that will find low-level bugs like buffer overflow, but if the overall design
is bad, fixing them doesn't make the system more secure.
Fred Schneider:
More attention to assurance is important. 1. Test. 2. Analyze and
restrict descriptions. 3. Analyze and restrict the construction process.
Open source cannot do item 3 because no one is trusted. The software producer
could be made accountable for problems in all three areas.
"Amateur capitalism to professional communism does not scale."
Lipner:
PC Week has example of penetration that was enabled by source code
availability.
The fun part of open source is writing it; reading it isn't fun. The
best approach is to pay people to do this and to be successful in the
marketplace. Someone has to read the code; it is best done in a closed
source environment.
Q&A
Ross Anderson:
Until 1982, one OS was shipped with source code. Comment?
Fifth Session: Intrusion Detection
Paper 1: Using Conservation of Flow as a Security Mechanism
in Network Protocols
The WATCHERS distributed network monitoring protocol attempts to
identify and isolate misbehaving routers. Each router counts
messages in several categories, and the counts are checked for
consistency (the Conservation of Flow condition). This paper
considers various shortcomings of WATCHERS, and suggests some
possible fixes.
Paper 2: Logic Induction of Valid Behavior Specifications for Intrusion Detection
One approach to intrusion detection is to make behavior profiles
for privileged programs, and notice when such a program does
something unusual. This paper uses machine learning to automate
the construction of profiles. The profiles are based on unordered
sets of system calls. Inductive Logic Programming is used to
process example program executions, and create first-order logic
formulas that distinguish ordinary from unexpected behavior.
In experiments, the generated formulas detected attacks on the
Unix programs imapd and lpr.
Sixth Session: Assurance
Paper 1: Model Checking for Network Vulnerability Detection
A model checker is different from a rules-based network. Model checkers are
good at searching large state spaces to determine whether an assertion is true
or false.
An exploit is a technique used to carry out an attack; exploits in
this system are modeled by Prerequisites and Results. Attacks are
changes that increase the vulnerability of the target system.
The model of an attacker is one who:
The attacker is trying to reduce the level of security of the overall
network below the security requirements. An example requirement is
that the attacker cannot get access to the Private File Server or root
access on the Public Web Server. The model checker can automatically
derive a pathway from compromise of the password file to a login on
the web server in order to access the private file server.
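(ed. note: a toy Java rendering, with invented names, of the
prerequisites-and-results idea: an exploit fires when its
prerequisites hold, and the search asks whether a forbidden fact
becomes reachable.)

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    public class AttackSearch {
        record Exploit(Set<String> prereqs, Set<String> results) {}

        // Monotone closure: keep applying exploits whose prerequisites
        // hold until the goal is reached or no new facts can be derived.
        static boolean reachable(Set<String> initial, List<Exploit> exploits,
                                 String forbiddenGoal) {
            Set<String> facts = new HashSet<>(initial);
            boolean changed = true;
            while (changed) {
                if (facts.contains(forbiddenGoal)) return true;
                changed = false;
                for (Exploit e : exploits)
                    if (facts.containsAll(e.prereqs())
                            && facts.addAll(e.results()))
                        changed = true;
            }
            return facts.contains(forbiddenGoal);
        }
    }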
Question: Performance? Scalable to large network?
Paper 2: Verifying the EROS Confinement Mechanism
Jonathan Shapiro:
Lampson defined confinement as a program that can only communicate
outward through authorized channels. A confined subsystem cannot be
inspected by the user.
EROS, in a simplified view, has two kinds of objects: data pages and
capability nodes. It is a microkernel system without performance
penalties.
An EROS Constructor certifies confinement of a program or subsystem by
examining the capabilities to be assigned to it. If the capabilities
are either safe or already known to be authorized, then the subsystem
can be considered confined. Capabilities are safe if they do not
convey mutation authority, or are read-only, weak, or limited by a
constructor that generates confined products.
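(ed. note: our compressed Java caricature of the constructor's test;
the kinds and names are invented, and the real safety rules are
richer.)

    import java.util.List;
    import java.util.Set;

    public class ConfinementCheck {
        enum Kind { READ_ONLY, WEAK, CONFINED_CONSTRUCTOR, MUTATING }

        record Capability(String name, Kind kind) {
            boolean intrinsicallySafe() { return kind != Kind.MUTATING; }
        }

        // A subsystem is treated as confined when every capability handed
        // to it is either intrinsically safe or explicitly authorized.
        static boolean confined(List<Capability> caps,
                                Set<Capability> authorized) {
            return caps.stream().allMatch(
                    c -> c.intrinsicallySafe() || authorized.contains(c));
        }
    }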
Sam Weber:
Shapiro:
System calls either succeed or they don't; there is no hidden state in
the operating system, no kernel stack. Actions in capability systems
only have local effects on the direct access graph, which simplifies
verification. EROS process creation time is only 2/3 that of Linux's
fork and exec. This shows that confinement can be enforced in
capability systems.
Question: The result sounds like type safety and soundness for programming
languages
Paper 3: Fang: A Firewall Analysis System
Do you know what your firewall is doing? This is not a joke. System
administrators face a lot of hard questions in managing firewalls. The
rules are order dependent, written in arcane languages, with poor
GUIs, and the problems are multiplied if there are multiple firewalls.
Firewall consultancy is a well-reimbursed career.
The objective of this work is to allow high-level analysis of a
configuration. It can be used to gain understanding of the
configuration via queries.
Names are taken from configuration files; each firewall gets a portion of the
namespace, and each network interface gets a portion of the namespace.
The Gateway Zone Graph is a data structure used in the analysis. A
bipartite graph, it has firewalls, gateways, network interfaces, and
zones.
The analysis system needs to check all paths from source to
destination. It does not model routers and thus doesn't depend on their
correct configuration as part of the system security.
Fang tests for spoofing of the source IP address in network packets.
It can test that a firewall drops faked packets. In comparison to
current active vulnerability testing tools, Fang is unique in that it
can test the entire Internet as fast as an individual host. The spoof
test is an abstraction, not actual testing, so it is faster.
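(ed. note: an invented Java flavor of the kind of query involved;
Fang's actual query language and data structures differ.)

    import java.util.List;

    public class FirewallQuery {
        interface Firewall {
            boolean accepts(String src, String dst, String service);
        }

        // A (source, destination, service) triple is allowed only if
        // every firewall on every path from source to destination
        // accepts it.
        static boolean allowed(List<List<Firewall>> paths,
                               String src, String dst, String service) {
            return paths.stream().allMatch(path ->
                    path.stream().allMatch(
                            fw -> fw.accepts(src, dst, service)));
        }
    }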
Q&A
Q: Performance?
Chenxi Wang - Software Tamper Resistance through Obfuscation of
Static Analysis
Lee Badger - Wrappers and Emerald intrusion detection
Clay Shields - Using IP Multicast for anonymous IP addresses;
multicast leads to families of protocols with different properties. We
are developing a logic of anonymity. Onion routing style approach.
Dan Lough - A Taxonomy of Attacks in the Internet
Extended Matt Bishop's work on studies of attacks. Added integrity
flaws and trapdoors from McPhee and Neumann. Added others from several
sources. Some categories are ambiguous. Matches up attacks across
taxonomies. Contrasts principles of security vs. characteristics of
security flaws. Future work is to create taxonomic systems.
Dan Lough - Tamper Resistant Software
The encoder cloaks software, which remains executable
Tampering will introduce bugs and is thus detectable
The program graph is expanded using randomly inserted constructs
that preserve functionality
Fun to play with compiler
Need to define what tamper-proof means
Patents pending
Sejong Oh, The RBAC Agent Model on Enterprise Web Environment
Using SSL and RBAC agents to mediate access
Asirelli, A deductive tool applied to the definition and verification of
firewall policies.
The tool is easy to use and produces a concise definition of the
firewall policy that is easy to understand and analyze.
Will use it on real networks and in security policy management for a
radiological department
Douglas Kilpatrick - Napoleon Wrappers, Data Driven Policy Management
and Enforcement
Role-based policy generation.
Napoleon will generate graphical policy and wrappers will enforce it.
Layered architecture.
Generates policies easily; enforcement is generated in seconds.
Intermediate language had poor mapping to Unix semantics. Had to duplicate
policy attributes for similar policies; could lead to performance penalty.
Need to add secure policy distribution. Need to secure interfaces.
John Reisner - An Agent Model Countering the Security Problem of
Malicious Hosts
One can encrypt portions of agents and sign the code, but this
requires that the agent creator know the hosts in advance.
5 components that might need protection:
Dave Goldberg - Self-Authenticating Paper Documents
Richard B. Neely - Information Engineering for Security Risk Analysis
Heiko Mantel - A New Framework for Possibilistic Security - A Summary
Information flow for representation, comparison, and construction of
"possibilistic" security properties. Security properties are assembled
from basic building blocks. Two dimensions to the building blocks.
See papers at the CSFWorkshop this year.
Sven Dietrich - History and Future of Distributed System Attack Methods
Categories:
Dawn Song - APG Automatic Generation of Security Protocols
Simon Foley - Malicious Code on a Palm Handheld
Dan Wallach, Rice University - Termination in Language Based Systems
How to enforce resource limits by killing an applet or servlet or
whatever? Soft termination - thread.stop and thread.destroy
are not safe. Can terminate threads, thread groups,
classes. System classes execute normally. Blocking I/O can be
interrupted. Deadlock is ugly. Done via byte code rewriting.
Seventh Session: Key Management
Paper 1: A More Efficient Use of Delta-CRLs
This paper addresses the problem of timely distribution of
Certificate Revocation Lists. Information about newly revoked
certificates (Delta-CRLs) is fetched by certificate users as
needed. This paper considers details of update frequency,
caching strategy, and network bandwidth. The paper argues that
the original method of using Delta-CRLs is not especially
efficient, and suggests Sliding Window Delta-CRLs as an
improvement. The audience discussion brought out the problems
associated with the bursty nature of revocations.
Paper 2: An Efficient, Dynamic and Trust Preserving Public Key Infrastructure
Verifying a certificate chain can require considerable arithmetic.
The paper introduces Nested Certificates, wherein an Authority
testifies that the arithmetic of one or more certificates in a path
is valid. The notion is that Certificate Authorities would generate
these with their spare cycles, and that verifiers would find it more
efficient to retrieve and check NCs than check a whole certificate
chain. There was some audience skepticism about the usefulness of
the idea.
Paper 3: Kronos: A Scalable Group Re-Keying Approach for Secure Multicast
Secure multicast requires some way of updating member keys as
members join and leave the multicast group. This paper looks
at several approaches to rekeying, and models their behavior as
a function of group membership volatility. Kronos is introduced
and contrasted with other solutions. The authors feel that
Kronos is more scalable, while not requiring intermediate nodes
in the key distribution hierarchy to also do packet reencryption.
The audience objected that Kronos unfairly relaxed the forward and
backward confidentiality requirements.
Eighth Session: Access Control II
Paper 1: LOMAC: Low Water-Mark Integrity Protection for COTS
LOMAC is a kernel-resident access control mechanism providing
integrity protection based on the "low-water mark" model. The question
for operating system designers is whether or not this can be done so
that users perceive it as valuable and painless.
The system introduces a two-valued label: low integrity and high
integrity.
Compatibility is achieved by loadable kernel modules in Linux.
One can consider a Venn diagram of privilege revision models. The
revisions can be based on what the process observes, modifies, or
invokes. This leads to the following sets:
Because LOMAC is based on what a process observes, it can change the
access permissions for a process while it is running. Unix processes
expect mediation to be done only once, when an object is opened, and
this means that the processes will not be robust when access is revoked.
LOMAC defines conventions that allow communicating process groups to minimize
revocations and still be subject to integrity restrictions.
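(ed. note: a toy Java rendering of the low-water-mark rule itself;
LOMAC is kernel code and handles far more, e.g., the process-group
conventions above.)

    public class LowWaterMark {
        static final int LOW = 1, HIGH = 2;
        int processLevel = HIGH;        // processes start at high integrity

        // Reading low-integrity data demotes the process.
        void observe(int objectLevel) {
            processLevel = Math.min(processLevel, objectLevel);
        }

        // A demoted process may no longer modify high-integrity objects.
        boolean mayModify(int objectLevel) {
            return processLevel >= objectLevel;
        }
    }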
Q&A
Karger: Is LOMAC the same as Biba's original model?
Paper 3: IRM Enforcement of Java Stack Inspection
The implementation of the enforcement is done via an inlined
reference monitor. It allows efficient stack inspection for enforcing
security policies. The challenges are:
The system works by rewriting the JVML classes; it uses guarantees
given by the JVML verifier. The applications are transformed into
applications that perform runtime security checking.
The transformation is done by a Policy Enforcement Toolkit which takes
a policy specification and a Java application as input and produces
a Java application with an inlined reference monitor as output. The
policy enforcement state is not visible to the original application.
The IRM must inspect the stack in order to understand the security
state. In Java, each call stack frame is in a protection domain, and
each protection domain has a set of permissions.
Stack inspection is done for the most common primitive in Java, the
method call, and also for checkPermission and doPrivileged.
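(ed. note: a hypothetical before/after flavor of inlining, with
invented Monitor names; the toolkit's real transformation and its
state, which is not visible to the application, differ.)

    import java.security.Permission;
    import java.security.Permissions;

    public class InlinedCheckExample {
        static void deleteFile(String path) {
            // Call inserted by the rewriter at the sensitive operation:
            Monitor.check(new RuntimePermission("deleteFile." + path));
            // ... original method body runs only if the check passes ...
        }
    }

    class Monitor {
        // Empty in this sketch; a real IRM derives the granted set from
        // the policy specification, outside the application's view.
        private static final Permissions granted = new Permissions();

        static void check(Permission p) {
            if (!granted.implies(p))
                throw new SecurityException("denied: " + p);
        }
    }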
The paper contains timing information indicating the overhead
introduced by the checking.
IRM has the advantage of allowing the security policy to be specified
and inserted into the code at the boundaries of an organization.
Q&A
Q: What fraction of Java bugs would this have prevented?
========================end===============================
(or Delegation Logic: A Logic-Based Approach to Distributed Authorization)
Presenter: Ninghui Li, New York University
Fred Cohen: The delegation is uncontrolled if B is a bad guy?
Ans: Yes, it (the game) is all over.
Q: Power of attorney has restrictions, computer languages don't have these
semantic restrictions.
A: The delegation is only for a particular right.
Audun Josang: Suggest adding levels of trust, with dilution of trust on
each delegation, thus automatically limiting chain depth; the trust can drop
off quickly or slowly.
Ans: (ed. not recorded)
Thomas Quinn: Is there a way to get sequencing of atomic actions?
Ans: It could be done.
Q: The transitivity of the second party cannot be constrained by the logic?
(ed. this discussion, relative to the three-party delegation example
above, resulted in a discussion in which the presenter and questioner
could not agree on terms but did agree to continue offline).
Fred Schneider: Bad policy is easy to write; any language expresses
either good or bad policy. The language isn't the issue. No one can
articulate a yardstick by which to measure languages.
A: Yes.
Second Session
Applications of Cryptography
Chair: Steve Bellovin
Presenter: Dawn Song
Ans: The parameters for length of check block are variable. The paper
does address variable length.
Q: Can this method search for an arbitrary length value independent of
cipher blocksize?
A: There is a tradeoff; substring matches require tricks in the encryption
that have additional overhead.
Fred Cohen: There is a covert channel in searching; match on X reveals
that Y is not at that location.
A: Performing many searches on one document does reveal some
information. We recommend re-encryption if many successful searches have
been performed.
Presenter: Adrian Perrig
A: Multiple signature packets; receiver doesn't need to check them, only
presents them to 3rd parties
Steve Kent: With respect to the claims of low overhead per packet:
what is the assumption on the size of a basic packet?
A: The real packet size doesn't matter. Overhead is fixed at 20 bytes.
Q: That might be 20%-30% overhead; one must look at the actual application.
Comment: Unicast overhead of IPSEC ESP is 12 bytes.
A: Signature of the stream is for free. This is the lowest overhead today.
Q: At what layer would this be applied?
A: TESLA could be in the application layer; one could imagine in network
or transport as well. Many tradeoffs are possible.
Q: Non-repudiation is generally done only at the application layer;
the network layers don't need to consider it.
Panel Session:
Debate: Is Electronic Privacy Achievable?
Chair: Cynthia Irvine
Mike Reiter, Roger Needham, Dave Marvit, Stefan Brands, Ross Anderson
Opponents:
Marv Schaefer, Ed Felten, Fred Cohen
Anonymity services are viable. One can use remailers, anonymizers,
etc. Sender-receiver unlinkability was defined by Chaum; his mixes use
source routing and layered encryption, and one trustworthy mix is
enough. Remailers, onion routing, and ZeroKnowledge are examples of
real-world anonymity services. "Crowds" is dynamic and probabilistic
routing using a lightweight protocol; it tolerates small
collaborations.
Felten: Mundane information disclosure is common.
Cohen: Server breakins, keystroke patterns, etc. are all ways of
deducing identity. Law enforcement gets search warrants because they
are easier than any other technology. Secrets get taken all the time.
One must defeat all attack mechanisms, but one cannot find them all. Human
susceptibility is a problem.
Schaefer: Where are the trusted platforms to run the anonymity systems?
What's the isolation or confinement mechanism?
Can you counter threats that are unique to the electronic revolution?
With regard to personal privacy, people have long wondered if God
could see everything; apparently governments saw a vacancy to be
filled. It is easy to steal laptop machines that have lots of
information on them. This is a reason to invent countermeasures. It
is a good idea to overwrite deleted files with garbage because local
file systems have hidden information. Steganography is attractive
because it interposes the problem of discovering that files have
hidden data. Technology is neutral to political correctness, and
information can be good or bad depending on local conditions.
Schaefer: If a disk is removed, it can be analyzed. Alternatively,
Trojan Horse software might have secreted away information and
will reveal it later.
Proactive deletion of email by a trusted authority protects against
surreptitious and warranted searches. There are implementations of
such systems done as Microsoft Outlook plugins and trusted mail servers.
Mail is encrypted with a key shared with the client for a limited time;
after that, the server discards the key.
Schaefer: Cleartext data often exists in the paging area and won't be
deleted for months. Visual Basic for Applications could let Outlook
make copies of everything.
Felten: In real life there would be a printer somewhere; the user's
mother-in-law might print all email. Attachments opened in Word will
have cleartext local copies.
Brands describes himself as a hard-core privacy fanatic. Why do we need
privacy in PKI? Certificates provide a name and key certified by an
authority. The issuer must communicate with databases, exposing who they
are checking, their age, driver's license number, etc.
[Ed. note: The rebuttal team repeatedly held up a camera to illustrate
the point that the attendees were not appearing at the conference
anonymously; their pictures could be taken at any time and used to
reveal their whereabouts.]
Cohen: Anderson's talk bolstered the opposition. One cannot determine
if de-identification does what it claims. Minimal privacy can be
hand-waved; serious assurance cannot co-exist with utility. Data
aggregation has the big problem of covert channels. Social
engineering can break the whole scheme like a house of cards.
Guessing can break the system because the data space is too small.
Even weak encryption is probably good enough.
Reiter: Moving the problem somewhere else is an acceptable defense.
Needham: Status quo is maintained if it is as easy to investigate people now as
it was before electronics.
Felten: Electronic privacy isn't real privacy, it's just that people
can't use your computer to learn everything about you. One must
address the real world. All information exchange leads to a reduction
in privacy.
Marvit:
Free-flowing information can erode privacy. The academic question can
be put to rest; technology can make things a little bit better.
Brands:
Non-traceable information is not a violation of privacy. Avoiding database
aggregation over a period of years does protect privacy.
Cohen:
The main problem is overtrusting privacy technology; it leads to
unnecessary disclosure. People won't pay the 20% necessary for
protection. Perfect protection is infeasible. [Fred Cohen is not
speaking as an employee of Sandia or
DOE in this panel.]
Anderson:
System ownership leads to insecurities; billing information is at odds
with records privacy. The EU directive on data protection in 2006 will
introduce tension between Europe and the US. There is an economic
incentive for US companies to get their act together on privacy.
Cohen: Governments aren't the problem.
Reiter: Corporations are the problem.
Marvit: An email deletion system is currently run by his company.
Obstruction of justice issue is only to raise bar for what requires a
subpoena. One can set the system to delete email after 7 days.
Cohen: I sent you email 10 days ago refuting that.
Cohen: No.
Syverson: We have worked on all the objections that Cohen raised wrt
anonymity.
Marvit: There is an offline version of the system; keys can be locally cached.
Regular document destruction is legal.
Cohen: High-value transactions require an audit trail.
Cliff Neuman: Privacy is an intrinsic problem.
Brands: Digicash had mismanagement.
Felten: Fedex still brings goods to the door.
Fourth Session: Protocol Analysis and Design
Chair: Paul Syverson
Presenter: John Clark
Presenter: Joshua Guttman, MITRE
Presenter: Jon Millen
Day 2
Debate Panel: Will Open Source Really Improve System Security?
Chair: Lee Badger
Panelists: Gary McGraw, Eric Raymond, Fred Schneider,
Peter Neumann, Brian Witten
Peter Neumann:
(projecting a Dilbert cartoon for the audience)
Software supplier: "We can fix these bugs for $20,000."
Dilbert: "I'm starting to question our single source strategy."
Open source is truly a double-edged sword. Given that software
engineering is abysmal, government procurement is inherently broken,
..., we face the exacerbating point that proprietary methods get
product to market as quickly as possible. Would you accept
cryptography where you didn't know the algorithm? One source
examination case smoked out a DES implementation with an effective key
length of 9 bits.
In an examination of voting machine software, although there were no
obvious vulnerabilities, there were at least 23 ways to compromise the
election. Open box concept in principle lets you do things you
otherwise couldn't; may get proprietary concerns to clean up their
act.
Although there may be a case where closed source aids security, I have
never seen a single example.
(Not speaking as a member of the US DoD)
Open source will improve security. Corporations will spend real money on
reading open source; they will hire experts. Fielding a single product
means making tradeoffs affecting security; one size doesn't fit all.
Attackers who get access to closed sources can compromise them.
Gary McGraw:
No.
Fallacies: the Microsoft, the Java, the many eyeballs. Open source is a
red herring for security. Analysis of source code is great, good software
engineering is good. Open source is orthogonal to these issues.
'Many eyes will find buffer overflow, pointer problems, time-of-check and
time-of-use.' This is dogma that is questionable. Using a better programming
language than C is a better solution than political approaches.
Academic review of algorithms is good and can be done. Source code is
on a different scale. There was an exercise in security evaluation
that made "openish" source available over the Internet. This
generated no feedback on the security of the product, and left the
impression that no one even tried to examine the security. The
product was shipped with source code to customers, and none of them
came back with security flaw notices. They merely asked "How do you
use this?"
Schneider:
What? Is the issue about who writes the code?
Anderson:
Until 1982 there were many eyeballs, one set of hands.
Schneider:
There's no harm in widely distributing source code, but I don't want to depend
on volunteers to read the code.
Raymond:
That is a typical model in open source world. In practice, there is
a distinguished maintenance group. The "diff" set gets sent to the maintainer
for consideration. There is peer review with a single point of accountability.
John McHugh:
It's a debatable issue; is there any evidence, preferably published, that open
source does change programmer behavior?
Raymond:
The "Fuzz" paper. This fed random inputs to Unix utilities to locate
bugs. Open source is consistently more robust. That there are no controls for
programmer characteristics makes this more interesting. One might
conclude that amateurs consistently beat professionals.
McGraw:
GNU tools for NT are worse on the fuzz test than the Microsoft tools.
Neumann:
Having someone to sue for problems is an argument that is invalid because the
time to settle a lawsuit is so long.
McGraw:
There is more concern about branding.
(unidentified audience member):
There is an implication that college students (or worse) write open source.
Professionals are being paid to write open source software.
Schneider:
I don't want to debate merits of college age programmers. An economic
model can work. I can't see how to invest in assurance and then give
away high-quality, well-tested code; anyone can recoup the
investment. Software will move at a rapid pace and ruin assurance.
One must invest repeatedly.
Raymond:
There is a good business model for open source. Look at my paper "The
Magic Cauldron". Assurance is an upfront payment to establish oneself
as a trusted party. Experience levels of open source programmers
exceed those of the closed source world.
Paul Karger:
(not speaking for IBM). This debate is a red herring. There is a
long series of excuses by the security industry: "no market" "export
control" etc. Windows 2000 is a bug-prone monolith. We need third party
evaluators that are competent. We will never have security along the
current path.
Raymond:
The claim that we don't know how to build complex systems is bullshit. We do
build complex systems, like suspension bridges (McGraw flashes "WRONG" on
projection screen). The institutionalized method is to have a
transparent process and do independent peer review.
John McLean:
Paternalistic socialism vs. democracy. DoD builds crypto but they regularly
have problems detected by independent review.
Raymond:
Is that open source software???
Schneider:
I'm not opposed to code review, but open source doesn't improve security.
Lipner:
It doesn't help unless someone looks at the code.
Neumann:
Without discipline, you get garbage. Years of methods, defense in
depth, but we get weakness in depth. The digital world isn't like the
civil engineering world, but the discipline of engineering is
important.
McGraw:
What does this have to do with open source?
Raymond:
The panel sees a gap between open source and security. Open source in
the stronger sense says peer review requires equal access to source
code. This sets up the right set of incentives. Posit that there's a huge
corporation that shows the source code of Microsoft Outlook to the
world as a "source code under glass" license. A developer will not want
to help someone else profit from improvements that I recommend. The community
will not bootstrap itself, but a symmetrical power relationship will make
the process work. Participants can get power or reputation.
Lipner:
It remains to be seen whether it really scales. I'm skeptical about the reward
structure.
Schneider:
The audience will have to think about what they've learned. Source code for
Windows NT being public won't make it better. When something grows up
as open source will it be better? Eric says "if everyone has equal access
will the system be more secure?"
Raymond:
Open source activity is approximately 7K active projects; developers,
testers, and contributors number about 750K. This is two orders of
magnitude
more people than any closed source process.
Mark Crispin:
Assurance has a problem that it is boolean. It is too expensive, and
most projects aren't assured. Open source gives you more effort
devoted to penetrate and patch.
Neumann:
AES is wide open in cryptography; 'many eyeballs' is working. Formal
methodists would love to get their hands on some software. Those
eyeballs would be quite interesting to add. They would get rid of weak links.
McGraw:
The crypto analogy ignores the fact that security is not cryptography.
There's a lot more to it than that. You have to keep the key secret in
the software. Security is an emergent property of software.
Lipner:
Analyzing security of crypto algorithms is more tractable and interesting than
analyzing software.
Neumann:
If it's 60M lines of code, yeah.
(unidentified audience member): The Fuzz results are biased. AT&T is
old stuff. It's folly to ignore progress. There is only a small
market for secure systems. Microsoft is not interested in a small market.
Raymond:
The Fuzz results might show that new code gets written.
Syverson:
I haven't been in the field as long as Peter (laughter) but they've
been dog years. With respect to the 'many eyeballs' argument and
formal methods for slogging through lots of source code, it doesn't have
to be open.
Neumann:
Analyzability requires a formal specification and also structure to
allow abstraction layer mapping. Open source enforces information
hiding, etc.
Lipner:
Stan Ames said something like "mathematicians work on toy problems,
somebody cheats, and formal methods cheat on toy problems." In a real
case, a 50K-line product, formal methods couldn't handle it.
Raymond:
Formal methods are only applicable ex ante; one must apply them
beforehand. Open source is an aid even when you look at the code after
the release, when a bug is found.
Virgil Gligor:
Why not talk about open design? Saltzer and Schroeder recommended this
in 1975. Open source doesn't help without insight into the design.
The only thing left is black box analysis. You must have some idea of
what the designer had in mind.
Neumann:
That's part of good software engineering.
Raymond:
Open source is devoted to adherence to standards.
Marv Schaefer:
A comment. I participated in an analysis of WinNT with Office95 with
no access to source code. We did formal methodism, flaw hypothesis
method, and read documentation. We found many flaws this way. Source
code access would have revealed deeper flaws, probably. We told
Microsoft about the flaws. Open source might not have resulted in
reports back to Microsoft. One cannot assume no problems just because
no one speaks out. Open source will help enemies more than friends.
Neumann:
Security by obscurity is always there. Reliability requires openness.
This is an intriguing mismatch. Mismatches are often the reason that
OS's fail. I still want some kind of open analysis; reports from
friends. It is a difficult thing to set up.
Schaefer:
The design was open; the implementation had problems, implementers did
not follow design principles.
Raymond: Open source doesn't guarantee good peer review.
McGraw: That's the Microsoft fallacy!
Lipner: My colleague here manages research source licenses.
Ladder of Microsoft:
There are 110 users who have access to source and many companies.
(unidentified audience member): Openish doesn't cut it. Open source
is white box. Grey box ... we have closed binaries. Can you tell me
with a straight face that the customer is better off if he cannot inspect
anything???
McGraw: What is the question?
(unidentified): If open source is not better, it should follow that
closed binaries are better. ASPs restrict access even to the binary.
Lipner: Security depends on a whole batch of things. ASP might be better or
worse, it all depends.
Chair: Phil Porras
Authors: John Hughes, Tuomas Aura, Matt Bishop
Presenter: Calvin Ko
Chair: John McHugh
Presenter: Ronald Ritchey
* Chooses a host to analyze
* Tries to find an exploit
Answer: The flexibility of checker allows more sophisticated analysis.
Question: If the model checker finds a particular attack, will it find
others?
Answer: No, but it could be changed to keep going.
Question: Could you discover exploits by analysis of the model?
Answer: No, I don't think so, we need to start with known exploits.
Question: Have you considered tying this to a scripting engine? (Laughter)
Presenters: Jonathan Shapiro, Samuel Weber
EROS is a fast, capability-based operating system. Higher-level
security policies can be implemented on capability systems if there
is confinement assurance. This motivates the verification of the
EROS confinement mechanism, which provides runtime guarantees of
process confinement.
The model is constructed as a state transition system. The model is
more powerful than the system. It implicitly quantifies over all user
states. Confinement only requires that processes can't increase their
authority by means of object creation. If A creates B, B gets no more
than a subset of A's authority. For each kernel call, the system must
name the state objects it mutates or creates. This constitutes a
formal model of capability systems.
The Verification strategy
Answer (Weber): Indeed.
A (Shapiro): It is different in several ways.
A (Weber): The computation is more complicated.
Q: What is the performance wrt number of systems calls to effect a result?
A: That isn't the right metric. Instead, how much time is spent in systems
services? EROS does many kernel operations, but is still faster than Linux.
Q (Cliff Neuman): Some of us have been playing with capability systems for
over 30 years. What about the realistic nature? I'm delighted to see
the continuation of the KeyKOS chain of approaches. What is the usefulness of
all this research that has never seen the light of day?
A (Shapiro): Why do an oddball OS? There are contexts arising where
people need to execute code where parties don't trust each other.
Unix doesn't facilitate controlled collaboration; it does allow
Balkanization. I don't see this taking over the desktop. Second,
a capability system that does indirection can do revocation.
Q (Karger through Neuman): AS400??
Presenter: Avishai Wool
A: Even with thousands of rules, the whole thing works in seconds.
Q: What about human behavior, say cabling things incorrectly?
A: We are working with Bill Cheswick to detect rogue connections, then
we will use it as the definitions that are input to Fang.
Q (McHugh): What about the utility of rule transformations to
translate between vendor formats?
A: 1. It is feasible, but we haven't done it.
2. We modeled what a firewall can do and fit everything into that. If
we don't model something that the firewall supports we could err;
we are pretty good on core network protocols and have captured them
pretty well. More elaborate protocols force us to fudge it somewhat.
We tried to take a fairly inclusive model of what firewalls can do.
Q: Balkanization in telephone switches some years ago resulted in
AT&T producing a tool to translate between configuration languages,
which was really useful.
A: My paper last year was relevant to that.
5 Minute Talks
Originator
Originator data
Acquired data
...
8 operations
Use models to develop secure agents based on analysis of agent
requirements and vulnerabilities.
Write glyphs onto each page of document
Scan the document
Print digital signature of the compressed bits on the document
Need to scan and get the content, not the appearance
Do symbol-based compression; DigiPaper has a higher compression ratio
but lower quality
Can get 24K postscript down to 3003 bytes
Compression tries to find similar connected components; keep pointers
to connected components
Store pointers to images
Assets -(Impact)- Threats -(Impedance)- Controls
Relevance, Applicability
Want to apply this to products for general analysis capability
Information gathering
Remote sniffing
Denial of service
Remote execution of code
The problem is how to find the attack topology quickly.
Increase targeting of infrastructure
New protocols are necessary
Method is to take requirements and a specification, generate
possible protocols, screen them, and select an optimal protocol.
The "Athena" protocol checker relates the specification to the
protocol.
Two-party authentication and key distribution with a TTP has a
10^12 state space, which can be explored efficiently.
Didn't find any malicious code, despite searching
Asked, what does it take to run malicious code on such a device?
Did construct a virus that goes into the code resource of target
application database
Easy to infect once a virus is on the device
Infection from handheld to workstation is very unlikely
At any rate, infection does not spread on workstation
Applications may facilitate virus spread through mail
Day 3
Chair: Audun Josang
Presenter: David Cooper
Presenter: Albert Levi
Presenter: Sanjeev Setia, Samir Koussih, Sushil Jajodia, and Eric Harder
Chair: Lee Badger
Presenter: Tim Fraser
Invoke - Chinese Wall, Clark-Wilson, DTE, RBAC
Modify - Chinese Wall
Observe - Chinese Wall, Low Water Mark
A: Yes. The model had to be modified to accommodate shared memory.
Lipner: Have you tried enough useful work to see if it really is easy
to use, considering upgrading and downgrading frequency?
A: Yes, I used it for web building. LOMAC trusts the system log objects.
Q (Cynthia Irvine): How does setuid work?
A: LOMAC doesn't have identities. A level 1 root user remains at level
1 and cannot affect level 2 objects, cannot kill higher level processes,
cannot mount file systems, etc.
Presenter: Ulfar Erlingsson
* to guarantee integrity of the monitoring when it is embedded
in the application
* to observe all relevant effects,
* to maintain the functional correctness of the application.
A: Most bugs were found in the verifier; we rely on the verifier.
However we could stop things like an applet opening too many windows,
or you cannot use the network if you have used the file system - we
can do this.
Q: I built that in 1995.
Q: What are the implications of JIT?
A: The JIT is a complicated compiler in the Java TCB. We have the first formal
representation of what stack inspection is.
Q: (question re hardware failure)
A: We rely on the hardware, always.
Badger: Can the IRM's share policy state information?
A: They can communicate with IPC if they aren't in the same process. That
capability is part of our system.
Q (McLean): Can you take information flow out of this?
A: We have given thought to extensions, yes.