New Security Paradigms Workshop (NSPW '99)
Caledon Hills, Ontario, Canada
September 22-24, 1999

by Mary Ellen Zurko mzurko@iris.com

Workshop Home Page

NSPW 1999 was held at the Millcroft Inn in Caledon Hills, Ontario, Canada,
September 22 - 24, 1999.

The first session, Protection, was chaired by Bob Blakley.

The first paper was "Secure Dynamic Adaptive Traffic Masking" by Brenda
Timmerman, California State University, Northridge. She dedicated her
presentation to the memory of Jon Postel. He felt the Internet was in
danger and wanted to protect it. He went against doctors' orders not to get
into antagonistic situations, and he gave her a lot of encouragement. Some
Internet users need protection from traffic analysis, including government
agencies and industrial and financial institutions. Traffic
Flow Confidentiality (TFC) provides this protection. It can protect the
length, source and destination of a message (though her work is not
covering source and destination confidentiality yet). But, TFC can consume
network resources. The use of padding and delays can impact performance.
Adaptable approaches can reduce overhead. She used the classic "pizza to
the pentagon" example. You need a model of what you want to hide. Adaptable
transport layer traffic masking may show some of the higher peaks, but can
save packets. She quoted Jon Postel as saying, "if you're not getting the
system administrator mad, you're not really doing research." With Secure
Dynamic Adaptive Traffic Masking (S-DATM), if you know what your peaks will
look like, you can program intermittent peaks. She implemented a prototype
using peer modules on the sender and receiver. The security model formally
represents system security policy. It includes ranges of acceptable
behavior and rules for dynamic adjustments, using statistical techniques.
It can provide precision, reduce processing and storage, address
statistical anomalies, and satisfy security policy with critical values. It
is scalable to Internetworks. The matrix math assumes all nodes are fully
connected and have global knowledge (but there are other approaches). You
track past behavior (what you want it to look like) and current behavior.
Throughput, inter-arrival delay, and burst size are the characteristics
studied. Her current assumption is that each message has the same size. An
attendee considered that the technique might also be used to make the
adversary come to an incorrect conclusion. During questioning, Brenda
emphasized the need for enough phony peaks to disguise the real ones. There
was also consideration of whether the technique was powerful enough to
disguise second or third order statistics. She compared performance,
efficiency, and protection of three approaches: nonadaptive random, adaptive
random, and S-DATM. The performance is the average delay per message. The
efficiency is the amount of padding. The adaptive random gave the best
performance and the same efficiency as S-DATM. Protection is given by
correlation coefficients. Below .3, the correlation is fuzzy. Adaptive
random had a correlation of .7; S-DATM had one of .028. She tested with
real mail traffic patterns collected from ISI (a 20 minute window). S-DATM
provides improved performance over nonadaptive random and it is tunable
with no loss of protection. There was some discussion of using information
theory as a basis for such techniques. Brenda had tried but couldn't make it
work. She is having students act as protectors and the enemy, and will look
at the impact on performance.
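
To make the padding-and-delay idea concrete, here is a minimal Python sketch
of transport-layer traffic masking. It is not Brenda's S-DATM algorithm
(which adjusts the masking profile dynamically, within policy-defined ranges,
using statistical techniques); the names and parameters are illustrative
assumptions. The sender emits fixed-size packets on a fixed schedule, padding
with dummy packets when there is no real traffic and delaying real messages
until their slot, so the wire shows only the programmed profile.

    # Illustrative sketch only (not the S-DATM algorithm itself): a sender that
    # masks its true traffic pattern by always emitting fixed-size packets at a
    # target rate, padding with dummies when there is no real data and queuing
    # (delaying) real data that arrives faster than the target profile.
    import collections, time

    PACKET_SIZE = 512          # assume every message is padded to the same size
    TARGET_INTERVAL = 0.05     # seconds between emissions -- the profile to show

    class MaskingSender:
        def __init__(self, transmit):
            self.transmit = transmit          # callable that puts bytes on the wire
            self.queue = collections.deque()  # real messages awaiting their slot

        def submit(self, payload: bytes):
            # Real traffic is never sent immediately; it waits for the next slot,
            # so inter-arrival times observed on the wire reveal only the profile.
            self.queue.append(payload[:PACKET_SIZE])

        def tick(self):
            # Called once per TARGET_INTERVAL by a timer loop.
            if self.queue:
                data = self.queue.popleft().ljust(PACKET_SIZE, b'\0')  # pad short msgs
            else:
                data = b'\0' * PACKET_SIZE                             # dummy packet
            self.transmit(data)

    if __name__ == "__main__":
        sender = MaskingSender(lambda pkt: print("wire sees", len(pkt), "bytes"))
        sender.submit(b"real secret message")
        for _ in range(3):           # observer sees identical packets either way
            sender.tick()
            time.sleep(TARGET_INTERVAL)

An observer sees only the programmed pattern; the contribution of S-DATM is
choosing and adjusting that pattern so less padding and delay are needed.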

The second paper was "Security Architecture-Based System Design" by Edward
A. Schneider, Institute for Defense Analyses. Defense Goal Security
Architecture (DGSA) is a little known paradigm. It can be applied to system
design. Based on a threat space it produces a protection profile and helps
with the selection of mechanisms. It is applicable beyond defense. It deals
with a mix of independent concerns from different information domains,
which partition the information space such that each domain has homogeneous
protection requirements. The domains include security associations, policy
specification/enforcement separations, and specific strength of mechanisms.
You don't have a piece of information in multiple domains. Transfers
between domains are limited by policy. There is a policy for each user that
is consistent throughout an entire domain. They are similar to, but
different from, protection domains. Changes don't affect polyinstantiated
copies between domains. A domain is a set of information objects, users,
and policy. You make the partitioning fine enough to cover all aspects that
need to be different. There was much discussion about how doing that might
impact functionality. An attendee said that the policy for releasability
may be independent of classification. It would need to be replicated for
each domain. The Secure Management Information Base represents the policy.
They have implementation experience with such concepts in Tmach and virtual
machines. But maybe you don't want a direct implementation. There is a
multi-dimension information space. The object dimension lies in the
intersection of the information domain, the network location, and the
application/manager. The application often has to interpret policy, and it
defines the threat space. An attendee pointed out that if you need meta
policies for applications that span domains, then you have a covert channel
problem. Thinking this way has you determine what are the pieces of
information you are trying to protect,  what type of protection you need,
and what partitions or walls are needed. First you need to take an
enterprise view of how to protect information, then think through its
distribution. The information (type), application, and end system define
the information space and determine which threats we're worried about. The
threat model might take into consideration location in the network,
application, risk tolerance, policy, and importance of the information. The
protection profile in the common criteria is a statement of requirements.
Organizational policy and threats lead to assumptions/objectives which
lead to requirements. The common criteria is oriented towards components;
there are composability issues. You can use the multi-dimension space to
move from the system to component requirements. When you have policies that
don't compose, you need to decide whether or not to share certain
information.
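
As a rough illustration of the partitioning idea (a hypothetical sketch, not
DGSA itself; the domain names, objects, and rules are invented): each
information object belongs to exactly one domain, and transfers between
domains are mediated by an explicit policy.

    # Hypothetical sketch of the information-domain idea: every object lives in
    # exactly one domain, and inter-domain transfers are allowed only when an
    # explicit transfer policy permits them.  Names and rules are illustrative.

    DOMAINS = {
        "medical": {"users": {"alice"}, "objects": {"chart17"}},
        "billing": {"users": {"bob"},   "objects": {"invoice9"}},
    }

    # Transfer policy: which (source, destination) domain pairs may exchange data.
    TRANSFER_POLICY = {("medical", "billing")}   # a one-way release, for example

    def domain_of(obj):
        # Partitioning means an object is found in exactly one domain.
        owners = [d for d, v in DOMAINS.items() if obj in v["objects"]]
        assert len(owners) == 1, "objects must not appear in multiple domains"
        return owners[0]

    def may_transfer(obj, dst_domain):
        return (domain_of(obj), dst_domain) in TRANSFER_POLICY

    print(may_transfer("chart17", "billing"))   # True: policy allows the release
    print(may_transfer("invoice9", "medical"))  # False: no rule for this pair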

Session 2 on Availability was chaired by Anil B. Somayaji.

The first paper in that session was "Survivability - A New Technical and
Business Perspective on Security" by Howard F. Lipson and David A. Fisher,
CERT Coordination Center, Software Engineering Institute. David presented.
In the technical and business context for security, confidentiality is seen
as most important. There are closed, physically isolated systems with a
well defined periphery and central control and administration. A complete
knowledge and visibility of the system is assumed, and insiders are
trusted. Systems run dependable software (correct, reliable, trusted) and
there is a bounded value of potential loss. In the technical and business
context for survivability, business risks are most important. There are
open, highly distributed systems with an unknown or ill defined periphery
under distributed control and administration. There is incomplete and
imprecise knowledge of the system with visibility limited to local
neighborhood. There are untrusted and unknown insiders, unknown software
components (COTS, Java applets, CGI's, etc.), and business competitiveness
and survival are at risk. Survivability is the ability of a system to
fulfill its mission, in a timely manner, in the presence of attacks,
failures, or accidents. The security premise states that it is always
possible to increase the cost of compromise beyond its value to an
intruder. It includes the notion of binary success and focuses on the
confidentiality of information. Integrity and accessibility are often also
of concern, but are typically viewed as technical issues. The criterion for
use of anything is industry standard practice. An attendee pointed out that in
Europe, dependability spans correlated and uncorrelated events. The
survivability premise is that individual components of a system cannot be
made immune to all attacks, accidents, and design errors. There are degrees
of success. It focuses on mission-specific risks. Mission objectives often
include information accessibility, integrity, or confidentiality. Mission
objectives/business risks are primary. The criterion for use of anything is
mission-specific risk management. An attendee pointed out that the two
premises are not comparable, as phrased. Another suggested the comparison would
be you can always do more vs. you can never do enough. The problem space is
unbounded systems, with an absence of central control (no one has
administrative control), an absence of global visibility (no one knows the
extent or participants), with mission objectives/a purpose of the system
(critical functionality and essential quality attributes) and emergent
properties (global system properties, often not present in individual
components). Examples include the Internet and national infrastructures
like the electric power generation and distribution grid, financial
systems, transportation systems, oil and gas distribution, social, economic
and biological systems, neural nets, and (non open source) COTS software. A
security/survivability architecture supports critical mission requirements,
functionality, and non functional global properties. They look for emergent
algorithms such as distributed computations that fulfill mission
requirements by exploiting the characteristics of unbounded systems.
Emergent algorithms produce emergent properties that are self stabilizing.
There is cooperation without coordination; all parts contribute wherever
they are needed. No individual part is essential. EASEL is an automated
tool they are working on for simulating unbounded systems.

The next paper was "Optimistic Security: A New Access Control Paradigm" by
Dean Povey, Queensland University of Technology, Australia. He took the zen
approach of doing slides by hand. He claimed it was liberating not to use
Powerpoint. Dean started with a scenario; there is a hospital, it is a dark
and stormy night, and the only access, by bridge, is down. The night staff
need access to information they don't have normal access to. Traditional
security disallows it, and the patient dies. Having the system aware of
changing circumstances is virtually impossible. There is a gap between what
organizations need and what access control mechanisms can enforce. Systems
don't understand context; they can't directly use a policy based on things
outside the system. There are two sets of actions; legitimate actions and
dangerous actions. The overlap between them is questionable actions. It is
sometimes legitimate to read another user's mail, but more often than not,
it isn't. With traditional security we assume unknown actions are
dangerous. With optimistic security we assume unknown actions are
legitimate. An attendee asked if this was the same as audit. Dean replied
that they want to be optimistic but still have some constraints. It's a fit
for purpose thing. You can explicitly state what actions are questionable.
There are requirements on these actions. They must be accountable,
auditable, and recoverable. A compensating transaction recovers the system. An
attendee pointed out that you can prohibit things that you haven't analyzed
to avoid complication. Another mentioned that in some systems, once you're
inside the firewall, it's wide open, to get the job done. This can tighten
things up in that situation. But you need audit in that case. The formal
model uses well formed transactions from Clark & Wilson. There are
certification rules, enforcement rules, and partially formed transactions.
He dropped the notion of CDIs (data with known properties), since he needed to
have a way to verify that the data is valid. An attendee pointed out that
the risk analysis and thresholds are important. This technique assumes a
large population of benign cooperating users. There was much discussion
about whether this could be applied to hostile automatons, such as content
filtering and sandboxing 'somewhat trusted' code and applying intrusion
detection strategies to the transition information. He concluded with:
systems can't always enforce security policies; optimistic security
provides a means to allow questionable actions while preserving integrity;
the model can be related to Clark & Wilson well-formed transactions; and
it has useful applications.
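
A minimal sketch of how the optimistic model might look in code (the helper
names are hypothetical, and this is not Povey's formal model): a questionable
action is allowed immediately, but it must leave an audit record and carry a
compensating transaction so the system can be recovered if later review finds
it illegitimate.

    # Illustrative sketch: a monitor that allows "questionable" actions
    # optimistically, provided each one is logged for accountability and paired
    # with a compensating transaction so integrity can be recovered if an audit
    # later rejects the action.
    import datetime

    AUDIT_LOG = []

    def optimistic_perform(user, action, do, compensate):
        """Run a questionable action now; remember how to undo it."""
        record = {"user": user, "action": action,
                  "when": datetime.datetime.now(), "compensate": compensate}
        AUDIT_LOG.append(record)          # accountability + auditability
        do()                              # optimistic: assume it is legitimate

    def audit(reject):
        """Later review: roll back any action the auditor rejects."""
        for record in AUDIT_LOG:
            if reject(record):
                record["compensate"]()    # recoverability via compensation

    # Example: night staff read a patient record they normally cannot see.
    optimistic_perform(
        "night_nurse", "read patient 42",
        do=lambda: print("record disclosed"),
        compensate=lambda: print("disclosure flagged and patient notified"))
    audit(reject=lambda r: False)         # review found the access legitimate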

The final session of the first day was a discussion, chaired by Marv
Schaefer.

It was based on the paper "Strike Back: Offensive Actions in Information
Warfare" by Donald J. Welch, Nathan Buchheit, and Anthony Ruocco, United
States Military Academy, West Point. Don presented. He pointed out they are
speaking as private citizens and do not represent the US army, DoD or
government. They are not advocating illegal actions. To the best of our
knowledge these ideas do not represent current policy. They state that the
policy should be reexamined. Information war has national security
implications. The information infrastructure is a vital national resource.
The US has fought to protect vital resources before (War of 1812, Boxer
Rebellion, Persian Gulf War). The Information Infrastructure is owned by
private corporations, many of which are multi-national. Who is responsible
for defending the national information infrastructure? They consider
cyberspace as the 4th dimension we can fight in. Next they asked, what is
information war? The DoD says it is information operations during war.
Denning says it involves every high schooler with a modem. Schwartau
categorizes it as criminal activity and up. The authors state that we are
presently fighting many information wars at many levels. An attendee asked
if they should be classed as acts of war. Don answered yes. Another
attendee drew the analogy to a war on disease like ebola, or a war on
drugs, indicating the same strategy could not work effectively in those
arenas. Another pointed out that an actual war declared by Congress
authorizes us to do several things, like use the military. Don pointed out
that we've fought a lot since the last time we actually declared war.
Someone asked what a treaty to end the war would look like. Don maintains
that if the information war is a war and we treat it as a criminal action,
we will lose. We cannot fight an information war today legally except in
very limited circumstances. Someone asked if we need to win or if a truce
is good enough. Don said that Saddam Hussein is still in power, George Bush
is out. He hopes we never have to fight a total war like WWII. Someone
asked who do we attack? We try to identify the proper adversary and take
steps to neutralize them. An attendee pointed out we need to worry about
automated agents. History shows that defensive wars are not winnable
(French at the beginning of WWII, Vietnam War). The adversaries found the
center of gravity and won. In defending through offense, we identify our
adversaries, identify our adversaries' center-of-gravity, and strike their
attack capabilities (example: Libya air-strike). Someone asked whether, if we
authorize formerly unauthorized actions, others might feel threatened, making
us a more or less likely target of an information war. Don said
that there is a similar situation in that some feel there is already an
open season on Arab groups. Someone asked if US citizens are enemies. They
could be, though that is not necessarily a good thing. The Melissa virus is
a smart munition. Who is the enemy who created it? Someone replied,
Microsoft. Don then presented an adversary taxonomy. There are individuals
(crackers, criminals), non-governmental organizations (organized crime,
terrorists, corporations), and governmental organizations (friendly
governments, enemy governments). Someone asked if we should commit military
forces to the Chaos Computer Club. Don suggests measured force. It was
pointed out that they tend to be members of friendly nations, so that would
be highly provocative. Don said that if  we took offensive action, we could
stop them before they get the autonomous agents out. If we had penetrated
their systems and had some idea of what they were doing, we could tell they
were going to launch a massive virus. It could become a huge resource
drain, but what's the alternative? Someone pointed out that Pakistani
attacks could be made to look as if they came from India, turning us against
India. Don then went on to
discuss the neglected principles of war - offensive, maneuver, security,
surprise, and economy of force. Offensive - seize, retain and exploit the
initiative. Maneuver - place the enemy in a position of disadvantage
through the flexible application of combat power. Security - never permit
the enemy to acquire unexpected advantage. Surprise - strike the enemy at a
time or place in a manner for which he is unprepared. Economy of force -
employ all combat power available in the most effective way possible;
allocate minimum essential combat power to secondary efforts. Participants
suggested that we might better use these resources to close our
vulnerabilities, and that like biological warfare, governments might pledge
not to engage in information warfare. Don said that a decentralized way to
fight such a war is needed. One attendee asked if we have the right to bear a
laptop. The conclusions to the presentation were: The best outcome of a
defensive war is a stalemate; Current policy and law do not support
offensive defense; Policy and law must change to win information wars; and
National security is at stake. Discussion at the end of the talk touched on
the issue that lack of accountability is a problem in the government, while
accountability is a classic necessity in security systems. One of the side
effects Don sees is
that to be effective the military has to know what's going on in all
systems; we'd have to give up some privacy. Someone suggested that we
should instead create a law saying that everyone has to have decent
security in their systems; another that we should legislate that critical
infrastructures can't rely on computers.

The first session Thursday morning, Caveat Emptor, was chaired by Cristina
Serban.

The first paper that day was "Security Architecture Development and its
Role in the Seat Management and Total Cost of Ownership Environments" by
Ronda R. Henning, Harris Corporation. As best she can tell right now,
security doesn't fit into the title issues. Total Cost of Ownership (TCO)
means how much money does a user cost? This includes desktop hardware,
software, network support, help desk, and people. Seat Management (SM) is
about how to keep the cost per seat low. Techniques includes running on a
single platform, standard software,  "rightsizing", and reengineering the
infrastructure for efficiency. As you drop the cost because of
standardizing, you decrease the probability of survivability enhancement.
Diversity goes down, there are fewer targets, and it takes less
intelligence to attack. Someone pointed out that if devices were more self
managing and configuring, diversity would not be such a drain. What's
missing from this picture? Security management  (virus scanning, firewall,
web monitoring). It's an unspecified component; out of scope with no
mapping of policy to architecture and mechanisms. Should you also cost out
security? What it's not is a one time benchmark, quantitative, or an
"outsourced" infrastructure. What it may be is a tradeoff between spending
more money and having more staff, the ability to use my network, and the
consolidation of services. How much is that security policy in the window?
They looked at having a  service level agreement (SLA), with contractual
accounting for "quality of security". There could be gradients of service
- assurance, frequency, responsiveness. Something like gold, silver, or
platinum practices that map back to an organization's policy. Policy begets
requirements which answer the question, how much security can you stand?
There are tradeoffs between cost, vulnerability, and countermeasures. They
decided to experiment and derive a trial SLA in 13 areas such as
documentation, testing, and recovery. Vendor review produced nothing
substantive, which meant the bar was too low, or they didn't understand it
(yet). The customer wanted "measurable stuff" like minutes, frequency, and
numbers. In discussion it was noted that everything here was process; there
is a feeling that good process will produce good product, but no evidence
that this happens. Certainly bad process gives bad product. It could be a
CYA thing from the point of view of a bureaucracy. These things
contribute to security but they aren't security. It's adversarial; war
isn't a science either. Ronda responded that the timeliness of security
incident reporting has potential, and that they're off trying to find other
numerical stuff. The next steps are trying to refine to numbers and do a
trial pilot. Someone pointed out as an analogy that health care doesn't
guarantee life, just a suite of treatments. This could be managed care
for security. SLAs could include intrusion detection, incident response,
tracking down the interlopers, some number of hours of service, and some
number of scans per year for clients like E-trade and Yahoo. Small
companies need insurance.

The next paper was a "Cursory Examination of Market Forces Driving the Use
of Protection Profiles" by Kenneth G. Olthoff, from Maryland, USA (he
refrained from officially giving his work affiliation, as his employers did
not want to be associated with this work in any way :). Ken started off
with a rousing preacher-like call to consider security, using the Orange
Book instead of the Good Book, and calling "Can I get an A1?" (instead of
an Amen). There was also an injunction to beware the Gates and Windows of
Hell. Getting down to the detailed content of his position, he restated the
assumption behind protection profiles: if the user writes a profile for
what he wants, a vendor will build the device with the security properties.
Market size dictates vendor eagerness to respond. As an example, he said
that Marv doesn't represent a large market, and John has a slightly larger
market. Is John's profile close enough for Marv? What portions of the
environment are left uncovered? Everyone is taking some amount of grief to
get something. Someone suggested it was demand side economics. The vendor
gets a few standards, and the testers like standardization. Espousers of
the common criteria think it will happen differently. Several other aspects
of commercial models then came up, such as that a steady revenue that grows
a company can produce COTS, or that a vendor tries to meet several markets
and the customer evaluates. Someone pointed out that the market and DoD
have gotten behind NT and Solaris, which would be hard to overcome. If NT
is not evaluated to a profile, who wins? Ken is questioning the existing
paradigm of Common Criteria (CC) economics; "if you profile it, they will
build it".  Attendees pointed out that many users don't know their security
needs, that they may say what they want as opposed to what they need, and
that there is an opportunity for open source. One person wrote a protection
profile. They looked through the CC and asked how each bullet affected
them. This was helpful, as they hadn't even thought about traffic analysis,
for example. Then a lot of time was wasted in writing it up in CC (though
the writing in English was useful). None of the vendors want to read the
CC-eze. Someone commented that it's a way to facilitate customers and
vendors getting together. Why won't the market just work out using the
useful bits? You can always try to create demand by, say, releasing an
appropriate virus. The Orange Book was the result of 3 years of criticism
and 10 years of work. Protection profile turnaround time is 1 year. There's
a question of whether it's a meaningful process. The CC decouples mechanism
from assurance. European evaluations were cheaper and faster than those for
the OB. There was an economic motivation. Some people were not in a high
threat environment and wanted to spend less on evaluations. The ISO 9000
drivers are the required compliance of large vendors, and they need
compliance from their suppliers. The victims must enforce. There's also a
European government mandate; it is needed to sell to the government. The
DoD tried to do something similar in the US, and it didn't work. It
violated its own rule; only Multics was B2 so the rule was determined to be
non-competitive; the first procurement was deemed illegal by the
authorities. Companies may find a large market group to write/adopt a
profile that matches their products. What's the useful half life of a
profile? Is it really that much different from vendor declarations?

The next session, Authorization, was chaired by Cathy Meadows.

The first paper in that session was "Paradigm Shifts in Protocol Analysis"
by Susan Pancho, University of Cambridge. Her subtitle was, "Needham and
Schroeder again?". N&S was published in 1978 and is used as an example in
many discussions. There are wide variations in the conclusions of analyses.
They have found weaknesses not detected earlier. Why were there gaps
earlier? So, her main question is, Why are there wide variations in the
results of different analyses of the Needham-Schroeder protocols? Maybe we
have better, newer tools now? In today's situation, authentication
protocols are believed to be error-prone. What if it's a gap in the
analysis? Her conclusion is that differences in modeling the protocol led
to differences in analyses. It led to different "flaws" or claims of
attacks. There were new weaknesses from different interpretations and
environments. The question is whether we can find all flaws in all
protocols if we build better tools. We interpret protocols to produce a
model. We may forget the initial goals and assumptions. For example,
Denning and Sacco state "If communication keys and private keys are never
compromised (as Needham and Schroeder assume), the protocol is secure (i.e.
can be used to establish a secure channel)." They go on to say "We will
show that the protocol is not secure when communication keys are
compromised, and propose a solution using timestamps." In Lowe's New
Attack, he states "There is a well known attack upon the full protocol
[DS81]... The attack we consider in this paper is newer, and more subtle
... we show that it fails to ensure authentication... (impersonation by
participant)." A sends to I; I pretends to be A to B. Someone joked that
this could be a feature - delegation. If you just wanted to find out if A
is alive, this is not considered an attack. In Lowe's model, an intruder
can be a protocol participant. What if someone in the protocol misbehaves?
N&S model was trusted participants on a wicked network. Lowe adopted a new
computer security paradigm; we don't trust each other anymore.  N&S
explicitly state they  "provide authentication services to principals that
choose to communicate securely". Someone noted that we need to look at the
original sources to see if it's an attack on the assumptions, or the
protocol. It's like stating p implies q, and you've got not p. Someone else
stated that Newton said "I don't do hypotheses," except in Latin, so it
sounded cooler. How do we capture the intent of the developer, to
categorize misuse or abuse rather than failure? An example is the attack on
smart cards using the oven and microwave. Stating the limits indicates
forms of attacks. Stating the warranty gives a toehold to attackers.
Someone related a tale where a tiger team used fire safety to defeat
security; they pushed the system outside of its normal mode of operation.
Pushing the threshold gives you the attack. For example, retransmissions
can give known plaintext. Susan went on to consider Meadows' analysis. An
attacker could be a protocol participant (like Lowe's assumption), but it
also considered type confusion (unlike Lowe and others). It was a
modification again of the protocol environment, and a new attack was
discovered. A nonce should not be revealed in the run. A treated a nonce as
a name, and sent it in the clear. She relaxed the assumption of how smart
the protocol implementer is. It was hard to tell the difference between a
message with a nonce and a name and one with two nonces. In summary, the
difference in modeling the protocol could reflect paradigm shifts in the
security community. Concluding discussion noted that one can expect
vulnerabilities in any protocol outside its original environment. One
question is, can you find all your assumptions? That's the difficult part.
Vulnerabilities change in magnitude as experience changes. Have you found a
class of attacks or found a flaw in the protocol? Many protocols rely on
unprovable assumptions. We have the deductive system, but aren't sure the
axioms are valid.
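
For readers who have not seen it, Lowe's man-in-the-middle run on the
Needham-Schroeder public-key protocol can be sketched as below. The attack
trace itself is standard; the Python framing and symbolic "encryption" are
just illustration. I is a legitimate participant that A chooses to talk to,
and I reuses A's messages to impersonate A to B.

    # Sketch of Lowe's attack, with encryption modelled symbolically.
    def enc(key, payload):           # {payload}key, symbolically
        return ("enc", key, payload)

    Na, Nb = "Na", "Nb"              # nonces chosen by A and B

    # 1. A -> I : {Na, A}pk(I)          A starts a normal run with I
    msg1 = enc("pk(I)", (Na, "A"))
    # 2. I(A) -> B : {Na, A}pk(B)       I re-encrypts and claims to be A
    msg2 = enc("pk(B)", (Na, "A"))
    # 3. B -> I(A) : {Na, Nb}pk(A)      B answers "A", so only A can read it
    msg3 = enc("pk(A)", (Na, Nb))
    # 4. I -> A : {Na, Nb}pk(A)         I forwards B's challenge in its own run
    msg4 = msg3
    # 5. A -> I : {Nb}pk(I)             A decrypts Nb for I, thinking it is I's
    msg5 = enc("pk(I)", Nb)
    # 6. I(A) -> B : {Nb}pk(B)          B now believes it has been talking to A
    msg6 = enc("pk(B)", Nb)

    print("B accepts a run apparently with A, but the peer was I:", msg6)

Under N&S's stated assumption of trusted participants on a wicked network,
this trace is not a violation; under Lowe's model, where an insider may
misbehave, it is.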

The next paper was "Secure Group Management in Large Distributed Systems"
by J. Bret Michael, Naval Postgraduate School and John McHugh, CERT
Coordination Center. Bret and John co-presented. They asked, What is a
Group and What Does It Do? A "Darpian" view of the environment of the
future for secure group management includes large groups (on the order of
100,000s of members) whose memberships both change continuously and morph
rapidly, many-to-many relationships regarding communication among members of
groups, and
reliance, in part, on cryptography for authentication and access control.
Which new security paradigm should be explored? The authors suggest,
probably none. Are we (and DARPA) searching for a technology based solution
to a problem of a social nature? They could not think of any examples from
today's communication structure. Examples of few to many and many to few
may be composed. They are looking for examples where all messages are
delivered to all participants, it is equally likely that any member sends,
information should be retained in the group, and there are immediate
changes with membership changes. They know of no examples where secrets are
kept by thousands of  members of a group. Automatons, yes, but not people.
Group membership does not change as rapidly and continuously as imagined.
There are dynamic coalitions. A DARPA white paper suggests large dynamic
group management as an area of research. Retention of confidentiality to
the current group is required as well. A suggested example was an auction,
but the auctioneer responds to the many. In the minefield with intelligent
agents, mines broadcast status. The mines are a centralized few. Someone
pointed out that in publish/subscribe systems where participants are
interested in messages matching a pattern there is very dynamic membership.
What is the risk if someone gets a message if they're already cleared to
self subscribe for anything? Someone else suggested troops with intelligent
weapons, full sensors sending information; full perfect information. In the
military, information flow is based on the chain of command. Information is
disseminated few to few. The sensors could know who your friends are and
display them. In which case you can forget about confidentiality; you want
to ensure availability. The hard problem is figuring out the policy for a
dynamic coalition. In this case, you only want to know about local events.
If you fall into enemy hands, you want to be removed from the group. It could
also be useful for anarchists. Distributed network topology applications
break down the problem to few-to-many or many-to-few. Routing is done by
neighbors. In IP multicast there is dynamic binding, membership is not
persistent, there is authentication of hardware address and group address,
and they maintain access/membership list at routers.  In ATM, each switch
has a list of peer groups and peer group members. Group management by key
change may be forcing a solution. In the maneuver warfare doctrine, you
want to move faster and better, not more secretly. But still try and
minimize the information the enemy gains. There can be a trade off with
getting information to your own forces, and they can be overwhelmed with
information. Decisions are made in crisis using partial information; it's
most important to know the accuracy of what you know. Maybe they need local
aggregators. Knowing everything all the time may just boggle the mind. You
could optimize the protocol for local information in the many-to-many case.
What would you do with these semantics if you had them?

The next session, Policy, was chaired by Erland Jonsson.

The first paper of that session was "SASI Enforcement of Security Policies:
A Retrospective" by Ulfar Erlingsson and Fred B. Schneider, Cornell
University. Ulfar presented. Reference monitors (RMs) monitor execution to
prevent bad behavior. They are able to enforce many interesting policies.
Other mechanisms help to implement RMs. They can capture policy-relevant
events and protect the RM from subversion. They range from kernel supported
to interpreter to modified application. SASI stands for Security Automata
SFI (Software Fault Isolation) Implementation. It implements RMs by program
modification by generalizing SFI. SFI guarantees hardware like memory
protection. Several issues are raised. Does the application behave the same
(not if it is turned into the stop program)? You can only input programs
generated with a high level language so that changes are invisible. You can
also write one on purpose that behaves differently. An attendee pointed out
you want the malicious ones to behave differently. Ulfar says you want to
truncate their behavior. Another issue is, can the application subvert the
inserted RM? There are advantages to SASI enforcement. The kernel is
unaware of the security enforcement. There is no enforcement overhead from
context switches. Enforcement overhead is determined by policy. Each
application can have a customized policy. It can enforce policies on
application abstractions (e.g., restrict MSWord macros and documents). An
attendee pointed out there is still a composition problem with nested
objects, RMs, and policies. SASI policies are security automata, a notation
for specifying security policies that can specify any Execution
Monitoring enforceable policy. They are easy to write, analyze, emulate,
and compile. The language is a textual version of diagrams. SASI halts a
program when it fails. Liveness and information flow properties are not
covered. It can enforce any safety property, which is what we've been doing
with RMs in the past. After modifying the application, SASI checks its
policy before every machine instruction. It can enforce traditional RM
policies, restrict access to system abstractions (filesystem), enforce
memory protection within an address space (SFI), disallow division by zero,
and enforce application-specific policies like "no network sends after a
file system read" and "don't allow too many open Javascript windows" and
"MSword macros may only modify their own document".  There was a question
about who would write these policies. Someone pointed out that checking for
division by zero could be more expensive this way if you otherwise have a
hardware trap. Your policy also has to prevent circumvention. Some noted
that colluding objects might be able to remove each others' policies.
Checking at every machine instruction is slow and often there is no need to
check at an instruction. For example, if you have a policy that forbids
divide by zero, you only need to check before divide instructions. SASI
simplifies checks by partial evaluation. They have prototype SASI
implementations on X86 and JVML. You input a SASI security policy and
target application, and it outputs an application modified according to the
policy. X86 modifies gcc assembly applications. You can use X86 SASI to
enforce SFI, which in turn can protect the RM. JVML SASI modifies JVML
class files. They reimplemented the JDK 1.1 security manager. This provided
a more flexible implementation and produced runtime performance the same or
better. The JVML verifier provides safety (can't subvert inserted RM) and
knowledge. "Higher-level" JVML semantics identify accesses to methods and
data and make it easier to restrict operations like "network send".  An
attendee pointed out you can use it to retrofit Netscape to implement the
Sun JVM policy. It can support stack inspection in JVMs that don't support
it yet. The Security Automaton Language state notation is not expressive
enough. It must encode all security-relevant information. It is not enough
to check machine instructions; applications use abstractions (methods). An
attendee pointed out this relies on a lower level RM: the JVM and JVML. Another
mentioned that Gypsy dealt with language based guarantees.
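
A toy sketch of the security-automaton idea, with hypothetical event names:
the policy "no network sends after a file system read" is a two-state
automaton, and a check runs before each security-relevant operation. A real
SASI implementation inserts such checks by rewriting the application's
instructions and uses partial evaluation to drop the checks it can prove
unnecessary; this sketch only labels the events by hand.

    # Minimal sketch in the SASI spirit: a reference monitor as a two-state
    # security automaton that halts the program on a policy violation.
    class SecurityAutomaton:
        def __init__(self):
            self.state = "clean"            # no file read seen yet

        def step(self, event):
            if event == "file_read":
                self.state = "tainted"
            elif event == "net_send" and self.state == "tainted":
                raise SystemExit("policy violation: send after file read")
            # all other events are allowed and leave the state unchanged

    rm = SecurityAutomaton()

    def guarded(event, action):
        rm.step(event)                      # inserted check before the operation
        return action()

    guarded("net_send", lambda: print("send ok"))       # allowed: nothing read yet
    guarded("file_read", lambda: print("read ok"))      # automaton becomes "tainted"
    guarded("net_send", lambda: print("never reached")) # halted by the automaton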

The final paper of the day was "Security Modeling in the Commercial Off The
Shelf Environment" by Tom Markham, Secure Computing Corporation, Mary Denz,
Air Force Research Laboratory, and Dwight Colby, Secure Computing
Corporation. Tom presented. We build distributed systems today by test and
patch; that wouldn't work for bridges. Someone mentioned we also have
bridge inspectors. In the past, we would design and validate the security
first, then develop the software, then field it. We want to specify inputs
and outputs and policies, get COTS hardware and software, and build it.
Electrical engineering students have training for design and component
composition. An attendee pointed out that EE students are not usually asked
to provide properties that no one knows how to define or implement. Tom
wants theory and models to allow computer science students to build COTs
systems in a week. Today, the adversary is getting smarter, components are
purchased, systems change frequently, and there is a lack of tools for
expressing distributed systems security. What would 98% integrity under
some conditions sound like? Someone pointed out that the EE students have
appropriate properties of the components. Tom stated that we have created
empirical models for the human body. Someone commented that we need to
reduce software evolution to a biological pace. Tom said there is perpetual
change; tactical reconfigurations and new strategic capabilities. Attackers
and vendors are making changes. We can take a game view, considering
covert/overt vs. chance/strategy. Modeling in COTS is something like poker.
You can have some strategy, but you use the cards you're dealt. Comments on
this included that you can assert what you have and see if the other side
folds, that with poker you at least know what's in your hand, and that all
the betting is in the ante. Engineering models reduce the system to the
fundamentals. Theory destroys facts; the periodic table replaces volumes of
alchemy books. What are the fundamental elements of a security engineering
model? Tom suggests confidentiality, availability, and integrity. A layered
security model is a useful tool. It should be scaleable, flexible, and
heterogeneous. It needs to deal with the mission, humans, applications,
middleware, OS services, node hardware, the environment, delivery,
development, network services, and data transport, with relationships
between applications and services. The file system might require storage
integrity from the hard disk. Layers are connected by the confidentiality,
integrity, and availability services. What about non-repudiation? Should it
be a base service? Someone commented that revocation involves jurisdiction,
revocation policies, and so on. It is not technical, it is legal with a
technical underpinning. Someone else noted that there is a component of
time under all this, and place (like a notary in France). Someone asked,
What are the attributes of something to make it fundamental? Perhaps that
it's measurable and has composition laws. The act of making assumptions
introduces a vulnerability, but it's necessary for a model. It's hard
to measure likelihoods of vulnerabilities. In a nuclear power plant, you
can plan for sequences of correlated events with probabilities of failure
modes to get reliability; you can model both independent and dependent
probabilities. We can measure existing systems; we don't even have the
facts for the theory to subsume. Comparing individual systems may be OK.
Value, loss and time are possible fundamentals. Tom wants to create
equations to talk about file system security. Failures disrupt service, and
they are propagated. Failures can be inserted for known vulnerabilities and
probable vulnerabilities to perform worst case analysis. Someone commented
that this treats the CIA as a boolean. It could be throwing good bits after
bad; you may know something about one application, but no others. There is
no monotonicity property, at least in any way we understand. What worked
depended on history and specifics. No two NT boxes are configured
identically. You get a security argument instead of a security metric.
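
To make the propagation idea concrete, a hypothetical sketch (the component
names and dependencies are mine, not the authors' model): inject a failure at
one component of a layered system and compute everything whose service is
disrupted, treating each property as boolean for worst-case analysis, as the
attendee's comment notes.

    # Hypothetical sketch of the layered-model idea: components rely on the
    # confidentiality/integrity/availability services of layers below them, and
    # an inserted failure propagates to everything that depends on it.

    DEPENDS_ON = {                       # component -> components it relies on
        "application": ["middleware"],
        "middleware":  ["os_services"],
        "os_services": ["file_system", "network"],
        "file_system": ["hard_disk"],    # e.g. storage integrity from the disk
    }

    def affected_by(failed):
        """Return every component whose service is disrupted if `failed` fails."""
        hit = {failed}
        changed = True
        while changed:
            changed = False
            for comp, deps in DEPENDS_ON.items():
                if comp not in hit and any(d in hit for d in deps):
                    hit.add(comp)
                    changed = True
        return hit

    print(sorted(affected_by("hard_disk")))
    # ['application', 'file_system', 'hard_disk', 'middleware', 'os_services']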

The final session, Integrity, was chaired by John Michael Williams.

The first paper of the session was "On the Functional Relation between
Security and Dependability Impairments" by Lars Stromberg, Erland Jonsson,
Stefan Lindskog, Chalmers University of Technology. Lars presented. They
are trying to unify the concepts of security and dependability. This can
help get security addressed earlier in program development. Dependability
is reliability, availability, safety, and security. Security is
confidentiality, integrity, and availability. Introducing threat into an
object system is an environmental influence. They differentiate between
delivery of service to a user (authorized) and non-user (unauthorized).
There is delivery of service to a user; denial of service to a non-user.
Someone commented that an attribute of the object system is to protect
itself from the environment. There is integrity protection for the
environment, correctness of the object system, and trustability in the
resulting behavior (output as anticipated or specified). Faults in the
environment may occur. The object system is exposed to a threat which could
exploit a vulnerability in the system, which could introduce an external
fault. This causes an error state in the system. Someone called it a
correctness failure. Someone else remarked that in one ISO standard
reference, you can find 4 different definitions of error. Radiation in the
environment could cause a bit error, resulting in a crash or unencrypted
message. A writeable .login file would be a vulnerability, the attacker is
a threat, and the reliability failure is files being deleted. Someone
suggested that you never view users as human beings; there's always a proxy.
Internal system states are correct/error and non-vulnerable/vulnerable.
Non-vulnerable is not very common; it's something to aim for. External
states are correct/failure. A threat, an attack, creates an internal fault
or breach that causes the system to go from a correct to an error state.
The error state may take a system state from correct to failed with a
failure. A fault is an event leading to an error or vulnerability. A threat
is an environmental subsystem that can possibly introduce a fault in the
system. A vulnerability is a place where it is possible to introduce a
fault. An error is a system state that may lead to a system failure during
normal system operation. A failure is an event at which a deviation first
occurs between the service delivered and the expected service. Someone
pointed out that the expected can be different from the specified service.
In the future, they would like to have a hierarchical model, where a
failure in one sense is a fault in a larger sense. Also, they want to model
gradual intrusions, which exploit a vulnerability, introduce something
else, and the system eventually fails (perhaps gradually, slowing down).
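
One way to picture the chain of definitions (an illustrative sketch using the
.login example from the talk, not the authors' formalism): internal state is
a pair (correct-or-error, vulnerable-or-not), external state is correct or
failed, and a threat exploiting a vulnerability introduces a fault whose
resulting error may later surface as a failure at the service interface.

    # Hypothetical sketch of the impairment chain described above.
    class ObjectSystem:
        def __init__(self):
            self.vulnerable = True      # e.g. a world-writeable .login file
            self.error = False          # internal state starts correct
            self.failed = False         # external state starts correct

        def attack(self):               # a threat introduces a fault if it can
            if self.vulnerable:
                self.error = True       # fault -> internal error state (latent)

        def deliver_service(self):      # the error may surface as a failure
            if self.error:
                self.failed = True
            return "expected service" if not self.failed else "deviating service"

    system = ObjectSystem()
    system.attack()
    print(system.deliver_service())     # "deviating service": a failure occurred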

The final paper of the workshop was "Securing Information Transmission by
Redundancy" by Jun Li, Peter Reiher, Gerald Popek, University of
California, Los Angeles. Jun presented. Interruption threats are hard to
counter. Redundant transmission makes interruptions harder. But redundant
transmission is not as easy as using redundancy in other systems. There is
path interruption and data interruption. This includes link overload and
dropping stuff. An encrypted message can still be interrupted. The
acknowledgment itself is also subject to interruption. Retransmission means
possibly failing again. They suggest using redundancy. Don't use a
single path. Any point on a single path is a point of failure. Only
parallel redundancy is considered here. You are successful if at least one
copy of the message is received. Redundancy is used in other areas, like
highly available storage. Transmission redundancy is not easy because
discovering disjoint paths is difficult. Routing is transparent to
applications. Disjoint paths may not exist at all. You can try to be as
disjoint as possible. An attacker has to find a choke point or break
multiple points. Attendees wondered if this was related to existing QoS
work. Someone discussed a story where a company had leased from MCI and
Sprint to gain redundancy, but Sprint had leased a line from MCI. Another
stated that information hiding is considered harmful. Another pointed out
that arguing that the specification of a channel should include path or
path disjointness information is entirely a political problem. We have lost
the ability to ask the provider for that capability. Another noted that an
attacker can cut some lines and force the message to a particular backup
line. The authors are interested in redundancy for large scale situations.
High scale adds further problems. The structure for redundant transmission
can only be built in a distributed fashion. The routing infrastructure
itself is not secure enough. A router on one path can be fooled by a router
on another path. There is a tradeoff between resource usage and redundancy
degree. Each node may have different requirements on security assurance,
different transmission characteristics, different platform, and so on.
Their work in this area is called Revere. Their goal is to disseminate
security updates to a large number of machines. They assume a trusted
dissemination center. The security updates are small size but critical
information, like a new virus signature, a new intrusion detection
signature, a CRL, or the offending characteristics for a firewall to
monitor. Revere cannot use an acknowledgement or negative acknowledgement.
It employs redundancy to have multiple copies sent to each  node. Each node
can also forward security updates to others. A node can contact multiple
repository nodes for missed updates. You may receive multiple copies, some
authentic, some not. You can forward them to other nodes. They assume that
over 70% of the nodes are OK. Someone pointed out the similarity with the
Byzantine general's problem. Each node maintains several path vectors,
which are an ordered list of nodes to go through to reach this node from
the center, and a latency. When a new node joins, it contacts several existing
nodes, and compares the path vectors to find parent nodes. Efficiency and
resiliency are the two most significant factors. They maintain the
structure with heartbeat messages. Not all nodes can communicate with each
other. The joining algorithm can detect cycles. There was some discussion
of how to evaluate the resiliency of the structure, including the minimum
number of edges to cut to interrupt communications. It is more resilient if
you have more disjoint paths, and more efficient if you have lower latency.
Maybe you can eliminate the least performant redundant path. Future
research includes developing a further understanding of large scale
redundancy, the security of the transmission structure itself, theoretical
aspects such as resiliency, and deployment in the Internet.
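
A small sketch of what the join-time parent selection might look like, under
assumptions of mine rather than the authors' published algorithm: candidate
parents advertise path vectors (a latency plus the ordered node list from the
center), and the joining node favors low latency for efficiency while
preferring paths whose interior hops are disjoint, since disjoint paths from
the center improve resiliency.

    # Illustrative sketch of a Revere-style join step; details are assumptions.
    def choose_parents(candidates, want=2):
        """candidates: {node_name: (latency, path_from_center_to_node)}"""
        chosen, used = [], set()
        # Consider lower-latency candidates first (efficiency).
        for name, (latency, path) in sorted(candidates.items(),
                                            key=lambda kv: kv[1][0]):
            interior = set(path[1:-1])       # intermediate hops only
            if interior & used:              # overlaps an already chosen path:
                continue                     # skip it to keep paths disjoint
            chosen.append(name)
            used |= interior
            if len(chosen) == want:
                break
        return chosen

    candidates = {
        "n1": (20, ["center", "a", "n1"]),
        "n2": (25, ["center", "a", "b", "n2"]),  # shares "a" with n1: not disjoint
        "n3": (40, ["center", "c", "n3"]),       # disjoint, but higher latency
    }
    print(choose_parents(candidates))            # ['n1', 'n3']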