Review of the
New Security Paradigms Workshop,
Schloss Dagstuhl, Germany
September 19-22, 2006

Review by Carol Taylor
1/14/2006

The New Security Paradigms Workshop was started in 1992 with the intent of examining new paradigms in security. The idea was to offer a venue where promising research that might not be accepted elsewhere could be examined critically. Consequently, acceptance at NSPW requires justification from the authors attesting to the novelty of the research or idea being presented. The NSPW program committee screens papers based on the newness of the proposed paradigm, novel approaches to existing problems, or topics that might generate controversy which other conferences would be unlikely to accept.

NSPW has a unique format with highly interactive, lively discussion of each topic. For each accepted paper, at least one author is required to attend the entire multi-day workshop. Total attendance is limited to about 30 participants; only authors and the workshop organizers are invited, along with an occasional sponsor attendee.

The workshop presentations and proceedings differ in format from those of most other workshops and conferences. Authors are expected to prepare a 20-minute summary of their work, with over an hour reserved for discussion and participation. All attendees can comment on the work presented and question the author. Authors respond, and all the feedback is captured in notes that are later given to the author. Authors are encouraged to incorporate the feedback into the final copy of their work, which is included in the NSPW proceedings. The intent of this design is to encourage authors to consider the feedback and thereby improve their original ideas prior to final publication. It allows authors to incorporate feedback from many more viewpoints than the typical three peer reviews of other conferences.

NSPW's registration fee includes all room and board, so participants spend time together at meals and can continue discussions and socialize without worrying about travel. This provides a wonderful opportunity to form more lasting connections with colleagues.

This year I was the Vice Chair of the workshop, which was held September 19th-22nd at a truly beautiful site, Schloss Dagstuhl, in Germany. Below I present summaries of the papers given by several of the workshop participants. Researchers who believe they have new paradigms in computer security, or answers to old ones, are encouraged to submit their papers to future New Security Paradigms Workshops. Next year the workshop will be held on the eastern side of the US.

-- Carol Taylor NSPW 2006 Vice Chair

Title: Hitting Spyware Where it Hurts
By Richard Ford and Sarah Gordon, Florida Institute of Technology and Symantec

The authors attempt to address the spyware problem by defining a cost-based approach to reduce the effectiveness of spyware and adware. They develop a method for reducing the return on investment for adware owners, along with an attack aimed at disrupting those owners' earnings by sending many fake requests.

It remains unknown how many hosts would be required to actually increase the risk to adware maintainers. One participant suggested a biological analogy: the eradication of the Mexican screwworm in the 1970s, accomplished by releasing sterile males that competed with the fertile male population. A very large number of sterile males was required to neutralize the fertile population, suggesting by analogy that any attack network used here would also have to be disproportionately large.
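
As a rough illustration of the cost-based argument (a hypothetical sketch, not the authors' implementation), the Python snippet below models an adware operator's return on investment and shows how mixing in fake requests dilutes it; the payout and cost figures are made-up assumptions.

    # Hypothetical sketch of the cost-based argument: revenue per request falls
    # as fake (protest) requests are mixed in, on the assumption that the ad
    # network eventually discounts payouts for traffic that does not convert.

    def adware_roi(genuine_requests, fake_requests,
                   payout_per_request=0.02, cost_per_request=0.005):
        """Return on investment for an adware operator under traffic dilution."""
        total = genuine_requests + fake_requests
        genuine_fraction = genuine_requests / total
        revenue = total * payout_per_request * genuine_fraction  # discounted payout
        cost = total * cost_per_request                          # serving/bandwidth cost
        return (revenue - cost) / cost

    if __name__ == "__main__":
        for fake in (0, 10_000, 100_000, 1_000_000):
            print(f"fake requests {fake:>9}: ROI = {adware_roi(100_000, fake):.2f}")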

Title: Dark Application Communities
By Michael Locasto, Angelos Stavrou, Angelos Keromytis, Columbia University

This paper presented an idea called a "Dark Application Community" (DAC). A DAC is a botnet that forwards crash reports and other state disruptions to the bot maintainer. By harvesting stack traces and similar state-disruption information from normal use, the maintainer can learn about potential new vulnerabilities and threats, which can then be used to generate exploit code.

One question asked at the workshop was whether this approach would be more productive than "fuzzing" or other diversity techniques. Suggested experiments included comparing bug discovery rates from auto-updated open source tools such as Firefox or Adium.
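
The collection step the paper envisions resembles ordinary crash-report triage. The sketch below (hypothetical crash records, not the authors' system) buckets reports by faulting function and signal; recurring buckets would point a DAC maintainer at code paths worth probing for exploitability.

    from collections import Counter

    # Hypothetical crash records forwarded by bots: (faulting_function, signal).
    crash_reports = [
        ("parse_header", "SIGSEGV"),
        ("parse_header", "SIGSEGV"),
        ("render_image", "SIGABRT"),
        ("parse_header", "SIGSEGV"),
    ]

    # Bucket identical crashes; the most frequent buckets mark candidate
    # vulnerabilities to investigate further.
    buckets = Counter(crash_reports)
    for (func, sig), count in buckets.most_common():
        print(f"{func} ({sig}): {count} reports")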

Title: Challenging the Anomaly Detection Paradigm
By Carrie Gates (CA Labs) and Carol Taylor (University of Idaho)

This paper didn't so much present a new paradigm as critique an old one: anomaly detection, with a focus on its application to networks. Questions were raised about the rarity and hostility of anomalies, problems with training data, and operational requirements.

Dorothy Denning's original paper was cited as the inspiration for this work. Denning developed her anomaly theory under circumstances completely different from those in effect today: today's networked systems are more complex, more prone to benign traffic abnormalities, and less manageable than those of Denning's time. Questioning the anomaly detection paradigm should lead to better research with more clearly stated assumptions.

Title: Inconsistency in Deception for Defense
By Vicentiu Neagoe, Matt Bishop, UC Davis

This paper examined whether deceptive mechanisms presented by servers and systems need to maintain consistent false views to fool attackers. The authors examined the nature of inconsistency in system responses and actions. The deception model divides commands into two categories: ones that alter system state and ones that provide information about system state.

Participants brought up multilevel secure (MLS) systems, where commands are not supposed to provide feedback to users; it was unclear how such systems would affect the model.
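
A toy sketch of the two command categories (my own illustration using assumed command names, not the authors' model) appears below: state-altering commands are silently absorbed, while state-reporting commands receive fabricated answers that can be served consistently (the lie is cached) or inconsistently (the lie is re-rolled each time).

    import random

    STATE_ALTERING = {"rm", "touch", "chmod", "kill"}        # change system state
    STATE_REPORTING = {"ls", "ps", "whoami", "uname", "df"}  # report system state

    class DeceptiveResponder:
        def __init__(self, consistent=True):
            self.consistent = consistent
            self._cached_lies = {}

        def respond(self, command):
            if command in STATE_ALTERING:
                return "ok"  # pretend success without changing real state
            if command in STATE_REPORTING:
                if self.consistent and command in self._cached_lies:
                    return self._cached_lies[command]
                lie = f"fake-{command}-output-{random.randint(1, 5)}"
                self._cached_lies[command] = lie
                return lie
            return "command not found"

    responder = DeceptiveResponder(consistent=False)
    print(responder.respond("df"))
    print(responder.respond("df"))  # may contradict the first answer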

Title: A Model of Data Sanitization
By Rick Crawford, Matt Bishop, Bhume Bhuiratana, Lisa Clark, Karl Levitt, UC Davis

The paper described a model of sanitization in the form of an inference game involving a sanitizer, an analyst, and an adversary. Success in the game requires the sanitizer to transform the data so that the analyst can obtain the desired information without the adversary obtaining any private information.

Questions were raised about actively tampering with the dataset before any sanitized information is released. Such attacks could include salting the data beforehand, or using sanitization as cover to announce something known privately.
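
A minimal sketch of the tension in the game (my own example with a hypothetical log and key, not the paper's formal model): the sanitizer keyed-hashes each full IP address but leaves the /24 prefix in the clear, so an analyst can still count events per subnet while the host identity stays hidden.

    import hashlib
    from collections import Counter

    SECRET_KEY = b"sanitizer-secret"   # hypothetical key held only by the sanitizer

    def sanitize(ip):
        prefix = ".".join(ip.split(".")[:3])                       # keep the /24 for analysis
        digest = hashlib.sha256(SECRET_KEY + ip.encode()).hexdigest()[:8]
        return f"{prefix}.x/{digest}"                              # hide the host octet

    raw_log = ["10.0.1.5", "10.0.1.9", "10.0.2.7", "10.0.1.5"]
    sanitized = [sanitize(ip) for ip in raw_log]

    # The analyst's query still works on the sanitized data.
    per_subnet = Counter(s.split("/")[0] for s in sanitized)
    print(per_subnet)   # Counter({'10.0.1.x': 3, '10.0.2.x': 1})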

Title: Panel: Control vs. Patrol: A new paradigm for network monitoring
Panelists: John McHugh (Dalhousie University), Fernando Carvalho-Rodrigues (NATO), David Townshed (University of New Brunswick)

This panel proposed an independent network monitoring authority operating to ensure network integrity. Panelists contrasted their concept of patrol with more traditional discussions of network monitoring. An analogy was drawn to highway patrols: where a person drives in public spaces is their own business, but the fact that they were present is publicly accessible knowledge.

The panel discussion focused on two areas: the logistics of such a patrol mechanism, and the role and implicit privacy of users. On the logistics side, fundamental questions were posed about what the patrol would observe and collect. Some patrol functions already exist, but developing a large-scale patrol would involve aggregating and analyzing large volumes of data, plus deciding what classes of problems the patrol would address. Returning to the highway privacy analogy, there was debate about the role of privacy on-line, recognizing that a user's perception of privacy is contextual and possibly unrelated to the facts.

Title: Large Scale Collection and Sanitization of Security Data
By Phil Porras (SRI), Vitaly Shmatikov (UT Austin)

Porras and Shmatikov's paper looked at existing research challenges in data collection and sanitization for security research. The authors suggested that security research lacks empirical work because there are so few data sets. Though public data sets are slowly being released, data sanitization is still not handled well.

Participants discussed the problems with sanitization specifically in the context of empirical research. Suggestions focused on allowing the sanitizer to decide when data should be released.

Title: Googling Considered Harmful
By Greg Conti, United States Military Academy

This paper was very timely in light of the release of private data by AOL. Greg Conti showed AOLStalker, a search engine built on the recently released AOL dataset, and demonstrated how it could be used to ferret out one user's data from seemingly benign queries. Through the use of free on-line services, users implicitly contribute their personal data. The author develops a model of threats to privacy based on the information released through these services.

Discussion focused on forms of signal analysis and social contracts previously used to protect privacy. Examples included tracking military mobilization by studying pizza deliveries in the D.C. area. Similarly noted was the longstanding requirement that service members' families keep silent before a deployment, contrasted with the logistical actions families may take en masse before a mobilization, such as contacting various soldiers' benefits services.
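
As a simple illustration of the aggregation threat (hypothetical queries, not the AOL dataset or AOLStalker), the sketch below groups individually benign queries by anonymized user ID; taken together, the location-bearing queries narrow the anonymous user to a small area.

    from collections import defaultdict

    query_log = [  # (anonymized_user_id, query) -- invented examples
        (1001, "plumbers in springfield"),
        (1001, "houses for sale on elm street springfield"),
        (1001, "allergy doctor near springfield high school"),
        (2002, "cheap flights to berlin"),
    ]

    by_user = defaultdict(list)
    for user_id, query in query_log:
        by_user[user_id].append(query)

    # Each query alone reveals little; grouped by user ID, the repeated
    # place names point at where the anonymous user probably lives.
    for user_id, queries in by_user.items():
        hints = [q for q in queries if "springfield" in q]
        print(user_id, "location hints:", hints)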

Title: A Pact With The Devil
By Mike Bond (University of Cambridge) and George Danezis (KU Leuven)

In this paper, the authors outlined a new, hypothetical virus that would negotiate with its victim so it could improve its chances of spreading across networks. The hypothetical virus would offer the infected user an opportunity to commit a collaborative computer crime. For example, the original victim would write an e-mail that a new victim would open; in exchange, the virus would harvest data from the new victim's drive, such as e-mail, and pass it on to the original victim.

Participants examined strategies such a virus could take, and whether the victim could double-cross the virus. For example, in addition to offering carrots, the virus could eventually offer sticks, such as threatening to release private or incriminating information or planting criminal material on the victim's computer.

Title: E-Prime for Security
By Steve Greenwald, Independent Consultant

This paper offered E-Prime, a restricted subset of the English language developed by the General Semantics movement. E-Prime avoids all forms of the verb "to be", such as "is", "am" and "is not". Steve Greenwald argued that by eliminating these verbs, a writer is forced to provide more complete information, such as attributing an action or requirement to a specific actor (for example, "the reference monitor protects the file" rather than "the file is protected"). Writing security policies in E-Prime might therefore produce policies that are easier to read.

Title: Diffusion and Graph-Spectral Methods for Network Forensic Analysis
By Wei Wang and Tom Daniels, Iowa State

The authors describe a graph-theoretic approach to analyzing audit logs and network traffic. Each node represents a host, while connections between nodes represent events. Each event carries a weight based on some quality of the event or alert. Eigenvectors of the resulting graph are then used to determine qualities of the network.

The authors used data from the Lincoln Labs data set for testing, so discussion focused on how this approach would perform given data from a real network. The main issue raised was what effect the noise inherent in a real network would have on this approach's ability to identify attack information.
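
The general flavor of the graph-spectral step can be sketched as follows (hosts, events, and weights invented for illustration; this is not the authors' algorithm): events become weighted edges between host nodes, and the principal eigenvector of the weighted adjacency matrix ranks hosts by how strongly they are tied into the observed alert activity.

    import numpy as np

    hosts = ["attacker", "web", "db", "workstation"]
    events = [  # (src, dst, alert weight) -- invented examples
        ("attacker", "web", 5.0),
        ("web", "db", 3.0),
        ("attacker", "workstation", 1.0),
    ]

    n = len(hosts)
    idx = {h: i for i, h in enumerate(hosts)}
    A = np.zeros((n, n))
    for src, dst, w in events:
        A[idx[src], idx[dst]] += w
        A[idx[dst], idx[src]] += w   # treat the event graph as undirected

    # Principal eigenvector (eigenvector centrality) of the weighted graph.
    eigvals, eigvecs = np.linalg.eigh(A)
    principal = np.abs(eigvecs[:, np.argmax(eigvals)])

    for host, score in sorted(zip(hosts, principal), key=lambda x: -x[1]):
        print(f"{host:12s} {score:.3f}")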

Title: PKI Design for the Real World
By Peter Gutmann (U Auckland), Ben Laurie (Google), Bob Blakley (Burton Group), Mary-Ellen Zurko (IBM), and Matt Bishop (UC Davis)

Panelists described their beliefs about PKI and its use in the real world. Mary-Ellen Zurko described the PKI system used by Lotus Notes at IBM, a system that has been deployed to many large enterprises and in use for several years. Ben Laurie felt that the issue with PKI was the "I": the infrastructure required for PKI was lacking. Bob Blakley believed that PKI was developed because key distribution is hard and should be easier, and because digital signatures are useful; he felt the main problem with PKI was that it was designed to solve a problem that didn't exist. Matt Bishop believed that the issue with PKI was that its design was difficult to understand.