Review of
USENIX Security,
Vancouver, BC, Canada
7/31 - 8/4/06
Review by Jeremy Epstein
9/15/06
On Tuesday, I attended Metricon 1.0, an invitational workshop to discuss metrics for information security. The workshop, put on by an ad hoc group of people involved in a discussion on www.securitymetrics.org, was intended to stimulate discussions, which it did; whether it produced any lasting results remains to be seen. The key observations: many groups are trying to do security measurements, but at this point it's unclear which measurements actually correlate with security, and organizations (both vendors & end-users) are reluctant to disclose their actual metrics. One of the big debates is whether metrics should be top-down (figure out what we want to know, and go out and measure it) vs. bottom-up (figure out what we *can* measure, and see how it correlates with security). This is an area with plenty of research left to be done.
One of the more interesting presentations was about how the US Department of Homeland Security allocates funding to cities for (non-cyber) threats: it builds a 2x2 matrix of effectiveness vs. risk, and then allocates funds primarily to those cities that are high risk *and* high effectiveness. There were two outliers, New York and Washington, both of which have very high risk: New York didn't get much because by risk alone it would have "deserved" the entire pot, and Washington didn't get much because of its lack of effectiveness. Hence the widely reported result that small towns in Nebraska got more money than NYC or DC.
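A minimal sketch of that kind of quadrant-based allocation (all city names,
scores, cutoffs, and dollar figures below are invented for illustration; this
is not DHS's actual formula):

    # Hypothetical 2x2 risk/effectiveness allocation rule.
    # All names, scores, cutoffs, and amounts are invented.
    cities = {
        # name: (risk score 0-10, effectiveness score 0-10)
        "Gotham":    (9.0, 8.0),  # high risk, high effectiveness -> funded
        "Capital":   (9.5, 3.0),  # very high risk, low effectiveness -> not funded
        "Portville": (6.0, 7.0),  # funded
        "Plainview": (2.0, 9.0),  # low risk -> not funded
    }
    POT = 100_000_000           # total pot (hypothetical)
    RISK_CUT = EFF_CUT = 5.0    # quadrant boundaries (hypothetical)

    # Only the high-risk AND high-effectiveness quadrant shares the pot,
    # in proportion to a combined score.
    eligible = {name: r * e for name, (r, e) in cities.items()
                if r >= RISK_CUT and e >= EFF_CUT}
    total = sum(eligible.values())
    for name, score in sorted(eligible.items()):
        print(f"{name}: ${POT * score / total:,.0f}")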
A detailed report from Dan Geer will be in an upcoming issue of USENIX ;login:.
Editorial remarks by Epstein
Most of the talks at the conference were quite interesting. However, I believe that publishing papers on yet another way to prevent buffer overrun attacks (or other similar sorts of things which we know how to solve - e.g., by using type-safe languages) is an inappropriate use of both research funding and conference space. So even though some of the techniques are quite clever, I propose that conferences (and USENIX Security is the biggest offender in my mind) should simply reject papers that purport to provide another way to solve problems that can be solved by switching to type-safe languages. Just because it's novel doesn't mean it's appropriate.
Among the more interesting talks in the conference itself:
Richard Clarke, CEO of Good Harbor Consulting and former presidential advisor, gave a non-technical talk about how little progress has been made since 9/11. His speech was a top-to-bottom criticism of how the Bush administration has focused its energy and spent money on security over the past five years. As examples, he criticized the efforts to kill al Qaeda leaders without changing the underlying worldview (which has allowed al Qaeda to transform from a hierarchical organization into a group of loosely affiliated terrorist groups). He made an analogy to the French experience in Algeria in the 1950s, where French tactics alienated the Algerian public and created a new generation of terrorists, so even when the French killed the leaders it had no long-term impact, since replacements were ready. Clarke argued that the metrics of success in Iraq should be things like the economy, stability, etc. - but unemployment is >30%, and oil output & electricity are below where they were under Saddam.
He then talked about what's happened in specific areas of homeland security:
Turning to cybersecurity, Clarke commented that during his tenure they released the National Strategy to Secure Cyberspace - but almost none of it has been implemented. The strategy said that government should fund research for more resilient systems, which used to be paid for by DARPA, but Secretary Rumsfeld took that charter away from DARPA and put it into DHS, where spending has been declining. The President's commission on science issued a "scathing" report on cyber security, including the underfunding of research. Clarke called for regulation of critical infrastructure where needed, such as regulation for cybersecurity of electric power systems - the Federal Energy Regulatory Commission (FERC) has the authority to regulate, but is ideologically opposed to doing so. Similarly, the FCC has refused to regulate ISPs to force a cybersecurity standard, and the same pattern holds for online banking, etc.
His talk was definitely worth reading/watching; so far I haven't been able to find a recording, but I know it was recorded.
Sonia Chiasson, Carleton University - Usability Study & Critique of Two
Password Managers
This presentation compared two tools (PwdHash (USENIX Security 2005) and
Password Multiplier (WWW2005)) for usability by non-expert users. While the
sample sizes were small, they found relatively little difference in
usability between the two - unfortunately, finding that neither was
particularly usable. Users didn't deliberately try to bypass the security
provided, but often did so anyway, because it wasn't obvious when to type
their "real" password and when to type the "fake" one. Not surprisingly, the
problems were compounded because users did not read the provided
instructions.
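Both tools derive a per-site password from a single master secret. A minimal
sketch of that general idea (this illustration uses HMAC-SHA256 keyed on the
site's domain; the real PwdHash and Password Multiplier use different
constructions and handle details like password character-set rules):

    import base64
    import hashlib
    import hmac

    def site_password(master: str, domain: str, length: int = 12) -> str:
        """Derive a per-site password from a master secret and a domain.
        Illustrative only - not the actual PwdHash algorithm."""
        digest = hmac.new(master.encode(), domain.encode(),
                          hashlib.sha256).digest()
        return base64.b64encode(digest).decode()[:length]

    # The phishing resistance comes from keying on the domain: a look-alike
    # site receives a different derived password than the real one.
    print(site_password("correct horse battery", "bank.example.com"))
    print(site_password("correct horse battery", "bank-example.phish.test"))

The usability problem the study found is visible even in this sketch: the
scheme only works if the user remembers to type the master secret into the
tool, and never directly into a web form.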
Yingjiu Li, Singapore Management University - On the Release of CRLs in
Public Key Infrastructure
The essence of this presentation is that certificates do not "decay" at a
constant rate. In fact, the revocation rate decays exponentially with age -
if a certificate isn't
revoked within a relatively short time of issuance, it's unlikely to ever
get revoked. This was determined by an empirical study of certificates
issued by VeriSign (by looking at their public CRLs). The reasons why this
happens are a matter of conjecture. They present some formulas to argue how
often certificate issuers should update CRLs. Not addressed by the authors
(but the real meat of the matter to me) is what this means for CRL caching -
if I get a certificate and it's more than a few months old, it's very
unlikely ever to appear on a CRL, which suggests that a stale cached CRL may
be acceptable for older certificates.
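To make the exponential-decay point concrete, here's a toy model (my own
sketch, not the authors' formulas): if a cohort of certificates is revoked at
rate r(t) = r0 * exp(-lambda * t), then the number of revocations still to
come after age T also falls off exponentially, which is what would make long
cache lifetimes plausible for old certificates.

    import math

    def remaining_revocations(r0: float, lam: float, age_days: float) -> float:
        """Expected future revocations for a cohort whose revocation rate
        decays as r(t) = r0 * exp(-lam * t): the integral from age to
        infinity.  Toy model with invented parameters, not the paper's."""
        return (r0 / lam) * math.exp(-lam * age_days)

    r0, lam = 10.0, 0.05  # hypothetical: 10 revocations/day at issuance,
                          # decaying 5%/day
    for age in (0, 30, 90, 365):
        print(f"age {age:3d} days: ~{remaining_revocations(r0, lam, age):.2f} "
              "revocations still expected")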
Usable Security: Quo Vadis? (Panel)
Tara Whalen, Dalhousie University:
We've made some progress in making
security usable. We're less likely to say human security errors are due to
users being stupid - less blaming the victim. There's more recognition in
the security community of the need to involve user studies. What's being
neglected is that we're not seeing many ideas being deployed, especially
alternate password schemes - people are doing the same things they've
already done. There's little work on the larger context of security deployment,
such as ethnographic studies. Studies aren't as nuanced as other areas of
HCI - less attention is paid to user types. We need to take user studies
seriously - be careful not to do just "any" study and call it good.
Additionally, there are the tensions of interdisciplinary research - can one
person have enough expertise & time to perform well in both spheres? We
need to have mixed teams of specialists. Research that's weak on one side
can sneak in via non-expert reviewers on other side (e.g., bad HCI work may
be accepted into security conferences, and vice versa). What we don't need
is more alternate password schemes (e.g., graphical schemes), unless they're
particularly likely to be deployed or are brilliant research. And we
definitely don't need any more paper titles involving the
word "Johnny"!
Dirk Balfanz, PARC:
The most interesting part was what he doesn't want to
see in terms of security & HCI research: (1) systems the author isn't even
using, (2) systems that require a global PKI (clearly not going to happen),
(3)"flexible" access control systems without a UI - if it can have
time-based access, etc., can the user understand what's really going on, and
(4) systems that teach users (as opposed to systems informed by users) - we
need to build systems based on what users do, not what we want them to do;
an example is Graffiti for Palm (which had to teach users a new alphabet)
vs. handwriting recognition.
Paul van Oorschot, Carleton University:
Paul tried to answer the question
why hasn't there been more HCISec research? His answers: (1) lack of (true)
recognition of importance, (2) perception - outside scope of security
research, (3) it's unclear how to get started, (4) a lack of suitable venues
for publishing results, (5) it's unclear what the interesting open problems
are, (6) interdisciplinary research is hard - you can't learn the HCI you
need in a day, or the security, and (7) a lack of methods, techniques, and
metrics for measuring results. He suggested misconfiguration of firewalls as
one interesting topic, among others.
Andy Ozment, MIT Lincoln Labs - Milk or Wine: Does Software Security Improve
with Age?
This is a study that looked at whether software improves with age (like
wine) or turns sour (like milk) as measured by the number of vulnerabilities
detected in a piece of software over a long period of time - that is,
whether the number of bugs converges. They looked at OpenBSD over more than
a decade, and sorted out those bugs that were originally there but took time
to find versus those that were introduced as new features were added. There
are many simplifying assumptions in the study, but it was still *slightly*
encouraging - but it took six years to find half the known vulnerabilities.
Given the rate of increase in software complexity, this doesn't seem very
hopeful. Further, because their study examined a code base that has long
emphasized security, the results would probably be worse for most other
products. There's been no attempt to correlate
vulnerabilities with authors, to see if there are some people who just write
better code (from a security perspective).
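The core measurement is easy to sketch. With invented data (the study's
actual OpenBSD methodology is far more careful about attributing each flaw to
the version that introduced it), the time-to-discovery calculation looks
roughly like this:

    from datetime import date

    # Hypothetical records: (date the flawed code shipped, date the
    # vulnerability was reported).  Invented for illustration.
    vulns = [
        (date(1998, 1, 1), date(1999, 6, 1)),
        (date(1998, 1, 1), date(2003, 2, 1)),
        (date(2000, 5, 1), date(2004, 8, 1)),
        (date(1998, 1, 1), date(2006, 3, 1)),
    ]

    # Time-to-discovery in years for each flaw.
    ages = sorted((found - shipped).days / 365.25 for shipped, found in vulns)

    # Median time to find a vulnerability; the talk's headline number was
    # that half the known OpenBSD vulnerabilities took ~6 years to find.
    mid = len(ages) // 2
    median = ages[mid] if len(ages) % 2 else (ages[mid - 1] + ages[mid]) / 2
    print(f"median time-to-discovery: {median:.1f} years")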
Ben Cox, Univ of Virginia - A Deterministic Approach to Software
Randomization
They try to introduce artificial diversity by having a sack full of changes
they can make, and then comparing the results of running several instances.
This is harder than it sounds, because if all of the instances (for example)
are updating an external database, the versions can interfere with each
other. Additionally, it's not effective to compare the results of every
instruction, so they provide wrappers around system calls and compare the
states at system call times. The wrappers differ in how they operate -
read() causes the data to be read once and shared among the instances, while
setuid() has to be done for every instance. The instances have to be kept
synchronized, so overall performance is that of the slowest instance. Under
light load they saw a 17% performance hit, while under heavy load the
slowdown was a factor of 2-3x. Some system calls can't be
effectively handled, like mmap(), since each instance that modifies a memory
mapped file is modifying the same file. They hadn't considered whether this
sort of artificial diversity would work for Java code.
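A toy sketch of the monitor's system-call policy, as I understood it (my own
illustration of the idea, not the authors' implementation): input calls like
read() are performed once and shared, state changes like setuid() are applied
per instance, and any divergence among the variants is treated as a detected
attack.

    import os

    class VariantMonitor:
        """Toy monitor for N diversified instances of the same program."""

        def __init__(self, n_variants: int):
            self.n = n_variants

        def handle_syscall(self, name, args_per_variant):
            # All variants must request the same call with identical
            # arguments; divergence means one of them was corrupted.
            if len(set(args_per_variant)) != 1:
                raise RuntimeError(f"variants diverged on {name}: attack suspected")
            args = args_per_variant[0]

            if name == "read":
                # Perform the input once and share the data, so every
                # variant sees exactly the same bytes.
                fd, size = args
                data = os.read(fd, size)
                return [data] * self.n
            if name == "setuid":
                # State-changing calls are (conceptually) applied in each
                # variant's own context rather than shared.
                return ["applied-per-variant"] * self.n
            raise NotImplementedError(name)

    # Demo: one physical read(), shared among three synchronized variants.
    r, w = os.pipe()
    os.write(w, b"hello")
    print(VariantMonitor(3).handle_syscall("read", [(r, 5)] * 3))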
Panel: Major Security Blunders of the Past 30 Years
Matt Blaze, Univ of Pennsylvania:
Matt talked about how bank alarm systems
can be subverted through non-technical means, by simply deliberately setting
off the alarm, waiting for the police to respond and discover that nothing
is wrong, setting off the alarm again, etc., until the police stop
responding, at which point the actual attack takes place. This is an example
where people *are* paying attention to security, but it can be defeated
through false
positives. His favorite bug was an implementation of telnet that used
56-bit DES to encrypt the session - when they upgraded to a new version of
the crypto library that properly enforced the parity bits in DES, it
silently failed open, disabling encryption (because a random key's parity
was wrong 255/256 times - 8 parity bits out of 64 total bits). So an
improvement to security (properly
implementing the DES standard) dramatically weakened security.
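The arithmetic behind "wrong 255/256 times" is easy to verify: each of the 8
bytes in a 64-bit DES key carries an odd-parity bit, so a key generated
without regard to parity passes the check with probability (1/2)^8 = 1/256.
A quick sketch of my own to confirm the numbers:

    import random

    def des_parity_ok(key: bytes) -> bool:
        """True iff every byte of the 8-byte key has odd parity,
        as the DES standard requires."""
        return all(bin(b).count("1") % 2 == 1 for b in key)

    random.seed(0)
    trials = 100_000
    passes = sum(des_parity_ok(bytes(random.getrandbits(8) for _ in range(8)))
                 for _ in range(trials))
    print(f"{passes}/{trials} random keys pass parity "
          f"(expected about {trials // 256})")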
Virgil Gligor, Univ of Maryland:
Virgil offered three blunders: (1) The
Morris worm (first tangible exploit of a buffer overflow; the concept had
been known but ignored), but we're still making the same mistakes. (2)
Multi Level Secure operating systems and database systems, which sucked up
all of the research funding for years, even though there was no actual
market for MLS (even in the military); the stronger the MLS system, the more
things it broke. "MLS is the way of the future and always will be". [I'd
substitute "PKI" and say that's equally true!] (3) Failed attempts at fast
authenticated encryption in one pass; this one was obscure for me, but he
said it led to numerous methods and papers, all of which proved to be
breakable. The lessons from this are many - among them, don't invent your
own crypto!
Dick Kemmerer, Univ of California Santa Barbara:
Not to be outdone,
Dick offered five blunders. (0) Aggressive CD DRM, such as the
Sony-BMG CD DRM which backfired not only technically but also in the
market. (1) Use of a 56 bit key for DES, when Lucifer (the IBM
inspiration for DES) had a 128 bit key. (2) US export laws on
cryptography, which totally failed and cost US companies an estimated
$50-65 billion in lost sales - the rules (still) don't make
sense, and if you get them wrong you go to jail. (3) Kryptonite
Evolution 2000 and the BIC pen - the locks came with an insurance
policy against the lock being *broken*, but since it was picked not
broken, they wouldn't pay. (4) The Australian raw sewage dump, where
a disgruntled former employee attacked a system from the outside -
lesson learned is to pay attention to SCADA systems and insider
threats. "Don't piss anyone off or you'll be knee deep in shit".
Peter Neumann, SRI International:
"We seek strength in depth, but we
have weakness in depth." (1) Propagation blunders, where a failure in
one part of a system drags other parts with it (1980 ARPANET collapse,
1990 AT&T Long Lines collapse, widespread power outages in 1965, 1977,
1984, 1996, 2003, and 2006); all could have been exploited by adversaries.
(2) Backup & recovery failures, such as air traffic control and train system
outages, among other systems. (3) Software flaws, such as
buffer/stack overflows and other flaw types which are ubiquitous -
Multics prevented buffer overflows by having a non-executable stack, a
type-safe language (PL/I), etc., 30 years ago. (4) Election systems,
which have been implemented without consideration of security.
There were many more examples provided from the floor. A few selections:
Matt Blaze offered one of my favorite comments with respect to the Sony executive's comment that people don't know what a rootkit is and so they don't care whether they have one, "most people don't know what their pancreas is, but don't want it to get cancer".
Ed Felten, Princeton - DRM Wars: The Next Generation
DRM itself is an Orwellian term - not controlling, just "managing",
even though it helps the supplier, not the consumer. There have been
proposals for legally mandated DRM, but nothing has happened - which is a
success for those
skeptical of legal support for DRM - "not going to defeat the opponent
on the battlefields of Washington & Ottawa, but waiting for it to
collapse of its own weight". We're in the early stages of a
realignment - in 5 or 10 years things will look very different.
The 2002 Microsoft paper on why DRM doesn't help was a revelation to
lawyers & public policy people, while technical folks thought it was
just a well explained version of what was widely known. As a result,
doubt is now sinking in among lawyers, movie industry executives,
etc. - they're catching on to the point that promises are broken by
DRM vendors. The other argument against DRM is the Sony/BMG "rootkit"
episode - it showed that DRM is not all that effective and causes
undesirable side-effects. So advocates for DRM are changing their
rationale, moving away from anti-piracy, and towards price
discrimination. DRM can prevent resale - tether copies to buyers, or
create different versions (e.g., high-res / low-res, or a limited-time
copy). This benefits the seller (who can make more money) and
sometimes benefits society - this is why there's lobbying in favor of
DRM policies that allow price discrimination - and it works even if DRM
isn't totally effective. The scholarly community has been arguing
this for years; lobbyists are now starting to make the argument too.
Platform locking is the other argument - Apple wants to lock users
into iPod and iTunes, and DRM provides a way to do it. Even if the
DRM can be broken, DMCA means that no one can sell a competing product
- it's weak enough to satisfy consumers, but strong enough to satisfy
your lawyers. As a result, DRM has hurt the music companies, and
helped Apple!!! Finally, Ed proposed (somewhat tongue in cheek) that
with Moore's law helping us, we could (theoretically) have the same
linkage of products as in the Lexmark case (where they use DRM to
ensure that you can't use a non-Lexmark toner cartridge in a Lexmark
printer) - someday your shoes might not work without approved
shoelaces, or your pen won't work without approved paper.