Protecting Private Keys against Memory Disclosure Attacks using Hardware Transactional Memory
Le Guan (Chinese Academy of Sciences), Jingqiang Lin (Chinese Academy of Sciences), Bo Luo (University of Kansas), Jiwu Jing (Chinese Academy of Sciences), Jing Wang (Chinese Academy of Sciences)
Cryptography plays an important role in computer and communication security. In practical implementations of cryptosystems, the cryptographic keys are usually loaded into the memory as plaintext, and then used in the cryptographic algorithms. Therefore, the private keys are subject to memory disclosure attacks that read unauthorized data from RAM. Such attacks could be performed through software methods (e.g., OpenSSL Heartbleed) even when the integrity of the victim system’s executable binaries is maintained. They could also be performed through physical methods (e.g., cold-boot attacks on RAM chips) even when the system is free of software vulnerabilities. In this paper, we propose Mimosa, which protects RSA private keys against the above software-based and physical memory attacks. When the Mimosa service is idle, private keys are encrypted and reside in memory as ciphertext. During cryptographic computations, Mimosa uses hardware transactional memory (HTM) to ensure that (a) whenever a malicious process other than Mimosa attempts to read the plaintext private key, the transaction aborts and all sensitive data are automatically cleared by hardware mechanisms, due to the strong atomicity guarantee of HTM; and (b) all sensitive data, including private keys and intermediate states, appear as plaintext only within CPU-bound caches, and are never loaded to RAM chips. To the best of our knowledge, Mimosa is the first solution to use transactional memory to protect sensitive data against memory disclosure attacks. We have implemented Mimosa on a commodity machine with Intel Core i7 Haswell CPUs. Through extensive experiments, we show that Mimosa effectively protects cryptographic keys against various attacks that attempt to read sensitive data from memory, and it introduces only a small performance overhead.
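The transactional pattern the abstract describes can be illustrated with Intel TSX Restricted Transactional Memory (RTM) intrinsics, which are the HTM facility available on Haswell CPUs. The sketch below is illustrative only: the helpers decrypt_key(), rsa_sign_with(), and scrub() are hypothetical placeholders, not Mimosa's actual internals.

/* Minimal sketch of the Mimosa-style transactional pattern using Intel
 * TSX RTM intrinsics (compile with -mrtm on a TSX-capable CPU). The
 * helpers decrypt_key(), rsa_sign_with(), and scrub() are hypothetical
 * placeholders standing in for Mimosa's real routines. */
#include <immintrin.h>

extern void decrypt_key(unsigned char *plain, const unsigned char *cipher);
extern int  rsa_sign_with(const unsigned char *key, unsigned char *sig,
                          const unsigned char *msg, unsigned len);

static void scrub(volatile unsigned char *p, unsigned len)
{
    while (len--) *p++ = 0;          /* erase plaintext key material */
}

int protected_sign(const unsigned char *cipher_key, unsigned char *sig,
                   const unsigned char *msg, unsigned len)
{
    unsigned char key[256];          /* plaintext exists only inside txn */

    for (;;) {
        unsigned status = _xbegin();
        if (status == _XBEGIN_STARTED) {
            decrypt_key(key, cipher_key);  /* decrypt within transaction */
            int rc = rsa_sign_with(key, sig, msg, len);
            scrub(key, sizeof key);        /* clear before committing */
            _xend();                       /* commit: only sig survives */
            return rc;
        }
        /* Aborted: any concurrent read of the key triggers a conflict,
         * and hardware discards all transactional writes, including the
         * plaintext key, before execution resumes here. */
        if (!(status & _XABORT_RETRY))
            return -1;                     /* persistent abort: give up */
    }
}

A transactional write never reaches RAM before commit, which is why the plaintext key stays confined to the cache hierarchy; scrubbing it before _xend() ensures only the ciphertext key and the signature are ever committed.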
CHERI: A Hybrid Capability-System Architecture for Scalable Software Compartmentalization
Robert N. M. Watson (University of Cambridge), Jonathan Woodruff (University of Cambridge), Peter G. Neumann (SRI International), Simon W. Moore (University of Cambridge), Jonathan Anderson (Memorial University), David Chisnall (University of Cambridge), Nirav Dave (SRI International), Brooks Davis (SRI International), Khilan Gudka (University of Cambridge), Ben Laurie (Google UK Ltd.), Steven J. Murdoch (University College London), Robert Norton (University of Cambridge), Michael Roe (University of Cambridge), Stacey Son (Dev Random, Inc.), Munraj Vadera (University of Cambridge)
CHERI is a hardware-software architecture that combines a capability-system security model with design choices from contemporary processors, Instruction-Set Architectures (ISAs), compilers, and operating systems. At the lowest level, CHERI's fine-grained, in-address-space memory protection mitigates many widely used exploit techniques. However, CHERI's ISA-level capability model can also act as the foundation for a software object-capability model suitable for incremental deployment in compartmentalizing C-language applications to mitigate attacks. Prototyped as an extension to the 64-bit FPGA BERI RISC soft-core processor, FreeBSD operating system, and Clang/LLVM compiler suite, we demonstrate substantial improvements to security, programmability, and scalability as compared to compartmentalization based on pure Memory-Management Unit (MMU) designs. We evaluate CHERI using several real-world UNIX libraries and applications.
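To make the ISA-level capability model concrete, the following sketch assumes the CHERI Clang/LLVM toolchain in pure-capability mode, where every pointer is a hardware capability carrying bounds and permissions; the builtin name follows CHERI LLVM conventions and should be treated as illustrative rather than definitive.

/* Conceptual sketch of CHERI's fine-grained, in-address-space memory
 * protection, assuming CHERI Clang in pure-capability mode. */
#include <stddef.h>
#include <string.h>

void parse_header(char *buf, size_t buflen)
{
    /* Derive a capability whose bounds cover only the 16-byte header.
     * Any access through hdr outside [hdr, hdr+16) faults in hardware,
     * so a bug in the parser cannot reach the rest of buf. */
    char *hdr = __builtin_cheri_bounds_set(buf, 16);

    memset(hdr, 0, 16);   /* in bounds: permitted */
    /* hdr[16] = 0;          out of bounds: raises a capability fault */
    (void)buflen;
}

Bounds on capabilities are monotonically non-increasing, which is what lets a compartment hand a callee access to a sub-buffer without exposing the enclosing allocation.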
VC3: Trustworthy Data Analytics in the Cloud using SGX
Felix Schuster (Ruhr-Universität Bochum), Manuel Costa (Microsoft Research), Cedric Fournet (Microsoft Research), Christos Gkantsidis (Microsoft Research), Marcus Peinado (Microsoft Research), Gloria Mainar-Ruiz (Microsoft Research), Mark Russinovich (Microsoft)
We present VC3, the first practical framework that allows users to run distributed MapReduce computations in the cloud while keeping their code and data secret, and ensuring the correctness and completeness of their results. VC3 runs on unmodified Hadoop, but crucially keeps Hadoop, the operating system and the hypervisor out of the TCB; thus, confidentiality and integrity are preserved even if these large components are compromised. VC3 relies on SGX processors to isolate memory regions on individual computers, and to deploy new protocols that secure distributed MapReduce computations. VC3 optionally enforces region self-integrity invariants for all MapReduce code running within isolated regions, to prevent attacks due to unsafe memory reads and writes. Experimental results on common benchmarks show that VC3 performs well compared with unprotected Hadoop; VC3's average runtime overhead is negligible for its base security guarantees, 4.5% with write integrity and 8% with read/write integrity.
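The deployment model can be sketched as follows: user map code runs inside an SGX enclave, so input splits are decrypted and intermediate key-value pairs re-encrypted entirely within isolated memory, while untrusted Hadoop only ever handles ciphertext. All names here (enclave_map, aes_gcm_*, user_map) are hypothetical placeholders, not the VC3 API.

/* Hedged sketch of the VC3-style enclave boundary; helper names are
 * hypothetical. Plaintext records never leave the enclave. */
#include <stddef.h>
#include <stdint.h>

extern int aes_gcm_decrypt(uint8_t *out, const uint8_t *ct, size_t len);
extern int aes_gcm_encrypt(uint8_t *out, const uint8_t *pt, size_t len);
extern void user_map(const uint8_t *record, size_t len,
                     void (*emit)(const uint8_t *kv, size_t kvlen));

static void emit_encrypted(const uint8_t *kv, size_t kvlen)
{
    uint8_t ct[4096];
    if (kvlen <= sizeof ct && aes_gcm_encrypt(ct, kv, kvlen) == 0) {
        /* ... hand ct back to untrusted Hadoop for shuffling ... */
    }
}

/* Enclave entry point: invoked by the untrusted Hadoop task with an
 * encrypted input split. */
void enclave_map(const uint8_t *ct_split, size_t len)
{
    uint8_t pt[4096];
    if (len > sizeof pt || aes_gcm_decrypt(pt, ct_split, len) != 0)
        return;                      /* reject oversized/tampered input */
    user_map(pt, len, emit_encrypted);
}

Because Hadoop sees only authenticated ciphertext at this boundary, compromising the OS or hypervisor yields neither the data nor the ability to silently corrupt results.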
Virtual Proofs of Reality and Their Physical Implementation
Ulrich Rührmair (TU München), J.L. Martinez-Hurtado (TU München), Xiaolin Xu (UMass Amherst), Christian Kraeh (TU München), Christian Hilgers (ZAE Bayern), Dima Kononchuk (TU Delft), Jonathan J. Finley (TU München), Wayne P. Burleson (UMass Amherst)
We discuss the question of how physical statements can be proven over digital communication channels between two parties (a “prover” and a “verifier”) residing in two separate local systems. Examples include: (i) “a certain object in the prover’s system has temperature X°C”, (ii) “two certain objects in the prover’s system are positioned at distance X”, or (iii) “a certain object in the prover’s system has been irreversibly altered or destroyed”. As illustrated by these examples, our treatment goes beyond classical security sensors in considering more general physical statements. Another distinctive aspect is the underlying security model: We neither assume secret keys in the prover’s system, nor do we suppose classical sensor hardware in the prover’s system which is tamper-resistant and trusted by the verifier. Without an established name, we call this new type of security protocol a “virtual proof of reality” or simply “virtual proof” (VP). In order to illustrate our novel concept, we give example VPs based on temperature-sensitive integrated circuits (ICs), disordered optical scattering media, and quantum systems. The corresponding protocols prove the temperature, relative position, or destruction/modification of certain physical objects in the prover’s system to the verifier. These objects (so-called “witness objects”) are prepared by the verifier and handed over to the prover prior to the VP. In addition to these new protocols, we carry out and detail full proof-of-concept implementations for all of our optical and circuit-based VPs. Our work touches upon, and partly extends, several established concepts in cryptography and security, including physical unclonable functions, quantum cryptography, interactive proof systems, and, most recently, physical zero-knowledge proofs. We also discuss potential advancements of our method, for example “public virtual proofs” that function without exchanging witness objects between the verifier and the prover.
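The verifier side of a temperature VP can be sketched as a challenge-response check: during enrollment, before handing over the witness object, the verifier characterizes its challenge-response behavior at several temperatures; later, a fresh challenge is issued and the prover's response is matched against the enrollment data at the claimed temperature. The CRP table and exact-match check below are hypothetical simplifications (real responses are noisy and need a tolerance).

/* Illustrative sketch of the verifier's check in a temperature-based
 * virtual proof; data layout and exact matching are hypothetical. */
#include <stddef.h>
#include <stdint.h>

struct crp { int temp_c; uint64_t challenge; uint64_t response; };

/* Pre-recorded during enrollment, before the witness object is handed
 * to the prover. */
extern const struct crp crp_table[];
extern const size_t crp_count;

/* Accept the claim "my object is at temp_c" iff the prover's response
 * to a fresh challenge matches the enrollment response recorded at that
 * temperature (exact match shown for brevity). */
int verify_temperature(int temp_c, uint64_t challenge, uint64_t response)
{
    for (size_t i = 0; i < crp_count; i++)
        if (crp_table[i].temp_c == temp_c &&
            crp_table[i].challenge == challenge)
            return crp_table[i].response == response;
    return 0;  /* no enrollment data for this (temperature, challenge) */
}

Note that nothing in the prover's system is trusted: security rests on the witness object's unclonable, temperature-dependent behavior rather than on secret keys or tamper-resistant sensors.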
Using Hardware Features for Increased Debugging Transparency
Fengwei Zhang (George Mason University), Kevin Leach (University of Virginia), Angelos Stavrou (George Mason University), Haining Wang (University of Delaware), Kun Sun (College of William and Mary)
With the rapid proliferation of malware attacks on the Internet, understanding their malicious behavior plays a critical role in crafting effective defenses. Advanced malware analysis relies on virtualization or emulation technology to run samples in a confined environment, and analyze malicious activities by instrumenting code execution. However, virtual machines and emulators inevitably create artifacts in the execution environment, making these approaches vulnerable to detection or subversion. In this paper, we present MALT, a debugging framework that employs System Management Mode, a CPU mode in the x86 architecture, to transparently study armored malware. MALT does not depend on virtualization or emulation and thus is immune to threats targeting such environments. Our approach reduces the attack surface at the software level, and advances state-of-the-art debugging transparency. MALT embodies various debugging functions, including register/memory accesses, breakpoints, and four stepping modes. We implemented a prototype of MALT on two physical machines, and we conducted experiments by testing an array of existing anti-virtualization, anti-emulation, and packing techniques against MALT. The experimental results show that our prototype remains transparent and undetected against the samples. Furthermore, our prototype of MALT introduces moderate but manageable overheads on both Windows and Linux platforms.
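The remote setup the abstract describes can be pictured as a client machine driving the SMM agent on the target over a side channel such as a serial link; the command encoding below is entirely hypothetical, since the abstract only states that MALT exposes register/memory access, breakpoints, and stepping.

/* Hedged sketch of a MALT-style remote debugging client; the wire
 * format here is invented for illustration. */
#include <stdint.h>

enum malt_cmd { CMD_READ_REG, CMD_READ_MEM, CMD_SET_BP, CMD_STEP };

struct malt_req {
    uint8_t  cmd;      /* one of enum malt_cmd */
    uint64_t addr;     /* register index, memory address, or breakpoint */
    uint32_t len;      /* bytes to read, if any */
};

extern int serial_send(const void *buf, uint32_t len);
extern int serial_recv(void *buf, uint32_t len);

/* Ask the SMM agent to read target memory. The agent services the
 * request from within SMRAM, so no hypervisor, emulator, or in-guest
 * artifact exists for the malware sample to detect. */
int malt_read_mem(uint64_t addr, void *out, uint32_t len)
{
    struct malt_req req = { CMD_READ_MEM, addr, len };
    if (serial_send(&req, sizeof req) != 0)
        return -1;
    return serial_recv(out, len);
}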