Review of the 2008 IEEE Symposium on Security and Privacy
Berkeley/Oakland, CA
May 19-22, 2008
Reviews of Selected Talks by Matt Fredrikson
May 19, 2008
Editor's note: the reviewer was unable to review all the talks due to his schedule; our apologies to authors whose work was not covered in this report.
Monday, May 19, 2008

The conference began on Monday with the program chair's remarks, delivered by Avi Rubin, as Patrick McDaniel was still en route to the conference. He started by speaking about the review process for the conference. Papers were assigned to members of the program committee by area, avoiding conflicts of interest. Each PC member received about twenty papers, and each paper was reviewed by three members of the committee. In general, to gain acceptance into the conference, a paper had to have at least one high-confidence review. Once all of the papers had been reviewed by the committee, lengthy deliberations ensued -- two weeks of email conversations and a lengthy review meeting to come to final decisions regarding the papers.
The end result of this process can be summarized with the following statistics. Out of 249 submissions, twenty-eight papers were accepted (11.2%). This is comparable to last year's program, in which twenty regular papers were accepted out of 246 submissions (8.1%). No short papers were accepted for this year's conference, however. Avi concluded by observing that this acceptance rate is beneficial for the speakers and authors, as tenure committees and managers look favorably on such numbers.
Awards

This year, three awards were given to select contributors. The best student paper award, which includes a cash prize, was given to Francis David, Ellick Chan, Jeffrey Carlyle, and Roy Campbell for their paper "Cloaker: Hardware Supported Rootkit Concealment". The best paper award was given to Daniel Halperin and his colleagues from the University of Massachusetts - Amherst (Thomas S. Heydt-Benjamin, Benjamin Ransford, Shane S. Clark, Benessa Defend, Will Morgan, Kevin Fu), Tadayoshi Kohno of the University of Washington, and William H. Maisel of Beth Israel Deaconess Medical Center and Harvard Medical School, for their paper "Pacemakers and Implantable Cardiac Defibrillators: Software Radio Attacks and Zero-Power Defenses". Finally, the IEEE Security and Privacy award was given to Saar Drimer, Steven J. Murdoch, and Ross Anderson for their paper "Thinking Inside the Box: System-Level Failures of Tamper Proofing".
First Session: Peering

The first presentation in this session was given by Haifeng Yu, regarding his group's work on defending social networks against sybil attacks. The title of the talk was "SybilLimit: A Near-Optimal Social Network Defense Against Sybil Attacks". Haifeng began by describing the motivating problem: sybil attacks are particularly troublesome in a decentralized environment, and prior results indicate that perfect defense is impossible without a central authority to tie identities to human beings. He went on to present SybilLimit, a protocol that leverages a key insight about social networks to place a bound on the number of accepted sybil nodes. He showed that for a network with one million nodes, SybilLimit reduces the number of accepted sybil nodes by a factor of approximately 200. Furthermore, in fast-mixing networks, the bounds provided by SybilLimit fall within a logarithmic factor of the optimal solution. Finally, Haifeng concluded his talk by presenting empirical evidence that real-world social networks are fast-mixing, making them ideal candidates for use with SybilLimit.
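As a rough illustration of the intuition (a toy sketch only, not the actual SybilLimit protocol, which relies on random routes, edge "tails", and a balance condition), the snippet below shows why fast mixing matters: short random walks started by two honest nodes in a well-connected honest region are very likely to end on a common edge, whereas walks that would have to cross the few attack edges into a sybil region rarely do.

```python
import random

# Toy honest region: a small, well-connected social graph (adjacency lists).
honest = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}

def random_walk_tails(graph, start, num_walks=50, walk_len=6):
    """Return the set of final edges ("tails") of short random walks from start."""
    tails = set()
    for _ in range(num_walks):
        prev, node = None, start
        for _ in range(walk_len):
            prev, node = node, random.choice(graph[node])
        tails.add(frozenset((prev, node)))
    return tails

def accept(graph, verifier, suspect):
    """Accept the suspect if its walk tails intersect the verifier's tails."""
    return bool(random_walk_tails(graph, verifier) &
                random_walk_tails(graph, suspect))

# Two honest nodes almost always share a tail; a sybil node hanging off a
# single attack edge would share one far less often.
print(accept(honest, 0, 3))
```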
The first question regarded the real-world datasets used by the authors for experimental validation. An audience member asked Haifeng how many nodes were removed while pre-processing the real-world datasets. Haifeng responded that only nodes with extremely high degree were removed, so the total number removed depended on the dataset; it tended to vary between ten and fifty percent of the nodes. He then pointed out that removing edges from the graphs would not reduce the mixing time.
The next question was whether it would be possible to take a snapshot of a network, along with all of the nodes suspected to be sybil, in order to verify the correctness of the solution. Haifeng said that such an exercise assumes that an authority is capable of correctly identifying which nodes are sybil. Assuming that this could be accomplished accurately, it would be an interesting experiment.
The second presentation was given by Parv Venkitasubramaniam, titled "Anonymous Networking with Minimum Latency in Ad-Hoc Networks". Parv opened by observing a trend toward ubiquitous wireless networks composed of self-configuring devices, and discussed the need for security and anonymity in such an environment. He proceeded to discuss the inherent tradeoff between resilience to timing-based traffic analysis attacks and quality of service as measured by latency. He then described the way in which anonymity is quantified in his work, and presented scheduling strategies that maximize this notion of anonymity, as well as a characterization of the performance penalties incurred. Parv concluded by hypothesizing that a more realistic model for the adversary might result in improved performance, and briefly talked about some future work in this area.
One of the audience members asked whether the proposed approach might increase brittleness with respect to forged packet attacks. Parv responded, acknowledging that this is indeed a concern, and that some of his previous work has addressed such attacks.
The first talk of the communications session, titled "Spot me if you can: Uncovering spoken phrases in encrypted VoIP", was given by Charles Wright of Johns Hopkins. Charles began by stating that VoIP offers quality comparable to, and better security than, typical land lines, although it may be possible to deduce some information from encrypted traffic by sampling certain characteristics. If the attacker's goal is to recover information about the word content of a VoIP stream, then there are considerable challenges to surmount; most notable are the large potential vocabulary and the natural variability of human speech. Charles proceeded with the claim that despite these challenges, such information can be deduced because the efficient variable-bitrate encoding used by VoIP encodes different phonemes at distinct bitrates. He then showed how a hidden Markov model can be used to recover spoken word content at recall rates of approximately 50% for reasonable precision rates. He concluded by pointing out that VoIP packets can be padded with null content to thwart such an attack.
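As a rough illustration of the inference step, the toy decoder below runs a Viterbi pass over a hidden Markov model whose states are phonemes and whose observations are packet sizes. The phonemes, packet sizes, and probabilities are all invented for illustration and do not come from the paper.

```python
# Toy HMM: three made-up phonemes, each tending to produce a characteristic
# packet size under a variable-bitrate codec. All numbers are illustrative.
phonemes = ["aa", "s", "t"]
start_p = {"aa": 0.4, "s": 0.3, "t": 0.3}
trans_p = {
    "aa": {"aa": 0.5, "s": 0.3, "t": 0.2},
    "s":  {"aa": 0.4, "s": 0.4, "t": 0.2},
    "t":  {"aa": 0.5, "s": 0.2, "t": 0.3},
}
emit_p = {                      # P(observed packet size | phoneme)
    "aa": {54: 0.7, 41: 0.2, 33: 0.1},
    "s":  {54: 0.1, 41: 0.7, 33: 0.2},
    "t":  {54: 0.1, 41: 0.2, 33: 0.7},
}

def viterbi(packet_sizes):
    """Return the most likely phoneme sequence for a list of packet sizes."""
    V = [{p: (start_p[p] * emit_p[p].get(packet_sizes[0], 1e-6), [p])
          for p in phonemes}]
    for size in packet_sizes[1:]:
        layer = {}
        for p in phonemes:
            layer[p] = max(
                (V[-1][q][0] * trans_p[q][p] * emit_p[p].get(size, 1e-6),
                 V[-1][q][1] + [p])
                for q in phonemes)
        V.append(layer)
    return max(V[-1].values())[1]

print(viterbi([54, 54, 41, 33]))   # most likely phoneme sequence
```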
Vern Paxson asked if one could order packets randomly to defend against such an attack. Charles agreed that such a defense would work, but would increase latency. Another conference attendee asked if the attack could be thwarted using non-technological measures, such as intentional voice modulation. Charles responded by saying that such a defense would probably work, and adding background noise to the VoIP payload would probably be effective as well. The last question from an attendee was about the effectiveness of the technique for pure word recall. Charles said that this problem was more challenging, and that his technique is not sufficient for it at the current time.
The next talk was also about VoIP, titled "Preserving Caller Anonymity in VoIP Networks". It was given by Mudhakar Srivatsa from the IBM T.J. Watson Research Center. Looking at anonymity networks using VoIP as an application, Mudhakar showed how timing-based analysis attacks can be used to infer the source of a route with high probability when only a small portion of the network is malicious. He then showed that it is impossible to preserve the shortest-path property of such a network while preserving caller anonymity, revealing a fundamental tradeoff between privacy and quality of service. The last part of his talk proposed random-walk techniques for establishing routes that preserve caller anonymity and can be tuned to achieve varying quality-of-service guarantees.
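The small sketch below (a toy overlay topology and routing logic, not the paper's algorithm) contrasts the two route-construction strategies: a shortest path minimizes latency but its deterministic structure aids source inference, while a random walk trades extra hops for caller anonymity.

```python
import random
from collections import deque

# Toy peer-to-peer VoIP overlay (adjacency lists).
graph = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D", "E"],
    "D": ["B", "C", "F"], "E": ["C", "F"], "F": ["D", "E"],
}

def shortest_path(src, dst):
    """BFS shortest path: lowest latency, but predictable and traceable."""
    prev, frontier = {src: None}, deque([src])
    while frontier:
        node = frontier.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in graph[node]:
            if nxt not in prev:
                prev[nxt] = node
                frontier.append(nxt)

def random_walk_path(src, dst, max_hops=50):
    """Random-walk route: longer on average, but harder to trace to the caller."""
    path, node = [src], src
    while node != dst and len(path) < max_hops:
        node = random.choice(graph[node])
        path.append(node)
    return path

print(shortest_path("A", "F"))      # e.g. ['A', 'B', 'D', 'F']
print(random_walk_path("A", "F"))   # typically longer, and different every call
```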
After the talk, Paul Syverson observed that when researching onion routing for the Tor anonymizing network, they looked at several alternatives to shortest path and random walk protocols, some of them similar to what Mudhakar presented in his talk. He expressed interest in discussing this further at a later time, as the similarities may be interesting.
The final presentation of the communications session was given by Mario Strasser, and it dealt with key establishment protocols for wireless networks that are resistant to radio jamming. Mario began by presenting the fundamental difficulty: establishing a shared secret key over a wireless link between two devices that do not already share any secrets. While current key establishment protocols depend on jamming-resistant communication, current anti-jamming techniques depend on the presence of established secret keys. Mario proposed the use of frequency hopping to counter jamming attacks in this setting, naming the technique "uncoordinated frequency hopping". The technique uses an ECC-based, station-to-station Diffie-Hellman key establishment protocol, and Mario presented numbers that demonstrate its feasibility in terms of both security and execution time.
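The simulation sketch below captures the core idea of uncoordinated frequency hopping; the channel count, jammer strength, and number of message fragments are assumptions made for illustration, not figures from the talk.

```python
import random

CHANNELS = 64          # available frequency channels (assumed)
JAMMED_PER_SLOT = 8    # channels the jammer can block in any one time slot (assumed)
FRAGMENTS = 20         # fragments the key-establishment messages are split into (assumed)

def slots_to_deliver():
    """Count time slots until every fragment has been received at least once.

    Sender and receiver hop among channels independently at random; a fragment
    gets through only when their channels coincide and are not being jammed.
    """
    received, slot = set(), 0
    while len(received) < FRAGMENTS:
        slot += 1
        fragment = slot % FRAGMENTS                 # sender cycles through fragments
        tx = random.randrange(CHANNELS)             # sender's channel this slot
        rx = random.randrange(CHANNELS)             # receiver's channel this slot
        jammed = set(random.sample(range(CHANNELS), JAMMED_PER_SLOT))
        if tx == rx and tx not in jammed:
            received.add(fragment)
    return slot

trials = [slots_to_deliver() for _ in range(100)]
print("average slots to deliver all fragments:", sum(trials) / len(trials))
```

Even against the jammer, all fragments eventually arrive; the cost is the large but bounded number of slots, which is the price of having no pre-shared hopping sequence.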
One of the conference attendees asked what would happen were the attacker to follow the frequency hopping protocol, mimicking one of the parties. Mario responded by stating that they did not consider this type of attack for the current work.
The first talk of the data session, titled "Casting out Demons: Sanitizing Training Data for Anomaly Sensors", was given by Gabriela Cretu. The talk addressed the problem of contaminated training data for anomaly-based intrusion detectors. More specifically, if real network or host event data is used to train an anomaly detector, and the data contains events corresponding to an attack, then the anomaly detector produced as a result of the training may fail to detect certain attacks. Gabriela proposed the addition of a sanitization phase to the anomaly detector training regimen, to remove these troublesome events from the training data. The proposed phase breaks the training data into several distinct slices that are then used to train a set of anomaly detectors. A voting scheme among the new detectors is then used to label certain parts of the training data as "attack data". She proceeded to show that the technique produces favorable results when existing sensors incorporate such a sanitization phase. Finally, Gabriela discussed the idea of distributed sanitization, where data from external networks and hosts is used to produce a better local model.
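The sketch below illustrates the voting idea with a deliberately simplistic "micro-model" (the set of distinct events seen in each slice) and an assumed voting threshold; the authors' actual sensors and voting scheme are considerably more sophisticated.

```python
def sanitize(training_events, num_slices=5, vote_threshold=0.5):
    """Drop events that most per-slice micro-models consider anomalous."""
    n = len(training_events)
    slice_len = max(1, n // num_slices)
    slices = [training_events[i:i + slice_len] for i in range(0, n, slice_len)]
    # Toy micro-model: the set of distinct events observed in one slice.
    models = [set(s) for s in slices]

    def anomalous_fraction(event):
        # Fraction of micro-models that never saw this event.
        return sum(1 for m in models if event not in m) / len(models)

    return [e for e in training_events if anomalous_fraction(e) < vote_threshold]

# Example: a rare attack event appears in only one slice and is voted out,
# while normal traffic present in most slices survives sanitization.
normal = ["GET /index"] * 95
attack = ["GET /../../etc/passwd"] * 5
print(set(sanitize(normal + attack)))   # {'GET /index'}
```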
The first question from a conference attendee was about periodic events, and whether or not they would be outvoted in such a scheme and therefore excluded from the anomaly detector's model. Gabriela responded by affirming that such events would indeed be counted as false positives. The next question was about the origins of the attack dataset, to which Gabriela informed us that the data came from the Columbia University network. The final question was about the true positive rate of anomaly detectors using the sanitization scheme, which was reported to be 100%. The attendee pointed out that it may be misleading to report such a true positive rate, as it implies that the detector is capable of catching all attacks. Gabriela responded by saying that the reported figure represents attacks that she could manually identify, which is really only one particular class of attack, and that she could not speak for the general case.
The first audience question was about the necessity of such a device in front of a typical IDS: if the IDS already has to reconstruct streams, then why not just configure an IDS to do what RobotNorm does? Mythili responded by saying that the idea is to remove the need to maintain a large amount of stream state from the IDS, thus simplifying its function and design. The next attendee asked what would happen if ACK packets are spoofed from within the network. Mythili said that in the attack model used for this work, one side of the stream must be honest. If both sides collude, then the problem changes entirely, and this is a topic for separate work.
The first question came from Somesh Jha, who asked why similar research doesn't make use of background knowledge to the extent that his own algorithm does. Arvind stated that one potential reason for this condition is that it makes the problem substantially more complex, but that at this point it is becoming a necessity. The next attendee asked whether any of his "identified" Netflix customers might have been a fluke. Arvind replied that the next highest match was 28 standard deviations apart in the worst case, so the probability of this being true is extremely low. One attendee observed that if we want to protect our privacy in such datasets, then we can inject our own randomness. Arvind pointed out that this severely reduces the utility of the services that are based on the data.
This session began with a talk about the security of implantable pacemakers and cardiac defibrillators, presented by Ben Ransford. He described several possible attacks on these devices, then pointed to a fundamental difficulty that allows for these attacks: authentication on these devices is hard, as there are a large number of potential accessors, and it is not acceptable to react to authentication failure by denying access. This is because a doctor or paramedic may not possess the key to such a device, yet it is imperative that they have access. Moreover, key distribution would be an immensely difficult task, since the owner of such a device can travel to arbitrary locations and there is no way to know a priori who will need the key. Ben finished by presenting a "defensive direction" - a mechanism that defends the device without drawing on its battery. His proposed solution, termed "WISPer", notifies the owner of the implanted device of attempted access using physical sensation.
Jon Giffin asked Ben how he discovered the protocol for the device. Ben responded that they did not parse the protocol, but simply replayed transactions that they had previously observed. Another attendee was curious about the willingness of the medical community to collaborate on such an endeavor. Ben reported that their group received an overwhelmingly positive response from the medical community. The final question from an attendee was whether a more blunt attack could be perpetrated against the implantable devices. Ben remarked that there are always more blunt attacks available, but as the devices become more sophisticated, he sees it as important that the security community consider correspondingly sophisticated attacks and defenses.
Somesh Jha asked David what would happen if he obfuscated a patch just enough to fool the binary differencer, but not enough to introduce significant runtime overhead. David stated that this might be more difficult than Somesh thinks, as the binary differencer he used is relatively sophisticated. Peter Chen asked if there were any differences between the exploits he generated, and those that are publicly released. David confirmed that this was the case. Another attendee pointed out that this could lead to a war of escalation, where patch distributors test a patch against all blackhat generation techniques, so that attackers must rely on manual analysis to generate an exploit. David then agreed that this may be the case, and observed that in the past the security community has not done a great job estimating the capabilities of the attacker.
One of the attendees asked Michael whether it might be possible to read reflections that bounce off of two surfaces. Michael replied that before this work he would have said that it was impossible, whereas now he will only say that it is unlikely. Louis Kruger asked about the possibility of observing moving figures, to which Michael responded that motion blur poses a significant difficulty for this type of work. Crispin Cowan suggested vibrating the screen lightly, to bring the problem of motion blur back into the equation. Michael remarked that this was certainly a creative idea, and may work in some situations. One of the attendees asked Michael whether his bald head might pose a security risk, to which Michael responded that it almost certainly would.
Hao Chen asked Marco if he had considered techniques similar to those used in speech recognition, such as hidden Markov models. Marco confirmed that his group had considered these techniques, but the imprecision from the vision phase hindered them. Several attendees asked questions about the experiments presented by Marco, to which Marco pointed out that his current results are preliminary, and there is still future work to be done on the problem.
The first talk of the miscellaneous session was given by Randy Smith from the University of Wisconsin on the problem of matching regular expression signatures on high-speed network links. Randy presented a technique, dubbed "Extended Finite State Automata", that makes use of a small amount of auxiliary memory to match regular languages at nearly the speed of deterministic finite automata, while requiring approximately as much memory as non-deterministic finite automata. He demonstrated the effectiveness of the technique on real network data.
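The toy example below illustrates the idea (it is not the authors' XFA construction): a signature such as "an 'a', then at least N arbitrary bytes, then a 'b'" causes severe state blow-up in a pure DFA, but a tiny automaton augmented with one counter variable handles it in constant space.

```python
def match_a_gap_b(data, gap=100):
    """Match the toy signature: 'a', then at least `gap` bytes, then 'b'."""
    seen_a = False      # the automaton's only state bit
    counter = 0         # auxiliary variable held in a small amount of memory
    for byte in data:
        if not seen_a:
            if byte == "a":
                seen_a, counter = True, 0
        else:
            counter += 1
            if byte == "b" and counter > gap:
                return True
    return False

print(match_a_gap_b("a" + "x" * 150 + "b"))   # True
print(match_a_gap_b("a" + "x" * 10 + "b"))    # False
```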
One of the attendees asked Randy what would happen if a NIDS could no longer keep track of all possible offending patterns, but instead had to whitelist good patterns. Randy replied that deep packet inspection might be a possible solution to this problem, as more sophisticated characteristics of the packet are being considered. Another attendee asked Randy if his technique might be capable of recognizing languages more complex than regular languages. Randy replied that he had not yet looked into this.
One of the attendees asked Louis whether his protocols are susceptible to covert timing attacks. Louis replied that he did not see this as a problem. Another attendee asked Louis how his algorithms compare with non-privacy-preserving algorithms in terms of performance, to which he responded that there was no comparison - his algorithms perform much more slowly.
Bryan Payne started this session off with his talk entitled "Lares: An Architecture for Secure Active Monitoring Using Virtualization". Bryan pointed out that active monitoring is critical to modern systems security analysis, but malware might tamper with the hooks on which active monitoring systems rely. To address this problem, Bryan proposed moving the active monitoring infrastructure further out of reach of malware, to the virtualization layer. Bryan presented his system, Lares, which does precisely this. Lares resides outside the guest, in a separate security VM, and installs hooks in the guest VM to perform active monitoring. Page-granularity write permissions, with additional byte-granularity checks, are used to protect memory and thereby ensure that malware does not overwrite the hooks placed by Lares. He went on to claim that Lares hooks perform within ten microseconds of a typical kernel hook in a traditional active monitoring system.
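The sketch below models that protection check in a few lines; the addresses and data structures are hypothetical and are only meant to convey the page-then-byte granularity logic, not Lares's actual implementation.

```python
PAGE_SIZE = 4096
# (guest physical address, length) of each protected hook -- hypothetical values.
protected_hooks = [(0x10000040, 5)]
protected_pages = {addr // PAGE_SIZE for addr, _ in protected_hooks}

def on_guest_write(addr, length):
    """Called when a guest write hits a write-protected page."""
    if addr // PAGE_SIZE not in protected_pages:
        return "allow"                      # page holds no hooks at all
    for hook_addr, hook_len in protected_hooks:
        if addr < hook_addr + hook_len and hook_addr < addr + length:
            return "block"                  # write overlaps a hook: deny it
    return "allow"                          # same page, but outside the hook bytes

print(on_guest_write(0x10000040, 1))   # block
print(on_guest_write(0x10000100, 4))   # allow
```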
Crispin Cowan asked Bryan what he does to prevent an attacker from trojanizing the whole system. Bryan replied that one of his base assumptions is that Lares is installed from a clean boot. Francis David then asked why all of the monitoring infrastructure resides in the virtualization layer, to which Bryan responded that placing it in this layer makes things easier and cleaner, so it is a good design decision.
The second talk of this session was given by R. Sekar and Weiqing Sun. Sekar started off by stating that the only correct way to deal with malware is to consider information flow-based integrity. This is based on the assumption that system integrity is preserved if critical subjects are never influenced by untrustworthy objects, which essentially validates the common programmer assumption that the execution environment is benign. He then discussed how the construction of policies based on this principle can be automated by mapping entries in an access log to a set of policy choices. Weiqing Sun then provided the details of their policy enforcement framework, and presented numbers regarding the effectiveness of their system.
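The check below is a minimal Biba-style sketch of the integrity principle Sekar described, with hard-coded, made-up subject and object labels; the actual system instead derives both labels and policy choices from access logs.

```python
HIGH, LOW = "high", "low"

subject_integrity = {"sshd": HIGH, "browser": LOW}           # illustrative labels
object_integrity  = {"/etc/passwd": HIGH, "/tmp/dl.bin": LOW}

def check(subject, op, obj):
    """Allow an access only if it preserves the integrity of critical subjects."""
    s, o = subject_integrity[subject], object_integrity[obj]
    if op == "read" and s == HIGH and o == LOW:
        return "deny"    # high-integrity subject consuming untrusted input
    if op == "write" and s == LOW and o == HIGH:
        return "deny"    # untrusted subject modifying a critical object
    return "allow"

print(check("sshd", "read", "/tmp/dl.bin"))      # deny
print(check("browser", "write", "/etc/passwd"))  # deny
print(check("sshd", "write", "/etc/passwd"))     # allow
```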
One of the attendees pointed out that there exists an asymmetry in the manner in which violations of read-down and write-up policies are handled. Sekar pointed out that in one case, high-integrity applications are performing the violation, so it's generally OK to let them continue. In the other case, untrusted low-integrity apps are the problem, and are dealt with in a more conservative manner.
The last talk of the defenses session was given by Periklis Akritidis, titled "Preventing memory error exploits with WIT". Periklis presented a compiler-based system for preventing memory corruption attacks, where instructions are broken into equivalence classes based on which memory regions they access. These equivalence classes are determined using static information, and runtime checks are inserted into key locations to ensure that instructions from a particular equivalence class only touch the corresponding memory. Periklis claimed that the technique is backward compatible, detects a number of memory corruption attacks, and does not result in substantial performance overhead.
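As a rough illustration of the flavor of the runtime check (a toy model, not WIT's actual compiler instrumentation), the sketch below gives each guarded write a statically assigned color and allows it to touch only objects of the same color.

```python
# Colors assigned by an (assumed) static points-to analysis: the write in
# strcpy at line 42 may legitimately reach only the buffer, never the
# saved return address.
object_color = {"buf": 1, "ret_addr": 2}
instruction_color = {"strcpy@line42": 1}

def checked_write(instr, target):
    """Runtime check inserted before a guarded write instruction."""
    if object_color[target] != instruction_color[instr]:
        raise RuntimeError(f"write check failed: {instr} may not write {target}")
    print(f"{instr} writes {target}: ok")

checked_write("strcpy@line42", "buf")            # legitimate write
try:
    checked_write("strcpy@line42", "ret_addr")   # an overflow redirected here
except RuntimeError as err:
    print(err)
```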
One attendee noted that the analysis is only as powerful as the static points-to analysis on which it depends. Periklis acknowledged this remark, and pointed out that his evaluation provided promising results.
The first talk of this session was given by Steven Murdoch, on the Chip-and-PIN technology that is finding widespread use in Europe and Canada. Steven first presented the protection mechanisms present in standard Chip-and-PIN technology, gave a broad overview of the successful attack that his group perpetrated on Chip-and-PIN technology, and presented video evidence of the financial industry's unwillingness to acknowledge the serious vulnerabilities present in these systems. Crispin Cowan pointed out that in North America, the burden of liability for financial fraud falls squarely on the bank, and asked how Canada will be affected with the adoption of Chip-and-PIN. An attendee from Canada who had recently received a Chip-and-PIN enabled card informed the audience that the cards come with a new customer agreement, which requires the customer to sign his rights away.
The second talk of the Attacks II session was given by Francis David, titled "Cloaker: Hardware Supported Rootkit Concealment". Francis began with a description of the intrusion workflow, and observed that evolution in rootkit technology has been driven by an arms race in recent years. He then presented Cloaker, a rootkit that uses hardware support to conceal itself, representing the logical next step in the rootkit arms race. He then presented a few case-study payloads that utilize Cloaker. He finished with the take-home point that the problem of system integrity cannot be solved without considering the hardware, as Cloaker is only one example of a system that exploits a gap between software systems and architecture. One of the attendees asked Francis if he thought attackers were capable of devising ways to hijack control flows faster than defenders can find ways of checking for subversion. Francis admitted that it is indeed an easier task to write checks than to come up with new subversion techniques.
The final talk of the session, titled "Predictable Design of Network-Based Covert Communication Systems", was given by Ron Smith. Ron started off with the hypothesis that covert communications systems based upon exploitable low-bandwidth covert channels can be designed with mathematical predictability and precision. He presented three quantifiable properties of a network-based covert channel - probability of detection, system efficiency, and communication reliability expressed as a bit error rate. He then gave a formal characterization of covert channel detectability and an expression for covert channel efficiency.
John Nolan asked Ron if there is any hope of calculating channel capacity. Ron responded that he had attempted to do this in his thesis, but it is actually quite a difficult problem. Another attendee noted that the adversarial model selected for this work was perhaps unrealistically powerful, to which Ron replied that they had made paranoid assumptions due to conservative principles of network security.