Abstract — Despite the difficulty of measuring progress in adversarial environments, the field of adversarial machine learning is undeniably making progress. After briefly considering the ways in which we have succeeded, this talk argues that there are ways in which the entire field, both attackers and defenders, could make more rapid and meaningful progress.
Biography — Nicholas Carlini is a research scientist at Google Brain. He studies the security and privacy of machine learning, for which he received best paper awards at ICML and IEEE S&P. He received his PhD from the University of California, Berkeley in 2018.
Abstract — Static and dynamic program analysis is a flourishing area of programming language research. However, existing analyses often fail to use the ambiguous but readily available information about program behavior that software engineers routinely rely on. Recently, thanks to advances in machine learning, and in deep learning in particular, a promising new kind of learned code analysis has emerged that aims to fuse all available information and reason probabilistically about code.
In this talk, I will give a brief overview of research at the intersection of machine learning and programming languages and discuss work indicating that the "soft" aspects of source code contain useful information that code analyses can exploit. Then, I will focus on some recent advances in learnable code analyses. In particular, by representing the relationships among program elements as graphs, we can draw on a powerful set of deep learning algorithms that allow us to learn from "big code" to perform statistical static analyses and find real-life bugs. I will conclude by discussing open problems and challenges in this area.
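To make the graph idea concrete, here is a minimal, self-contained sketch of relation-aware message passing over a toy program graph. It is an illustration only, not the system discussed in the talk: the node set, relation labels, and random weights are invented for this example, and a real learned analysis (e.g., a gated graph neural network trained on "big code") would learn these parameters from data.

    import numpy as np

    rng = np.random.default_rng(0)
    dim = 8

    # Toy program graph for the snippet `x = a + b; y = x * a`.
    nodes = ["a", "b", "x", "y", "a + b", "x * a"]
    idx = {n: i for i, n in enumerate(nodes)}

    # Directed, labeled edges between program elements (a simplified relation set).
    edges = [
        ("a", "a + b", "operand"), ("b", "a + b", "operand"),
        ("a + b", "x", "computes"),
        ("x", "x * a", "operand"), ("a", "x * a", "operand"),
        ("x * a", "y", "computes"),
        ("x", "x * a", "last_use"),  # a dataflow-style relation
    ]
    relations = sorted({r for _, _, r in edges})

    # Random node embeddings and one weight matrix per relation
    # (all of these would be learned in a real system).
    h = rng.normal(size=(len(nodes), dim))
    W = {r: rng.normal(scale=0.3, size=(dim, dim)) for r in relations}
    W_self = rng.normal(scale=0.3, size=(dim, dim))

    def message_passing_step(h):
        """One round of relation-aware message passing (a simplified GGNN-style update)."""
        msg = h @ W_self.T           # each node keeps a transformed copy of its own state
        for src, dst, rel in edges:
            msg[idx[dst]] += W[rel] @ h[idx[src]]
        return np.maximum(msg, 0.0)  # ReLU nonlinearity

    for _ in range(3):               # a few propagation rounds
        h = message_passing_step(h)

    # Each node vector now summarizes its graph neighborhood; a downstream
    # predictor (e.g., a variable-misuse detector) would consume these vectors.
    print(h[idx["y"]])

After a few propagation rounds, information from related program elements (operands, definitions, dataflow neighbors) is mixed into each node's vector, which is what lets a downstream classifier make predictions that depend on non-local program structure.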
Biography — I am a researcher at Microsoft Research, Cambridge, UK. My research is at the intersection of machine learning, natural language processing, and software engineering. My aim is to combine the rich structural aspects of programming languages with machine learning to create better coding tools for end users and developers, while using problems in this area to motivate machine learning research. I have published in both machine learning and software engineering conferences and recently coauthored a survey on machine learning for source code. I obtained my PhD from the University of Edinburgh, advised by Dr. Charles Sutton.
Deep learning and security have made remarkable progress in recent years. Neural networks have been recognized as an essential tool for security in academia and industry, for example, for detecting attacks, analyzing malicious code, or uncovering vulnerabilities in software. At the same time, the security of deep learning itself has come into research focus, and novel types of attacks against neural networks have been explored, such as adversarial perturbations, neural backdoors, and membership inference attacks.
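As a concrete illustration of one of these attack classes, the sketch below applies the classic fast gradient sign method (FGSM) of adversarial perturbation to a toy logistic-regression "model". The weights and input are randomly generated for illustration only; the point is the attack pattern itself: perturb the input by epsilon times the sign of the input gradient of the loss.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy stand-in for a trained model: logistic regression with fixed weights.
    w = rng.normal(size=16)
    b = 0.1

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def input_gradient(x, y):
        """Gradient of the cross-entropy loss with respect to the input x.
        For logistic regression: d loss / d x = (sigmoid(w.x + b) - y) * w."""
        return (sigmoid(w @ x + b) - y) * w

    # A "clean" input, labeled with the model's own prediction.
    x = rng.normal(size=16)
    y = 1.0 if sigmoid(w @ x + b) >= 0.5 else 0.0

    # FGSM: take one epsilon-bounded step in the direction that increases the loss.
    epsilon = 0.25
    x_adv = x + epsilon * np.sign(input_gradient(x, y))

    print("clean score:      ", sigmoid(w @ x + b))
    print("adversarial score:", sigmoid(w @ x_adv + b))

Even this small bounded perturbation moves the model's score sharply away from the correct class, which is the core phenomenon behind adversarial examples against neural networks.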
This workshop strives to bring these two complementary views together by (a) exploring deep learning as a tool for security and (b) investigating the security of deep learning. The workshop is aimed at academic and industrial researchers.
DLS seeks contributions on all aspects of deep learning and security. Topics of interest include (but are not limited to):
Deep Learning
Computer Security
You are invited to submit papers of up to six pages, plus one page for references. To be considered, papers must be received by the submission deadline (see Important Dates). Submissions must be original work and may not be under submission to another venue at the time of review.
Papers must be formatted for US letter (not A4) paper. The text must be set in a two-column layout, with columns no more than 9.5 in. tall and 3.5 in. wide, in Times font at 10 points or larger, with 11-point or larger line spacing. Authors are strongly encouraged to use the latest IEEE conference proceedings templates. Failure to adhere to the page limit and formatting requirements is grounds for rejection without review.
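For authors starting from scratch, a skeleton along the following lines satisfies these requirements. This is a minimal sketch assuming the standard IEEEtran class; the author and email fields are placeholders, and the official IEEE templates remain the authoritative reference.

    \documentclass[conference]{IEEEtran}
    % The `conference` option of the standard IEEEtran class produces the
    % required US-letter, two-column, 10-point Times layout.

    \begin{document}

    \title{Your DLS 2019 Submission}
    \author{\IEEEauthorblockN{Author Name}
    \IEEEauthorblockA{Affiliation \\ email@example.org}}
    \maketitle

    \begin{abstract}
    ...
    \end{abstract}

    % Body text: up to six pages, plus one page for references.

    \end{document}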
For any questions, contact the workshop organizers at dls2019@sec.tu-bs.de
All accepted submissions will be presented at the workshop and included in the IEEE workshop proceedings. Due to time constraints, accepted papers will be selected for presentation as either a talk or a poster, based on their review score and novelty. Nonetheless, all accepted papers should be considered of equal importance.
One author of each accepted paper is required to attend the workshop and present the paper for it to be included in the proceedings.
Submissions should be made online at https://dls2019.sec.tu-bs.de.