Submit electronically via EasyChair.
All deadline times are Anywhere on Earth (AoE).
Static analysis is widely recognized as a fundamental tool for program verification, bug detection, compiler optimization, program understanding, and software maintenance. The series of Static Analysis Symposia has served as the primary venue for the presentation of theoretical, practical, and application advances in the area. The 26th Static Analysis Symposium, SAS 2019, will be held in Porto, Portugal. Previous symposia were held in Freiburg, New York, Edinburgh, Saint-Malo, Munich, Seattle, Deauville, Venice, Perpignan, Los Angeles, Valencia, Kongens Lyngby, Seoul, London, Verona, San Diego, Madrid, Paris, Santa Barbara, Pisa, Aachen, Glasgow, and Namur.
The technical program for SAS 2019 will consist of invited lectures and presentations of refereed papers. Contributions are welcomed on all aspects of static analysis, including, but not limited to:
New in 2019, special sessions will be organized around a trending topic in static analysis. For SAS 2019, we especially solicit Trends in Static Analysis contributions around the emerging convergence of static analysis and machine learning. Trends contributions are welcome on this convergence broadly construed, including, but not limited to:
Trends contributions will be refereed in the same manner and with the same standards as other contributions.
Abstract. Static analyses predominantly use discrete modes of logical reasoning to derive facts about programs. Despite significant strides, this form of reasoning faces new challenges in modern software applications and practices. These challenges concern not only traditional analysis objectives such as scalability, accuracy, and soundness, but also emerging ones such as tailoring analysis conclusions to the relevance or severity of particular code changes and to the needs of individual programmers.
We advocate seamlessly extending static analyses to leverage continuous modes of logical reasoning in order to address these challenges. Central to our approach is expressing the specification of the static analysis in a constraint language that is amenable to computing provenance information. We use the logic programming language Datalog as a proof of concept for this purpose. We illustrate the benefits of exploiting provenance even in the discrete setting. Moreover, by associating weights with constraints, we show how to amplify these benefits in the continuous setting.
We also present open problems in aspects of analysis usability, language expressiveness, and solver techniques. The overall process constitutes a fundamental rethinking of how to design, implement, deploy, and adapt static analyses.
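As an illustration of the kind of Datalog-style analysis with provenance sketched in the abstract, the following toy Python fixpoint computes graph reachability (a stand-in for a real static analysis) while recording, for each derived fact, the rule instance that produced it. This is not the talk's actual system; the rules, representation, and provenance encoding are invented for illustration.

```python
# Illustrative sketch only: a Datalog-style reachability analysis, computed
# as a naive fixpoint, where each derived fact carries its provenance.
#
# Rules (in Datalog notation):
#     path(X, Y) :- edge(X, Y).
#     path(X, Z) :- path(X, Y), edge(Y, Z).

def reachability_with_provenance(edges):
    """Derive all path(x, y) facts from edge facts, tracking provenance."""
    facts = {}  # path fact -> premises that derived it
    for (x, y) in edges:
        facts[(x, y)] = ("edge", (x, y))
    changed = True
    while changed:  # iterate until no new facts are derivable
        changed = False
        for (x, y) in list(facts):
            for (a, z) in edges:
                if a == y and (x, z) not in facts:
                    # provenance: derived from a shorter path plus one edge
                    facts[(x, z)] = ("rule", (x, y), ("edge", (y, z)))
                    changed = True
    return facts

facts = reachability_with_provenance({(1, 2), (2, 3), (3, 4)})
print(sorted(facts))   # every derivable path fact
print(facts[(1, 4)])   # why path(1, 4) holds
```

Attaching weights to the rule instances recorded in the provenance, as the abstract proposes, would then allow ranking or reweighting derived facts rather than treating them as purely discrete conclusions.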
Bio. Mayur Naik is an Associate Professor of Computer and Information Science at the University of Pennsylvania. His current research interests lie at the intersection of machine learning and symbolic reasoning, with applications to computer-aided approaches for improving software quality and programmer productivity. His contributions span the areas of static analysis, constraint solving, testing, security, concurrency, and mobile computing. He received a Ph.D. in Computer Science from Stanford University in 2008. Previously, he was a researcher at Intel Labs, Berkeley from 2008 to 2011, and a faculty member at Georgia Tech from 2011 to 2016.
Abstract. Data science software is playing an increasingly important role in every aspect of our daily lives and is even slowly creeping into mission-critical scenarios, despite often being opaque and unpredictable. In this paper, we will discuss some key challenges and a number of research questions that we are currently addressing in developing static analysis methods and tools for data science software.
Bio. Caterina Urban is a Research Scientist at INRIA and École Normale Supérieure in Paris. Her main research interest is the development of formal methods and static analysis tools to enhance the reliability of computer software and to aid the understanding of complex software systems. Prior to INRIA, she was a postdoc at ETH Zurich. She received her PhD in Computer Science with full marks and honors from École Normale Supérieure, where she was advised by Radhia Cousot and Antoine Miné. During her PhD she spent five months as a visiting research scholar at the NASA Ames Research Center and Carnegie Mellon University in California. She holds a Bachelor's and a Master's degree in Computer Science from the Università degli Studi di Udine in Italy, where she was born and grew up.
Abstract. Fueled by massive amounts of data, models produced by machine-learning (ML) algorithms, especially deep neural networks, are being used in diverse domains where trustworthiness is a concern, including automotive systems, finance, health care, natural language processing, and malware detection. Of particular concern is the use of ML algorithms in cyber-physical systems (CPS), such as self-driving cars and aviation, where an adversary can cause serious consequences.
However, existing approaches to generating adversarial examples and devising robust ML algorithms mostly ignore the semantics and context of the overall system containing the ML component. For example, in an autonomous vehicle using deep learning for perception, not every adversarial example for the neural network leads to a harmful consequence. Moreover, one may want to prioritize the search for adversarial examples towards those that significantly modify the desired semantics of the overall system. Along the same lines, existing algorithms for constructing robust ML algorithms ignore the specification of the overall system. In this talk, we argue that the semantics and specification of the overall system have a crucial role to play in this line of research. We present preliminary research results that support this claim.
Bio. Somesh Jha received his B.Tech from the Indian Institute of Technology, New Delhi in Electrical Engineering. He received his Ph.D. in Computer Science from Carnegie Mellon University under the supervision of Prof. Edmund Clarke (a Turing Award winner). Currently, Somesh Jha is the Lubar Professor in the Computer Sciences Department at the University of Wisconsin (Madison). His work focuses on the analysis of security protocols, survivability analysis, intrusion detection, formal methods for security, and analyzing malicious code. Recently, he has also worked on privacy-preserving protocols and adversarial ML (AML). Somesh Jha has published several articles in highly refereed conferences and prominent journals. He has won numerous best-paper and distinguished-paper awards. Prof. Jha also received the NSF CAREER award. Prof. Jha is a fellow of the ACM and the IEEE.
Abstract. On the surface, modern-day machine learning and program verification tools appear to have very different, even contradictory, goals: machine learning emphasizes generality of the hypotheses it discovers over soundness of the results it produces, while program verification ensures correctness of the claims it makes, even at the expense of the generality of the problems it can handle.
Nonetheless, it would also appear that machine learning pipelines have much to offer program verifiers precisely because they are structured to extract useful, albeit hidden, information from their subject domain. When applied to software, data-driven methods may help discover facts and properties critical to program verification that would otherwise require tedious human involvement to state and prove. Conversely, program verification methods would seem to have much to offer machine learning pipelines. Neural networks, the building blocks of modern ML methods, are opaque and uninterpretable, characteristics that make them vulnerable to safety violations and adversarial attacks. Suitably adapted verification methods may help to identify problematic behavior in these networks, an indispensable need in safety-critical environments.
This talk explores two specific instances to support these claims. Our first example considers how machine learning tools can facilitate solutions to Constrained Horn Clauses (CHCs), a popular formalism for encoding verification conditions that capture sophisticated safety properties. We demonstrate how data-driven techniques can be used for efficient invariant discovery over complex recursive CHCs in which the structure of the discovered invariants is drawn from expressive feature spaces (e.g., polyhedral domains). Our second example considers how program verification and synthesis tools can be used to guarantee the safety of reinforcement learning-based neural controllers. We suggest a black-box technique that uses the neural network as an oracle to guide the search for a similarly behaving deterministic program, more amenable to verification, that is guaranteed to satisfy a desired safety specification. This program can then be effectively used within a runtime monitoring framework as a safety shield, overriding proposed actions of the network whenever such actions can cause the system to violate safety conditions.
The results of these investigations give us confidence that there are significant synergies to be had by judiciously combining learning and verification techniques. We envision learners as a mechanism to complement the expressivity of program verifiers by enabling improved efficiency and generality, while verifiers can be used to guarantee the safety of machine learning artifacts without compromising accuracy and utility.
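The safety-shield pattern described in this abstract can be sketched in a few lines. The dynamics, policies, and safety bound below are invented for illustration and are not the talk's actual system: a runtime monitor simulates the effect of the network's proposed action and falls back to a deterministic policy (assumed verified) whenever the proposal would leave the safe region.

```python
# Hypothetical sketch of a runtime safety shield for a learned controller.
# All names and dynamics here are illustrative assumptions.

SAFE_LIMIT = 10.0  # invariant to enforce: |state| < SAFE_LIMIT

def neural_policy(state):
    # Stand-in for an opaque learned controller (may propose unsafe actions).
    return 3.0 if state >= 0 else -3.0

def verified_policy(state):
    # Stand-in for the synthesized deterministic program, assumed proven
    # to keep |state| < SAFE_LIMIT.
    return -0.5 * state

def step(state, action):
    # Toy one-dimensional dynamics.
    return state + action

def shielded_action(state):
    """Use the network's action unless it would leave the safe region."""
    proposed = neural_policy(state)
    if abs(step(state, proposed)) < SAFE_LIMIT:
        return proposed            # network action is safe from this state
    return verified_policy(state)  # override with the verified fallback

# The shield keeps the closed-loop system inside the safe region.
state = 8.0
for _ in range(5):
    state = step(state, shielded_action(state))
    assert abs(state) < SAFE_LIMIT
```

The design point is that only `verified_policy` and the safety check need verification; the neural network itself remains a black box consulted as an oracle, as the abstract suggests.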
Bio. Suresh Jagannathan is the Samuel D. Conte Professor of Computer Science at Purdue University. His research interests are in programming languages generally, with a specific focus on program verification, functional programming, and concurrent and distributed systems. From 2013 to 2016, he served as a program manager in the Information Innovation Office at DARPA, where he conceived and led programs in probabilistic reasoning and machine learning, program synthesis, and adaptive computing. He has also been a visiting faculty scholar at Cambridge University, where he spent a sabbatical year in 2010; and, prior to joining Purdue, he was a senior research scientist at the NEC Research Institute in Princeton, N.J. He received his Ph.D. from MIT.
Invited speakers for SAS 2019 are kindly sponsored by Google.
Submit electronically via the EasyChair submission page.
Submissions can address any programming paradigm, including concurrent, constraint, functional, imperative, logic, object-oriented, aspect, multi-core, distributed, and GPU programming.
As in previous years, we encourage authors to submit a virtual machine image containing any artifacts and evaluations presented in the paper. The goal of the artifact submissions is to strengthen our field’s scientific approach to evaluations and reproducibility of results. The virtual machines will be archived on a permanent Static Analysis Symposium website to provide a record of past experiments and tools, allowing future research to better evaluate and contrast existing work.
Artifact submission is optional. We accept only virtual machine images that can be processed with VirtualBox. The artifact should come with a virtual machine (VM) image and step-by-step instructions:
Please follow the instructions below to submit your artifact:
SAS 2019 will use a lightweight double-blind reviewing process. Following this process means that reviewers will not see the authors’ names or affiliations as they initially review a paper. The authors’ names will then be revealed to the reviewers only once their reviews have been submitted.
To facilitate this process, submitted papers must adhere to the following:
During the author response period, authors will be able to read reviews and respond to them as appropriate.
All workshops will be held on October 8, 2019.
Since 2014, the program committee of each SAS conference has selected a paper for the Radhia Cousot Young Researcher Best Paper Award, in memory of Radhia Cousot, in recognition of her fundamental contributions to static analysis and of her role as one of the main promoters and organizers of the SAS series of conferences.