26th Static Analysis Symposium

Part of the 3rd World Congress on Formal Methods
Porto, Portugal

Important Dates

Abstract Submission
Monday, April 22, 2019 (firm)
Paper Submission
Thursday, April 25, 2019 (firm; extended from Thursday, April 18, 2019)
Artifact Submission
Thursday, April 25, 2019 (firm)
Author Response
Friday-Monday, May 31-June 3, 2019
Notification
Friday, June 14, 2019
Early Registration
Tuesday, September 10, 2019
Conference Program
Wednesday-Friday, October 9-11, 2019

Submit electronically via EasyChair.

Check out the program and register via FMWeek.

All deadline times are AoE.

About

Static analysis is widely recognized as a fundamental tool for program verification, bug detection, compiler optimization, program understanding, and software maintenance. The series of Static Analysis Symposia has served as the primary venue for the presentation of theoretical, practical, and application advances in the area. The 26th Static Analysis Symposium, SAS 2019, will be held in Porto, Portugal. Previous symposia were held in Freiburg, New York, Edinburgh, Saint-Malo, Munich, Seattle, Deauville, Venice, Perpignan, Los Angeles, Valencia, Kongens Lyngby, Seoul, London, Verona, San Diego, Madrid, Paris, Santa Barbara, Pisa, Aachen, Glasgow, and Namur.

Topics

The technical program for SAS 2019 will consist of invited lectures and presentations of refereed papers. Contributions are welcomed on all aspects of static analysis, including, but not limited to:

Abstract domains
Abstract interpretation
Automated deduction
Data flow analysis
Debugging
Deductive methods
Emerging applications
Model checking
Program optimizations and transformations
Program synthesis
Program verification
Security analysis
Tool environments and architectures
Theoretical frameworks
Type checking

New in 2019, special sessions will be organized around a trending topic in static analysis. For SAS 2019, we especially solicit Trends in Static Analysis contributions around the emerging convergence of static analysis and machine learning. Trends contributions are welcome on this convergence broadly construed, including, but not limited to:

Scaling static analysis to "big code"
Data-driven static analysis
Assuring machine learning with static analysis

Trends contributions will be refereed in the same manner and with the same standards as other contributions.

Invited Speakers

  • Mayur Naik (University of Pennsylvania)
    Rethinking Static Analysis by Combining Discrete and Continuous Reasoning

    Abstract. Static analyses predominantly use discrete modes of logical reasoning to derive facts about programs. Despite significant strides, this form of reasoning faces new challenges in modern software applications and practices. These challenges concern not only traditional analysis objectives such as scalability, accuracy, and soundness, but also emerging ones such as tailoring analysis conclusions based on relevance or severity of particular code changes, and needs of individual programmers.

    We advocate seamlessly extending static analyses to leverage continuous modes of logical reasoning in order to address these challenges. Central to our approach is expressing the specification of the static analysis in a constraint language that is amenable to computing provenance information. We use the logic programming language Datalog as a proof of concept for this purpose. We illustrate the benefits of exploiting provenance even in the discrete setting. Moreover, by associating weights with constraints, we show how to amplify these benefits in the continuous setting.

    We also present open problems in aspects of analysis usability, language expressiveness, and solver techniques. The overall process constitutes a fundamental rethinking of how to design, implement, deploy, and adapt static analyses.

    Bio. Mayur Naik is an Associate Professor of Computer and Information Science at the University of Pennsylvania. His current research interests lie at the intersection of machine learning and symbolic reasoning, with applications to computer-aided approaches for improving software quality and programmer productivity. His contributions span the areas of static analysis, constraint solving, testing, security, concurrency, and mobile computing. He received a Ph.D. in Computer Science from Stanford University in 2008. Previously, he was a researcher at Intel Labs, Berkeley from 2008 to 2011, and a faculty member at Georgia Tech from 2011 to 2016.

  • Caterina Urban (INRIA/ENS)
    Static Analysis of Data Science Software

    Abstract. Data science software is playing an increasingly important role in every aspect of our daily lives and is even slowly creeping into mission critical scenarios, despite being often opaque and unpredictable. In this talk, we will discuss some key challenges and a number of research questions that we are currently addressing in developing static analysis methods and tools for data science software.

    Bio. Caterina Urban is a Research Scientist at INRIA and École Normale Supérieure in Paris. Her main research interest is the development of formal methods and static analysis tools to enhance the reliability of computer software and to help understand complex software systems. Prior to INRIA, she was a postdoc at ETH Zurich. She received her PhD in Computer Science with full marks and honors from École Normale Supérieure, where she was advised by Radhia Cousot and Antoine Miné. During her PhD she spent five months as a visiting research scholar at the NASA Ames Research Center and Carnegie Mellon University in California. She holds a Bachelor’s and a Master’s degree in Computer Science from the Università degli Studi di Udine in Italy, where she was born and grew up.

  • Somesh Jha (University of Wisconsin-Madison)
    Towards Semantic Adversarial Examples

    Abstract. Fueled by massive amounts of data, models produced by machine-learning (ML) algorithms, especially deep neural networks, are being used in diverse domains where trustworthiness is a concern, including automotive systems, finance, health care, natural language processing, and malware detection. Of particular concern is the use of ML algorithms in cyber-physical systems (CPS), such as self-driving cars and aviation, where an adversary can cause serious consequences.

    However, existing approaches to generating adversarial examples and devising robust ML algorithms mostly ignore the semantics and context of the overall system containing the ML component. For example, in an autonomous vehicle using deep learning for perception, not every adversarial example for the neural network might lead to a harmful consequence. Moreover, one may want to prioritize the search for adversarial examples towards those that significantly modify the desired semantics of the overall system. Along the same lines, existing algorithms for constructing robust ML algorithms ignore the specification of the overall system. In this talk, we argue that the semantics and specification of the overall system has a crucial role to play in this line of research. We present preliminary research results that support this claim.

    Bio. Somesh Jha received his B.Tech in Electrical Engineering from the Indian Institute of Technology, New Delhi. He received his Ph.D. in Computer Science from Carnegie Mellon University under the supervision of Prof. Edmund Clarke (a Turing Award winner). Currently, Somesh Jha is the Lubar Professor in the Computer Sciences Department at the University of Wisconsin-Madison. His work focuses on the analysis of security protocols, survivability analysis, intrusion detection, formal methods for security, and analyzing malicious code. Recently, he has also worked on privacy-preserving protocols and adversarial ML (AML). Somesh Jha has published several articles in highly refereed conferences and prominent journals. He has won numerous best-paper and distinguished-paper awards. Prof. Jha also received the NSF CAREER award and is a fellow of the ACM and the IEEE.

  • Suresh Jagannathan (Purdue University)
    Learning Verifiers and Verifying Learners

    Abstract. On the surface, modern-day machine learning and program verification tools appear to have very different and contradictory goals - machine learning emphasizes generality of the hypotheses it discovers over soundness of the results it produces, while program verification ensures correctness of the claims it makes, even at the expense of the generality of the problems it can handle.

    Nonetheless, it would also appear that machine learning pipelines have much to offer program verifiers precisely because they are structured to extract useful, albeit hidden, information from their subject domain. When applied to software, data-driven methods may help discover facts and properties critical to program verification that would otherwise require tedious human involvement to state and prove. Conversely, program verification methods would seem to have much to offer machine learning pipelines. Neural networks, the building blocks of modern ML methods, are opaque and uninterpretable, characteristics that make them vulnerable to safety violations and adversarial attacks. Suitably adapted verification methods may help to identify problematic behavior in these networks, an indispensable need in safety-critical environments.

    This talk explores two point instances to support these claims. Our first example considers how machine learning tools can facilitate solutions to Constrained Horn Clauses (CHCs), a popular formalism for encoding verification conditions that capture sophisticated safety properties. We demonstrate how data-driven techniques can be used for efficient invariant discovery over complex recursive CHCs in which the structure of the discovered invariants is drawn from expressive feature spaces (e.g., polyhedra domains). Our second example considers how program verification and synthesis tools can be used to guarantee safety of reinforcement learning-based neural controllers. We suggest a black-box technique that uses the neural network as an oracle to guide the search for a similarly behaving deterministic program, more amenable to verification, that is guaranteed to satisfy a desired safety specification. This program can then be effectively used within a runtime monitoring framework as a safety shield, overriding proposed actions of the network whenever such actions can cause the system to violate safety conditions.

    The results of these investigations give us confidence that there are significant synergies to be had by judiciously combining learning and verification techniques. We envision learners as a mechanism to complement the expressivity of program verifiers by enabling improved efficiency and generality, while verifiers can be used to guarantee the safety of machine learning artifacts without compromising accuracy and utility.

    Bio. Suresh Jagannathan is the Samuel D. Conte Professor of Computer Science at Purdue University. His research interests are in programming languages generally, with a specific focus on program verification, functional programming, and concurrent and distributed systems. From 2013-2016, he served as a program manager in the Information Innovation Office at DARPA, where he conceived and led programs in probabilistic reasoning and machine learning, program synthesis, and adaptive computing. He has also been a visiting faculty scholar at Cambridge University, where he spent a sabbatical year in 2010; and, prior to joining Purdue, was a senior research scientist at the NEC Research Institute in Princeton, N.J. He received his Ph.D from MIT.

Invited speakers for SAS 2019 are kindly sponsored by Google.

Submission

Submit electronically via the EasyChair submission page.

Papers

Submissions can address any programming paradigm, including concurrent, constraint, functional, imperative, logic, object-oriented, aspect, multi-core, distributed, and GPU programming.

  • Papers must describe original work, be written and presented in English, and must not substantially overlap with papers that have been published or that are simultaneously submitted to a journal or a conference with refereed proceedings.
  • Submitted papers will be judged on the basis of significance, relevance, correctness, originality, and clarity.
  • They should clearly identify what has been accomplished and why it is significant.
  • Paper submissions should not exceed 18 pages in Springer’s Lecture Notes in Computer Science (LNCS) format, excluding bibliography and well-marked appendices. Program Committee members are not required to read the appendices, and thus papers must be intelligible without them.

Artifacts

As in previous years, we encourage authors to submit a virtual machine image containing any artifacts and evaluations presented in the paper. The goal of the artifact submissions is to strengthen our field’s scientific approach to evaluations and reproducibility of results. The virtual machines will be archived on a permanent Static Analysis Symposium website to provide a record of past experiments and tools, allowing future research to better evaluate and contrast existing work.

Artifact submission is optional. We accept only virtual machine images that can be processed with VirtualBox. The artifact should come with a virtual machine (VM) image and step-by-step instructions:

  • Virtual machine image: The VM image must be bootable and have all the necessary libraries installed. Please ensure that the VM image can be processed with VirtualBox. When preparing your artifact, please make it as light as possible.
  • Step-by-step instructions: The instructions should clearly explain how to reproduce the results that support your paper’s conclusions. We encourage the authors to provide easy-to-run scripts. Also, you should explain how to interpret the output of the artifact. Please provide an estimated execution time for each instruction.

Please follow the instructions below to submit your artifact:

  • Combine the VM image and the instruction document into a single compressed archive file using zip or gzip. Use your paper number as the name of the archive file.
  • Upload the archive file to a well-known storage service such as Dropbox or Google Drive and obtain a sharable link to it.
  • Run a checksum function on the archive file and create a text file that contains the link to the archive file and the checksum result.
  • Submit the text file via the submission page. The submission form has a place for the artifact submission.
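The packaging and checksum steps above can be sketched in Python. This is only an illustrative sketch: the function names, the paper number, the file names, and the storage link are hypothetical placeholders, not part of the official instructions, and SHA-256 is just one reasonable choice of checksum function.

```python
import hashlib
import zipfile
from pathlib import Path


def package_artifact(paper_number: int, files: list[str], out_dir: str = ".") -> Path:
    """Bundle the VM image and instruction document into <paper_number>.zip."""
    archive = Path(out_dir) / f"{paper_number}.zip"
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in files:
            # Store each file under its bare name inside the archive.
            zf.write(f, arcname=Path(f).name)
    return archive


def write_submission_file(archive: Path, share_link: str) -> Path:
    """Write the text file to submit: the sharable link plus a checksum."""
    digest = hashlib.sha256(archive.read_bytes()).hexdigest()
    submission = archive.with_suffix(".txt")
    submission.write_text(f"Link: {share_link}\nSHA-256: {digest}\n")
    return submission


# Hypothetical usage (paper number and file names are placeholders):
# archive = package_artifact(42, ["vm-image.ova", "instructions.md"])
# write_submission_file(archive, "https://drive.google.com/...")
```

The resulting text file is what gets uploaded via the submission page; the archive itself stays on the storage service behind the sharable link.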

Review Process

Lightweight Double-Blind Reviewing Process

SAS 2019 will use a lightweight double-blind reviewing process. Following this process means that reviewers will not see the authors’ names or affiliations as they initially review a paper. The authors’ names will then be revealed to the reviewers only once their reviews have been submitted.

To facilitate this process, submitted papers must adhere to the following:

  • Author names and institutions must be omitted and
  • References to the authors’ own related work should be in the third person (e.g., not “We build on our previous work …” but rather “We build on the work of …”).

The purpose of this process is to help the reviewers come to an initial judgment about the paper without bias, not to make it impossible for them to discover the authors if they were to try. Nothing should be done in the name of anonymity that weakens the submission, makes the job of reviewing the paper more difficult, or interferes with the process of disseminating new ideas. For example, important background references should not be omitted or anonymized, even if they are written by the same authors and share common ideas, techniques, or infrastructure. Authors should feel free to disseminate their ideas or draft versions of their paper as they normally would. For instance, authors may post drafts of their papers on the web or give talks on their research ideas.

Author Response Period

During the author response period, authors will be able to read reviews and respond to them as appropriate.

Workshops

All workshops will be held on October 8, 2019.

Awards

Radhia Cousot Young Researcher Award

Since 2014, the program committee of each SAS conference has selected a paper for the Radhia Cousot Young Researcher Best Paper Award, in memory of Radhia Cousot, honoring her fundamental contributions to static analysis as well as her role as one of the main promoters and organizers of the SAS series of conferences.

Organizers

Program Chair

Program Committee

Artifact Evaluation Chair

Artifact Evaluation Committee

Sponsors

Platinum

Google

Bronze

Springer