There are currently no seminars planned.
05 February 2024:
Omri Isac
Neural Network Verification with Proof Production
Deep neural networks (DNNs) are increasingly being employed in safety-critical systems, and there is an urgent need to guarantee their correctness. Consequently, the verification community has devised multiple techniques and tools for verifying DNNs. When DNN verifiers discover an input that triggers an error, that input is easy to confirm; but when they report that no error exists, there is no way to ensure that the verification tool itself is not flawed. As multiple errors have already been observed in DNN verification tools, this calls the applicability of DNN verification into question. In this talk, I will present a novel mechanism for enhancing Simplex-based DNN verifiers with proof production capabilities. We will discuss how to create such a mechanism based on a constructive adaptation of the well-known Farkas’ lemma, combined with mechanisms for handling piecewise-linear functions. We will conclude by discussing how proof production could be leveraged, via conflict clauses, to improve the performance of DNN verifiers.
Omri is a PhD student advised by Guy Katz at the Hebrew University of Jerusalem. His research mainly focuses on proof production for deep neural network (DNN) verification, and on its theoretical computability and complexity aspects.
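For reference, the classical statement of Farkas’ lemma on which such proof production is typically based (a standard formulation, not quoted from the talk) is:

\[
\text{For } A \in \mathbb{R}^{m \times n} \text{ and } b \in \mathbb{R}^{m}, \text{ exactly one of the following holds: } (1)\ \exists x \ge 0 \text{ such that } Ax = b; \quad (2)\ \exists y \text{ such that } A^{\top} y \ge 0 \text{ and } b^{\top} y < 0.
\]

Roughly speaking, when a Simplex-based verifier concludes that its linear constraints are infeasible, a vector y witnessing the second alternative is a compact, independently checkable certificate; a constructive adaptation produces such witnesses during the verifier’s run and extends them across the piecewise-linear (e.g., ReLU) constraints.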
29 January 2024:
Lorenzo Cazzaro
Efficient and Principled Verification of Tree-Based Classifiers
The increasing success of Machine Learning (ML) has highlighted the importance of defining and verifying the trustworthy behavior of ML models. Existing research has followed two directions to address these challenges: (i) establishing properties such as robustness and fairness to characterize the trustworthy behavior of ML models, and (ii) designing verification algorithms based on formal methods to provide formal guarantees of compliance with the desired properties. Unfortunately, current proposals suffer from several limitations. Complete verification methods for ML models offer limited scalability and do not always terminate within reasonable time and memory bounds. Additionally, robustness and fairness properties are not sufficiently expressive, since they are local or data-dependent: they are predicated solely on specific test instances rather than arbitrary inputs.
In this talk, I will introduce three contributions that attempt to address these shortcomings, focusing on a specific class of ML classifiers: tree-based classifiers. In the first part of the talk, I will briefly introduce two attempts to go beyond the verification of local robustness and fairness properties. These contributions leverage a data-independent stability analysis of the tree-based classifier to verify a generalization of robustness known as resilience, as well as a global fairness property. Then, in the second part of the talk, I will present in more depth a novel approach to the security verification of tree-based classifiers, called verifiable learning, i.e., identifying and training ML models that admit (formally proven) efficient security verification. In particular, we identify and train a specific class of tree-based models, called large-spread ensembles, that admit efficient complete robustness verification, escaping the NP-hardness of the general robustness verification problem.
Lorenzo Cazzaro is a third-year Ph.D. student in Computer Science and a teaching assistant at Università Ca’ Foscari Venezia. He graduated cum laude in Computer Science in 2021 and was a research fellow in adversarial machine learning until 2021. His main research interests include the formal verification of properties of machine learning models, adversarial machine learning, and applications of artificial intelligence in cybersecurity.
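As a rough illustration of the kind of reasoning such verifiers perform (a minimal sketch, not the verifiable-learning method from the talk; the tree encoding and function names are hypothetical), the following enumerates the leaves of a single decision tree reachable under an L-infinity perturbation of radius eps; the prediction on x is robust for that tree if every reachable leaf agrees with it:

```python
# A node is ("leaf", label) or ("split", feature, threshold, left, right),
# with x[feature] <= threshold taking the left branch (hypothetical encoding).

def reachable_labels(node, x, eps):
    """Labels of all leaves reachable when each x[i] may move by at most eps."""
    if node[0] == "leaf":
        return {node[1]}
    _, feat, thr, left, right = node
    labels = set()
    if x[feat] - eps <= thr:                 # some perturbed value can go left
        labels |= reachable_labels(left, x, eps)
    if x[feat] + eps > thr:                  # some perturbed value can go right
        labels |= reachable_labels(right, x, eps)
    return labels

def is_robust(tree, x, eps, predicted_label):
    return reachable_labels(tree, x, eps) == {predicted_label}

tree = ("split", 0, 0.5, ("leaf", "A"), ("leaf", "B"))
print(is_robust(tree, [0.40], 0.05, "A"))    # True: only leaf "A" is reachable
print(is_robust(tree, [0.48], 0.05, "A"))    # False: leaf "B" becomes reachable
```

For an ensemble, naively combining the reachable leaves of all trees blows up combinatorially, which is essentially where the NP-hardness mentioned above comes from; the large-spread restriction is designed so that this combination stays tractable.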
22 January 2024:
Linyi Li
Certifying Critical Trustworthiness Properties of Large-Scale DL Systems
Along with the wide deployment of deep learning (DL) systems, their lack of trustworthiness (robustness, fairness, numerical reliability, etc.) is raising serious social concerns, especially in safety-critical scenarios such as autonomous driving, aircraft navigation, and facial recognition. Hence, a rigorous and accurate evaluation of the trustworthiness of DL systems is critical before their large-scale deployment.
In this talk, I will introduce my research on certifying critical trustworthiness properties of large-scale DL systems. Inspired by techniques from optimization, cybersecurity, and software engineering, my work computes rigorous worst-case bounds that characterize the degree of trustworthiness of a given DL system, and further improves such bounds via strategic training. Specifically, I will introduce two representative frameworks: (1) DSRS, a framework providing the theoretically tightest certification for DL systems; DSRS, together with our training method DRT and our open-source tools (VeriGauge and alpha-beta-CROWN), constitutes a state-of-the-art, award-winning solution for achieving DL robustness against constrained perturbations. (2) TSS, a framework providing the most scalable certification for DL systems against semantic transformations; TSS has opened up a line of subsequent research on guaranteeing semantic robustness for various downstream DL and AI applications. In addition, I will briefly discuss some key techniques in alpha-beta-CROWN, the winning verification tool in recent VNN-COMP competitions. I will conclude the talk with a roadmap that outlines several core research questions and future directions in trustworthy machine learning.
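For context, a hedged reference point rather than the DSRS bound itself: certification frameworks based on randomized smoothing start from the classical certificate of Cohen et al. (2019), which, for Gaussian smoothing with standard deviation sigma, guarantees robustness within the L2 radius

\[
R = \frac{\sigma}{2}\left(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\right),
\]

where \underline{p_A} lower-bounds the probability that the top class A is returned under noise, \overline{p_B} upper-bounds the probability of any other class, and \Phi^{-1} is the inverse standard normal CDF. DSRS tightens this style of certificate by drawing information from an additional smoothing distribution.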
20 November 2023:
Mark Müller
Training and Verification of Robust Neural Networks
Following the discovery of adversarial examples, provable robustness guarantees for neural networks have received increasing attention from the research community. While relatively small or heavily regularized models with limited standard accuracy can now be efficiently analyzed, obtaining guarantees for more accurate models remains an open problem. Recently, a new verification paradigm has emerged that tackles this challenge by combining a Branch-and-Bound approach with precise multi-neuron constraints. The resulting, more precise verifiers have in turn enabled novel certified training methods which reduce (over-)regularization to obtain more precise yet certifiable networks. In this talk, we discuss these certification and training methods.
Mark N. Müller is a senior Ph.D. student at the Secure, Reliable, and Intelligent Systems Lab of ETH Zürich, advised by Prof. Martin Vechev. Mark’s research focuses on provable guarantees for machine learning models. This includes both deterministic and probabilistic certification methods for a diverse range of architectures and certified training methods. He has led the research on these topics at SRI Lab for the last two years, including two industry collaborations, co-organized the Verification of Neural Networks Competition 2022 (VNN-Comp’22), and was the lead organizer for the 2nd Workshop on Formal Verification of Machine Learning (WFVML’23) at ICML’23 this year.
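For a sense of what the simplest certified bounds look like (a minimal interval bound propagation sketch on a hypothetical two-layer ReLU network; a much coarser relaxation than the multi-neuron constraints discussed in the talk):

```python
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Propagate elementwise bounds [lo, hi] through x -> W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def relu_bounds(lo, hi):
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy two-layer network and an L-infinity ball of radius eps around x.
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -1.0])
W2, b2 = np.eye(2), np.zeros(2)
x, eps, true_class = np.array([1.0, 0.0]), 0.1, 0

lo, hi = x - eps, x + eps
lo, hi = relu_bounds(*affine_bounds(lo, hi, W1, b1))
lo, hi = affine_bounds(lo, hi, W2, b2)

# Certified robust if the true logit's lower bound beats every other upper bound.
print(all(lo[true_class] > hi[j] for j in range(len(lo)) if j != true_class))
```

Certified training methods optimize the network so that bounds of this kind (or tighter ones) already prove robustness, which is the source of the (over-)regularization tension mentioned above.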
25 November 2022:
Mykel Kochenderfer
Automated Decision Making for Safety Critical Applications
Building robust decision making systems for autonomous systems is challenging. Decisions must be made based on imperfect information about the environment and with uncertainty about how the environment will evolve. In addition, these systems must carefully balance safety with other considerations, such as operational efficiency. Typically, the space of edge cases is vast, placing a large burden on human designers to anticipate problem scenarios and develop ways to resolve them. This talk discusses major challenges associated with ensuring computational tractability and establishing trust that our systems will behave correctly when deployed in the real world. We will outline some methodologies for addressing these challenges and point to some research applications that can serve as inspiration for building safer systems.
Mykel Kochenderfer is an Associate Professor of Aeronautics and Astronautics at Stanford University. He is the director of the Stanford Intelligent Systems Laboratory (SISL), conducting research on advanced algorithms and analytical methods for the design of robust decision making systems. Prior to joining the faculty in 2013, he was at MIT Lincoln Laboratory where he worked on aircraft collision avoidance, leading to the creation of the ACAS X international standard for manned and unmanned aircraft. He received his Ph.D. from the University of Edinburgh in 2006 and B.S. and M.S. degrees in computer science from Stanford University in 2003. Prof. Kochenderfer is a co-director of the Center for AI Safety. He is an associate editor of the Journal of Artificial Intelligence Research and the Journal of Aerospace Information Systems. He is an author of the textbooks Decision Making Under Uncertainty: Theory and Application (MIT Press, 2015), Algorithms for Optimization (MIT Press, 2019), and Algorithms for Decision Making (MIT Press, 2022).
02 March 2020:
Michael Akintunde
Formal Verification of Neural Agents in Non-deterministic Environments
We introduce a model for agent-environment systems where the agents are implemented via feed-forward ReLU neural networks and the environment is non-deterministic. We study the verification problem of such systems against CTL properties. We introduce a bounded fragment of CTL and show its usefulness in identifying shallow bugs in the system. We present a novel parallel algorithm for MILP-based verification of agent-environment systems, describe an implementation, and report the experimental results obtained on a variant of the VerticalCAS use-case.
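As a hedged sketch of how bounded verification of such a closed loop is commonly posed (a generic bounded-model-checking view, not necessarily the exact CTL fragment from the talk), a k-step reachability query unrolls into

\[
\exists\, s_0, u_0, \ldots, s_k.\;\; s_0 \in I \;\wedge\; \bigwedge_{t=0}^{k-1}\bigl(u_t = f_{\mathrm{NN}}(s_t)\ \wedge\ s_{t+1} \in T(s_t, u_t)\bigr)\;\wedge\;\bigvee_{t=0}^{k} \neg\varphi(s_t),
\]

where f_NN is the ReLU agent (encodable with mixed-integer linear constraints), T the non-deterministic environment transition relation, and phi the property; a satisfying assignment of the resulting MILP is a concrete k-step counterexample, i.e., exactly the kind of shallow bug the bounded fragment targets.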
17 February 2020:
Dr Elena Botoeva
Verification of Neural Networks: Specifying Global Robustness using Generative Models
The success of neural networks across most machine learning tasks and the persistence of adversarial examples have made the verification of such models an important quest. Several techniques have been successfully developed to verify robustness, and are now able to evaluate neural networks with thousands of nodes. The main weakness of this approach is in the specification: robustness is asserted on a validation set consisting of a finite set of examples, i.e. locally. We propose a notion of global robustness based on generative models, which asserts the robustness on a very large and representative set of examples. We show how this can be used for verifying neural networks. In this paper we experimentally explore the merits of this approach, and show how it can be used to construct realistic adversarial examples.
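One way to write down the flavour of specification involved (a hedged paraphrase, not necessarily the exact definition used in the talk): given a generator G mapping latent codes z to realistic inputs and a classifier f, global robustness over the generated set can be stated as

\[
\forall z \in \mathcal{Z}.\ \forall \delta.\ \|\delta\| \le \varepsilon \;\Rightarrow\; f\bigl(G(z) + \delta\bigr) = f\bigl(G(z)\bigr),
\]

so the quantification ranges over the generator’s entire latent space rather than a finite validation set, and any violating pair (z, delta) yields a realistic adversarial example.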
03 February 2020:
Henrik Åslund
Verifiably Safe Off-Model Reinforcement Learning
The desire to use reinforcement learning in safety-critical settings has inspired a recent interest in formal methods for learning algorithms. Existing formal methods for learning and optimization primarily consider the problem of constrained learning or constrained optimization. Given a single correct model and an associated safety constraint, these approaches guarantee efficient learning while provably avoiding behaviors outside the safety constraint. Acting well given an accurate environmental model is an important prerequisite for safe learning, but is ultimately insufficient for systems that operate in complex heterogeneous environments. This paper introduces verification-preserving model updates, the first approach toward obtaining formal safety guarantees for reinforcement learning in settings where multiple possible environmental models must be taken into account. Through a combination of inductive, data-driven learning and deductive proving, together with design-time model updates and runtime model falsification, we provide a first approach toward obtaining formal safety proofs for autonomous systems acting in heterogeneous environments.
27 January 2020:
Yi-Ling Liu
Adversarial Training for Image Recognition
We present a novel black-box adversarial training algorithm to defend against state-of-the-art attack methods in machine learning. To search for an adversarial attack, the algorithm analyses small regions around the input that are likely to make significant contributions to the generation of adversarial samples. Unlike some of the literature in the area, the proposed method does not require access to the internal layers of the model and is therefore applicable in domains such as security. We report the experimental results obtained on models of different sizes built for the MNIST and CIFAR10 datasets. The results suggest that known attacks transfer less readily to the resulting models than to models trained with state-of-the-art attack algorithms.
20 January 2020:
Francesco Leofante
AI Planning meets Production Logistics
Planning for logistics requires algorithms and tools that can handle the challenges such scenarios pose. On the one hand, expressive languages are needed to build faithful models; on the other hand, efficient solving techniques that can support these languages need to be devised. The existence of this trade-off, however, does not mean progress is not possible! Using the RoboCup Logistics League as a testbed, I will focus on production logistics and show how recent advances in satisfiability checking can be used to solve some interesting problems in the field.
13 January 2020:
Patrick Henriksen
Efficient Neural Network Verification via Adaptive Refinement and Adversarial Search
We propose a novel verification method for high-dimensional feed-forward neural networks governed by ReLU, Sigmoid and Tanh activation functions. We show that the method is complete for ReLU networks and sound for other networks. The technique extends symbolic interval propagation by using gradient descent to locate counter-examples from spurious solutions generated by the associated LP problems. The approach includes a novel adaptive splitting strategy intended to refine the nodes with the greatest impact on the output of the network. The resulting implementation, called VERINET, generally achieved speed-ups of an order of magnitude for safe cases and of three orders of magnitude for unsafe cases on MNIST models. We also show that VERINET can verify networks of over 50,000 ReLU nodes trained on the CIFAR-10 dataset, significantly larger than the networks previously verified via complete methods.
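A minimal sketch of the gradient-based counter-example search described above (a generic projected-gradient routine, assuming a PyTorch-style model mapping a 1-D input tensor to a vector of logits; not VERINET’s actual implementation):

```python
import torch

def search_counterexample(model, lo, hi, x0, target_class, steps=200, lr=0.05):
    """Search the input box [lo, hi] for a point classified as target_class."""
    x = x0.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = model(x)
        # Push the target logit up relative to the others (negative log-softmax).
        loss = -(logits[target_class] - logits.logsumexp(dim=0))
        loss.backward()
        opt.step()
        with torch.no_grad():
            x.copy_(torch.min(torch.max(x, lo), hi))  # project back into the box
        if model(x).argmax().item() == target_class:
            return x.detach()        # concrete counter-example: case is unsafe
    return None                      # inconclusive: refine bounds / split nodes
```

In the overall procedure, a spurious LP solution can serve as the starting point x0; if the search succeeds the case is closed as unsafe, and otherwise the adaptive splitting strategy refines the relaxation further.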
19 December 2019:
Dr Dragos Margineantu (Boeing)
Robust Machine Learning and Scalable Interactive Decision Systems
Deployable decision systems require much more than end-to-end learned models. This talk will focus on two of the questions that open up for the machine learning scientist in the process of architecting and implementing modern decision systems: [A] how to design robust systems and [B] how to build interactive decision systems.
Research questions on machine learning robustness fall into two broad categories: [A] model correctness analysis, verification & validation, and [B] bounding the risk incurred when making predictions or taking actions on observations that the learner is not qualified to handle. The first part of the talk will focus on the latter set of questions: how do we learn models that ‘know when they don’t know’, what formalisms do we need, and what is practically doable, both for supervised learning and for reinforcement learning. First, we will explore methods for learning self-competence models in addition to the predictions themselves. Next, we will discuss self-competence estimation approaches for reinforcement learning and decision making. Finally, we will explore multi-faceted learning and related techniques that can be applied in decision systems with the goal of increased robustness.
The second part of the talk will focus on architecting scalable interactive systems. Here, we will explore how Inverse Reinforcement Learning methods can be applied in a user-in-the-loop setting for the detection of anomalous actions and for intent recognition.
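As a deliberately simple stand-in for the self-competence models discussed in the first part of the talk (a minimal sketch; a real system would learn a separate competence model rather than thresholding the classifier’s own confidence):

```python
import numpy as np

def predict_with_competence(class_probs, threshold=0.9):
    """Return a class index, or None to abstain when confidence is too low."""
    probs = np.asarray(class_probs, dtype=float)
    if probs.max() < threshold:
        return None                       # defer to a human or a fallback system
    return int(probs.argmax())

print(predict_with_competence([0.97, 0.02, 0.01]))   # 0
print(predict_with_competence([0.55, 0.40, 0.05]))   # None -> abstain
```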
02 December 2019:
Dr Elena Botoeva
Strong Mixed-Integer Programming Formulations for Trained Neural Networks
We present an ideal mixed-integer programming (MIP) formulation for a rectified linear unit (ReLU) appearing in a trained neural network. Our formulation requires a single binary variable and no additional continuous variables beyond the input and output variables of the ReLU. We contrast it with an ideal “extended” formulation with a linear number of additional continuous variables, derived through standard techniques. An apparent drawback of our formulation is that it requires an exponential number of inequality constraints, but we provide a routine to separate the inequalities in linear time. We also prove that these exponentially-many constraints are facet-defining under mild conditions. Finally, we study network verification problems and observe that dynamically separating from the exponential inequalities 1) is much more computationally efficient and scalable than the extended formulation, 2) decreases the solve time of a state-of-the-art MIP solver by a factor of 7 on smaller instances, and 3) nearly matches the dual bounds of a state-of-the-art MIP solver on harder instances, after just a few rounds of separation and in orders of magnitude less time.
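For comparison, the standard (non-ideal) big-M formulation of a single ReLU y = max(0, x) with known bounds l <= x <= u, l < 0 < u, which likewise uses a single binary variable z, is

\[
y \ge 0,\qquad y \ge x,\qquad y \le x - l\,(1 - z),\qquad y \le u\, z,\qquad z \in \{0, 1\}.
\]

Its linear relaxation is strictly weaker than that of an ideal formulation; closing this gap without adding continuous variables is what requires the exponential family of inequalities that the separation routine handles in linear time.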
18 November 2019:
Alexander Manta
Semantic Labelling and Learning for Parity Game Solving in LTL Synthesis
The paper proposes “semantic labelling” as a novel ingredient for solving games in the context of LTL synthesis. It exploits recent advances in the automata-based approach, yielding more information for each state of the generated parity game than the game graph can capture. Utilizing this extra information, even in simple heuristics, can improve the standard strategy iteration approach (in some cases yielding a winning strategy directly, without any further iteration) and also makes strategy iteration yield smaller solutions. Furthermore, equipping Q-learning with semantic information makes it competitive with strategy iteration. The experimental results show that even the simplest heuristics can achieve significant improvements, demonstrating the utility of semantic labelling for more advanced learning approaches, both for the initialization and for the improvement of strategies.
21 October 2019:
Dr Panagiotis Kouvaros
Verification of ReLU-based Neural Networks
10 December 2018:
Markus N. Rabe
Deep Learning for Automated Reasoning
To date, automated reasoning and theorem proving have mostly relied on combinatorial search algorithms and handwritten heuristics. But the success of deep learning in various application areas prompts us to revisit those paradigms. In particular, AlphaGo and its successors, DeepMind’s systems mastering board games such as Go and chess, have demonstrated that deep reinforcement learning can be fused with combinatorial search algorithms with great success. This talk will discuss two projects on leveraging deep learning for automated reasoning and theorem proving. The first part concerns learning better heuristics for a solver for quantified Boolean formulas, and the second part will give an outlook on the ongoing effort at Google on learning how to prove theorems in higher-order logic. I will focus on the challenges and tradeoffs that arise from combining the two worlds.
24 July 2018:
Prof. Mykel Kochenderfer
Building Trust in AI for Safety-Critical Systems
Starting in the 1970s, decades of effort went into building human-designed rules for providing automatic maneuver guidance to pilots to avoid mid-air collisions. The resulting system was later mandated worldwide on all large aircraft and significantly improved the safety of the airspace. Recent work has investigated the feasibility of using computational techniques to help derive optimized decision logic that better handles various sources of uncertainty and balances competing system objectives. This approach has resulted in a system called the Airborne Collision Avoidance System (ACAS) X that significantly reduces the risk of mid-air collision while also reducing the alert rate, and it is in the process of becoming the next international standard. Using ACAS X as a case study, this talk will discuss lessons learned about building trust in advanced decision support systems. This talk will also outline research challenges in introducing greater levels of automation into safety-critical systems.
09 July 2018:
Xiaowei Huang
Approximate Verification of Deep Neural Networks with Provable Guarantees
Deep neural networks (DNNs) have achieved impressive experimental results in image classification, but can surprisingly be unstable with respect to adversarial perturbations, that is, minimal changes to the input image that cause the network to misclassify it. With potential applications including perception modules and end-to-end controllers for self-driving cars, this raises concerns about their safety. In this talk, I will report on our recent work towards the verification of feed-forward multi-layer DNNs. We enable an exhaustive search of a region around the input by employing discretisation, reduce the verification problem to finding an optimal solution in a two-player game, and propagate the analysis layer by layer. By utilising the fact that DNNs are Lipschitz continuous, we show that our approach can achieve provable guarantees. We implemented our approach in a software tool and conducted experiments on several DNNs trained on datasets such as MNIST, CIFAR10, and ImageNet, as well as on a DNN for the Nexar traffic light challenge.
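A hedged sketch of why Lipschitz continuity makes a finite discretisation sufficient: if the quantity of interest (for example, the confidence gap between the top two classes) is L-Lipschitz on the region and every point of the region lies within distance tau of some grid point x, then for any such point x'

\[
|f(x') - f(x)| \le L\,\|x' - x\| \le L\tau,
\]

so evaluating the finitely many grid points with a safety margin of L tau yields a guarantee over the entire, uncountable region, with tau trading precision against the cost of the search.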