The overarching aim of the Safe Artificial Intelligence Lab is to develop novel computational methods and tools for providing safety guarantees to a wide range of autonomous systems, including autonomous vehicles, robotic systems, and swarm systems.
Our work is guided by a passion for Artificial Intelligence and the belief that AI should be safe and secure for society to use.
We have a long record of developing and maintaining state-of-the-art open-source toolkits for Safe AI, and of international collaboration with both academia and industry.
We presently benefit from strong links with the DARPA Assured Autonomy program and the Centre for Doctoral Training in Safe and Trusted AI.
Meet the team member: Atri Sharma (11 September 2023)
Paper on verification of key point detection accepted at KR2023 (09 August 2023)
Panagiotis Kouvaros awarded prestigious IJCAI Early Career Spotlight Award (06 July 2023)
SAIL (formerly VAS) Group has a paper on verification against LVM-based specifications accepted at CVPR23 (08 June 2023)
SAIL (formerly VAS) Group has a paper on verification-friendly networks accepted at IJCNN23 (11 May 2023)
SAIL (formerly VAS) Group has a paper on robust explanations accepted at AAMAS23