The overarching aim of the Safe Artificial Intelligence Lab is to develop novel computational methods and tools for providing safety guarantees to a wide range of autonomous systems, including autonomous vehicles, robotic systems, and swarm systems.
Our work is guided by a passion for Artificial Intelligence and the belief that AI should be safe and secure for society to use.
We have a long-standing record of developing and maintaining state-of-the-art open-source toolkits for Safe AI, and of international collaboration with both academia and industry.
We currently benefit from strong links with the DARPA Assured Autonomy program and the Centre for Doctoral Training in Safe and Trusted AI.
Meet the team member: Alejandro Mercado (25 March 2024)
Meet the team member: Sherwin Varghese (18 October 2023)
Meet the team member: Atri Sharma (11 September 2023)
Paper on verification of key point detection accepted at KR2023 (09 August 2023)
Panagiotis Kouvaros awarded prestigious IJCAI Early Career Spotlight Award (06 July 2023)
SAIL (formerly VAS) Group has a paper on verification against LVM-based specifications accepted at CVPR23