SAIL (formerly VAS) Group has a paper on verification of agents accepted at AAMAS23
Mehran Hosseini, a Research Associate in the SAIL (formerly VAS) group, has had a paper accepted at the 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS23):
M. Hosseini, A. Lomuscio. Bounded and Unbounded Verification of RNN-Based Agents in Non-deterministic Environments (extended abstract). Proceedings of the 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS23). London, UK. IFAAMAS Press.
We caught up with Mehran to get an idea about his work and how it contributes to research in the area of Multi-agent Systems.
Could you quickly summarise what the paper is about?
In our paper we verify bounded and unbounded temporal specifications of Agent-Environment Systems (AES) with memoryful agents. The memory in these agents is implemented using a Recurrent Neural Network (RNN). Such AESs are useful for modelling and analysing the interactions of autonomous agents within uncertain environments.
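As a rough illustration of the setting (not the paper's formal model), one can think of an AES as an RNN-based agent whose hidden state serves as memory, interacting with an environment that may respond non-deterministically. The sketch below uses hypothetical, randomly initialised weights and a toy two-branch environment; bounded analysis then amounts to exploring every branch up to a fixed depth.

```python
import numpy as np

# Illustrative sketch only: an agent whose memory is a single-layer ReLU RNN,
# acting in an environment that may answer with any of several observations.
# All names, dimensions, and weights here are hypothetical.

rng = np.random.default_rng(0)

OBS_DIM, HIDDEN_DIM, N_ACTIONS = 3, 4, 2
W_in = rng.standard_normal((HIDDEN_DIM, OBS_DIM))
W_h = rng.standard_normal((HIDDEN_DIM, HIDDEN_DIM))
W_out = rng.standard_normal((N_ACTIONS, HIDDEN_DIM))

def rnn_step(h, obs):
    """One memory update: h' = ReLU(W_h h + W_in obs)."""
    return np.maximum(0.0, W_h @ h + W_in @ obs)

def act(h):
    """The agent picks the action with the highest output score."""
    return int(np.argmax(W_out @ h))

def env_successors(action):
    """Non-determinism: the environment may return any of these observations."""
    return [rng.standard_normal(OBS_DIM) for _ in range(2)]

def explore(h, depth):
    """Bounded exploration: count all execution paths up to the given depth."""
    if depth == 0:
        return 1
    total = 0
    for obs in env_successors(act(h)):
        total += explore(rnn_step(h, obs), depth - 1)
    return total

paths = explore(np.zeros(HIDDEN_DIM), 3)
print(paths)  # 2 branches per step, depth 3 -> 8 paths
```

The exponential growth in paths is exactly why explicit exploration does not scale, motivating symbolic techniques such as bound propagation and MILP encodings.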
That’s interesting! So how do you advance the state of the art in the area and what are your contributions?
As mentioned before, we assume that the memory in the agents is implemented using a Recurrent Neural Network (RNN). We introduce an approach based on Bound Propagation (BP) and Mixed-Integer Linear Programming (MILP) to verify temporal specifications of such systems. This approach allows us, for the first time, to verify unbounded temporal specifications of AESs with memoryful agents; it also enables more efficient sound and complete verification of bounded temporal specifications of such AESs.
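To give a flavour of what bound propagation does, here is a minimal sketch of interval bound propagation through a single ReLU RNN step. This is one generic BP technique, not necessarily the paper's exact procedure: given elementwise lower and upper bounds on the hidden state and the observation, it computes sound bounds on the next hidden state h' = ReLU(W_h h + W_in x). The weights and intervals below are hypothetical.

```python
import numpy as np

def interval_matmul(W, lo, hi):
    """Sound bounds for W @ v when lo <= v <= hi elementwise:
    positive entries of W take the bound from the same side,
    negative entries swap the sides."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi, W_pos @ hi + W_neg @ lo

def ibp_rnn_step(W_h, W_in, h_lo, h_hi, x_lo, x_hi):
    """Propagate interval bounds through h' = ReLU(W_h h + W_in x)."""
    pre_lo_h, pre_hi_h = interval_matmul(W_h, h_lo, h_hi)
    pre_lo_x, pre_hi_x = interval_matmul(W_in, x_lo, x_hi)
    pre_lo, pre_hi = pre_lo_h + pre_lo_x, pre_hi_h + pre_hi_x
    # ReLU is monotone, so bounds pass through it directly.
    return np.maximum(0.0, pre_lo), np.maximum(0.0, pre_hi)

# Tiny usage example with hypothetical weights.
W_h = np.array([[0.5, -0.2], [0.1, 0.3]])
W_in = np.array([[1.0, 0.0], [0.0, -1.0]])
h_lo, h_hi = np.zeros(2), np.zeros(2)                       # initial memory is exactly 0
x_lo, x_hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])   # uncertain observation

lo, hi = ibp_rnn_step(W_h, W_in, h_lo, h_hi, x_lo, x_hi)
print(lo, hi)  # lo = [0. 0.], hi = [0.1 0.1]
```

Iterating such a step over time, and combining it with an exact MILP encoding where tighter reasoning is needed, is the general shape of the BP-plus-MILP combination the answer refers to.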
This seems quite theoretical so far. Why is this an important advancement - are there any practical applications where this is relevant or that are unlocked by your progress?
Yes, there are definitely some applications where this might be useful! Autonomous Unmanned Vehicles (AUVs) are usually deployed in unfamiliar environments, where the vehicle has to remember past actions and observations. A moon rover, for instance, would be a typical example of such an AUV. At the moment, verification approaches do not scale to systems of this complexity, but these are challenges that we hope our research will help to overcome.
That sounds very relevant indeed! So what are future directions of research that you’d like to pursue in that area?
Besides improving scalability, our goal for the future is to extend this approach to verify a wider range of neural agents in multi-agent environments.