I am originally from Turin, Italy, and moved to the UK at the age of 11. I completed an integrated master’s degree (MEng) in Electronic Engineering with Artificial Intelligence at the University of Southampton, after which I moved to Singapore to conduct research at the National University of Singapore (NUS). I thereafter started a PhD in the School of Computing, supported by the generous SINGA scholarship provided by A*STAR. My thesis, Adversarial Robustness in Deep Learning NLP Systems, introduced ways to perturb language models as well as evaluate and improve their robustness. After completing my PhD, I joined Imperial.
I enjoy travelling. I have visited around 20 countries so far; on one of my most memorable trips, I climbed Mount Bromo in Indonesia. I also enjoy hackathons and have taken part in more than ten, my favourite of which was hosted aboard the research yacht Gene Chaser.
I began exploring computer science from a security perspective during my undergraduate studies. As systems are designed, deployed, and widely adopted, the incentive for attackers to compromise them inevitably grows. When I first looked into the security of AI models, I had the intuition that deep learning would be no different: as adoption increases, so does the financial motivation for bad actors. At the time, before I started my PhD, very little work had been done in this area beyond early academic studies on small language and vision models. This motivated me to explore the security and robustness of NLP systems from a research perspective and to contribute early work to the field. Since then, the field has grown substantially, and the security and robustness of AI systems have become increasingly important.
I was also drawn to a PhD by the opportunity to explore these ideas freely and to build systems from first principles.
My research focuses on adversarial robustness in NLP. More recently, I have been exploring the intersection of optimization and combinatorics, particularly in the context of generating adversarial perturbations for LLMs. I have also spent time working on mathematical reasoning and tackling issues such as diversity collapse in model-generated solutions.