I’m from Buenos Aires, Argentina: the true city that never sleeps! I earned my Licenciatura in Computer Science, a six-year degree equivalent to a combined Bachelor’s and Master’s, from the University of Buenos Aires (UBA). The program at UBA has a strong emphasis on theoretical computer science, mathematics, and logic. My thesis focused on Belief Revision, an area within Knowledge Representation and Reasoning (a branch of Symbolic AI) that deals with updating knowledge bases while maintaining consistency and preserving as much information as possible. After graduating, I worked at Safe Intelligence, an Imperial spin-off, carrying out a project on the formal verification of object detection models for safety-critical applications.
Having a good balance between work and social life is key for me. When I’m not at my desk, you can find me hanging out with friends, trying out new places (I go crazy for good food), going out dancing, listening to live music, and visiting art galleries. I really enjoy meeting new people, as well as just wandering around the city: even after 24 years living there, Buenos Aires has always made me feel like a tourist. I also love cycling (if you can cycle in Buenos Aires, you can cycle anywhere), traveling, and photography.
As I was completing my Licenciatura, I realized there was still so much more for me to learn in the field. My interests grew particularly around the limitations of modern machine learning methods, especially their safety, explainability, and performance on complex tasks requiring reasoning, and around how these weaknesses can be overcome by integrating deep learning with symbolic AI. Plus, I’ve long wanted to experience living abroad, and a PhD seemed like a great opportunity to meet both objectives.
My research focuses on developing neurosymbolic approaches that scale sound and complete methods for synthesizing symbolic generalized policies in sequential decision-making problems. On the one hand, data-driven models offer flexibility and scalability but are often opaque, difficult to verify, and prone to failures under distribution shift, which makes them unsuitable for safety-critical autonomous systems. On the other hand, symbolic policies are interpretable, verifiable, and supported by formal guarantees, but they are significantly more expensive to synthesize. By integrating both approaches, my research aims to combine the scalability of neural models with the reliability and guarantees of symbolic reasoning to produce robust and trustworthy decision-making systems.
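To make “generalized policy” concrete: it is a single symbolic rule set that solves every instance of a planning domain, not just one problem. Below is a minimal, hypothetical Python sketch on a toy Gripper-style domain (move all the balls from room A to room B); the state encoding, rules, and function names are my own illustrative choices, not taken from my actual work or any specific system.

```python
from dataclasses import dataclass, field

# Hypothetical toy version of the classic Gripper planning domain:
# move every ball from room "A" to room "B". Names are illustrative.

@dataclass
class State:
    robot: str                # room the robot is in ("A" or "B")
    holding: str | None       # ball currently being carried, if any
    ball_at: dict[str, str] = field(default_factory=dict)  # ball -> room

def generalized_policy(s: State) -> tuple[str, ...] | None:
    """A symbolic generalized policy: the same five rules solve every
    instance of the domain, whatever the number of balls."""
    if s.holding and s.robot == "B":
        return ("drop", s.holding)      # rule 1: deliver the carried ball
    if s.holding:
        return ("move", "B")            # rule 2: carry it to the target room
    misplaced = [b for b, r in s.ball_at.items() if r == "A"]
    if misplaced and s.robot == "A":
        return ("pick", misplaced[0])   # rule 3: grab any misplaced ball
    if misplaced:
        return ("move", "A")            # rule 4: go fetch the next ball
    return None                         # rule 5: goal reached, stop

def apply(s: State, action: tuple[str, ...]) -> None:
    kind = action[0]
    if kind == "move":
        s.robot = action[1]
    elif kind == "pick":
        s.holding = action[1]
        del s.ball_at[action[1]]
    elif kind == "drop":
        s.ball_at[s.holding] = s.robot
        s.holding = None

# The same policy solves the 3-ball instance, the 100-ball instance, etc.
s = State(robot="A", holding=None, ball_at={f"b{i}": "A" for i in range(3)})
while (a := generalized_policy(s)) is not None:
    apply(s, a)
assert all(room == "B" for room in s.ball_at.values())
```

The appeal is that a policy like this is fully interpretable, and a short inductive argument proves it sound and complete for the entire domain, independent of instance size, which is exactly the kind of guarantee a trained neural policy cannot offer on its own. The hard part, and the part where learned models can help, is discovering such rule sets automatically rather than writing them by hand.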