Maria Stoica

STAI PhD Student

Personal Website →

Where are you from, and what is your background?

I grew up in Kansas, and I graduated from Harvard with a bachelor’s degree in computer science in 2017. After that, I worked in finance in New York City and London, as a quantitative analyst at Goldman Sachs and NatWest Markets. During the pandemic, I realised that I had an interest in research, so I decided to enrol in Oxford’s MSc in Advanced Computer Science. During my time at Oxford, I became excited by topics in the safety and reliability of AI systems, which led me to pursue a PhD in this area.

What do you do in your spare time?

I love being active! I grew up skiing and playing tennis. During my undergraduate degree, I joined the Lightweight Women’s Rowing team as a coxswain (unfortunately, my height is not well suited to rowing), and during my MSc, I was a member of the St. Edmund Hall Boat Club. At Imperial, I was a member of the Imperial Lawn Tennis Club and was involved in outreach through the Women in Computing group. Aside from this, I have my sailing licence and I am a certified scuba diver!

What influenced you to do a PhD?

While working in the financial industry, I was introduced to various applications of machine learning, but these were often difficult to deploy because of their potential risks. Human traders would often check the outputs of automated systems because building trust and confidence in them was difficult. I want to explore safety in machine learning, and specifically to investigate how monitoring algorithms can help ensure the reliability of machine learning models, especially those deployed in high-risk applications such as finance and healthcare. Such monitoring algorithms can reduce the amount of human checking a system requires and increase confidence in its deployment.

What are your research interests?

My PhD research addresses a critical challenge in artificial intelligence: ensuring the reliability and safety of neural networks when deployed in real-world scenarios. I am developing lightweight and accurate monitoring algorithms to detect out-of-distribution inputs and unexpected behaviours in neural networks. These situations, where inputs deviate significantly from the data seen during training or the model behaves unpredictably, can lead to failures with potentially severe consequences in high-stakes applications. My goal is to create tools that operate efficiently in real time, running alongside neural networks without significant computational overhead, to enhance their safety and robustness.
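To give a flavour of what a lightweight runtime monitor can look like, here is a minimal sketch of one standard baseline for out-of-distribution detection: maximum softmax probability thresholding (Hendrycks & Gimpel, 2017). This is an illustrative example only, not Maria's own method, which the interview does not detail; the model, threshold value, and the `MSPMonitor` wrapper class are all hypothetical.

```python
# Illustrative sketch of an OOD monitor based on maximum softmax probability.
# Assumptions: any PyTorch classifier can be wrapped; the threshold would in
# practice be tuned on held-out in-distribution data.
import torch
import torch.nn.functional as F


class MSPMonitor:
    """Runs alongside a classifier and flags low-confidence inputs as OOD."""

    def __init__(self, model: torch.nn.Module, threshold: float = 0.9):
        self.model = model
        self.threshold = threshold  # hypothetical value; tune on validation data

    @torch.no_grad()
    def __call__(self, x: torch.Tensor):
        logits = self.model(x)                            # (batch, num_classes)
        confidence = F.softmax(logits, dim=-1).max(dim=-1).values
        is_ood = confidence < self.threshold              # True => flag for review
        return logits.argmax(dim=-1), is_ood
```

A monitor of this kind adds only a softmax and a comparison per input, which is why such checks can run in real time alongside the network with negligible computational overhead.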