Atri Sharma

STAI PhD Student

Personal Website →

Where are you from, and what is your background?

I grew up in India and Singapore, and then pursued an undergraduate degree in Aeronautical Engineering, also at Imperial College London. Over the years, I developed a strong interest in machine learning and worked on applying it to optimising aircraft structures. I subsequently joined a start-up applying machine learning to healthcare, where I worked on the research and deployment of Natural Language Processing algorithms for clinical data.

What do you do in your spare time?

I am a massive history buff (particularly aviation), and I really enjoy reading, watching documentaries, and going to museums. I love being active, with hiking and running being my favourite activities. I also greatly enjoy building and flying drones and model aircraft, and have been part of, and led, several groups and competition teams building autonomous rotary- and fixed-wing aerial vehicles at school and university.

What influenced you to do a PhD?

Working in the healthcare domain highlighted both the immense potential and the key drawbacks of machine-learning algorithms: they can identify at-risk patients, deliver improved, personalised care, and assist medical professionals. However, their real-world adoption remains low due to a lack of trust. This lack of trust stems from a lack of explainability, as well as from an observed fragility in predictions that is difficult to understand, quantify, and subsequently mitigate. I am therefore interested in studying and developing machine learning algorithms that are verified and carry certified performance guarantees, so that they can be applied safely in sensitive use cases.

What are your research interests?

I am currently working on methods to evaluate and improve the robustness of machine learning algorithms for structured data. In particular, I have worked on training adversarially robust decision-tree ensembles, and am currently studying methods to evaluate the robustness of large language models and tabular foundation models on tabular prediction tasks, as illustrated in the sketch below.
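To give a flavour of what "evaluating robustness on tabular data" can mean in practice, here is a minimal, hypothetical sketch (not Atri's actual method): it trains a standard tree ensemble and measures how often its predictions stay unchanged under small random perturbations of the input features. The dataset, model choice, and perturbation budget are illustrative assumptions only.

```python
# Illustrative sketch: empirical robustness of a tree ensemble on tabular data.
# NOT the author's method; dataset, model, and epsilon are assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

def empirical_robustness(model, X, epsilon=0.01, n_samples=50, seed=0):
    """Fraction of points whose prediction is unchanged by every one of
    n_samples random perturbations bounded (per feature) by epsilon times
    that feature's observed range."""
    rng = np.random.default_rng(seed)
    base = model.predict(X)
    scale = epsilon * (X.max(axis=0) - X.min(axis=0))
    stable = np.ones(len(X), dtype=bool)
    for _ in range(n_samples):
        delta = rng.uniform(-1.0, 1.0, size=X.shape) * scale
        stable &= model.predict(X + delta) == base
    return stable.mean()

print(f"Clean accuracy: {model.score(X_test, y_test):.3f}")
print(f"Empirical robustness (eps = 1% of feature range): "
      f"{empirical_robustness(model, X_test):.3f}")
```

Random perturbations only give an optimistic estimate; certified or adversarial evaluation (the focus of the research described above) searches for worst-case perturbations rather than sampling them.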