I am a postdoctoral researcher at Aalto University working with Sami Kaski on human-AI collaboration. Before that, I did my PhD at Northeastern University with Chris Amato, after finishing my master’s at the University of Amsterdam under the supervision of Frans Oliehoek. During my master’s and PhD, I worked on Bayesian reinforcement learning. Since then, I have focused on human-AI collaboration: designing AI systems that model their users as intentional agents with beliefs about the consequences of their interactions with the system.
You can find a taste of my research here; for a comprehensive list, see my Google Scholar profile. Where appropriate, I have included links to the corresponding code repositories. For all my public code, check out my GitHub instead.
The core ideas behind my work are:
- Computational (Bounded) Rationality: the theory that human behavior can be explained as decision-making that optimizes some utility, albeit under constraints. This theory provides a principled and general approach for modeling, inferring, and predicting human actions.
- Theory of Mind: by assigning beliefs, desires, and intentions to each other, we model each other with remarkable accuracy. This allows us to predict each other’s behavior and reason about joint solutions, which, ultimately, is crucial to our ability to collaborate.
- Bayesian Inference & Uncertainty Estimation: I am particularly interested in problems involving uncertainty, whether about the state of the environment, its dynamics, or the task. The Bayesian perspective provides a principled mechanism for capturing this uncertainty and is used throughout my work.
My work uses these concepts to develop systems that infer intentions, predict actions, and understand user goals in order to collaborate. I am particularly interested in “human-in-the-loop” applications: a broad class of problems in which a user is a key part of the system, including, for example, AI for science and reinforcement learning from human feedback.
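To make this concrete, here is a minimal sketch of how these ideas fit together: a user is modeled as a (boundedly) rational agent who acts noisily-rationally toward some goal, and the system maintains a Bayesian posterior over which goal that is after observing an action. All names, numbers, and the softmax (Boltzmann) choice model below are illustrative assumptions, not an implementation from any of my papers.

```python
import math

def softmax_policy(q_values, beta=2.0):
    """Boltzmann-rational action distribution: P(a | g) ∝ exp(beta * Q_g(a)).

    beta controls how (boundedly) rational the modeled user is:
    beta -> 0 gives random actions, large beta gives near-optimal ones.
    """
    exps = [math.exp(beta * q) for q in q_values]
    z = sum(exps)
    return [e / z for e in exps]

def update_goal_posterior(prior, q_table, observed_action, beta=2.0):
    """Bayesian update over goals: P(g | a) ∝ P(a | g) * P(g)."""
    likelihoods = [softmax_policy(q_table[g], beta)[observed_action]
                   for g in range(len(prior))]
    unnorm = [lik * p for lik, p in zip(likelihoods, prior)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Hypothetical example: two candidate goals, three actions.
q_table = [
    [1.0, 0.0, 0.0],  # under goal 0, action 0 is best
    [0.0, 0.0, 1.0],  # under goal 1, action 2 is best
]
prior = [0.5, 0.5]
# Observing action 2 shifts belief toward goal 1.
posterior = update_goal_posterior(prior, q_table, observed_action=2)
```

The same loop, run over a sequence of observed actions, is the basic recipe behind intent inference for collaboration: the system’s posterior over user goals is exactly the uncertainty it should account for when choosing how to assist.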
News
- Jan 2026: our work “Predictive Deep Sets” was accepted at AISTATS (congrats Alex!).
- Jan 2026: our work “More Than Irrational: Modeling Belief-Biased Agents” was accepted at AAAI (congrats Yifan!). I will also present a poster on “Theory of Mind in Human-in-the-Loop” at the ToM4AI workshop at AAAI.
- Dec 2025: the site is live!