Stipendiary Lecturer in Computer Science

Clare Lyle

  • I work on machine learning, developing theoretical and empirical tools to better understand and predict how models will generalize from their training data to the real data they will be used on.
  • The thing I enjoy most about teaching at Oxford is working in small groups with exceptionally bright and motivated students.
  • Right now, I am particularly interested in getting AI systems to disentangle cause from effect.


I’m a final-year DPhil student in the Computer Science department. I completed my undergraduate studies at McGill University in Canada, where I studied Mathematics and Computer Science. I’ve done research internships at Google Brain and DeepMind. My research focuses on generalization in machine learning (ML), and uses ideas from a broad range of fields such as causal inference, Bayesian deep learning, and reinforcement learning.


At Trinity, I teach Linear Algebra, Discrete Mathematics, and Continuous Mathematics. I have also co-supervised MSc students in the computer science department.


I’m particularly interested in studying how the learning dynamics of ML systems affect their generalization and convergence properties. For example, it is widely observed that models which quickly fit their training data often generalize better than those which take longer to converge. I’ve worked on using ideas from Bayesian model selection to understand this phenomenon and to propose new performance estimators that let us search more efficiently for good neural network architectures.

I also work on deep reinforcement learning, which is concerned with how learning systems can interact with the world to achieve goals. This setting yields far more complex and unstable learning dynamics than supervised learning, where the goal is to fit a fixed set of input-label pairs; as a result, many of the strategies people use to train neural networks for supervised tasks fail when applied to reinforcement learning problems. I’ve worked both on theoretical analysis of reinforcement learning algorithms and on the development of new learning algorithms that improve training stability and generalization.

One unifying theme in this work is the idea that by learning the right causal structure of the world, ML systems will generalize better and be more robust when they’re deployed. Humans have useful intuitions about cause and effect that help us navigate the world reasonably robustly, but translating these intuitions into machines is surprisingly challenging.

Further information can be found on my website.

Selected Publications

Lyle, Clare, Marc G. Bellemare, and Pablo Samuel Castro. ‘A Comparative Analysis of Expected and Distributional Reinforcement Learning.’ Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, no. 01, 2019.

Lyle, Clare, Lisa Schut, Robin Ru, Yarin Gal, and Mark van der Wilk. ‘A Bayesian Perspective on Training Speed and Model Selection.’ Advances in Neural Information Processing Systems 33, 2020.

Wang, Benjie, Clare Lyle, and Marta Kwiatkowska. ‘Provable Guarantees on the Robustness of Decision Rules to Causal Interventions.’ Proceedings of the International Joint Conference on Artificial Intelligence, 2021.

Lyle, Clare, Mark Rowland, Georg Ostrovski, and Will Dabney. ‘On the Effect of Auxiliary Tasks on Representation Dynamics.’ International Conference on Artificial Intelligence and Statistics, pp. 1-9, PMLR, 2021.

Lyle, Clare. ‘Invariant Prediction for Generalization in Reinforcement Learning.’