Chris Cundy

Machine Learning PhD Student

Stanford University


I am broadly interested in Artificial Intelligence (AI), particularly in ensuring that sophisticated AI systems will robustly and reliably carry out the tasks that we want them to.

If you’re a student at Stanford (undergraduate/masters/PhD) who wants to work on a project involving safe and reliable machine learning: get in touch!

I studied Physics as an undergraduate at Cambridge University before switching fields to take a Computer Science Master's Degree. My Master's thesis was in the area of Bayesian inference for time series, investigating variational methods for inference in Gaussian Process State-Space Models. I was supervised by Carl E. Rasmussen.

After graduating, I spent a summer at the Centre for Human-Compatible AI at UC Berkeley, working with Stuart Russell and Daniel Filan on new approaches to incorporating human irrationality into Inverse Reinforcement Learning. I presented a poster on this work at the First Workshop on Goals and Specification in Reinforcement Learning at ICML 2018. In my spare time, I took on a project with Eric Martin on efficiently parallelizing Long Short-Term Memory units (LSTMs), where we obtained a 9x speedup for several popular RNN architectures. I presented the work as a poster at ICLR 2018.

For nine months, until June 2018, I worked at the Future of Humanity Institute at Oxford University, collaborating with Owain Evans on scalable human supervision of complex AI tasks.

Get in touch at chris dot j dot cundy at gmail dot com


  • Probabilistic Machine Learning
  • Generative Models
  • Reinforcement Learning
  • Safe and Reliable ML


  • PhD in Computer Science, 2018-Ongoing

    Stanford University

  • MEng in Computer Science, 2017

    Cambridge University

  • BA in Natural Sciences (Physics), 2016

    Cambridge University

Recent Posts

f-Divergences

In a first try at blogging, I explore some interesting properties of f-divergences.