Chris Cundy

Machine Learning PhD Student

Stanford University

Hi

I am a PhD student at Stanford University, advised by Stefano Ermon. I’m broadly interested in Artificial Intelligence (AI), particularly in ensuring that sophisticated AI systems will robustly and reliably carry out the tasks that we want them to.

If you’re a student at Stanford (undergraduate/master’s/PhD) who wants to work on a project involving safe and reliable machine learning: get in touch!

I studied Physics as an undergraduate at Cambridge University before switching fields to take a master’s degree in computer science. During my master’s, it was a pleasure to work with Carl E. Rasmussen, developing variational methods for Gaussian Process State-Space Models.

Before starting my PhD at Stanford, I worked at the Center for Human-Compatible AI, collaborating with Stuart Russell and Daniel Filan. I have also worked at the Future of Humanity Institute at Oxford University, collaborating with Owain Evans on scalable human supervision of complex AI tasks.

Get in touch at chris dot j dot cundy at gmail dot com

Interests

  • Probabilistic Machine Learning
  • Generative Models
  • Reinforcement Learning
  • Safe and Reliable ML
  • Large Language Models

Education

  • PhD in Computer Science, 2018-present

    Stanford University

  • MEng in Computer Science, 2017

    Cambridge University

  • BA in Natural Sciences (Physics), 2016

    Cambridge University

Recent Publications

LMPriors: Pre-Trained Language Models as Task-Specific Priors

Particularly in low-data regimes, an outstanding challenge in machine learning is developing principled techniques for augmenting our …

IQ-Learn: Inverse soft-Q Learning for Imitation

In many sequential decision-making problems (e.g., robotics control, game playing, sequential prediction), human or expert data is …

Scalable Variational Approaches for Bayesian Causal Discovery

A structural equation model (SEM) is an effective framework to reason over causal relationships represented via a directed acyclic …

Privacy-Constrained Policies via Mutual Information Regularized Policy Gradients

As reinforcement learning techniques are increasingly applied to real-world decision problems, attention has turned to how these …

Flexible Approximate Inference via Stratified Normalizing Flows

A major obstacle to forming posterior distributions in machine learning is the difficulty of evaluating partition functions. …

Recent Posts

AI Misuse Proof-of-Concept: Algorithmic Surveillance

Recently I’ve been thinking about misuse of sophisticated foundation models such as GPT-4. Even if we are able to solve AI alignment, there are significant challenges that arise when general-purpose reasoning becomes cheap and widespread.

GPT-4 Memorizes Project Euler Numerical Solutions

I’ve recently been really impressed with GPT-4’s ability to answer tough technical questions, and have built my own research assistant on a GPT-4 backbone. While looking at GPT-4’s ability to solve programming puzzles, I asked it to write a solution program for Project Euler problem 1 (find the sum of all the multiples of 3 or 5 below 1000).
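
For reference, the problem has a one-line Python solution; a minimal version (which prints 233168, the numerical answer the post is about) looks like this:

    # Project Euler problem 1: sum of all multiples of 3 or 5 below 1000.
    print(sum(n for n in range(1000) if n % 3 == 0 or n % 5 == 0))  # 233168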

Using Codex in the Wild

Following on from my previous article about using Codex in Emacs, I’ve found my plug-in more and more useful in everyday programming. At the moment, the results are on par with what I’d expect from a decent undergraduate programmer.

Using Codex in Emacs

Recently OpenAI released an ‘editing mode’ API for their language models. In this mode (which you can select by clicking on the ‘mode’ selector on the right-hand side), you provide a piece of context (such as a code snippet) called the ‘input’ and an instruction (such as ‘change the sort to use a heapsort’).
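
As a rough sketch of what such a call looks like in code (assuming the pre-1.0 openai Python client; the edits endpoint has since been deprecated, and the snippet is illustrative rather than taken from the post):

    import openai  # pre-1.0 client; the edits endpoint is now deprecated

    # Provide the context as 'input' and the desired change as 'instruction'.
    response = openai.Edit.create(
        model="code-davinci-edit-001",
        input="def sort(xs):\n    return sorted(xs)\n",
        instruction="Change the sort to use a heapsort.",
    )
    print(response.choices[0].text)  # the edited snippet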

The Adjoint Method in a Dozen Lines of JAX

The Adjoint Method is a powerful technique for computing derivatives of functions involving constrained optimization. It has been around for a long time, but has recently been popping up in machine learning, in papers such as the Neural ODE and many others.
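
As a taste of the idea (a minimal sketch, not the post’s code): if a constraint g(x, theta) = 0 implicitly defines a solution x*(theta), the gradient of a loss f(x*(theta)) follows from the implicit function theorem by solving a single linear “adjoint” system. The names below (f, g, x_star, theta) are illustrative:

    import jax
    import jax.numpy as jnp

    def adjoint_grad(f, g, x_star, theta):
        """Gradient of f(x*(theta)) where x*(theta) solves g(x, theta) = 0."""
        dg_dx = jax.jacobian(g, argnums=0)(x_star, theta)      # shape (n, n)
        dg_dtheta = jax.jacobian(g, argnums=1)(x_star, theta)  # shape (n, p)
        df_dx = jax.grad(f)(x_star)                            # shape (n,)
        # Solve the adjoint system: (dg/dx)^T lam = df/dx.
        lam = jnp.linalg.solve(dg_dx.T, df_dx)
        # Implicit function theorem: d f(x*(theta)) / d theta = -lam^T dg/dtheta.
        return -lam @ dg_dtheta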