Chris Cundy

Research Scientist

FAR AI

Hi

I am a Research Scientist at FAR AI, working on ways to reduce catastrophic risks from advanced AI systems. If you are doing similar work, please reach out – I’d love to hear from you! We are also hiring.

I have a PhD from Stanford University, wonderfully advised by Stefano Ermon. During my PhD, I studied a diverse range of topics including constrained reinforcement learning, variational inference, and autoregressive models.

I studied Physics for my undergraduate degree and completed a Master’s in Computer Science. It was a pleasure to work with Carl E. Rasmussen, developing variational methods for Gaussian Process State-Space Models.

I have also interned at the Centre for Human Compatible AI, the Future of Humanity Institute at Oxford University, and DeepMind.

Get in touch at chris dot j dot cundy at gmail dot com

Interests

  • Deceptive Behavior from LLMs
  • Risk Evaluation and Elicitation
  • Governance of Frontier Models
  • Adversarial Robustness
  • Probabilistic Machine Learning

Education

  • PhD in Computer Science, 2018-Ongoing

    Stanford University

  • MEng in Computer Science, 2017

    Cambridge University

  • BA in Natural Sciences (Physics), 2016

    Cambridge University

Recent Publications


SequenceMatch: Imitation Learning for Autoregressive Sequence Modelling with Backtracking

In many domains, autoregressive models can attain high likelihood on the task of predicting the next observation. However, this …

Privacy-Constrained Policies via Mutual Information Regularized Policy Gradients

As reinforcement learning techniques are increasingly applied to real-world decision problems, attention has turned to how these …

LMPriors: Pre-Trained Language Models as Task-Specific Priors

Particularly in low-data regimes, an outstanding challenge in machine learning is developing principled techniques for augmenting our …

IQ-Learn: Inverse soft-Q Learning for Imitation

In many sequential decision-making problems (e.g., robotics control, game playing, sequential prediction), human or expert data is …

Scalable Variational Approaches for Bayesian Causal Discovery

A structural equation model (SEM) is an effective framework to reason over causal relationships represented via a directed acyclic …

Recent Posts

AI Misuse Proof-of-Concept: Algorithmic Surveillance

Introduction Recently I’ve been thinking about misuse of sophisticated foundation models such as GPT-4. Even if we are able to solve AI alignment, there are significant challenges that arise when general-purpose reasoning becomes cheap and widespread.

GPT-4 Memorizes Project Euler Numerical Solutions

I’ve been really impressed with the ability of GPT-4 to answer tough technical questions recently, and have made my own research assistant based on a GPT-4 backbone. While looking at the ability of GPT-4 to solve programming puzzles, I asked GPT-4 to write a solution program to Project Euler problem 1 (Find the sum of all the multiples of 3 or 5 below 1000).
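For reference, Project Euler problem 1 is simple enough to verify by direct computation; a one-line Python solution gives the answer the post discusses GPT-4 having memorized:

```python
# Project Euler problem 1: sum of all the multiples of 3 or 5 below 1000.
def solve() -> int:
    return sum(n for n in range(1000) if n % 3 == 0 or n % 5 == 0)

print(solve())  # 233168
```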

Using Codex in the Wild

Introduction Following on from my previous article about using Codex in Emacs, I’ve found my plug-in more and more useful in everyday programming. Some general impressions At the moment, the results are on par with what I’d expect from a decent undergraduate programmer.

Using Codex in Emacs

Introduction Recently OpenAI released their ‘editing mode’ API for their language models. In this mode (which you can select by clicking on the ‘mode’ selector on the right-hand side), we are able to put in a piece of context (such as a code snippet) called the ‘input’ and an instruction (such as ‘change the sort to use a heapsort’).

The Adjoint Method in a Dozen Lines of JAX

The Adjoint Method is a powerful method for computing derivatives of functions involving constrained optimization. It’s been around for a long time, but recently has been popping up in machine learning, used in papers such as the Neural ODE and many others.
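To give a flavor of the idea (this is a minimal illustrative sketch, not the post’s actual code), suppose x*(theta) is defined implicitly by a constraint g(x, theta) = 0 and we want the gradient of f(x*(theta)) without differentiating through the solver. The adjoint method solves (dg/dx)^T lam = df/dx and returns -lam^T dg/dtheta. In the scalar case this is a few lines of JAX; here g(x, theta) = x^2 - theta, so x* = sqrt(theta) and the true gradient of f(x) = x is 1/(2*sqrt(theta)):

```python
import jax

# Constraint defining the solution implicitly: g(x*, theta) = 0.
def g(x, theta):
    return x**2 - theta  # root is x* = sqrt(theta)

# Objective evaluated at the solution.
def f(x):
    return x

def solve(theta, x0=1.0, iters=50):
    # Newton's method to find a root of g(., theta); the adjoint
    # computation below never differentiates through this loop.
    x = x0
    for _ in range(iters):
        x = x - g(x, theta) / jax.grad(g, argnums=0)(x, theta)
    return x

def adjoint_grad(theta):
    x_star = solve(theta)
    dg_dx = jax.grad(g, argnums=0)(x_star, theta)
    dg_dtheta = jax.grad(g, argnums=1)(x_star, theta)
    df_dx = jax.grad(f)(x_star)
    lam = df_dx / dg_dx       # adjoint solve: (dg/dx)^T lam = df/dx
    return -lam * dg_dtheta   # implicit-function-theorem gradient

print(adjoint_grad(4.0))  # ~0.25, matching 1/(2*sqrt(4))
```

The point is that the gradient comes from one linear (here scalar) solve at the converged solution, regardless of how many solver iterations were needed.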