Posts

(Averaged) Cross-Entropy Loss is not a Proper Scoring Rule

In a recent paper, referring to the KL divergence, we wrote: “In some domains, the length of the sequences n differs in each example, which can be incorporated by choosing an effective length N = max n, and treating all sequences shorter than N as having a sequence of padding tokens appended…”
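To make that construction concrete, here is a minimal sketch of per-token cross-entropy averaged over a batch padded to an effective length N. This is one plausible reading of the excerpt, not the paper's code; the function name and the padding id PAD are illustrative.

```python
import numpy as np

PAD = 0  # hypothetical padding token id

def avg_cross_entropy(log_probs, targets):
    """Per-token cross-entropy over a batch padded to a common length N.

    log_probs: (batch, N, vocab) model log-probabilities
    targets:   (batch, N) token ids, set to PAD beyond each true length n
    """
    mask = targets != PAD
    # Gather the log-probability assigned to each target token.
    token_nll = -np.take_along_axis(log_probs, targets[..., None], axis=-1)[..., 0]
    # Padding positions appended to reach length N contribute nothing;
    # the loss is averaged over the real tokens only.
    return (token_nll * mask).sum() / mask.sum()
```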

AI Misuse Proof-of-Concept: Algorithmic Surveillance

Recently I’ve been thinking about the misuse of sophisticated foundation models such as GPT-4. Even if we are able to solve AI alignment, significant challenges arise when general-purpose reasoning becomes cheap and widespread.

GPT-4 Memorizes Project Euler Numerical Solutions

I’ve been really impressed recently with GPT-4’s ability to answer tough technical questions, and I’ve built my own research assistant on a GPT-4 backbone. While looking at its ability to solve programming puzzles, I asked it to write a program solving Project Euler problem 1 (find the sum of all the multiples of 3 or 5 below 1000).
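For reference, problem 1 admits a one-line brute-force solution in Python; this is the standard approach, shown here for context rather than the program GPT-4 produced:

```python
# Project Euler problem 1: sum of all multiples of 3 or 5 below 1000.
print(sum(n for n in range(1000) if n % 3 == 0 or n % 5 == 0))  # 233168
```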

Using Codex in the Wild

Following on from my previous article about using Codex in Emacs, I’ve found my plug-in more and more useful in everyday programming. At the moment, the results are on par with what I’d expect from a decent undergraduate programmer.

Using Codex in Emacs

Recently OpenAI released an ‘editing mode’ API for their language models. In this mode (which you can select via the ‘mode’ selector on the right-hand side), you provide a piece of context called the ‘input’ (such as a code snippet) and an ‘instruction’ (such as ‘change the sort to use a heapsort’).
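As a sketch of how that looked programmatically, using the legacy (pre-1.0) openai Python client; exact model names and client versions varied over time:

```python
import openai  # legacy (pre-1.0) openai-python client

# Ask the edit-mode model to transform the 'input' according to the 'instruction'.
response = openai.Edit.create(
    model="code-davinci-edit-001",  # the Codex edit model of that era
    input="def sort(xs):\n    return sorted(xs)\n",
    instruction="Change the sort to use a heapsort.",
)
print(response["choices"][0]["text"])
```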

The Adjoint Method in a Dozen Lines of JAX

The adjoint method is a powerful technique for computing derivatives of functions that involve constrained optimization. It’s been around for a long time, but has recently been popping up in machine learning, in papers such as the Neural ODE paper and many others.
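As a rough illustration of the idea (a minimal sketch, not the post’s code; the constraint g and loss L below are made-up examples): for a loss L(x(θ)) where x(θ) is defined implicitly by a constraint g(x, θ) = 0, the gradient is dL/dθ = −λᵀ ∂g/∂θ, with the adjoint λ solving (∂g/∂x)ᵀ λ = ∂L/∂x.

```python
import jax
import jax.numpy as jnp

def g(x, theta):
    # Hypothetical example constraint: A(theta) x - b = 0.
    A = jnp.array([[theta, 1.0], [0.0, 2.0]])
    return A @ x - jnp.array([1.0, 1.0])

def L(x):
    return jnp.sum(x ** 2)  # hypothetical example loss

def adjoint_grad(theta):
    # Solve the constraint for x*(theta); here it is linear, so one solve suffices.
    A = jnp.array([[theta, 1.0], [0.0, 2.0]])
    x_star = jnp.linalg.solve(A, jnp.array([1.0, 1.0]))
    g_x = jax.jacobian(g, argnums=0)(x_star, theta)      # dg/dx
    g_theta = jax.jacobian(g, argnums=1)(x_star, theta)  # dg/dtheta
    lam = jnp.linalg.solve(g_x.T, jax.grad(L)(x_star))   # adjoint variable
    return -lam @ g_theta                                # dL/dtheta

print(adjoint_grad(3.0))  # matches the analytic value -1/54
```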

Running GPT-J On Several Smaller GPUs

Recently several large language models have been open-sourced. Particularly interesting is GPT-J, whose implementation and pre-trained weights are both fully open. The model itself has performance comparable to the smallest version of GPT-3.
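One way to spread the model across GPUs today is to let Hugging Face transformers shard the layers automatically. This is a sketch of that modern convenience (it requires the accelerate package installed), not necessarily the approach taken in the post:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Shard GPT-J's layers across all visible GPUs; requires `accelerate`.
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",
    device_map="auto",
    torch_dtype=torch.float16,  # halves the memory needed per GPU
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")

# Inputs go to the first device under the default sharding.
inputs = tokenizer("The adjoint method is", return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```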

Mutual Information Regularization for Reward Hacking

In some recent work that we just put up on arXiv, we explore the idea of training reinforcement learning agents that obey privacy constraints. Something I’ve wanted to explore for a while is using this constrained RL approach to deal with reward hacking.

Installing cdt R prerequisites in Ubuntu without root

I wanted to use some of the tools from the Causal Discovery Toolbox, which require R and the pcalg package to be installed. As a complete newcomer to R, I found installing R on an Ubuntu server without root access more of a hassle than I expected.

Managing ArXiv RSS Feeds in Emacs

It’s very important for any researcher to keep up with newly published papers, especially in the fast-moving field of machine learning. However, the arXiv categories I follow produce a lot of papers, sometimes hundreds a day.