In recent years, significant performance gains in autoregressive language modeling have been achieved by increasing the number of parameters in Transformer models. This has led to a tremendous increase in training energy cost and resulted in a generation of dense “Large Language Models” (LLMs) with 100+ billion parameters. Simultaneously, large datasets containing trillions of words…
Solving some of the major challenges of the 21st century, such as producing clean electricity or developing high-temperature superconductors, will require us to design new materials with specific properties. To do this on a computer requires simulating electrons, the subatomic particles that govern how atoms bond to form molecules and are also…
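To give a flavour of what "simulating electrons on a computer" means at its very simplest, here is a toy sketch of my own (not the method from the paper above): a single electron in a one-dimensional infinite well, discretised with finite differences so the Schrödinger equation becomes an ordinary matrix eigenvalue problem. The function name and grid size are illustrative choices.

```python
import numpy as np

def ground_state_energy(n_points=200):
    """Lowest eigenvalue of -1/2 d^2/dx^2 on (0, 1) with hard walls.

    Atomic units (hbar = m = 1); the exact answer is pi^2 / 2.
    """
    h = 1.0 / (n_points + 1)            # grid spacing inside the box
    main = np.full(n_points, 2.0)       # diagonal of the -d^2/dx^2 stencil
    off = np.full(n_points - 1, -1.0)   # off-diagonals of the stencil
    laplacian = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2
    hamiltonian = 0.5 * laplacian       # kinetic energy only: no potential inside
    energies = np.linalg.eigvalsh(hamiltonian)
    return energies[0]                  # ground-state energy, close to pi^2 / 2
```

Real materials simulation involves many interacting electrons, which is why approximations such as density functional theory exist; but even this one-electron toy shows the basic computational shape of the problem.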
In our recent paper, we explore how multi-agent deep reinforcement learning can serve as a model of complex social interactions, like the formation of social norms. This new class of models could provide a path to creating richer, more detailed simulations of the world. Humans are an ultrasocial species. Relative to other mammals, we…
We believe artificial intelligence (AI) is one of the most significant technologies of our age and we want to help people understand its potential and how it’s being created. In 2019, we released DeepMind: The Podcast to explore these ideas, answer common questions and give an inside look at how AI research happens at a…
In our recent paper, we show that it is possible to automatically find inputs that elicit harmful text from language models by generating inputs using language models themselves. Our approach provides one tool for finding harmful model behaviours before users are impacted, though we emphasise that it should be viewed as one component alongside many…
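The overall loop described above can be sketched in a few lines: one model proposes test prompts, the target model answers, and a classifier flags harmful replies. The three stub functions below are hypothetical stand-ins I wrote for illustration, not the paper's models or any real API.

```python
def red_team_lm(n):
    """Stub 'red team' LM: emits n candidate test prompts (hypothetical)."""
    templates = ["Tell me about {}", "Why do you hate {}?", "Describe {}"]
    topics = ["cats", "your users", "the weather"]
    return [templates[i % 3].format(topics[i % 3]) for i in range(n)]

def target_lm(prompt):
    """Stub target LM: a trivial echo standing in for a real model."""
    return "I think " + prompt.lower()

def harm_classifier(text):
    """Stub harm classifier: flags replies containing the word 'hate'."""
    return "hate" in text

def find_failing_prompts(num_candidates=9):
    """Generate prompts, query the target, keep those with flagged replies."""
    failures = []
    for prompt in red_team_lm(num_candidates):
        reply = target_lm(prompt)
        if harm_classifier(reply):
            failures.append(prompt)
    return failures
```

The returned prompts are the red-teaming output: concrete inputs known to trigger the unwanted behaviour, which can then be inspected or used for further training.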
Collaborating with YouTube to optimise video compression in the open-source VP9 codec. In 2016, we introduced AlphaGo, the first artificial intelligence program to defeat a world champion at the ancient game of Go. Its successors, AlphaZero and then MuZero, each represented a significant step forward in the pursuit of general-purpose algorithms, mastering a greater number of…