Ameya Daigavane, Balaraman Ravindran, and Gaurav Aggarwal
Understanding the building blocks and design choices of graph neural networks.
Benjamin Sanchez-Lengeling, Emily Reif, Adam Pearce, and Alexander B. Wiltschko
What components are needed for building learning algorithms that leverage the structure and properties of graphs?
Editorial Team
After five years, Distill will be taking a break.
Gabriel Goh, Nick Cammarata, Chelsea Voss, Shan Carter, Michael Petrov, Ludwig Schubert, Alec Radford, and Chris Olah
We report the existence of multimodal neurons in artificial neural networks, similar to those found in the human brain.
Jacob Hilton, Nick Cammarata, Shan Carter, Gabriel Goh, and Chris Olah
With diverse environments, we can analyze, diagnose and edit deep reinforcement learning models using attribution.
Fred Hohman, Matthew Conlen, Jeffrey Heer, and Duen Horng (Polo) Chau
Examining the design of interactive articles by synthesizing theory from disciplines such as education, journalism, and visualization.
Alexander Mordvintsev, Ettore Randazzo, Eyvind Niklasson, Michael Levin, and Sam Greydanus
A collection of articles and comments with the goal of understanding how to design robust and general-purpose self-organizing systems.
Apoorv Agnihotri and Nipun Batra
How to tune hyperparameters for your machine learning model using Bayesian optimization.
Mingwei Li, Zhenge Zhao, and Carlos Scheidegger
By focusing on linear dimensionality reduction, we show how to visualize many dynamic phenomena in neural networks.
Nick Cammarata, Shan Carter, Gabriel Goh, Chris Olah, Michael Petrov, Ludwig Schubert, Chelsea Voss, Ben Egan, and Swee Kiat Lim
What can we learn if we invest heavily in reverse engineering a single neural network?
Pascal Sturmfels, Scott Lundberg, and Su-In Lee
Exploring the baseline input hyperparameter, and how it impacts interpretations of neural network behavior.
André Araujo, Wade Norris, and Jack Sim
Detailed derivations and open-source code to analyze the receptive fields of convnets.
Sam Greydanus and Chris Olah
A closer look at how Temporal Difference Learning merges paths of experience for greater statistical efficiency.
Logan Engstrom, Justin Gilmer, Gabriel Goh, Dan Hendrycks, Andrew Ilyas, Aleksander Madry, Reiichiro Nakano, Preetum Nakkiran, Shibani Santurkar, Brandon Tran, Dimitris Tsipras, and Eric Wallace
Six comments from the community and responses from the original authors.
Augustus Odena
What we’d like to find out about GANs that we don’t know yet.
Jochen Görtler, Rebecca Kehlbeck, and Oliver Deussen
How to turn a collection of small building blocks into a versatile tool for solving regression problems.
Andreas Madsen
Inspecting gradient magnitudes in context can be a powerful tool to see when recurrent units use short-term or long-term contextual understanding.
Shan Carter, Zan Armstrong, Ludwig Schubert, Ian Johnson, and Chris Olah
By using feature inversion to visualize millions of activations from an image classification network, we create an explorable activation atlas of features the network has learned and what concepts it typically represents.
Geoffrey Irving and Amanda Askell
If we want to train AI to do what humans want, we need to study humans.
Distill Editors
An Update from the Editorial Team
Alexander Mordvintsev, Nicola Pezzotti, Ludwig Schubert, and Chris Olah
A powerful, under-explored tool for neural network visualizations and art.
Vincent Dumoulin, Ethan Perez, Nathan Schucher, Florian Strub, Harm de Vries, Aaron Courville, and Yoshua Bengio
A simple and surprisingly effective family of conditioning mechanisms.
Chris Olah, Arvind Satyanarayan, Ian Johnson, Shan Carter, Ludwig Schubert, Katherine Ye, and Alexander Mordvintsev
Interpretability techniques are normally studied in isolation. We explore the powerful interfaces that arise when you combine them — and the rich structure of this combinatorial space.
Shan Carter and Michael Nielsen
By creating user interfaces which let us work with the representations inside machine learning models, we can give people new tools for reasoning.
Awni Hannun
A visual guide to Connectionist Temporal Classification, an algorithm used to train deep neural networks in speech recognition, handwriting recognition and other sequence problems.
Chris Olah, Alexander Mordvintsev, and Ludwig Schubert
How neural networks build up their understanding of images.
Gabriel Goh
We often think of optimization with momentum as a ball rolling down a hill. This isn’t wrong, but there is much more to the story.
Chris Olah and Shan Carter
Science is a human activity. When we fail to distill and explain research, we accumulate a kind of debt...
Shan Carter, David Ha, Ian Johnson, and Chris Olah
Several interactive visualizations of a generative model of handwriting. Some are fun, some are serious.
Augustus Odena, Vincent Dumoulin, and Chris Olah
When we look very closely at images generated by neural networks, we often see a strange checkerboard pattern of artifacts.
Martin Wattenberg, Fernanda Viégas, and Ian Johnson
Although extremely useful for visualizing high-dimensional data, t-SNE plots can sometimes be mysterious or misleading.
Chris Olah and Shan Carter
A visual overview of neural attention, and the powerful extensions of neural networks being built on top of it.