
The Science of Deep Learning
March 13–14, 2019
National Academy of Sciences, Washington, D.C.
Organized by: David Donoho, Maithra Raghu, Ali Rahimi, Ben Recht and Matan Gavish
Artificial neural networks have re-emerged as a powerful concept for designing state-of-the-art algorithms in machine learning and artificial intelligence. Across a variety of fields, these architectures seem to outperform time-honored machine learning methods. Interestingly, our understanding of why and when these methods work remains limited. At the same time, an increasing number of mission-critical systems depend on deep neural networks, from autonomous vehicles to social media platforms that influence political discourse. Scientists are also beginning to rely more on deep learning as a knowledge discovery tool as research becomes ever more data driven.
This interdisciplinary meeting began with talks that surveyed the state of affairs in deep learning in academia and industry, the projected developments in the coming years, and the broader implications for science and society. The colloquium then covered two timely, interleaved topics. First, what can deep learning do for science? Which disciplines are already integrating deep learning, and what lies ahead for scientists using deep learning? Second, what can science do for deep learning? What insights can deep learning gain from scientists who study complex systems (e.g., in physics, chemistry, and the life sciences)? Can experimental techniques be used to study the nature of artificial deep neural networks? Can familiar principles that emerge in natural complex systems help us understand deep neural networks?
Wednesday, March 13th
Session I: The State of Deep Learning (Chair: Donoho)
Opening remarks: David Donoho, Stanford University
Overview talk (I): Amnon Shashua, Hebrew University / Mobileye, Successes and Challenges in Modern Artificial Intelligence
Overview talk (II): Jitendra Malik, University of California, Berkeley
Talk: Chris Manning, Stanford University, The State of Deep Learning for Natural Language Processing
Talk: Oriol Vinyals, Google AI, The State of Deep Reinforcement Learning
Critical Perspective: Strengths and fallacies in the dominant DL narrative
Moderator: David Donoho, Stanford University
Terrence Sejnowski, Salk Institute for Biological Studies
Tomaso Poggio, Massachusetts Institute of Technology
Regina Barzilay, Massachusetts Institute of Technology
Rodney Brooks, Massachusetts Institute of Technology
Session II: Deep Learning in Science (Chair: Raghu)
Talk: Regina Barzilay, Massachusetts Institute of Technology
Talk: Kyle Cranmer, New York University, Experiences with deep learning in particle physics
Talk: Olga Troyanskaya, Princeton University
Talk: Eero Simoncelli, New York University
Counterpoint: Bruno Olshausen, University of California, Berkeley, Can deep learning provide deep insight in neuroscience?
Counterpoint: Antonio Torralba, Massachusetts Institute of Technology
Panel Discussion: Scientific Funding for Deep Learning
Moderator: Juan Meza, NSF (TBC)
Robert Bonneau, DOD
Hava Siegelmann, DARPA
Henry Kautz, NSF
Richard (Doug) Riecken, Air Force Office of Scientific Research
Evening
Annual Sackler Lecture
Introduction by Marcia McNutt, President, National Academy of Sciences
Rodney Brooks, Massachusetts Institute of Technology
Thursday, March 14th
Session III: Theoretical Perspectives on Deep Learning (Chair: Rahimi)
Talk: Tomaso Poggio, Massachusetts Institute of Technology, Deep Learning: Solving the Approximation, Optimization and Generalization Puzzles
Talk: Nati Srebro, Toyota Technological Institute at Chicago
Talk: Peter Bartlett, University of California, Berkeley, Accurate prediction from interpolation: A new challenge for statistical learning theory
Counterpoint: Konrad Kording, University of Pennsylvania, Why neuroscience needs deep learning theory
Counterpoint: Anders Hansen, Cambridge University, On instabilities in deep learning - Does AI come at a cost?
Counterpoint: Ronald Coifman, Yale University, Deeper Learning in Empirical Science: some requirements and needs
Critical Perspective: Could a good DL theory change practice?
Moderator: Ben Recht, University of California, Berkeley
Eero Simoncelli, New York University
Julia Kempe, New York University Center for Data Science
Policy and Science Funding Panel
Panel Discussion: Drivers and considerations for federal and industry investment in fundamental academic AI research
Moderator: Jim Kurose, NSF
John Beieler, IARPA
Juan Meza, National Science Foundation
Tony Thrall, National Security Agency
Session IV: Experimental Perspectives on Deep Learning (Chair: Gavish)
Short talk: Jonathon Phillips, National Institute of Standards and Technology, Data Sets for Analyzing Face Recognition Performance of Humans and Algorithms
Short talk: Isabelle Guyon, Paris-Sud University & ClopiNet, Neural Solvers for Power Transmission Problems
Talk: Doina Precup, McGill University, From deep reinforcement learning to AI
Talk: Haim Sompolinsky, Hebrew University of Jerusalem, Theory-based measures of object representations in deep artificial and biological networks
Counterpoint: Tara Sainath, Google AI
Critical Perspective: What’s missing in today’s experimental analysis of DL?
Moderator: Jonathon Phillips, NIST
Jitendra Malik, University of California, Berkeley
Peter Bartlett, University of California, Berkeley
Antonio Torralba, Massachusetts Institute of Technology
Isabelle Guyon, Paris-Sud University & ClopiNet
Summary: Right ways forward? (Chair: Donoho)
Terrence Sejnowski, Salk Institute for Biological Studies
Jon Kleinberg, Cornell University
Léon Bottou, Facebook AI Research, From Machine Learning to Artificial Intelligence