The Principles of Deep Learning Theory: An Effective Theory Approach to Understanding Neural Networks

Our Price:   $155.22

Overview

This textbook establishes a theoretical framework for understanding deep learning models of practical relevance. With an approach that borrows from theoretical physics, Roberts and Yaida provide clear and pedagogical explanations of how realistic deep neural networks actually work. To make results from the theoretical forefront accessible, the authors eschew the subject's traditional emphasis on intimidating formality without sacrificing accuracy. Straightforward and approachable, this volume balances detailed first-principle derivations of novel results with insight and intuition for theorists and practitioners alike. This self-contained textbook is ideal for students and researchers interested in artificial intelligence with minimal prerequisites of linear algebra, calculus, and informal probability theory, and it can easily fill a semester-long course on deep learning theory. For the first time, the exciting practical advances in modern artificial intelligence capabilities can be matched with a set of effective principles, providing a timeless blueprint for theoretical research in deep learning.
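As a taste of the book's subject matter, the short Python sketch below empirically illustrates one of its central objects: the statistics of preactivations in a wide, randomly initialized network. This is an illustration written for this page, not code from the book; the function name, the tanh activation, and the bias-free setup are our own assumptions. For tanh networks the book identifies (C_b, C_W) = (0, 1) as the critical initialization, at which the preactivation variance degrades only slowly with depth instead of collapsing or saturating exponentially fast.

import numpy as np

def preactivation_variances(depth, width, c_w, n_samples=200, seed=0):
    """Track the empirical variance of preactivations layer by layer in a
    randomly initialized, bias-free tanh network (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    a = rng.normal(size=(n_samples, width))  # layer-0 activations: random inputs
    variances = []
    for _ in range(depth):
        # Weights drawn i.i.d. with variance C_W / width, matching the
        # initialization hyperparameters used in the book's setup.
        W = rng.normal(scale=np.sqrt(c_w / width), size=(width, width))
        z = a @ W.T               # preactivations of the next layer
        variances.append(z.var())
        a = np.tanh(z)            # activations feeding the following layer
    return variances

# For tanh, (C_b, C_W) = (0, 1) is critical: the variance decays only
# polynomially with depth, while C_W < 1 sends it exponentially to zero
# and C_W > 1 drives it toward a nontrivial fixed point.
for c_w in (0.5, 1.0, 2.0):
    v = preactivation_variances(depth=50, width=1000, c_w=c_w)
    print(f"C_W = {c_w}: variance at layer 50 ≈ {v[-1]:.3g}")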

Full Product Details

Author:   Daniel A. Roberts (Massachusetts Institute of Technology), Sho Yaida, Boris Hanin (Princeton University, New Jersey)
Publisher:   Cambridge University Press
Imprint:   Cambridge University Press
Edition:   New edition
Dimensions:   Width: 18.40cm, Height: 2.60cm, Length: 26.10cm
Weight:   1.060kg
ISBN-13:   9781316519332
ISBN-10:   1316519333
Pages:   472
Publication Date:   26 May 2022
Audience:   General/trade, General
Format:   Hardback
Publisher's Status:   Active
Availability:   Manufactured on demand (we will order this item for you from a manufactured on demand supplier)

Table of Contents

Preface
0. Initialization
1. Pretraining
2. Neural networks
3. Effective theory of deep linear networks at initialization
4. RG flow of preactivations
5. Effective theory of preactivations at initialization
6. Bayesian learning
7. Gradient-based learning
8. RG flow of the neural tangent kernel
9. Effective theory of the NTK at initialization
10. Kernel learning
11. Representation learning
∞. The end of training
ε. Epilogue
A. Information in deep learning
B. Residual learning
References
Index

Reviews

'In the history of science and technology, the engineering artifact often comes first: the telescope, the steam engine, digital communication. The theory that explains its function and its limitations often appears later: the laws of refraction, thermodynamics, and information theory. With the emergence of deep learning, AI-powered engineering wonders have entered our lives - but our theoretical understanding of the power and limits of deep learning is still partial. This is one of the first books devoted to the theory of deep learning, and lays out the methods and results from recent theoretical approaches in a coherent manner.' Yann LeCun, New York University and Chief AI Scientist at Meta

'For a physicist, it is very interesting to see deep learning approached from the point of view of statistical physics. This book provides a fascinating perspective on a topic of increasing importance in the modern world.' Edward Witten, Institute for Advanced Study

'This is an important book that contributes big, unexpected new ideas for unraveling the mystery of deep learning's effectiveness, in unusually clear prose. I hope it will be read and debated by experts in all the relevant disciplines.' Scott Aaronson, University of Texas at Austin

'It is not an exaggeration to say that the world is being revolutionized by deep learning methods for AI. But why do these deep networks work? This book offers an approach to this problem through the sophisticated tools of statistical physics and the renormalization group. The authors provide an elegant guided tour of these methods, interesting for experts and non-experts alike. They write with clarity and even moments of humor. Their results, many presented here for the first time, are the first steps in what promises to be a rich research program, combining theoretical depth with practical consequences.' William Bialek, Princeton University

'This book's physics-trained authors have made a cool discovery, that feature learning depends critically on the ratio of depth to width in the neural net.' Gilbert Strang, Massachusetts Institute of Technology


Author Information

Daniel A. Roberts was cofounder and CTO of Diffeo, an AI company acquired by Salesforce; a research scientist at Facebook AI Research; and a member of the School of Natural Sciences at the Institute for Advanced Study in Princeton, NJ. He was a Hertz Fellow, earning a PhD from MIT in theoretical physics, and was also a Marshall Scholar at Cambridge and Oxford Universities.

Sho Yaida is a research scientist at Meta AI. Prior to joining Meta AI, he obtained his PhD in physics at Stanford University and held postdoctoral positions at MIT and at Duke University. At Meta AI, he uses tools from theoretical physics to understand neural networks, the topic of this book.

Boris Hanin is an Assistant Professor at Princeton University in the Operations Research and Financial Engineering Department. Prior to joining Princeton in 2020, Boris was an Assistant Professor at Texas A&M in the Math Department and an NSF postdoc at MIT. He has taught graduate courses on the theory and practice of deep learning at both Texas A&M and Princeton.

Countries Available

All regions