Overview

Artificial intelligence and machine learning have emerged as driving forces behind transformative advances in many fields and have become increasingly pervasive in industry and daily life. As these technologies continue to gain momentum, so does the need for a deeper understanding of their underlying principles, capabilities, and limitations. In this monograph, the authors focus on the theory of machine learning and statistical learning theory, with particular emphasis on the generalization capabilities of learning algorithms. Part I covers the foundations of information-theoretic and PAC-Bayesian generalization bounds for standard supervised learning. Part II explores applications of these bounds, as well as extensions to settings beyond standard supervised learning; important areas of application include neural networks, federated learning, and reinforcement learning. The monograph concludes with a broader discussion of information-theoretic and PAC-Bayesian generalization bounds as a whole. It will be of interest to students and researchers working on generalization and theoretical machine learning, providing a comprehensive introduction to information-theoretic generalization bounds and their connection to PAC-Bayes, and serving as a foundation from which the most recent developments are accessible.

Full Product Details

Authors: Fredrik Hellström, Giuseppe Durisi, Benjamin Guedj, Maxim Raginsky
Publisher: now publishers Inc
Imprint: now publishers Inc
Weight: 0.346 kg
ISBN-13: 9781638284208
ISBN-10: 1638284202
Pages: 242
Publication Date: 23 January 2025
Audience: Professional and scholarly; Professional & Vocational
Format: Paperback
Publisher's Status: Active
Availability: In Print
Countries Available: All regions

Table of Contents

1. Introduction: On Generalization and Learning
2. Information-Theoretic Approach to Generalization
3. Tools
4. Generalization Bounds in Expectation
5. Generalization Bounds in Probability
6. The CMI Framework
7. The Information Complexity of Learning Algorithms
8. Neural Networks and Iterative Algorithms
9. Alternative Learning Models
10. Concluding Remarks
Acknowledgements
References