Overview

Discover the cutting-edge advancements in knowledge distillation for computer vision in this comprehensive monograph. As neural networks grow increasingly complex, the demand for efficient, lightweight models becomes critical, especially in real-world applications. This book bridges the gap between academic research and industrial implementation, exploring methods to compress and accelerate deep neural networks without sacrificing accuracy. It addresses two fundamental problems in knowledge distillation: constructing effective student and teacher models, and selecting the appropriate knowledge to distill. Presenting research on self-distillation and task-irrelevant knowledge distillation, the book offers new perspectives on model optimization. Readers will gain insight into applying these techniques across a wide range of visual tasks, from 2D and 3D object detection to image generation, and will learn to enhance model performance, reduce computational cost, and improve model robustness. The book is aimed at researchers, practitioners, and advanced students with a background in computer vision and deep learning who want to design and implement knowledge distillation to improve the efficiency of computer vision models. (A minimal illustrative sketch of the standard distillation loss is included at the end of this listing.)

Full Product Details

Author: Linfeng Zhang
Publisher: Springer
Imprint: Springer
ISBN: 9789819503667
ISBN-10: 9819503663
Pages: 125
Publication Date: 22 September 2025
Audience: Professional and scholarly; Professional & Vocational
Format: Paperback
Publisher's Status: Forthcoming
Availability: Not yet available. This item is yet to be released; you can pre-order it and we will dispatch it to you upon its release.

Table of Contents

Chapter 1: Introduction
Chapter 2: Student and Teacher Models in KD
Chapter 3: Distilled Knowledge in KD
Chapter 4: Application of KD in High-Level Vision Tasks
Chapter 5: Application of KD in Low-Level Vision Tasks
Chapter 6: Application of KD beyond Model Compression
Chapter 7: Conclusion

Reviews

Author Information

Dr. Linfeng Zhang is an assistant professor in the School of Artificial Intelligence, Shanghai Jiao Tong University. He graduated from the Institute for Interdisciplinary Information Sciences at Tsinghua University with a doctoral degree in Computer Science and Technology, specializing in computer vision model compression and acceleration. His doctoral dissertation, "Structured Knowledge Distillation: Towards Efficient Visual Intelligence," was recognized as an outstanding doctoral dissertation by Tsinghua University. He has served as a reviewer for more than a dozen top academic conferences and journals, including IEEE TPAMI, NeurIPS, ICLR, and CVPR, for several consecutive years. He has published more than 20 academic papers as first or corresponding author. According to Google Scholar, his papers have been cited 2,300 times, with his most-cited first-authored paper exceeding 1,000 citations. At ICCV 2019, he proposed the Self-Distillation algorithm, one of the representative works in the field of knowledge distillation.
He has successfully applied knowledge distillation algorithms to visual tasks such as object detection, instance segmentation, and image generation, and to different types of visual data, including images, multi-view images, point clouds, and videos, to compress and accelerate visual models. His research achievements have also been utilized in the Qiming series chips developed by Polar Bear Technology, Huawei, DiDi Global, and Kwai, providing compression and acceleration for artificial intelligence models in real industrial scenarios.
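For readers new to the topic, the following is a minimal, illustrative sketch of the classic soft-label knowledge distillation loss (temperature-scaled KL divergence between teacher and student predictions, combined with cross-entropy on ground-truth labels). It is a generic example of the technique the book surveys, not code from the book itself, and the function and parameter names (such as distillation_loss, temperature, and alpha) are our own choices for this sketch.

```python
# Illustrative only: standard soft-label knowledge distillation loss
# (Hinton-style). Not code from the book; names are hypothetical.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 4.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """Combine hard-label cross-entropy with soft-label KL distillation."""
    # Soft targets: temperature-scaled teacher probabilities vs. student log-probs.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # The temperature**2 factor keeps gradient magnitudes comparable across temperatures.
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
    # Hard targets: ordinary cross-entropy on ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Example usage with random tensors (batch of 8, 10 classes).
if __name__ == "__main__":
    student_logits = torch.randn(8, 10, requires_grad=True)
    teacher_logits = torch.randn(8, 10)
    labels = torch.randint(0, 10, (8,))
    loss = distillation_loss(student_logits, teacher_logits, labels)
    loss.backward()
    print(float(loss))
```

In practice the teacher is a larger pretrained network run in evaluation mode, the student is the smaller model being trained, and alpha and temperature are tuned per task; the book's chapters discuss richer choices of what to distill beyond these output logits.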