Overview

Through information theory, problems of communication and compression can be precisely modeled, formulated, and analyzed, and this information can be transformed by means of algorithms. Learning, too, can be viewed as compression with side information. Aimed at students and researchers, this book addresses data compression and redundancy within existing methods and central topics in theoretical data compression, demonstrating how to use tools from analytic combinatorics to discover and analyze the precise behavior of source codes. It shows that to represent learnable or extractable information in its shortest description, one must understand what the information is, and then algorithmically extract it in its most compact form via an efficient compression algorithm. Part I covers fixed-to-variable codes such as Shannon and Huffman codes, variable-to-fixed codes such as Tunstall and Khodak codes, and variable-to-variable Khodak codes for known sources. Part II discusses universal source coding for memoryless, Markov, and renewal sources.

Full Product Details

Author: Michael Drmota (Technische Universität Wien, Austria), Wojciech Szpankowski (Purdue University, Indiana)
Publisher: Cambridge University Press
Imprint: Cambridge University Press
ISBN: 9781108474443
ISBN-10: 1108474446
Pages: 400
Publication Date: 07 September 2023
Audience: College/higher education; Postgraduate, Research & Scholarly
Format: Hardback
Publisher's Status: Active
Availability: Manufactured on demand

Reviews

'Drmota and Szpankowski's book presents an exciting and very timely review of the theory of lossless data compression, from one of the modern points of view. Their development draws interesting connections with learning theory, and it is based on a collection of powerful analytical techniques.'
Ioannis Kontoyiannis, University of Cambridge

'Drmota and Szpankowski, leading experts in the mathematical analysis of discrete structures, present here a compelling treatment unifying modern and classical results in information theory and analytic combinatorics. This book is certain to be a standard reference for years to come.'

Robert Sedgewick, Princeton University

Author Information

Michael Drmota is Professor for Discrete Mathematics at TU Wien. His research activities range from analytic combinatorics and discrete random structures to number theory. He has published several books, including Random Trees (2009), and about 200 research articles. He was President of the Austrian Mathematical Society from 2010 to 2013 and has been a Corresponding Member of the Austrian Academy of Sciences since 2013.

Wojciech Szpankowski is the Saul Rosen Distinguished Professor of Computer Science at Purdue University, where he teaches and conducts research in analysis of algorithms, information theory, analytic combinatorics, random structures, and machine learning for classical and quantum data. He has received the Inaugural Arden L. Bement Jr. Award (2015) and the Flajolet Lecture Prize (2020), among other honors. In 2021, he was elected to the Academia Europaea. In 2008, he launched the interdisciplinary Institute for Science of Information, and in 2010, he became the Director of the NSF Science and Technology Center for Science of Information.

Countries Available: All regions
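As a small illustration of the fixed-to-variable codes covered in Part I, the sketch below builds a Huffman prefix code for a known source by repeatedly merging the two least-frequent subtrees. This is a generic textbook construction, not code from the book itself; the function name and the sample input are illustrative choices.

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Build a prefix-free code from a {symbol: frequency} dict
    via Huffman's algorithm (merge two lightest subtrees)."""
    # Each heap entry: (weight, unique tiebreak index, {symbol: codeword}).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    if len(heap) == 1:
        # Degenerate one-symbol alphabet: assign a single bit.
        return {s: "0" for s in heap[0][2]}
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        # Prepend a bit distinguishing the two merged subtrees.
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

text = "abracadabra"
code = huffman_code(Counter(text))
encoded = "".join(code[s] for s in text)
```

For the frequencies of "abracadabra" (a:5, b:2, r:2, c:1, d:1), any optimal prefix code encodes the string in 23 bits, which is what this construction achieves; Part I of the book analyzes precisely how close such code lengths come to the source entropy.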