Overview

With a machine learning approach and less focus on linguistic details, this gentle introduction to natural language processing develops fundamental mathematical and deep learning models for NLP under a unified framework. NLP problems are systematically organised by their machine learning nature, including classification, sequence labelling, and sequence-to-sequence problems. Topics covered include statistical machine learning and deep learning models, text classification and structured prediction models, generative and discriminative models, supervised and unsupervised learning with latent variables, neural networks, and transition-based methods. Rich connections are drawn between concepts throughout the book, equipping students with the tools needed to establish a deep understanding of NLP solutions, adapt existing models, and confidently develop innovative models of their own. Featuring a host of examples, intuition, and end-of-chapter exercises, plus sample code available as an online resource, this textbook is an invaluable tool for the upper undergraduate and graduate student.

Full Product Details

Author: Yue Zhang, Zhiyang Teng
Publisher: Cambridge University Press
Imprint: Cambridge University Press
Dimensions: Width: 19.30 cm, Height: 2.70 cm, Length: 25.20 cm
Weight: 1.190 kg
ISBN: 9781108420211
ISBN 10: 1108420214
Pages: 484
Publication Date: 07 January 2021
Audience: College/higher education, Professional and scholarly, Tertiary & Higher Education, Professional & Vocational
Format: Hardback
Publisher's Status: Active
Availability: Available To Order

Table of Contents

Part I. Basics: 1. Introduction; 2. Counting relative frequencies; 3. Feature vectors; 4. Discriminative linear classifiers; 5. A perspective from information theory; 6. Hidden variables;
Part II. Structures: 7. Generative sequence labelling; 8. Discriminative sequence labelling; 9. Sequence segmentation; 10. Predicting tree structures; 11. Transition-based methods for structured prediction; 12. Bayesian models;
Part III. Deep Learning: 13. Neural network; 14. Representation learning; 15. Neural structured prediction; 16. Working with two texts; 17. Pre-training and transfer learning; 18. Deep latent variable models;
Index.

Reviews

'An amazingly compact, and at the same time comprehensive, introduction and reference to natural language processing (NLP). It describes the NLP basics, then employs this knowledge to solve typical NLP problems. It achieves very high coverage of NLP through a clever abstraction to typical high-level tasks, such as sequence labelling. Finally, it explains the topics in deep learning. The book captivates through its simple elegance, depth, and accessibility to a wide range of readers from undergrads to experienced researchers.' Iryna Gurevych, Technical University of Darmstadt, Germany

'An excellent introduction to the field of natural language processing, including recent advances in deep learning. By organising the material in terms of machine learning techniques - instead of the more traditional division by linguistic levels or applications - the authors are able to discuss different topics within a single coherent framework, with a gradual progression from basic notions to more complex material.' Joakim Nivre, Uppsala University

'The book is a valuable tool for both beginning and advanced researchers in the field.' Catalin Stoean, zbMATH
Author Information

Yue Zhang is an associate professor at Westlake University. Before joining Westlake, he worked as a research associate at the University of Cambridge and then as a faculty member at the Singapore University of Technology and Design. His research interests lie in fundamental algorithms for NLP, syntax, semantics, information extraction, text generation, and machine translation. He serves as an action editor for TACL and as an area chair for ACL, EMNLP, COLING, and NAACL. He has given several tutorials at ACL, EMNLP, and NAACL, and won a best paper award at COLING in 2018.

Zhiyang Teng is currently a postdoctoral research fellow in the natural language processing group at Westlake University, China. He obtained his Ph.D. from the Singapore University of Technology and Design (SUTD) in 2018 and his Master's from the University of Chinese Academy of Sciences in 2014. He won the best paper award at CCL/NLP-NABD 2014 and has published papers at ACL, EMNLP, COLING, and NAACL, and in TACL and TKDE. His research interests include syntactic parsing, sentiment analysis, deep learning, and variational inference.