## Overview

Most intermediate-level machine learning books focus on how to optimize models by increasing accuracy or decreasing prediction error, but this approach often overlooks the importance of understanding why and how an ML model makes the predictions it does. Explainability methods provide an essential toolkit for understanding model behavior, and this practical guide brings together best-in-class techniques for model explainability. Experienced machine learning engineers and data scientists will learn hands-on how these techniques work, so they can apply these tools more easily in their daily workflows.

This essential book provides:

- A detailed look at some of the most useful and commonly used explainability techniques, highlighting pros and cons to help you choose the best tool for your needs
- Tips and best practices for implementing these techniques
- A guide to interpreting explainability results and avoiding common pitfalls
- The knowledge you need to incorporate explainability into your ML workflow to help build more robust ML systems
- Advice about explainable AI techniques, including how to apply them to models that consume tabular, image, or text data
- Example implementation code in Python using well-known explainability libraries for models built in Keras and TensorFlow 2.0, PyTorch, and HuggingFace

## Full Product Details

- Author: Michael Munn, David Pitman, Parker Barnes
- Publisher: O'Reilly Media
- Imprint: O'Reilly Media
- ISBN-13: 9781098119133
- ISBN-10: 1098119134
- Pages: 250
- Publication Date: 11 November 2022
- Audience: Professional and scholarly; Professional & Vocational
- Format: Paperback
- Publisher's Status: Active
- Availability: In Print

## Author Information

Michael Munn is a research software engineer at Google. His work focuses on better understanding the mathematical foundations of machine learning and how those insights can be used to improve machine learning models at Google. Previously, he worked in the Google Cloud Advanced Solutions Lab, helping customers design, implement, and deploy machine learning models at scale. Michael has a PhD in mathematics from the City University of New York. Before joining Google, he worked as a research professor.

David Pitman is a staff engineer working in Google Cloud on the AI Platform, where he leads the Explainable AI team. He is also a co-organizer of PuPPy, the largest Python group in the Pacific Northwest. David has a Master of Engineering degree and a BS in computer science from MIT, where he previously served as a research scientist.

Countries Available: All regions