Overview
Explore reusable design patterns for LLM application development, including data-centric approaches, model development, model fine-tuning, RAG, and advanced prompting techniques.

Free with your book: PDF copy, AI Assistant, and Next-Gen Reader

Key Features
- Learn comprehensive LLM development, including data preparation, training pipelines, and optimization
- Explore advanced prompting techniques, such as chain-of-thought, tree-of-thought, RAG, and AI agents
- Implement evaluation metrics, interpretability, and bias detection for fair, reliable models

Book Description
This practical guide for AI professionals enables you to build on the power of design patterns to develop robust, scalable, and efficient large language models (LLMs). Written by a global AI expert and popular author driving standards and innovation in generative AI, security, and strategy, the book covers the end-to-end lifecycle of LLM development and introduces reusable architectural and engineering solutions to common challenges in data handling, model training, evaluation, and deployment.

You'll learn to clean, augment, and annotate large-scale datasets, architect modular training pipelines, and optimize models using hyperparameter tuning, pruning, and quantization. The chapters help you explore regularization, checkpointing, fine-tuning, and advanced prompting methods such as reason-and-act, as well as implement reflection, multi-step reasoning, and tool use for intelligent task completion. The book also highlights Retrieval-Augmented Generation (RAG), graph-based retrieval, interpretability, fairness, and RLHF, culminating in the creation of agentic LLM systems.

By the end of this book, you'll be equipped with the knowledge and tools to build next-generation LLMs that are adaptable, efficient, safe, and aligned with human values.

What you will learn
- Implement efficient data preparation techniques, including cleaning and augmentation
- Design scalable training pipelines with tuning, regularization, and checkpointing
- Optimize LLMs via pruning, quantization, and fine-tuning
- Evaluate models with metrics, cross-validation, and interpretability
- Understand fairness and detect bias in model outputs
- Develop RLHF strategies to build secure, agentic AI systems

Who this book is for
This book is essential for AI engineers, architects, data scientists, and software engineers responsible for developing and deploying AI systems powered by large language models. A basic understanding of machine learning concepts and experience in Python programming are a must.

Full Product Details
Author: Ken Huang
Publisher: Packt Publishing Limited
Imprint: Packt Publishing Limited
ISBN-13: 9781836207030
ISBN-10: 1836207034
Pages: 538
Publication Date: 30 May 2025
Audience: General/trade
Format: Paperback
Publisher's Status: Active
Availability: Available to order
Table of Contents
Introduction to LLM Design Patterns
Data Cleaning for LLM Training
Data Augmentation
Handling Large Datasets for LLM Training
Data Versioning
Dataset Annotation and Labeling
Training Pipeline
Hyperparameter Tuning
Regularization
Checkpointing and Recovery
Fine-Tuning
Model Pruning
Quantization
Evaluation Metrics
Cross-Validation
Interpretability
Fairness and Bias Detection
Adversarial Robustness
Reinforcement Learning from Human Feedback
Chain-of-Thought Prompting
Tree-of-Thoughts Prompting
Reasoning and Acting
Reasoning WithOut Observation
Reflection Techniques
Automatic Multi-Step Reasoning and Tool Use
Retrieval-Augmented Generation
Graph-Based RAG
Advanced RAG
Evaluating RAG Systems
Agentic Patterns

Author Information
Ken Huang is an acclaimed author of eight books on AI and Web3. He is the Co-Chair of the AI Organizational Responsibility Working Group and the AI Control Framework at the Cloud Security Alliance. In addition, Huang has contributed extensively to key initiatives in the space: he is a core contributor to OWASP's Top 10 Risks for LLM Applications, is heavily involved in the NIST Generative AI Public Working Group, and has provided feedback on publications such as NIST SP 800-226. A sought-after speaker, Ken has shared his insights at renowned global conferences, including those hosted by the World Economic Forum in Davos, ACM, IEEE, the CSA AI Summit, CSA AI Think Tank Day, and the World Bank. His recent co-authorship of "Blockchain and Web3: Building the Cryptocurrency, Privacy, and Security Foundations of the Metaverse" adds to his reputation, with the book recognized by TechTarget as a must-read in both 2023 and 2024.

Countries Available: All regions