Overview

Lifelong learning addresses situations in which a learner faces a series of different learning tasks, providing the opportunity for synergy among them. Explanation-based neural network learning (EBNN) is a machine-learning algorithm that transfers knowledge across multiple learning tasks. When faced with a new learning task, EBNN exploits domain knowledge accumulated in previous learning tasks to guide generalization in the new one. As a result, EBNN generalizes more accurately from less data than comparable methods. This book describes the basic EBNN paradigm and investigates it in the context of supervised learning, reinforcement learning, robotics, and chess.

Full Product Details

Author: Sebastian Thrun, Tom M. Mitchell
Publisher: Springer
Imprint: Springer
Edition: 1996 ed.
Volume: 357
Dimensions: Width: 15.50cm, Height: 1.70cm, Length: 23.50cm
Weight: 1.270kg
ISBN-13: 9780792397168
ISBN-10: 0792397169
Pages: 264
Publication Date: 30 April 1996
Audience: College/higher education; Professional and scholarly; Postgraduate, Research & Scholarly; Professional & Vocational
Format: Hardback
Publisher's Status: Active
Availability: In Print
Countries Available: All regions

Table of Contents

1 Introduction
1.1 Motivation
1.2 Lifelong Learning
1.3 A Simple Complexity Consideration
1.4 The EBNN Approach to Lifelong Learning
1.5 Overview
2 Explanation-Based Neural Network Learning
2.1 Inductive Neural Network Learning
2.2 Analytical Learning
2.3 Why Integrate Induction and Analysis?
2.4 The EBNN Learning Algorithm
2.5 A Simple Example
2.6 The Relation of Neural and Symbolic Explanation-Based Learning
2.7 Other Approaches that Combine Induction and Analysis
2.8 EBNN and Lifelong Learning
3 The Invariance Approach
3.1 Introduction
3.2 Lifelong Supervised Learning
3.3 The Invariance Approach
3.4 Example: Learning to Recognize Objects
3.5 Alternative Methods
3.6 Remarks
4 Reinforcement Learning
4.1 Learning Control
4.2 Lifelong Control Learning
4.3 Q-Learning
4.4 Generalizing Function Approximators and Q-Learning
4.5 Remarks
5 Empirical Results
5.1 Learning Robot Control
5.2 Navigation
5.3 Simulation
5.4 Approaching and Grasping a Cup
5.5 NeuroChess
5.6 Remarks
6 Discussion
6.1 Summary
6.2 Open Problems
6.3 Related Work
6.4 Concluding Remarks
A An Algorithm for Approximating Values and Slopes with Artificial Neural Networks
A.1 Definitions
A.2 Network Forward Propagation
A.3 Forward Propagation of Auxiliary Gradients
A.4 Error Functions
A.5 Minimizing the Value Error
A.6 Minimizing the Slope Error
A.7 The Squashing Function and its Derivatives
A.8 Updating the Network Weights and Biases
B Proofs of the Theorems
C Example Chess Games
C.1 Game 1
C.2 Game 2
References
List of Symbols
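For readers wondering what the training step named in Appendix A ("Approximating Values and Slopes with Artificial Neural Networks") looks like in practice, the sketch below is a minimal, hypothetical illustration in JAX, not code from the book. Every name, the one-hidden-layer architecture, the slope weight `mu`, and the toy target f(x) = x0*x1 are my own assumptions; in EBNN itself the slope targets would be obtained by differentiating an explanation built from previously learned domain-theory networks, which is how knowledge from earlier tasks guides generalization in the new one.

```python
# Hypothetical sketch (not from the book): training a network to fit target
# VALUES and target SLOPES, the kind of step Appendix A describes for EBNN.
# Here the slope targets are simply given arrays; in EBNN they would come from
# differentiating an explanation constructed from previously learned networks.
import jax
import jax.numpy as jnp

def init_params(key, n_in=2, n_hidden=16):
    k1, k2 = jax.random.split(key)
    return {
        "W1": 0.5 * jax.random.normal(k1, (n_hidden, n_in)),
        "b1": jnp.zeros(n_hidden),
        "W2": 0.5 * jax.random.normal(k2, (n_hidden,)),
        "b2": 0.0,
    }

def net(params, x):
    # One hidden layer with a sigmoid squashing function, scalar output.
    h = jax.nn.sigmoid(params["W1"] @ x + params["b1"])
    return params["W2"] @ h + params["b2"]

# Slope of the network output with respect to its input vector.
net_slope = jax.grad(net, argnums=1)

def loss(params, xs, values, slopes, mu=0.1):
    # Value error: squared difference between output and target value.
    # Slope error: squared difference between input-gradient and target slope.
    def per_example(x, v, s):
        value_err = (net(params, x) - v) ** 2
        slope_err = jnp.sum((net_slope(params, x) - s) ** 2)
        return value_err + mu * slope_err
    return jnp.mean(jax.vmap(per_example)(xs, values, slopes))

@jax.jit
def train_step(params, xs, values, slopes, lr=0.1):
    grads = jax.grad(loss)(params, xs, values, slopes)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

if __name__ == "__main__":
    # Toy target f(x) = x0 * x1; the analytic slopes stand in for those an
    # EBNN explanation would supply.
    key = jax.random.PRNGKey(0)
    xs = jax.random.uniform(key, (64, 2))
    values = xs[:, 0] * xs[:, 1]
    slopes = jnp.stack([xs[:, 1], xs[:, 0]], axis=1)  # df/dx0 = x1, df/dx1 = x0
    params = init_params(key)
    for _ in range(500):
        params = train_step(params, xs, values, slopes)
    print("final loss:", float(loss(params, xs, values, slopes)))
```

The intuition behind fitting slopes as well as values, consistent with the overview's claim about generalizing from less data, is that each training example then constrains the learned function in a neighborhood of the example rather than at a single point.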