Overview

The tremendous worldwide interest in the design and applications of recurrent neural networks prompts this volume, compiling chapters contributed by leading experts in the field. Recurrent neural networks permit backward connections between nodes, resulting in dynamical behavior that is very useful in solving many real problems in science, engineering, and business. Recurrent Neural Networks presents the design and selected applications of recurrent neural network paradigms.

Full Product Details

Author: Larry Medsker, Lakhmi C. Jain
Publisher: Taylor & Francis Inc
Imprint: CRC Press Inc
Dimensions: Width: 15.60cm, Height: 1.90cm, Length: 23.50cm
Weight: 0.744kg
ISBN: 9780849371813
ISBN 10: 0849371813
Pages: 416
Publication Date: 20 December 1999
Audience: Professional and scholarly, Professional & Vocational
Format: Hardback
Publisher's Status: Out of Print
Availability: Out of stock

Table of Contents

INTRODUCTION
Overview
Design Issues and Theory
Applications
Future Directions

RECURRENT NEURAL NETWORKS FOR OPTIMIZATION: THE STATE OF THE ART
Introduction
Continuous-Time Neural Networks for QP and LCP
Discrete-Time Neural Networks for QP and LCP
Simulation Results
Concluding Remarks

EFFICIENT SECOND-ORDER LEARNING ALGORITHMS FOR DISCRETE-TIME RECURRENT NEURAL NETWORKS
Introduction
Spatial x Spatio-Temporal Processing
Computational Capability
Recurrent Neural Networks as Nonlinear Dynamic Systems
Recurrent Neural Networks and Second-Order Learning Algorithms
Recurrent Neural Network Architectures
State Space Representation for Recurrent Neural Networks
Second-Order Information in Optimization-Based Learning Algorithms
The Conjugate Gradient Algorithm
An Improved SGM Method
The Learning Algorithm for Recurrent Neural Networks
Simulation Results
Concluding Remarks

DESIGNING HIGH ORDER RECURRENT NETWORKS FOR BAYESIAN BELIEF REVISION
Introduction
Belief Revision and Reasoning Under Uncertainty
Hopfield Networks and Mean Field Annealing
High Order Recurrent Networks
Efficient Data Structures for Implementing HORNs
Designing HORNs for Belief Revision
Conclusions

EQUIVALENCE IN KNOWLEDGE REPRESENTATION: AUTOMATA, RECURRENT NEURAL NETWORKS, AND DYNAMICAL FUZZY SYSTEMS
Introduction
Fuzzy Finite State Automata
Representation of Fuzzy States
Automata Transformation
Network Architecture
Network Stability Analysis
Simulations
Conclusions

LEARNING LONG-TERM DEPENDENCIES IN NARX RECURRENT NEURAL NETWORKS
Introduction
Vanishing Gradients and Long-Term Dependencies
NARX Networks
An Intuitive Explanation of NARX Network Behavior
Experimental Results
Conclusion

OSCILLATION RESPONSES IN A CHAOTIC RECURRENT NETWORK
Introduction
Progression to Chaos
External Patterns
Dynamic Adjustment of Pattern Strength
Characteristics of the Pattern-to-Oscillation Map
Discussion

LESSONS FROM LANGUAGE LEARNING
Introduction
Lesson 1: Language Learning Is Hard
Lesson 2: When Possible, Search a Smaller Space
Lesson 3: Search the Most Likely Places First
Lesson 4: Order Your Training Data
Summary

RECURRENT AUTOASSOCIATIVE NETWORKS: DEVELOPING DISTRIBUTED REPRESENTATIONS OF STRUCTURED SEQUENCES BY AUTOASSOCIATION
Introduction
Sequences, Hierarchy, and Representations
Neural Networks and Sequential Processing
Recurrent Autoassociative Networks
A Cascade of RANs
Going Further to a Cognitive Model
Discussion
Conclusions

COMPARISON OF RECURRENT NEURAL NETWORKS FOR TRAJECTORY GENERATION
Introduction
Architecture
Training Set
Error Function and Performance Metric
Training Algorithms
Simulations
Conclusions

TRAINING ALGORITHMS FOR RECURRENT NEURAL NETS THAT ELIMINATE THE NEED FOR COMPUTATION OF ERROR GRADIENTS WITH APPLICATION TO TRAJECTORY PRODUCTION PROBLEM
Introduction
Description of the Learning Problem and Some Issues in Spatiotemporal Training
Training by Methods of Learning Automata
Training by Simplex Optimization Method
Conclusions

TRAINING RECURRENT NEURAL NETWORKS FOR FILTERING AND CONTROL
Introduction
Preliminaries
Principles of Dynamic Learning
Dynamic Backprop for the LDRN
Neurocontrol Application
Recurrent Filter
Summary

REMEMBERING HOW TO BEHAVE: RECURRENT NEURAL NETWORKS FOR ADAPTIVE ROBOT BEHAVIOR
Introduction
Background
Recurrent Neural Networks for Adaptive Robot Behavior
Summary and Discussion
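The overview notes that recurrent networks permit backward (feedback) connections between nodes, which gives them dynamical behavior over time. As a minimal illustration of that idea only (a sketch, not code from the book), the following Elman-style update shows how the hidden state feeds back into itself at each step; all names and weight shapes here are illustrative assumptions.

```python
import numpy as np

def rnn_step(x, h, W_xh, W_hh, b):
    """One recurrent update: the new state depends on the current input
    AND the previous state, via the feedback weights W_hh."""
    return np.tanh(x @ W_xh + h @ W_hh + b)

rng = np.random.default_rng(0)
W_xh = rng.normal(scale=0.5, size=(3, 4))  # input-to-hidden weights (illustrative)
W_hh = rng.normal(scale=0.5, size=(4, 4))  # hidden-to-hidden feedback weights
b = np.zeros(4)

h = np.zeros(4)                      # initial hidden state
for x in rng.normal(size=(5, 3)):    # unroll over a 5-step input sequence
    h = rnn_step(x, h, W_xh, W_hh, b)

print(h.shape)  # (4,)
```

Because each output of `rnn_step` is fed back in as `h` on the next step, the network's response to an input depends on the whole input history, which is the dynamical behavior the chapters above exploit for optimization, filtering, control, and trajectory generation.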