Overview

Feed-Forward Neural Networks: Vector Decomposition Analysis, Modelling and Analog Implementation presents a novel method for the mathematical analysis of neural networks that learn according to the back-propagation algorithm. The book also discusses recent alternative algorithms for hardware-implemented, perceptron-like neural networks. The method permits a simple analysis of the learning behaviour of neural networks, so that specifications for their building blocks can be readily obtained. Starting with the derivation of a specification and ending with its hardware implementation, analog hard-wired, feed-forward neural networks with on-chip back-propagation learning are designed in their entirety. On-chip learning is necessary where fixed weight configurations cannot be used. It is also useful for eliminating most of the mismatches and parameter tolerances that occur in hard-wired neural network chips. Fully analog neural networks have several advantages over other implementations: low chip area, low power consumption, and high-speed operation. Feed-Forward Neural Networks is an excellent reference and may be used as a text for advanced courses.

Full Product Details

Author: Jouke Annema
Publisher: Springer-Verlag New York Inc.
Imprint: Springer-Verlag New York Inc.
Edition: Softcover reprint of the original 1st ed. 1995
Volume: 314
Dimensions: Width 15.50 cm, Height 1.40 cm, Length 23.50 cm
Weight: 0.397 kg
ISBN-13: 9781461359906
ISBN-10: 1461359902
Pages: 238
Publication Date: 13 July 2013
Audience: Professional and scholarly; Professional & Vocational
Format: Paperback
Publisher's Status: Active
Availability: Manufactured on demand

Table of Contents

1 Introduction.
2 The Vector Decomposition Method.
3 Dynamics of Single-Layer Nets.
4 Unipolar Input Signals in Single-Layer Feed-Forward Neural Networks.
5 Cross-talk in Single-Layer Feed-Forward Neural Networks.
6 Precision Requirements for Analog Weight Adaptation Circuitry for Single-Layer Nets.
7 Discretization of Weight Adaptations in Single-Layer Nets.
8 Learning Behavior and Temporary Minima of Two-Layer Neural Networks.
9 Biases and Unipolar Input Signals for Two-Layer Neural Networks.
10 Cost Functions for Two-Layer Neural Networks.
11 Some Issues for f'(x).
12 Feed-forward Hardware.
13 Analog Weight Adaptation Hardware.
14 Conclusions.
Nomenclature.

Countries Available: All regions