An Introduction to Computational Learning Theory


Overview

Emphasizing issues of computational efficiency, Michael Kearns and Umesh Vazirani introduce a number of central topics in computational learning theory for researchers and students in artificial intelligence, neural networks, theoretical computer science, and statistics. Computational learning theory is a new and rapidly expanding area of research that examines formal models of induction with the goals of discovering the common methods underlying efficient learning algorithms and identifying the computational impediments to learning. Each topic in the book has been chosen to elucidate a general principle, which is explored in a precise formal setting. Intuition has been emphasized in the presentation to make the material accessible to the nontheoretician while still providing precise arguments for the specialist. This balance is the result of new proofs of established theorems and new presentations of standard proofs.

The topics covered include the motivation, definitions, and fundamental results, both positive and negative, for the widely studied L. G. Valiant model of Probably Approximately Correct (PAC) learning; Occam's Razor, which formalizes a relationship between learning and data compression; the Vapnik-Chervonenkis dimension; the equivalence of weak and strong learning; efficient learning in the presence of noise by the method of statistical queries; relationships between learning and cryptography, and the resulting computational limitations on efficient learning; reducibility between learning problems; and algorithms for learning finite automata from active experimentation.
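For orientation, the PAC criterion referred to above is commonly stated as follows (this is the standard textbook formulation in our own notation, not an excerpt from the book): a concept class $C$ is PAC learnable if there is an algorithm that, for every target concept $c \in C$, every distribution $D$ over the examples, and every $\varepsilon, \delta \in (0,1)$, when given independently drawn labeled examples $(x, c(x))$ with $x \sim D$, outputs a hypothesis $h$ satisfying
$$\Pr\big[\operatorname{error}_D(h) \le \varepsilon\big] \ge 1 - \delta, \qquad \text{where } \operatorname{error}_D(h) = \Pr_{x \sim D}\big[h(x) \neq c(x)\big],$$
using a number of examples and running time polynomial in $1/\varepsilon$, $1/\delta$, and the relevant size parameters.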

Full Product Details

Author:   Michael J. Kearns (University of Pennsylvania), Umesh Vazirani (University of California, Berkeley)
Publisher:   MIT Press Ltd
Imprint:   MIT Press
Dimensions:   Width: 17.80cm , Height: 1.70cm , Length: 22.90cm
Weight:   0.590kg
ISBN 13:   9780262111935
ISBN 10:   0262111934
Pages:   222
Publication Date:   15 August 1994
Recommended Age:   From 18 years
Audience:   Professional and scholarly, Professional & Vocational, Postgraduate, Research & Scholarly
Format:   Hardback
Publisher's Status:   Out of Stock Indefinitely
Availability:   In Print
Limited stock is available; it will be ordered for you and shipped subject to the supplier's remaining stock.

Author Information

Michael J. Kearns is Professor of Computer and Information Science at the University of Pennsylvania. Umesh Vazirani is Roger A. Strauch Professor in the Electrical Engineering and Computer Sciences Department at the University of California, Berkeley.
