Full Product Details

Author: Ashwin Nanjappa
Publisher: Packt Publishing Limited
Imprint: Packt Publishing Limited
ISBN: 9781789137750
ISBN 10: 1789137756
Pages: 136
Publication Date: 31 May 2019
Audience: General/trade, General
Format: Paperback
Publisher's Status: Active
Availability: Available To Order (We have confirmation that this item is in stock with the supplier. It will be ordered in for you and dispatched immediately.)

Table of Contents

1. Introduction and Installation
2. Composing Networks
3. Training Networks
4. Working with Caffe
5. Working with Other Frameworks
6. Deploying Models to Accelerators for Inference
7. Caffe2 at the Edge and in the Cloud

Author Information

Ashwin Nanjappa is a senior architect at NVIDIA, working in the TensorRT team on improving deep learning inference on GPU accelerators. He has a PhD from the National University of Singapore, where he developed GPU algorithms for the fundamental computational geometry problem of 3D Delaunay triangulation. As a post-doctoral research fellow at the BioInformatics Institute (Singapore), he developed GPU-accelerated machine learning algorithms for pose estimation using depth cameras. As an algorithms research engineer at Visenze (Singapore), he implemented computer vision algorithm pipelines in C++, developed a training framework built upon Caffe in Python, and trained deep learning models for some of the world's most popular online shopping portals.

Countries Available: All regions