Overview

Data-intensive computing refers to capturing, managing, analyzing, and understanding data at volumes and rates that push the frontiers of current technologies. Its challenge is to provide hardware architectures and related software systems and techniques capable of transforming ultra-large data into valuable knowledge. The Handbook of Data Intensive Computing is written by leading international experts in the field; contributors from academia, research laboratories, and private industry address both theory and application. Data-intensive computing demands a fundamentally different set of principles than mainstream computing: data-intensive applications are typically well suited to large-scale parallelism over the data and also require an extremely high degree of fault tolerance, reliability, and availability. Real-world examples are provided throughout the book. The handbook is designed as a reference for practitioners and researchers, including programmers, computer and system infrastructure designers, and developers. It can also be beneficial for business managers, entrepreneurs, and investors.

Full Product Details

Author: Borko Furht, Armando Escalante
Publisher: Springer-Verlag New York Inc.
Imprint: Springer-Verlag New York Inc.
Dimensions: Width: 15.50cm, Height: 4.10cm, Length: 23.50cm
Weight: 1.223kg
ISBN: 9781489999191
ISBN 10: 1489999191
Pages: 794
Publication Date: 03 March 2014
Audience: Professional and scholarly, Professional & Vocational
Format: Paperback
Publisher's Status: Active
Availability: Manufactured on demand (this item is ordered from a manufactured-on-demand supplier)
Table of Contents

PART I: ARCHITECTURES AND SYSTEMS
- High Performance Network Architectures for Data Intensive Computing
- Architecting Data-Intensive Software Systems
- ECL: A High-Level Programming Language for Data-Intensive Supercomputing
- Scalable Storage for Data-Intensive Computing
- Computation and Storage Trade-off for Cost-Effective Storage of Scientific Datasets in the Cloud

PART II: TECHNOLOGIES AND TECHNIQUES
- Load Balancing Techniques for Data Intensive Computing
- Resource Management for Data Intensive Clouds Through Dynamic Federation: A Game Theoretic Approach
- SALT: Scalable Automated Linking Technology for Data Intensive Computing
- Parallel Processing, Multiprocessors and Virtualization in Data-Intensive Computing
- Challenges in Data Intensive Analysis and Visualization at Scientific Experimental User Facilities
- Large-Scale Data Analytics Using Ensemble Clustering
- Specification of Data Intensive Applications with Data Dependency and Abstract Clocks
- Ensemble Feature Ranking Methods for Data Intensive Computing Applications
- Record Linkage Methodology and Applications
- Semantic Wrapper: Concise Semantic Querying of Legacy Relational Databases

PART III: SECURITY
- Security in Data Intensive Computing Systems
- Data Security and Privacy in Data-Intensive Supercomputing Clusters
- Information Security in Large Scale Distributed Systems
- Privacy and Security Requirements of Data Intensive Applications in Clouds

PART IV: APPLICATIONS
- On the Processing of Extreme Scale Datasets in the Geosciences
- Parallel Earthquake Simulations on Large-scale Multicore Supercomputers
- Data Intensive Computing in Bioinformatics: A Biomedical Case Study in Gene Selection and Filtering
- Design Space Exploration for Efficient Data Intensive Computing on SoCs
- Discovering Relevant Entities in Large-scale Social Information Systems
- Geospatial Data Management with Terrafly
- An Application for Processing Large and Non-uniform Media Objects on MapReduce-based Clusters
- Feature Selection Algorithms for Mining High-Dimensional DNA Microarray Data
- Application of Random Matrix Theory to Analyze Biological Data
- Keyword Search on Large Relational Databases: An OLAP-Oriented Approach
- A Distributed Publish/Subscribe System for Large Scale Sensor Networks

Reviews

From the reviews: "The material is written by experts from nearly 40 institutions, including academia, industry, and government; they are mostly from the US, but also from Europe, Asia, and Australia. ... The editors make readers aware of the scale of data generated from a variety of sources, which require immediate comprehensive analyses. ... The value of this book might be in collecting papers that focused on the issues of big data, so interested parties can have a handy overview of related problems and prospective solutions." (Janusz Zalewski, ACM Computing Reviews, September, 2012)