Overview

This book provides a practical and fairly comprehensive review of Data Science through the lens of dimensionality reduction, as well as hands-on techniques to tackle problems with data collected in the real world. State-of-the-art results and solutions from statistics, computer science and mathematics are explained from the point of view of a practitioner in any domain science, such as biology, cyber security, chemistry, sports science and many others. Quantitative and qualitative assessment methods are described to implement and validate the solutions back in the real world where the problems originated.

The ability to generate, gather and store data in volumes on the order of tera- and exabytes daily has far outpaced our ability to derive useful information from it with the computational resources available in many domains. This book focuses on data science and problem definition; data cleansing; feature selection and extraction; statistical, geometric, information-theoretic, biomolecular and machine learning methods for dimensionality reduction of big datasets and problem solving; and comparative assessment of solutions in a real-world setting.

This book targets professionals working in related fields with an undergraduate degree in any science area, particularly a quantitative one. Readers should be able to follow the examples in this book that introduce each method or technique. These motivating examples are followed by precise definitions of the required technical concepts and a presentation of the results in general situations. These concepts require a degree of abstraction that readers can follow by re-interpreting them as in the original example(s). Finally, each section closes with solutions to the original problem(s) afforded by these techniques, in some cases in several ways so as to compare and contrast their advantages and disadvantages relative to other solutions.
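As a small illustration of the kind of dimensionality reduction the book surveys (principal component analysis computed via singular value decomposition, covered in its Chapter 4), the following is a minimal sketch in Python/NumPy. It is not taken from the book; the function name and the synthetic data are this page's own illustration.

```python
import numpy as np

# Minimal PCA via singular value decomposition (SVD):
# center the data, then project onto the top-k right singular vectors.
def pca_reduce(X, k):
    Xc = X - X.mean(axis=0)                    # center each feature
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                       # scores along the top-k principal directions

rng = np.random.default_rng(0)
# 100 samples in 5 dimensions that actually vary along only 2 latent directions
Z = rng.normal(size=(100, 2))
A = rng.normal(size=(2, 5))
X = Z @ A
scores = pca_reduce(X, 2)
print(scores.shape)  # (100, 2)
```

Because the synthetic data has rank 2, the two retained components capture essentially all of its variance; on real data one chooses k by inspecting how much variance the leading components explain.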
Full Product Details

Author: Max Garzon, Ching-Chi Yang, Deepak Venugopal, Nirman Kumar
Publisher: Springer International Publishing AG
Imprint: Springer International Publishing AG
Edition: 1st ed. 2022
Weight: 0.588kg
ISBN: 9783031053702
ISBN 10: 3031053702
Pages: 265
Publication Date: 29 July 2022
Audience: Professional and scholarly, Professional & Vocational
Format: Hardback
Publisher's Status: Active
Availability: Manufactured on demand

Table of Contents

1. What is Data Science (DS)?
1.1 Major Families of Data Science Problems
1.1.1 Classification Problems
1.1.2 Prediction Problems
1.1.3 Clustering Problems
1.2 Data, Big Data and Pre-processing
1.2.1 What is Data?
1.2.2 Big Data
1.2.3 Data Cleansing
1.2.4 Data Visualization
1.2.5 Data Understanding
1.3 Populations and Data Sampling
1.3.1 Sampling
1.3.2 Training, Testing and Validation
1.4 Overview and Scope
1.4.1 Prerequisites and Layout
1.4.2 Data Science Methodology
1.4.3 Scope of the Book
2. Solutions to Data Science Problems
2.1 Conventional Statistical Solutions
2.1.1 Linear Multiple Regression Model: Continuous Response
2.1.2 Logistic Regression: Categorical Response
2.1.3 Variable Selection and Model Building
2.1.4 Generalized Linear Model (GLM)
2.1.5 Decision Trees
2.1.6 Bayesian Learning
2.2 Machine Learning Solutions: Supervised
2.2.1 k-Nearest Neighbors (kNN)
2.2.2 Ensemble Methods
2.2.3 Support Vector Machines (SVMs)
2.2.4 Neural Networks (NNs)
2.3 Machine Learning Solutions: Unsupervised
2.3.1 Hard Clustering
2.3.2 Soft Clustering
2.4 Controls, Evaluation and Assessment
2.4.1 Evaluation Methods
2.4.2 Metrics for Assessment
3. What is Dimensionality Reduction (DR)?
3.1 Dimensionality Reduction
3.2 Major Approaches to Dimensionality Reduction
3.2.1 Conventional Statistical Approaches
3.2.2 Geometric Approaches
3.2.3 Information-theoretic Approaches
3.2.4 Molecular Computing Approaches
3.3 The Blessings of Dimensionality
4. Conventional Statistical Approaches
4.1 Principal Component Analysis (PCA)
4.1.1 Obtaining the Principal Components
4.1.2 Singular Value Decomposition (SVD)
4.2 Nonlinear PCA
4.2.1 Kernel PCA
4.2.2 Independent Component Analysis (ICA)
4.3 Nonnegative Matrix Factorization (NMF)
4.3.1 Approximate Solutions
4.3.2 Clustering and Other Applications
4.4 Discriminant Analysis
4.4.1 Linear Discriminant Analysis (LDA)
4.4.2 Quadratic Discriminant Analysis (QDA)
4.5 Sliced Inverse Regression (SIR)
5. Geometric Approaches
5.1 Introduction to Manifolds
5.2 Manifold Learning Methods
5.2.1 Multi-Dimensional Scaling (MDS)
5.2.2 Isometric Mapping (ISOMAP)
5.2.3 t-Stochastic Neighbor Embedding (t-SNE)
5.3 Exploiting Randomness (RND)
6. Information-theoretic Approaches
6.1 Shannon Entropy (H)
6.2 Reduction by Conditional Entropy
6.3 Reduction by Iterated Conditional Entropy
6.4 Reduction by Conditional Entropy on Targets
6.5 Other Variations
7. Molecular Computing Approaches
7.1 Encoding Abiotic Data into DNA
7.2 Deep Structure of DNA Spaces
7.2.1 Structural Properties of DNA Spaces
7.2.2 Noncrosshybridizing (nxh) Bases
7.3 Reduction by Genomic Signatures
7.3.1 Background
7.3.2 Genomic Signatures
7.4 Reduction by Pmeric Signatures
8. Statistical Learning Approaches
8.1 Reduction by Multiple Regression
8.2 Reduction by Ridge Regression
8.3 Reduction by Lasso Regression
8.4 Selection versus Shrinkage
8.5 Further Refinements
9. Machine Learning Approaches
9.1 Autoassociative Feature Encoders
9.1.1 Undercomplete Autoencoders
9.1.2 Sparse Autoencoders
9.1.3 Variational Autoencoders
9.1.4 Dimensionality Reduction in MNIST Images
9.2 Neural Feature Selection
9.2.1 Facial Features, Expressions and Displays
9.2.2 The Cohn-Kanade Dataset
9.2.3 Primary and Derived Features
9.3 Other Methods
10. Metaheuristics of DR Methods
10.1 Exploiting Feature Grouping
10.2 Exploiting Domain Knowledge
10.2.1 What is Domain Knowledge?
10.2.2 Domain Knowledge for Dimensionality Reduction
10.3 Heuristic Rules for Feature Selection, Extraction and Number
10.4 About Explainability of Solutions
10.4.1 What is Explainability?
10.4.2 Explainability in Dimensionality Reduction
10.5 Choosing Wisely
10.6 About the Curse of Dimensionality
10.7 About the No-Free-Lunch Theorem (NFL)
11. Appendices
11.1 Statistics and Probability Background
11.1.1 Commonly Used Discrete Distributions
11.1.2 Commonly Used Continuous Distributions
11.1.3 Major Results in Probability and Statistics
11.2 Linear Algebra Background
11.2.1 Fields, Vector Spaces and Subspaces
11.2.2 Linear Independence, Bases and Dimension
11.2.3 Linear Transformations and Matrices
11.2.4 Eigenvalues and Spectral Decomposition
11.3 Computer Science Background
11.3.1 Computational Science and Complexity
11.3.2 Machine Learning
11.4 Typical Data Science Problems
11.5 A Sample of Common and Big Datasets
11.6 Computing Platforms
11.6.1 The Environment R
11.6.2 Python Environments
References

Author Information

Max H. Garzon is a professor of computer science and bioinformatics at the University of Memphis. He has (co-)authored about 200 books, book chapters, and journal or refereed conference publications. The main theme of his research is biomolecule-based computing and its applications to areas such as bioinformatics, nanotechnology, self-assembly, machine learning and the foundations of data science. He has served on the editorial boards and as guest editor of several journals and as mentor of about 90 MS and PhD students in these areas. He has also served as TPC member and organizer of many scientific conferences and professional meetings and has been a visiting professor and guest scientist at several research institutions around the world.

Ching-Chi Yang is an assistant professor of mathematical sciences at the University of Memphis.
He received a doctoral degree in statistics from The Pennsylvania State University in 2019. His primary interests focus on statistical learning, dimensional analysis, industrial and engineering statistics, and response surface methodology. His related research projects range from response surface methodology and tropical cyclone prediction to stock price prediction. He has received awards including the American Society for Quality 2018 Fall Technical Conference Student Scholarship and the Jack and Eleanor Pettit Scholarship in Science from Penn State University.

Deepak Venugopal is an associate professor in the Department of Computer Science at the University of Memphis. His research interests lie in the fields of machine learning and artificial intelligence. In particular, he has made research contributions to statistical relational learning, neuro-symbolic AI, explainable AI and AI-based educational technologies. Dr. Venugopal regularly teaches machine learning and AI courses at both the graduate and undergraduate levels.

Nirman Kumar has been an assistant professor of computer science at the University of Memphis since 2016. His research areas are approximation algorithms and computational geometry. Nirman holds PhD and Master's degrees in Computer Science from the University of Illinois, and a Bachelor's degree in Computer Science and Engineering from the Indian Institute of Technology, Kanpur.

Kalidas Jana is a post-doctoral research fellow in Economics and Data Science at the University of Memphis. He received his Ph.D. in Economics from North Carolina State University in 2005. His research interests are in econometrics and data science.

Lih-Yuan Deng received B.S. and M.S. degrees in Mathematics from National Taiwan University, Taiwan, in 1975 and 1977, respectively. He also received M.S. and Ph.D. degrees in Computer Science and Statistics from the University of Wisconsin-Madison, USA, in 1982 and 1984, respectively.
He is currently a professor in the Department of Mathematical Sciences, University of Memphis, USA. His active research work is mainly in the area of design of random number generators for computer simulation and computer security applications.