Overview

Presenting an in-depth analysis of several commonly observed regular and irregular computations for multiprocessor systems, this book includes techniques that enable researchers and application developers to quantitatively determine the effects of algorithm data dependencies on execution time, communication requirements, processor utilization, and the achievable speedup. Starting with simple, two-dimensional, diamond-shaped directed acyclic graphs, the analysis is extended to more complex and higher-dimensional directed acyclic graphs. The analysis allows the computation and communication costs, and their interdependencies, to be quantified. The practical significance of these results for the performance of various data distribution schemes is clearly explained. Using these results, the performance of the parallel computations is formulated in an architecture-independent fashion. These formulations allow architecture-specific entities, such as the computation and communication rates, to be treated as parameters. Such parameterized performance analysis can be used at compile time or at run time to achieve an optimal distribution of the computations (see the illustrative sketch after the table of contents below). The material in the text connects theory with practice, so that the inherent performance limitations of many computations can be understood and practical methods can be devised to assist the development of software for scalable high-performance systems.

Full Product Details

Author: Vijay K. Naik
Publisher: Springer
Imprint: Springer
Edition: 1993 ed.
Volume: 236
Dimensions: Width: 15.50 cm, Height: 1.40 cm, Length: 23.50 cm
Weight: 1.100 kg
ISBN-13: 9780792393702
ISBN-10: 0792393708
Pages: 198
Publication Date: 31 July 1993
Audience: Professional and scholarly; Professional & Vocational
Format: Hardback
Publisher's Status: Active
Availability: In Print

Table of Contents

1 Introduction.
1.1 Parallel computing and communication.
1.2 Scope of this work.
1.3 Organization.
1.4 Model of computation.
1.5 Graph-theoretic definitions.
1.6 Basic terminology.
2 Diamond Dags.
2.1 Communication requirements of a DAG.
2.2 The diamond dag.
2.3 Diamond dags with higher degree vertices.
2.4 Effects of the tradeoff on performance.
2.5 Concluding remarks.
3 Rectangular Dags.
3.1 The rectangular dag.
3.2 Lower bound on computation time.
3.3 Lower bound on data traffic.
3.4 Lower bound on t • ?.
3.5 The tradeoff factor for the rectangular dag.
3.6 Performance considerations.
3.7 Concluding remarks.
4 Three and Higher Dimensional Dags.
4.1 An n × n × n dag.
4.2 A d-dimensional dag.
4.3 The effects of tradeoff on performance.
4.4 Concluding remarks.
5 Factoring Dense and Sparse Matrices.
5.1 Dense symmetric positive definite systems.
5.2 Sparse, symmetric positive definite systems.
5.3 Concluding remarks.
6 Conclusions and Some Open Issues.
6.1 Summary of principal results.
6.2 Suggestions for further research.
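The parameterized, architecture-independent formulation described in the overview can be illustrated with a toy cost model. The sketch below is a minimal, hypothetical example and is not taken from the book: it assumes an n × n diamond DAG split into p vertical strips, processed as a pipeline of horizontal blocks of height b, with the per-vertex computation time t_comp, the per-message start-up cost t_start, and the per-word transfer cost t_word treated as the architecture-specific parameters. All function names and the cost formula are illustrative assumptions.

```python
# Illustrative sketch only: a simple parameterized cost model for a pipelined
# wavefront sweep over an n x n diamond-shaped DAG, in the spirit of the
# architecture-independent formulations described in the overview.  The strip
# partitioning, the block height b, and the cost formula below are assumptions
# chosen for illustration; they are not the book's actual derivations.

def estimated_time(n, p, b, t_comp, t_start, t_word):
    """Estimate execution time for an n x n diamond DAG swept in a pipeline.

    The DAG is split into p vertical strips (one per processor); each strip is
    processed in horizontal blocks of height b.  Per stage, a processor
    computes an (n/p) x b block and sends the b boundary values to its
    right-hand neighbour.

    t_comp  -- time to evaluate one DAG vertex (computation rate)
    t_start -- per-message start-up (latency) cost
    t_word  -- per-word transfer cost (inverse bandwidth)
    """
    stages = (p - 1) + n / b                    # pipeline fill plus drain
    compute = (n / p) * b * t_comp              # work per block
    communicate = t_start + b * t_word          # boundary exchange per block
    return stages * (compute + communicate)


def best_block_height(n, p, t_comp, t_start, t_word):
    """Scan candidate block heights and return the one minimizing the estimate."""
    candidates = [b for b in range(1, n + 1) if n % b == 0]
    return min(candidates,
               key=lambda b: estimated_time(n, p, b, t_comp, t_start, t_word))


if __name__ == "__main__":
    n, p = 1024, 16
    t_comp, t_start, t_word = 1e-8, 1e-5, 1e-7  # hypothetical machine rates
    b = best_block_height(n, p, t_comp, t_start, t_word)
    print(f"block height {b}: estimated time "
          f"{estimated_time(n, p, b, t_comp, t_start, t_word):.4f} s")
```

Evaluating a model of this kind with measured machine rates at compile time or run time would let a scheduler choose the block height, and more generally the data distribution, that minimizes the estimated time. The book itself develops the lower bounds and computation–communication tradeoff results on which such parameterized analyses rest.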