Task Scheduling for Parallel Systems
ISBN: 978-0-471-73576-2
Hardcover
312 pages
May 2007
Acknowledgments.
1. Introduction.
1.1 Overview.
1.2 Organization.
2. Parallel Systems and Programming.
2.1 Parallel Architectures.
2.1.1 Flynn’s Taxonomy.
2.1.2 Memory Architectures.
2.1.3 Programming Paradigms and Models.
2.2 Communication Networks.
2.2.1 Static Networks.
2.2.2 Dynamic Networks.
2.3 Parallelization.
2.4 Subtask Decomposition.
2.4.1 Concurrency and Granularity.
2.4.2 Decomposition Techniques.
2.4.3 Computation Type and Program Formulation.
2.4.4 Parallelization Techniques.
2.4.5 Target Parallel System.
2.5 Dependence Analysis.
2.5.1 Data Dependence.
2.5.2 Data Dependence in Loops.
2.5.3 Control Dependence.
2.6 Concluding Remarks.
2.7 Exercises.
3. Graph Representations.
3.1 Basic Graph Concepts.
3.1.1 Computer Representation of Graphs.
3.1.2 Elementary Graph Algorithms.
3.2 Graph as a Program Model.
3.2.1 Computation and Communication Costs.
3.2.2 Comparison Criteria.
3.3 Dependence Graph (DG).
3.3.1 Iteration Dependence Graph.
3.3.2 Summary.
3.4 Flow Graph (FG).
3.4.1 Data-Driven Execution Model.
3.4.2 Summary.
3.5 Task Graph (DAG).
3.5.1 Graph Transformations and Conversions.
3.5.2 Motivations and Limitations.
3.5.3 Summary.
3.6 Concluding Remarks.
3.7 Exercises.
4. Task Scheduling.
4.1 Fundamentals.
4.2 With Communication Costs.
4.2.1 Schedule Example.
4.2.2 Scheduling Complexity.
4.3 Without Communication Costs.
4.3.1 Schedule Example.
4.3.2 Scheduling Complexity.
4.4 Task Graph Properties.
4.4.1 Critical Path.
4.4.2 Node Levels.
4.4.3 Granularity.
4.5 Concluding Remarks.
4.6 Exercises.
5. Fundamental Heuristics.
5.1 List Scheduling.
5.1.1 Start Time Minimization.
5.1.2 With Dynamic Priorities.
5.1.3 Node Priorities.
5.2 Scheduling with Given Processor Allocation.
5.2.1 Phase Two.
5.3 Clustering.
5.3.1 Clustering Algorithms.
5.3.2 Linear Clustering.
5.3.3 Single Edge Clustering.
5.3.4 List Scheduling as Clustering.
5.3.5 Other Algorithms.
5.4 From Clustering to Scheduling.
5.4.1 Assigning Clusters to Processors.
5.4.2 Scheduling on Processors.
5.5 Concluding Remarks.
5.6 Exercises.
6. Advanced Task Scheduling.
6.1 Insertion Technique.
6.1.1 List Scheduling with Node Insertion.
6.2 Node Duplication.
6.2.1 Node Duplication Heuristics.
6.3 Heterogeneous Processors.
6.3.1 Scheduling.
6.4 Complexity Results.
6.4.1 α|β|γ Classification.
6.4.2 Without Communication Costs.
6.4.3 With Communication Costs.
6.4.4 With Node Duplication.
6.4.5 Heterogeneous Processors.
6.5 Genetic Algorithms.
6.5.1 Basics.
6.5.2 Chromosomes.
6.5.3 Reproduction.
6.5.4 Selection, Complexity, and Flexibility.
6.6 Concluding Remarks.
6.7 Exercises.
7. Communication Contention in Scheduling.
7.1 Contention Awareness.
7.1.1 End-Point Contention.
7.1.2 Network Contention.
7.1.3 Integrating End-Point and Network Contention.
7.2 Network Model.
7.2.1 Topology Graph.
7.2.2 Routing.
7.2.3 Scheduling Network Model.
7.3 Edge Scheduling.
7.3.1 Scheduling Edge on Route.
7.3.2 The Edge Scheduling.
7.4 Contention Aware Scheduling.
7.4.1 Basics.
7.4.2 NP-Completeness.
7.5 Heuristics.
7.5.1 List Scheduling.
7.5.2 Priority Schemes—Task Graph Properties.
7.5.3 Clustering.
7.5.4 Experimental Results.
7.6 Concluding Remarks.
7.7 Exercises.
8. Processor Involvement in Communication.
8.1 Processor Involvement—Types and Characteristics.
8.1.1 Involvement Types.
8.1.2 Involvement Characteristics.
8.1.3 Relation to LogP and Its Variants.
8.2 Involvement Scheduling.
8.2.1 Scheduling Edges on the Processors.
8.2.2 Node and Edge Scheduling.
8.2.3 Task Graph.
8.2.4 NP-Completeness.
8.3 Algorithmic Approaches.
8.3.1 Direct Scheduling.
8.3.2 Scheduling with Given Processor Allocation.
8.4 Heuristics.
8.4.1 List Scheduling.
8.4.2 Two-Phase Heuristics.
8.4.3 Experimental Results.
8.5 Concluding Remarks.
8.6 Exercises.
Bibliography.
Author Index.
Subject Index.