File name: Foundations of Multithreaded, Parallel, and Distributed Programming (English edition, PDF)
  Category: Internet
  Development tool:
  File size: 9 MB
  Downloads: 0
  Upload date: 2013-10-19
  Uploader: kua***
 Description: As the title says, this book covers programming for multithreaded, parallel, and distributed systems.

Product Description
Foundations of Multithreaded, Parallel, and Distributed Programming covers, and then applies, the core concepts and techniques needed for an introductory course on this topic. The book emphasizes the practice and application of parallel systems, using real-world examples throughout. Greg Andrews teaches the fundamental concepts of multithreaded, parallel, and distributed computing and relates them to implementation and performance. He presents an appropriate breadth of topics and supports these discussions with an emphasis on performance.

Features
* Emphasizes how to solve problems, with correctness the primary concern and performance an important, but secondary, concern
* Includes case studies covering the Pthreads, MPI, and OpenMP libraries, as well as programming languages such as Java, Ada, High Performance Fortran, Linda, Occam, and SR
* Provides examples using Java syntax and discusses how Java handles monitors, sockets, and remote method invocation
* Covers current programming techniques such as semaphores, locks, barriers, monitors, message passing, and remote invocation
* Concrete examples are developed as complete programs, both shared-memory and distributed
* Sample applications include scientific computing and distributed systems

Table of Contents
(Each chapter concludes with Historical Notes, References, and Exercises.)

Preface

Chapter 1: The Concurrent Computing Landscape
1.1 The Essence of Concurrent Programming
1.2 Hardware Architectures
1.2.1 Processors and Caches
1.2.2 Shared-Memory Multiprocessors
1.2.3 Distributed-Memory Multicomputers and Networks
1.3 Applications and Programming Styles
1.4 Iterative Parallelism: Matrix Multiplication
1.5 Recursive Parallelism: Adaptive Quadrature
1.6 Producers and Consumers: Unix Pipes
1.7 Clients and Servers: File Systems
1.8 Peers: Distributed Matrix Multiplication
1.9 Summary of Programming Notation
1.9.1 Declarations
1.9.2 Sequential Statements
1.9.3 Concurrent Statements, Processes, and Procedures
1.9.4 Comments

Part 1: Shared-Variable Programming

Chapter 2: Processes and Synchronization
2.1 States, Actions, Histories, and Properties
2.2 Parallelization: Finding Patterns in a File
2.3 Synchronization: The Maximum of an Array
2.4 Atomic Actions and Await Statements
2.4.1 Fine-Grained Atomicity
2.4.2 Specifying Synchronization: The Await Statement
2.5 Producer/Consumer Synchronization
2.6 A Synopsis of Axiomatic Semantics
2.6.1 Formal Logical Systems
2.6.2 A Programming Logic
2.6.3 Semantics of Concurrent Execution
2.7 Techniques for Avoiding Interference
2.7.1 Disjoint Variables
2.7.2 Weakened Assertions
2.7.3 Global Invariants
2.7.4 Synchronization
2.7.5 An Example: The Array Copy Problem Revisited
2.8 Safety and Liveness Properties
2.8.1 Proving Safety Properties
2.8.2 Scheduling Policies and Fairness

Chapter 3: Locks and Barriers
3.1 The Critical Section Problem
3.2 Critical Sections: Spin Locks
3.2.1 Test and Set
3.2.2 Test and Test and Set
3.2.3 Implementing Await Statements
3.3 Critical Sections: Fair Solutions
3.3.1 The Tie-Breaker Algorithm
3.3.2 The Ticket Algorithm
3.3.3 The Bakery Algorithm
3.4 Barrier Synchronization
3.4.1 Shared Counter
3.4.2 Flags and Coordinators
3.4.3 Symmetric Barriers
3.5 Data Parallel Algorithms
3.5.1 Parallel Prefix Computations
3.5.2 Operations on Linked Lists
3.5.3 Grid Computations: Jacobi Iteration
3.5.4 Synchronous Multiprocessors
3.6 Parallel Computing with a Bag of Tasks
3.6.1 Matrix Multiplication
3.6.2 Adaptive Quadrature

Chapter 4: Semaphores
4.1 Syntax and Semantics
4.2 Basic Problems and Techniques
4.2.1 Critical Sections: Mutual Exclusion
4.2.2 Barriers: Signaling Events
4.2.3 Producers and Consumers: Split Binary Semaphores
4.2.4 Bounded Buffers: Resource Counting
4.3 The Dining Philosophers
4.4 Readers and Writers
4.4.1 Readers/Writers as an Exclusion Problem
4.4.2 Readers/Writers Using Condition Synchronization
4.4.3 The Technique of Passing the Baton
4.4.4 Alternative Scheduling Policies
4.5 Resource Allocation and Scheduling
4.5.1 Problem Definition and General Solution Pattern
4.5.2 Shortest-Job-Next Allocation
4.6 Case Study: Pthreads
4.6.1 Thread Creation
4.6.2 Semaphores
4.6.3 Example: A Simple Producer and Consumer

Chapter 5: Monitors
5.1 Syntax and Semantics
5.1.1 Mutual Exclusion
5.1.2 Condition Variables
5.1.3 Signaling Disciplines
5.1.4 Additional Operations on Condition Variables
5.2 Synchronization Techniques
5.2.1 Bounded Buffers: Basic Condition Synchronization
5.2.2 Readers and Writers: Broadcast Signal
5.2.3 Shortest-Job-Next Allocation: Priority Wait
5.2.4 Interval Timer: Covering Conditions
5.2.5 The Sleeping Barber: Rendezvous
5.3 Disk Scheduling: Program Structures
5.3.1 Using a Separate Monitor
5.3.2 Using an Intermediary
5.3.3 Using a Nested Monitor
5.4 Case Study: Java
5.4.1 The Threads Class
5.4.2 Synchronized Methods
5.4.3 Parallel Readers/Writers
5.4.4 Exclusive Readers/Writers
5.4.5 True Readers/Writers
5.5 Case Study: Pthreads
5.5.1 Locks and Condition Variables
5.5.2 Example: Summing the Elements of a Matrix

Chapter 6: Implementations
6.1 A Single-Processor Kernel
6.2 A Multiprocessor Kernel
6.3 Implementing Semaphores in a Kernel
6.4 Implementing Monitors in a Kernel
6.5 Implementing Monitors Using Semaphores

Part 2: Distributed Programming

Chapter 7: Message Passing
7.1 Asynchronous Message Passing
7.2 Filters: A Sorting Network
7.3 Clients and Servers
7.3.1 Active Monitors
7.3.2 A Self-Scheduling Disk Server
7.3.3 File Servers: Conversational Continuity
7.4 Interacting Peers: Exchanging Values
7.5 Synchronous Message Passing
7.6 Case Study: CSP
7.6.1 Communication Statements
7.6.2 Guarded Communication
7.6.3 Example: The Sieve of Eratosthenes
7.6.4 Occam and Modern CSP
7.7 Case Study: Linda
7.7.1 Tuple Space and Process Interaction
7.7.2 Example: Prime Numbers with a Bag of Tasks
7.8 Case Study: MPI
7.8.1 Basic Functions
7.8.2 Global Communication and Synchronization
7.9 Case Study: Java
7.9.1 Networks and Sockets
7.9.2 Example: A Remote File Reader

Chapter 8: RPC and Rendezvous
8.1 Remote Procedure Call
8.1.1 Synchronization in Modules
8.1.2 A Time Server
8.1.3 Caches in a Distributed File System
8.1.4 A Sorting Network of Merge Filters
8.1.5 Interacting Peers: Exchanging Values
8.2 Rendezvous
8.2.1 Input Statements
8.2.2 Client/Server Examples
8.2.3 A Sorting Network of Merge Filters
8.2.4 Interacting Peers: Exchanging Values
8.3 A Multiple Primitives Notation
8.3.1 Invoking and Servicing Operations
8.3.2 Examples
8.4 Readers/Writers Revisited
8.4.1 Encapsulated Access
8.4.2 Replicated Files
8.5 Case Study: Java
8.5.1 Remote Method Invocation
8.5.2 Example: A Remote Database
8.6 Case Study: Ada
8.6.1 Tasks
8.6.2 Rendezvous
8.6.3 Protected Types
8.6.4 Example: The Dining Philosophers
8.7 Case Study: SR
8.7.1 Resources and Globals
8.7.2 Communication and Synchronization
8.7.3 Example: Critical Section Simulation

Chapter 9: Paradigms for Process Interaction
9.1 Manager/Workers (Distributed Bag of Tasks)
9.1.1 Sparse Matrix Multiplication
9.1.2 Adaptive Quadrature Revisited
9.2 Heartbeat Algorithms
9.2.1 Image Processing: Region Labeling
9.2.2 Cellular Automata: The Game of Life
9.3 Pipeline Algorithms
9.3.1 A Distributed Matrix Multiplication Pipeline
9.3.2 Matrix Multiplication by Blocks
9.4 Probe/Echo Algorithms
9.4.1 Broadcast in a Network
9.4.2 Computing the Topology of a Network
9.5 Broadcast Algorithms
9.5.1 Logical Clocks and Event Ordering
9.5.2 Distributed Semaphores
9.6 Token-Passing Algorithms
9.6.1 Distributed Mutual Exclusion
9.6.2 Termination Detection in a Ring
9.6.3 Termination Detection in a Graph
9.7 Replicated Servers
9.7.1 Distributed Dining Philosophers
9.7.2 Decentralized Dining Philosophers

Chapter 10: Implementations
10.1 Asynchronous Message Passing
10.1.1 Shared-Memory Kernel
10.1.2 Distributed Kernel
10.2 Synchronous Message Passing
10.2.1 Direct Communication Using Asynchronous Messages
10.2.2 Guarded Communication Using a Clearinghouse
10.3 RPC and Rendezvous
10.3.1 RPC in a Kernel
10.3.2 Rendezvous Using Asynchronous Message Passing
10.3.3 Multiple Primitives in a Kernel
10.4 Distributed Shared Memory
10.4.1 Implementation Overview
10.4.2 Page Consistency Protocols

Part 3: Parallel Programming

Chapter 11: Scientific Computing
11.1 Grid Computations
11.1.1 Laplace's Equation
11.1.2 Sequential Jacobi Iteration
11.1.3 Jacobi Iteration Using Shared Variables
11.1.4 Jacobi Iteration Using Message Passing
11.1.5 Red/Black Successive Over-Relaxation (SOR)
11.1.6 Multigrid Methods
11.2 Particle Computations
11.2.1 The Gravitational N-Body Problem
11.2.2 Shared-Variable Program
11.2.3 Message-Passing Programs
11.2.4 Approximate Methods
11.3 Matrix Computations
11.3.1 Gaussian Elimination
11.3.2 LU Decomposition
11.3.3 Shared-Variable Program
11.3.4 Message-Passing Program

Chapter 12: Languages, Compilers, Libraries, and Tools
12.1 Parallel Programming Libraries
12.1.1 Case Study: Pthreads
12.1.2 Case Study: MPI
12.1.3 Case Study: OpenMP
12.2 Parallelizing Compilers
12.2.1 Dependence Analysis
12.2.2 Program Transformations
12.3 Languages and Models
12.3.1 Imperative Languages
12.3.2 Coordination Languages
12.3.3 Data Parallel Languages
12.3.4 Functional Languages
12.3.5 Abstract Models
12.3.6 Case Study: High-Performance Fortran (HPF)
12.4 Parallel Programming Tools
12.4.1 Performance Measurement and Visualization
12.4.2 Metacomputers and Metacomputing
12.4.3 Case Study: The Globus Toolkit

Glossary

Download file list

Notes

  • Resources on this site are uploaded and shared by members for exchange and learning. If your rights have been infringed, please contact us to have the item removed.
  • This site is a download-exchange platform providing a channel for sharing; downloaded content comes from the web. For issues other than download problems, please search online yourself.
  • Hotlink protection is enabled on this site; do not download resources with multithreaded download tools such as Thunder or QQ Xuanfeng. After downloading, unpack with the latest version of WinRAR.
  • If content cannot be downloaded, please try again later, or locate the entry in your purchase history and report it to us.
  • If the downloaded content does not match the description, locate the entry in your purchase history and report it to us; your points will be refunded after confirmation.
  • If you have questions before downloading, click the uploader's name to view their contact information and ask them directly.