High Performance Linux Clusters with OSCAR, Rocks, openMosix, and MPI

Table of Contents

Copyright
Preface
    Audience
    Organization
    Conventions
    How to Contact Us
    Using Code Examples
    Acknowledgments

Part I: An Introduction to Clusters

Chapter 1. Cluster Architecture
    1.1 Modern Computing and the Role of Clusters
    1.2 Types of Clusters
    1.3 Distributed Computing and Clusters
    1.4 Limitations
    1.5 My Biases

Chapter 2. Cluster Planning
    2.1 Design Steps
    2.2 Determining Your Cluster's Mission
    2.3 Architecture and Cluster Software
    2.4 Cluster Kits
    2.5 CD-ROM-Based Clusters
    2.6 Benchmarks

Chapter 3. Cluster Hardware
    3.1 Design Decisions
    3.2 Environment

Chapter 4. Linux for Clusters
    4.1 Installing Linux
    4.2 Configuring Services
    4.3 Cluster Security

Part II: Getting Started Quickly

Chapter 5. openMosix
    5.1 What Is openMosix?
    5.2 How openMosix Works
    5.3 Selecting an Installation Approach
    5.4 Installing a Precompiled Kernel
    5.5 Using openMosix
    5.6 Recompiling the Kernel
    5.7 Is openMosix Right for You?

Chapter 6. OSCAR
    6.1 Why OSCAR?
    6.2 What's in OSCAR
    6.3 Installing OSCAR
    6.4 Security and OSCAR
    6.5 Using switcher
    6.6 Using LAM/MPI with OSCAR

Chapter 7. Rocks
    7.1 Installing Rocks
    7.2 Managing Rocks
    7.3 Using MPICH with Rocks

Part III: Building Custom Clusters

Chapter 8. Cloning Systems
    8.1 Configuring Systems
    8.2 Automating Installations
    8.3 Notes for OSCAR and Rocks Users

Chapter 9. Programming Software
    9.1 Programming Languages
    9.2 Selecting a Library
    9.3 LAM/MPI
    9.4 MPICH
    9.5 Other Programming Software
    9.6 Notes for OSCAR Users
    9.7 Notes for Rocks Users

Chapter 10. Management Software
    10.1 C3
    10.2 Ganglia
    10.3 Notes for OSCAR and Rocks Users

Chapter 11. Scheduling Software
    11.1 OpenPBS
    11.2 Notes for OSCAR and Rocks Users

Chapter 12. Parallel Filesystems
    12.1 PVFS
    12.2 Using PVFS
    12.3 Notes for OSCAR and Rocks Users

Part IV: Cluster Programming

Chapter 13. Getting Started with MPI
    13.1 MPI
    13.2 A Simple Problem
    13.3 An MPI Solution
    13.4 I/O with MPI
    13.5 Broadcast Communications

Chapter 14. Additional MPI Features
    14.1 More on Point-to-Point Communication
    14.2 More on Collective Communication
    14.3 Managing Communicators
    14.4 Packaging Data

Chapter 15. Designing Parallel Programs
    15.1 Overview
    15.2 Problem Decomposition
    15.3 Mapping Tasks to Processors
    15.4 Other Considerations

Chapter 16. Debugging Parallel Programs
    16.1 Debugging and Parallel Programs
    16.2 Avoiding Problems
    16.3 Programming Tools
    16.4 Rereading Code
    16.5 Tracing with printf
    16.6 Symbolic Debuggers
    16.7 Using gdb and ddd with MPI
    16.8 Notes for OSCAR and Rocks Users

Chapter 17. Profiling Parallel Programs
    17.1 Why Profile?
    17.2 Writing and Optimizing Code
    17.3 Timing Complete Programs
    17.4 Timing C Code Segments
    17.5 Profilers
    17.6 MPE
    17.7 Customized MPE Logging
    17.8 Notes for OSCAR and Rocks Users

Part V: Appendix

Appendix A. References
    A.1 Books
    A.2 URLs

Colophon
Index