High Performance Linux Clusters with OSCAR, Rocks, OpenMosix, and MPI

Joseph D. Sloan

7.3 Using MPICH with Rocks


Before
we leave Rocks, let's look at a programming example
you can use to convince yourself that everything is really working.

While Rocks doesn't include LAM/MPI, it gives you
your choice of several MPICH distributions. The
/opt directory contains subdirectories for
MPICH, MPICH-MPD, and MPICH2-MPD. Under MPICH, there is also a
version of MPICH for Myrinet users. The distinctions are described
briefly in Chapter 9. We'll stick to MPICH for now.
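
If you want to confirm which builds are installed on your frontend,
a quick look at /opt will show you. (The exact directory names
reflect a typical Rocks 3.2.0 installation and may differ on your
system.)

[sloanjd@frontend sloanjd]$ ls /opt
[sloanjd@frontend sloanjd]$ ls /opt/mpich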

You can begin by copying one of the examples to your home directory.

[sloanjd@frontend sloanjd]$ cd /opt/mpich/gnu/examples
[sloanjd@frontend examples]$ cp cpi.c ~
[sloanjd@frontend examples]$ cd

Next, compile the program.

[sloanjd@frontend sloanjd]$ /opt/mpich/gnu/bin/mpicc cpi.c -o cpi
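
If you're curious what the wrapper script actually does, MPICH's
mpicc accepts a -show option that prints the underlying compiler
command without executing it:

[sloanjd@frontend sloanjd]$ /opt/mpich/gnu/bin/mpicc -show cpi.c -o cpi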

(Clearly, you'll want to add this directory to your
path once you decide which version of MPICH to use.)
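
For example, to put this MPICH build first on your path in future
sessions, you might add a line like the following to your ~/.bashrc.
(This sketch assumes the bash shell and the gnu build chosen above;
adjust the path if you settle on a different version.)

[sloanjd@frontend sloanjd]$ echo 'export PATH=/opt/mpich/gnu/bin:$PATH' >> ~/.bashrc
[sloanjd@frontend sloanjd]$ source ~/.bashrc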

Before you can run the program, you'll want to make
sure SSH is running and that no error or warning messages are
generated when you log onto the remote machines. (SSH is discussed in
Chapter 4.)
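
A quick test is to run a harmless command on one of the compute
nodes. It should complete without prompting for a password and
without printing anything besides the command's output. (The node
name below matches the cluster used in these examples; substitute
one of your own compute nodes.)

[sloanjd@frontend sloanjd]$ ssh compute-0-0 hostname
compute-0-0.local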

Now you can run the program. (Rocks automatically creates the
machines file used by the system, so
that's one less thing to worry about. But you can
use the -machinefile filename option if you wish.)

[sloanjd@frontend sloanjd]$ /opt/mpich/gnu/bin/mpirun -np 4 cpi
Process 0 on frontend.public
Process 2 on compute-0-1.local
Process 1 on compute-0-0.local
Process 3 on compute-0-0.local
pi is approximately 3.1416009869231245, Error is 0.0000083333333314
wall clock time = 0.010533

That's all there is to it.
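
If you do want to control which nodes MPICH uses, a machines file
is just a text file with one hostname per line. Here is a minimal
sketch using a hypothetical file named mymachines and the node
names from the run above:

[sloanjd@frontend sloanjd]$ cat mymachines
compute-0-0
compute-0-1
[sloanjd@frontend sloanjd]$ /opt/mpich/gnu/bin/mpirun -machinefile mymachines -np 4 cpi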

Rocks also includes the High-Performance Linpack
(HPL) benchmark, so you might want to run
it. You'll need the HPL.dat
file. With Rocks 3.2.0, you can copy it to your directory from
/var/www/html/rocks-documentation/3.2.0/. To run
the benchmark, use the command

[sloanjd@frontend sloanjd]$ /opt/mpich/gnu/bin/mpirun -nolocal \
> -np 2 /opt/hpl/gnu/bin/xhpl
...

(Add a machine file if you like.) You can find more details in the
Rocks user manual.
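
A minimal sketch of the setup, assuming the Rocks 3.2.0 path given
above: copy HPL.dat into the directory you run from, and check that
the process grid it describes matches your mpirun invocation. (In
HPL.dat, the product of the P and Q grid values must equal the
process count you pass to -np, which is presumably why the run
above uses -np 2.)

[sloanjd@frontend sloanjd]$ cp /var/www/html/rocks-documentation/3.2.0/HPL.dat ~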

