Deploying an MPI and Module Environment on CentOS 7

1. Overview

This post describes how to deploy and test an MPI parallel-programming development environment on CentOS 7.9, and how to switch between the different environments with Environment Modules.

2. Deployment

2.1 Install MPICH

Install the required dependency on each node: yum -y install gcc-gfortran
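If the base image lacks a full build toolchain, a broader dependency line may be needed as well (an assumption; adjust to what your image already provides):

yum -y install gcc gcc-c++ make gcc-gfortran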

Download mpich-4.0.2.tar.gz and extract it:

https://www.mpich.org/static/downloads/4.0.2/mpich-4.0.2.tar.gz
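For reference, the download and extraction steps look like this:

wget https://www.mpich.org/static/downloads/4.0.2/mpich-4.0.2.tar.gz
tar -xzf mpich-4.0.2.tar.gz
cd mpich-4.0.2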

In the extracted directory, create the build script liwl.sh:

#!/bin/bash
# Configure MPICH to install under the versioned prefix
INSTALL=/opt/hpc/mpi/mpich/4.0.2
./configure --prefix=${INSTALL}

Run bash liwl.sh && make && make install to build and install MPICH.
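Before any modulefile exists, the install can be sanity-checked by calling the freshly installed binaries by absolute path (mpichversion ships with MPICH):

/opt/hpc/mpi/mpich/4.0.2/bin/mpichversion
/opt/hpc/mpi/mpich/4.0.2/bin/mpicc --version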

2.2 Install Open MPI

Download the openmpi-4.1.4.tar.gz source tarball, extract it, enter the extracted directory, and create the build script liwl.sh there:

https://download.open-mpi.org/release/open-mpi/v4.1/openmpi-4.1.4.tar.gz

#!/bin/bash
# Configure Open MPI to install under the versioned prefix
INSTALL=/opt/hpc/mpi/openmpi/4.1.4
./configure --prefix=${INSTALL}

Run bash liwl.sh && make && make install.
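As with MPICH, a quick sanity check by absolute path (ompi_info is installed alongside Open MPI):

/opt/hpc/mpi/openmpi/4.1.4/bin/ompi_info --version
/opt/hpc/mpi/openmpi/4.1.4/bin/mpirun --version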

2.3 Install Environment Modules

yum -y install environment-modules
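In a shell that was opened before the package was installed, the module command may not be defined yet; on CentOS 7 it can be activated in the current session (assuming the default package layout) with:

source /etc/profile.d/modules.sh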

Configure module and add the MPICH entry: vim /etc/modulefiles/mpi/mpich/4.0.2, with the following content:

#%Module
prepend-path PATH /opt/hpc/mpi/mpich/4.0.2/bin
prepend-path LIBRARY_PATH /opt/hpc/mpi/mpich/4.0.2/lib
prepend-path LD_LIBRARY_PATH /opt/hpc/mpi/mpich/4.0.2/lib

Configure module and add the Open MPI entry: vim /etc/modulefiles/mpi/openmpi/4.1.4, with the following content:

#%Module
prepend-path PATH /opt/hpc/mpi/openmpi/4.1.4/bin
prepend-path LIBRARY_PATH /opt/hpc/mpi/openmpi/4.1.4/lib
prepend-path LD_LIBRARY_PATH /opt/hpc/mpi/openmpi/4.1.4/lib
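Note that vim does not create intermediate directories, so create them before writing the two modulefiles:

mkdir -p /etc/modulefiles/mpi/mpich /etc/modulefiles/mpi/openmpi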

2.4 Environment test

Run module avail:

$ module avail

------------------------------- /usr/share/Modules/modulefiles ----------------------------------------------------------
dot         module-git  module-info modules     null        use.own

-------------------------------- /etc/modulefiles -------------------------------------------------------------------------
mpi/mpich/4.0.2   mpi/openmpi/4.1.4

Run module load mpi/openmpi/4.1.4 to load the chosen environment, then verify with which mpicc.
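When switching between the two stacks within one shell, unload the current module first, or use module switch; a minimal sketch (the which output shown as comments matches the transcripts below):

module load mpi/mpich/4.0.2
which mpicc        # -> /opt/hpc/mpi/mpich/4.0.2/bin/mpicc
module switch mpi/mpich/4.0.2 mpi/openmpi/4.1.4
which mpicc        # -> /opt/hpc/mpi/openmpi/4.1.4/bin/mpicc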

2.5 Programming test

Create hello.c:

#include <mpi.h>
#include <unistd.h>
#include <stdio.h>

int main(int argc, char** argv) {
    // Initialize the MPI environment
    MPI_Init(NULL, NULL);

    // Get the number of processes
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    // Get the rank of the process
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    // Get the name of the processor
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(processor_name, &name_len);

    // Print off a hello world message
    printf("Hello world from processor [%s], rank [%d] out of [%d] processors.begint to sleep...\n", processor_name, world_rank, world_size);
    sleep(60);

    // Finalize the MPI environment.
    MPI_Finalize();
    printf("sleep over.\n");
    return 0;
}

First, compile with MPICH:

[liwl@node17][~/online1/mpich]
$ module load mpi/mpich/4.0.2
[liwl@node17][~/online1/mpich]
$ which mpicc
/opt/hpc/mpi/mpich/4.0.2/bin/mpicc
[liwl@node17][~/online1/mpich]
$ mpicc -o hello hello.c 
[liwl@node17][~/online1/mpich]
$ ldd hello
        linux-vdso.so.1 =>  (0x00007ffe7bdf6000)
        libmpi.so.12 => /opt/hpc/mpi/mpich/4.0.2/lib/libmpi.so.12 (0x00007f89b9803000)
        libc.so.6 => /lib64/libc.so.6 (0x00007f89b9435000)
        libhwloc.so.5 => /lib64/libhwloc.so.5 (0x00007f89b91f8000)
        libm.so.6 => /lib64/libm.so.6 (0x00007f89b8ef6000)
        librdmacm.so.1 => /lib64/librdmacm.so.1 (0x00007f89b8cdf000)
        libibverbs.so.1 => /lib64/libibverbs.so.1 (0x00007f89b8ac6000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f89b88aa000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00007f89b86a6000)
        librt.so.1 => /lib64/librt.so.1 (0x00007f89b849e000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f89bc226000)
        libnuma.so.1 => /lib64/libnuma.so.1 (0x00007f89b8292000)
        libltdl.so.7 => /lib64/libltdl.so.7 (0x00007f89b8088000)
        libnl-route-3.so.200 => /lib64/libnl-route-3.so.200 (0x00007f89b7e1b000)
        libnl-3.so.200 => /lib64/libnl-3.so.200 (0x00007f89b7bfa000)
        libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f89b79e4000)

Then compile with Open MPI (in a fresh shell, or after unloading the MPICH module first):

[liwl@node17][~/online1/openmpi]
$ module load mpi/openmpi/4.1.4
[liwl@node17][~/online1/openmpi]
$ which mpicc
/opt/hpc/mpi/openmpi/4.1.4/bin/mpicc
[liwl@node17][~/online1/openmpi]
$ mpicc  -o hello hello.c 
[liwl@node17][~/online1/openmpi]
$ ldd hello
        linux-vdso.so.1 =>  (0x00007fff47ffa000)
        libmpi.so.40 => /opt/hpc/mpi/openmpi/4.1.4/lib/libmpi.so.40 (0x00007fd8ac2d3000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fd8ac0b7000)
        libc.so.6 => /lib64/libc.so.6 (0x00007fd8abce9000)
        libopen-rte.so.40 => /opt/hpc/mpi/openmpi/4.1.4/lib/libopen-rte.so.40 (0x00007fd8aba32000)
        libopen-pal.so.40 => /opt/hpc/mpi/openmpi/4.1.4/lib/libopen-pal.so.40 (0x00007fd8ab722000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00007fd8ab51e000)
        librt.so.1 => /lib64/librt.so.1 (0x00007fd8ab316000)
        libm.so.6 => /lib64/libm.so.6 (0x00007fd8ab014000)
        libutil.so.1 => /lib64/libutil.so.1 (0x00007fd8aae11000)
        libz.so.1 => /lib64/libz.so.1 (0x00007fd8aabfb000)
        /lib64/ld-linux-x86-64.so.2 (0x00007fd8ac5eb000)

It is clear that the hello executable links against different MPI libraries in each case. Finally, submit the job with mpirun or through the Slurm scheduler.
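For completeness, a minimal sketch of both submission paths (the process count, node count, and script name are illustrative; adjust to your cluster):

# Direct launch:
mpirun -np 4 ./hello

# Slurm batch script job.sh, submitted with: sbatch job.sh
#!/bin/bash
#SBATCH -N 1
#SBATCH -n 4
module load mpi/mpich/4.0.2
mpirun -np 4 ./hello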
