Hands-On: Building RAID 0/1/5/6/10 Disk Arrays with the mdadm Tool (Detailed)

Introduction to mdadm:

 mdadm is short for "multiple devices admin"; it is the standard software RAID management tool on Linux.

  • mdadm can diagnose, monitor, and gather detailed information about arrays.
  • mdadm is a single integrated program rather than a collection of scattered utilities, so its RAID management commands share a common syntax.
  • mdadm can perform almost all of its functions without a configuration file (and there is no default configuration file).

 In Linux, software RAID is currently implemented as the MD (Multiple Devices) virtual block device: multiple underlying block devices are combined into a single virtual device. Striping distributes data blocks evenly across the disks to improve the virtual device's read/write performance, while redundancy algorithms protect user data from being lost entirely when one block device fails, and allow the lost data to be rebuilt onto a replacement device.

 MD currently supports linear, multipath, raid0 (striping), raid1 (mirroring), raid4, raid5, raid6, raid10 and other redundancy levels and assembly modes, and it also supports stacking multiple RAID arrays into layered types such as RAID 1+0 and RAID 5+1.

Environment:

CentOS 7.5-Minimal
VMware Workstation 15
mdadm tool
Six disks: sdb sdc sdd sde sdf sdg

Common options:

Option  Purpose
-a      add a disk
-n      specify the number of devices
-l      specify the RAID level
-C      create an array
-v      show the process verbosely
-f      simulate a device failure
-r      remove a device
-Q      show summary information
-D      show detailed information
-S      stop a RAID array
-x      specify the number of spare (hot-standby) disks; a spare automatically takes over when a working disk fails

Basic format of mdadm commands:

 mdadm -C -v /dev/mdX -l <level> -n <number of disks> <device paths>
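
For example, filling in the placeholders (an illustrative command; the device names here are hypothetical):

[root@localhost ~]# mdadm -C -v /dev/md0 -l 5 -n 3 -x 1 /dev/sd[b-e]1
    // creates /dev/md0 as a 3-disk RAID 5 with one hot spare;
    // the shell glob /dev/sd[b-e]1 expands to /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1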

Ways to check a RAID array:

cat /proc/mdstat        // check array status

mdadm -D /dev/mdX       // view detailed information
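
During a resync or rebuild it is handy to watch the status refresh continuously; a small sketch using the standard watch utility (not part of mdadm itself):

watch -n 1 cat /proc/mdstat     // refresh the array status every second; exit with Ctrl+C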

1. Prepare the virtual machine disks (add the six new virtual disks to the VM in VMware Workstation)
2. Check the newly added disks

[root@localhost ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   19G  0 part 
  ├─centos-root 253:0    0   17G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sdb               8:16   0   20G  0 disk  ← new
sdc               8:32   0   20G  0 disk  ← new
sdd               8:48   0   20G  0 disk  ← new
sde               8:64   0   20G  0 disk  ← new
sdf               8:80   0   20G  0 disk  ← new
sdg               8:96   0   10G  0 disk  ← new
sr0              11:0    1  906M  0 rom 

3. Install the mdadm tool

[root@localhost ~]# yum -y install mdadm

Change the disk partition type:

 The partition type must be changed to fd (Linux raid autodetect).
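
The sessions below set this interactively in fdisk. As a non-interactive alternative, the same layout can be scripted, for example with parted (a sketch, assuming parted is installed):

[root@localhost ~]# parted -s /dev/sdb mklabel msdos            // write a fresh DOS partition table
[root@localhost ~]# parted -s /dev/sdb mkpart primary 1MiB 100% // one primary partition spanning the disk
[root@localhost ~]# parted -s /dev/sdb set 1 raid on            // flag partition 1 as RAID (type fd)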

****************************************
[root@localhost ~]# fdisk /dev/sdb     // partition /dev/sdb
...
Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): 
Using default response p
Partition number (1-4, default 1): 
First sector (2048-41943039, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039): 
Using default value 41943039
Partition 1 of type Linux and of size 20 GiB is set

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

Command (m for help): p
Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xf056a1fe
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    41943039    20970496   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

****************************************
[root@localhost ~]# fdisk /dev/sdc     // partition /dev/sdc
Command (m for help): p
...
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048    41943039    20970496   fd  Linux raid autodetect

****************************************
[root@localhost ~]# fdisk /dev/sdd      // partition /dev/sdd
Command (m for help): p
...
   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048    41943039    20970496   fd  Linux raid autodetect

****************************************
[root@localhost ~]# fdisk /dev/sde      // partition /dev/sde
Command (m for help): p
...

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1            2048    41943039    20970496   fd  Linux raid autodetect

****************************************
[root@localhost ~]# fdisk /dev/sdf     // partition /dev/sdf
Command (m for help): p 
...
   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1            2048    41943039    20970496   fd  Linux raid autodetect

****************************************
[root@localhost ~]# fdisk /dev/sdg     // partition /dev/sdg
Command (m for help): p 
...
   Device Boot      Start         End      Blocks   Id  System
/dev/sdg1            2048    20971519    10484736   fd  Linux raid autodetect

RAID 0 Lab

1. Create RAID 0

[root@localhost ~]# mdadm -C -v /dev/md0 -l 0 -n 2 /dev/sdb1 /dev/sdc1
 // create /dev/md0 as a RAID 0 array (level 0, 2 devices) from the two disks sdb1 and sdc1

[root@localhost ~]# cat /proc/mdstat    // check RAID 0 status
Personalities : [raid0] 
md0 : active raid0 sdc1[1] sdb1[0]
      41906176 blocks super 1.2 512k chunks
      
unused devices: <none>

[root@localhost ~]# mdadm -D /dev/md0   // view RAID 0 details
/dev/md0:
           Version : 1.2
     Creation Time : Tue Apr 21 15:55:29 2020
        Raid Level : raid0
        Array Size : 41906176 (39.96 GiB 42.91 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Tue Apr 21 15:55:29 2020
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 512K

Consistency Policy : none

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 4b439b50:63314c34:0fb14c51:c9930745
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

2. Format the array

[root@localhost ~]# mkfs.xfs /dev/md0 
meta-data=/dev/md0               isize=512    agcount=16, agsize=654720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=5120, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

[root@localhost ~]# blkid /dev/md0 
/dev/md0: UUID="13a0c896-5e79-451f-b6f1-b04b79c1bc40" TYPE="xfs" 

3. Mount after formatting

[root@localhost ~]# mkdir /raid0      // create the mount point

[root@localhost ~]# mount /dev/md0 /raid0/   // mount /dev/md0 at /raid0

[root@localhost ~]# df -h      // verify the mount succeeded
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   17G   11G  6.7G   61% /
devtmpfs                 1.1G     0  1.1G    0% /dev
tmpfs                    1.1G     0  1.1G    0% /dev/shm
...
/dev/md0                  40G   33M   40G    1% /raid0
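
The mount above does not survive a reboot. To make it persistent, an /etc/fstab entry keyed on the UUID reported by blkid can be added (a sketch, using the UUID shown earlier for this particular array):

[root@localhost ~]# echo 'UUID=13a0c896-5e79-451f-b6f1-b04b79c1bc40 /raid0 xfs defaults 0 0' >> /etc/fstab
[root@localhost ~]# mount -a     // re-read fstab to confirm the entry mounts cleanly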

RAID 1 Lab

1. Create RAID 1

[root@localhost ~]# mdadm -C -v /dev/md1 -l 1 -n 2 /dev/sdd1 /dev/sde1 -x 1 /dev/sdb1 
  // create /dev/md1 as a RAID 1 array (level 1, 2 devices) from sdd1 and sde1,
  // with sdb1 as a hot spare

mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: /dev/sdb1 appears to be part of a raid array:
       level=raid0 devices=2 ctime=Tue Apr 21 15:55:29 2020
mdadm: size set to 20953088K
Continue creating array? y
mdadm: Fail create md1 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.

2. Check the RAID 1 status

[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid0] [raid1] 
md1 : active raid1 sdb1[2](S) sde1[1] sdd1[0]
      20953088 blocks super 1.2 [2/2] [UU]
      [==========>..........]  resync = 54.4% (11407360/20953088) finish=0.7min speed=200040K/sec
      
unused devices: <none>


[root@localhost ~]# mdadm -D /dev/md1 
/dev/md1:
           Version : 1.2
     Creation Time : Tue Apr 21 16:11:16 2020
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Tue Apr 21 16:12:34 2020
             State : clean, resyncing 
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

     Resync Status : 77% complete

              Name : localhost.localdomain:1  (local to host localhost.localdomain)
              UUID : 98b76e6e:b6390011:26a822a8:3dcc4cc9
            Events : 12

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       65        1      active sync   /dev/sde1

       2       8       17        -      spare   /dev/sdb1

3. Format and mount

[root@localhost ~]# mkfs.xfs /dev/md1 

[root@localhost ~]# blkid /dev/md1
/dev/md1: UUID="18a8f33b-1bb6-43c2-8dfc-2b21a871961a" TYPE="xfs"

[root@localhost ~]# mkdir /raid1

[root@localhost ~]# mount /dev/md1 /raid1/

[root@localhost ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   17G   11G  6.7G   61% /
...
/dev/md1                  20G   33M   20G    1% /raid1

4. Simulate a disk failure

 After simulating the failure, check the RAID 1 array details: /dev/sdb1 automatically takes over for the failed /dev/sdd1.

[root@localhost ~]# mdadm /dev/md1 -f /dev/sdd1

[root@localhost ~]# mdadm -D /dev/md1   // inspect the array
/dev/md1:
           Version : 1.2
     Creation Time : Tue Apr 21 16:11:16 2020
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Tue Apr 21 16:29:38 2020
             State : clean, degraded, recovering //recovering automatically
    Active Devices : 1
   Working Devices : 2
    Failed Devices : 1     //the failed disk
     Spare Devices : 1    //number of spare devices

Consistency Policy : resync

    Rebuild Status : 46% complete

              Name : localhost.localdomain:1  (local to host localhost.localdomain)
              UUID : 98b76e6e:b6390011:26a822a8:3dcc4cc9
            Events : 26

    Number   Major   Minor   RaidDevice State
       2       8       17        0      spare rebuilding   /dev/sdb1
       1       8       65        1      active sync   /dev/sde1

       0       8       49        -      faulty   /dev/sdd1
       
****** The spare disk is rebuilding in place of the failed one; wait a few minutes, then check the RAID 1 array details again

[root@localhost ~]# mdadm -D /dev/md1 
/dev/md1:
           Version : 1.2
     Creation Time : Tue Apr 21 16:11:16 2020
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Tue Apr 21 16:30:39 2020
             State : clean    //clean: the replacement has completed
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 1     //the failed disk
     Spare Devices : 0     //spare count is now 0

Consistency Policy : resync

              Name : localhost.localdomain:1  (local to host localhost.localdomain)
              UUID : 98b76e6e:b6390011:26a822a8:3dcc4cc9
            Events : 36

    Number   Major   Minor   RaidDevice State
       2       8       17        0      active sync   /dev/sdb1
       1       8       65        1      active sync   /dev/sde1

       0       8       49        -      faulty   /dev/sdd1

5. Remove the failed disk

[root@localhost ~]# mdadm -r /dev/md1 /dev/sdd1
mdadm: hot removed /dev/sdd1 from /dev/md1

[root@localhost ~]# mdadm -D /dev/md1 
/dev/md1:
           Version : 1.2
     Creation Time : Tue Apr 21 16:11:16 2020
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Tue Apr 21 16:38:32 2020
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0     //no longer shown, since the failed disk has been removed
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost.localdomain:1  (local to host localhost.localdomain)
              UUID : 98b76e6e:b6390011:26a822a8:3dcc4cc9
            Events : 37

    Number   Major   Minor   RaidDevice State
       2       8       17        0      active sync   /dev/sdb1
       1       8       65        1      active sync   /dev/sde1

6. Add a new disk to the RAID 1 array:

[root@localhost ~]# mdadm -a /dev/md1 /dev/sdc1  
     // add /dev/sdc1 as a spare device for the RAID 1 array
mdadm: added /dev/sdc1
[root@localhost ~]# mdadm -D /dev/md1 
/dev/md1:
           Version : 1.2
     Creation Time : Tue Apr 21 16:11:16 2020
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Tue Apr 21 16:40:20 2020
             State : clean 
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1   //the newly added disk now shows up as a spare

Consistency Policy : resync

              Name : localhost.localdomain:1  (local to host localhost.localdomain)
              UUID : 98b76e6e:b6390011:26a822a8:3dcc4cc9
            Events : 38

    Number   Major   Minor   RaidDevice State
       2       8       17        0      active sync   /dev/sdb1
       1       8       65        1      active sync   /dev/sde1

       3       8       33        -      spare   /dev/sdc1

Note:

  • The newly added disk must match the size of the original disks (a size-check sketch follows this list).
  • If the array is missing working disks (e.g. a RAID 1 with only one disk working, or a RAID 5 with only two), the new disk immediately becomes a working disk; if the array is healthy, the new disk becomes a hot spare.
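
A quick way to verify the size requirement before adding a disk, using standard tools rather than mdadm (a sketch):

[root@localhost ~]# lsblk -b -o NAME,SIZE /dev/sdc1 /dev/sde1   // compare partition sizes in bytes
[root@localhost ~]# blockdev --getsize64 /dev/sdc1              // or query a single device directly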

7. Stop the RAID array

 To stop an array, the mounted RAID must be unmounted first; once the array is stopped, its device node under /dev is removed automatically.

[root@localhost ~]# umount /dev/md1 

[root@localhost ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   17G   11G  6.7G   61% /
devtmpfs                 1.1G     0  1.1G    0% /dev
tmpfs                    1.1G     0  1.1G    0% /dev/shm
tmpfs                    1.1G  9.7M  1.1G    1% /run
tmpfs                    1.1G     0  1.1G    0% /sys/fs/cgroup
/dev/sda1               1014M  130M  885M   13% /boot
overlay                   17G   11G  6.7G   61% /var/lib/docker/overlay2/2131dc663296fd193837265e88fa5c9c62b9bfd924303381cea8b4c39c652c84/merged
shm                       64M     0   64M    0% /var/lib/docker/containers/436f7e6619c1805553ea71d800fd49ab08843cef6ed162acb35b4c32064ea449/mounts/shm
tmpfs                    211M     0  211M    0% /run/user/0

[root@localhost ~]# mdadm -S /dev/md1  
mdadm: stopped /dev/md1

[root@localhost ~]# ls /dev/md1
ls: cannot access /dev/md1: No such file or directory
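
Two follow-ups worth noting here (sketches, not part of the session above). The "appears to be part of a raid array" prompts seen when re-creating arrays come from leftover superblocks, which can be wiped before a disk is reused; and arrays that should reassemble automatically at boot can be recorded in /etc/mdadm.conf:

[root@localhost ~]# mdadm --zero-superblock /dev/sdd1   // erase the old RAID superblock before reusing the partition
[root@localhost ~]# mdadm -D -s >> /etc/mdadm.conf      // append array definitions so they assemble at boot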

RAID 5 Lab

1. Create RAID 5

[root@localhost ~]# mdadm -C -v /dev/md5 -l 5 -n 3 /dev/sdb1 /dev/sdc1 /dev/sdd1 -x 1 /dev/sde1 
// create /dev/md5 as a RAID 5 array (level 5, 3 devices) from the three
// disks sdb1, sdc1 and sdd1, with sde1 as a hot spare

mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: /dev/sdb1 appears to be part of a raid array:
       level=raid1 devices=2 ctime=Tue Apr 21 16:11:16 2020
mdadm: /dev/sdc1 appears to be part of a raid array:
       level=raid1 devices=2 ctime=Tue Apr 21 16:11:16 2020
mdadm: /dev/sdd1 appears to be part of a raid array:
       level=raid1 devices=2 ctime=Tue Apr 21 16:11:16 2020
mdadm: /dev/sde1 appears to be part of a raid array:
       level=raid1 devices=2 ctime=Tue Apr 21 16:11:16 2020
mdadm: size set to 20953088K
Continue creating array? y
mdadm: Fail create md5 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.

2. View the RAID 5 array information

[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] 
md5 : active raid5 sdd1[4] sde1[3](S) sdc1[1] sdb1[0]
      41906176 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      
unused devices: <none>


[root@localhost ~]# mdadm -D /dev/md5 
/dev/md5:
           Version : 1.2
     Creation Time : Tue Apr 21 16:56:09 2020
        Raid Level : raid5
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Tue Apr 21 16:59:56 2020
             State : clean 
    Active Devices : 3
   Working Devices : 4    
    Failed Devices : 0
     Spare Devices : 1    //1 spare device

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:5  (local to host localhost.localdomain)
              UUID : 422363cb:e7fd4d3a:aaf61344:9bdd00b3
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1

       3       8       65        -      spare   /dev/sde1

3. Simulate a disk failure

[root@localhost ~]# mdadm -f /dev/md5 /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md5   //sdb1 is now marked faulty

[root@localhost ~]# mdadm -D /dev/md5 
/dev/md5:
           Version : 1.2
     Creation Time : Tue Apr 21 16:56:09 2020
        Raid Level : raid5
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Tue Apr 21 17:04:36 2020
             State : clean, degraded, recovering   //rebuilding onto the spare
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 16% complete

              Name : localhost.localdomain:5  (local to host localhost.localdomain)
              UUID : 422363cb:e7fd4d3a:aaf61344:9bdd00b3
            Events : 22

    Number   Major   Minor   RaidDevice State
       3       8       65        0      spare rebuilding   /dev/sde1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1

       0       8       17        -      faulty   /dev/sdb1

**************************
[root@localhost ~]# mdadm -D /dev/md5 
/dev/md5:
           Version : 1.2
     Creation Time : Tue Apr 21 16:56:09 2020
        Raid Level : raid5
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Tue Apr 21 17:07:58 2020
             State : clean     //the rebuild has completed
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 1     //1 failed disk
     Spare Devices : 0     //0 spares, since the spare was consumed by the rebuild

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:5  (local to host localhost.localdomain)
              UUID : 422363cb:e7fd4d3a:aaf61344:9bdd00b3
            Events : 37

    Number   Major   Minor   RaidDevice State
       3       8       65        0      active sync   /dev/sde1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1

       0       8       17        -      faulty   /dev/sdb1

4. Format and mount

[root@localhost ~]# mkdir /raid5
[root@localhost ~]# mkfs.xfs /dev/md5
[root@localhost ~]# mount /dev/md5 /raid5/
[root@localhost ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   17G   11G  6.7G   61% /
...
/dev/md5                  40G   33M   40G    1% /raid5

5. Stop the array

[root@localhost ~]# mdadm -S /dev/md5 
mdadm: stopped /dev/md5

RAID 6 Lab

1. Create the RAID 6 array

[root@localhost ~]# mdadm -C -v /dev/md6 -l 6 -n 4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 -x 2 /dev/sdf1 /dev/sdg1 
// create /dev/md6 as a RAID 6 array (level 6, 4 devices) from the four disks
// sdb1, sdc1, sdd1 and sde1, with sdf1 and sdg1 as hot spares

mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: /dev/sdb1 appears to be part of a raid array:
       level=raid6 devices=4 ctime=Tue Apr 21 17:37:15 2020
mdadm: partition table exists on /dev/sdb1 but will be lost or
       meaningless after creating array
mdadm: /dev/sdc1 appears to be part of a raid array:
       level=raid6 devices=4 ctime=Tue Apr 21 17:37:15 2020
mdadm: /dev/sdd1 appears to be part of a raid array:
       level=raid6 devices=4 ctime=Tue Apr 21 17:37:15 2020
mdadm: /dev/sde1 appears to be part of a raid array:
       level=raid6 devices=4 ctime=Tue Apr 21 17:37:15 2020
mdadm: /dev/sdf1 appears to be part of a raid array:
       level=raid6 devices=4 ctime=Tue Apr 21 17:37:15 2020
mdadm: size set to 10467328K
mdadm: largest drive (/dev/sdb1) exceeds size (10467328K) by more than 1%
Continue creating array? y
mdadm: Fail create md6 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md6 started.

2. View the RAID 6 array information

[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] 
md6 : active raid6 sdg1[5](S) sdf1[4](S) sde1[3] sdd1[2] sdc1[1] sdb1[0]
      20934656 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
      
unused devices: <none>

[root@localhost ~]# mdadm -D /dev/md6 
/dev/md6:
           Version : 1.2
     Creation Time : Tue Apr 21 17:54:25 2020
        Raid Level : raid6
        Array Size : 20934656 (19.96 GiB 21.44 GB)
     Used Dev Size : 10467328 (9.98 GiB 10.72 GB)
      Raid Devices : 4
     Total Devices : 6
       Persistence : Superblock is persistent

       Update Time : Tue Apr 21 17:58:16 2020
             State : clean 
    Active Devices : 4
   Working Devices : 6
    Failed Devices : 0
     Spare Devices : 2

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:6  (local to host localhost.localdomain)
              UUID : 9a5c470e:eb95d0b4:2a213dac:f0fd3315
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

       4       8       81        -      spare   /dev/sdf1
       5       8       97        -      spare   /dev/sdg1

3. Simulate disk failures (two disks at once)

[root@localhost ~]# mdadm -f /dev/md6 /dev/sdb1   //fail sdb1
mdadm: set /dev/sdb1 faulty in /dev/md6

[root@localhost ~]# mdadm -f /dev/md6 /dev/sdc1   //fail sdc1
mdadm: set /dev/sdc1 faulty in /dev/md6


[root@localhost ~]# mdadm -D /dev/md6    // check the RAID 6 array status
/dev/md6:
           Version : 1.2
     Creation Time : Tue Apr 21 17:54:25 2020
        Raid Level : raid6
        Array Size : 20934656 (19.96 GiB 21.44 GB)
     Used Dev Size : 10467328 (9.98 GiB 10.72 GB)
      Raid Devices : 4
     Total Devices : 6
       Persistence : Superblock is persistent

       Update Time : Tue Apr 21 18:01:46 2020
             State : clean, degraded, recovering   //rebuilding onto the spares
    Active Devices : 2
   Working Devices : 4
    Failed Devices : 2     //2 failed disks
     Spare Devices : 2    //2 spare devices 

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 19% complete

              Name : localhost.localdomain:6  (local to host localhost.localdomain)
              UUID : 9a5c470e:eb95d0b4:2a213dac:f0fd3315
            Events : 29

    Number   Major   Minor   RaidDevice State
       5       8       97        0      spare rebuilding   /dev/sdg1
       4       8       81        1      spare rebuilding   /dev/sdf1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

       0       8       17        -      faulty   /dev/sdb1
       1       8       33        -      faulty   /dev/sdc1
       
*****************************
[root@localhost ~]# mdadm -D /dev/md6 
/dev/md6:
           Version : 1.2
     Creation Time : Tue Apr 21 17:54:25 2020
        Raid Level : raid6
        Array Size : 20934656 (19.96 GiB 21.44 GB)
     Used Dev Size : 10467328 (9.98 GiB 10.72 GB)
      Raid Devices : 4
     Total Devices : 6
       Persistence : Superblock is persistent

       Update Time : Tue Apr 21 18:04:02 2020
             State : clean    //the rebuild has completed
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 2
     Spare Devices : 0       //0 spares left

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:6  (local to host localhost.localdomain)
              UUID : 9a5c470e:eb95d0b4:2a213dac:f0fd3315
            Events : 43

    Number   Major   Minor   RaidDevice State
       5       8       97        0      active sync   /dev/sdg1
       4       8       81        1      active sync   /dev/sdf1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

       0       8       17        -      faulty   /dev/sdb1
       1       8       33        -      faulty   /dev/sdc1

4. Format and mount

 Same procedure as above.
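
For completeness, a sketch of those steps applied to this array (/raid6 is an assumed mount-point name):

[root@localhost ~]# mkfs.xfs /dev/md6
[root@localhost ~]# mkdir /raid6          // assumed mount point
[root@localhost ~]# mount /dev/md6 /raid6/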

5. Stop the array

[root@localhost ~]# mdadm -S /dev/md6 
mdadm: stopped /dev/md6

RAID 10 Lab

 RAID 1+0 is built here from two RAID 1 arrays.
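
This tutorial builds it by nesting two md devices; note that mdadm can also create RAID 10 natively in a single step (a sketch, not the method used below):

[root@localhost ~]# mdadm -C -v /dev/md10 -l 10 -n 4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
    // native md raid10: striping plus mirroring in one array, no nesting required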

1. Create two RAID 1 arrays

[root@localhost ~]# mdadm -C -v /dev/md1 -l 1 -n 2 /dev/sdb1 /dev/sdc1 
mdadm: /dev/sdb1 appears to be part of a raid array:
       level=raid1 devices=2 ctime=Wed Apr 22 00:47:05 2020
mdadm: partition table exists on /dev/sdb1 but will be lost or
       meaningless after creating array
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: /dev/sdc1 appears to be part of a raid array:
       level=raid1 devices=2 ctime=Wed Apr 22 00:47:05 2020
mdadm: size set to 20953088K
Continue creating array? y
mdadm: Fail create md1 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.

[root@localhost ~]# mdadm -C -v /dev/md0 -l 1 -n 2 /dev/sdd1 /dev/sde1 
mdadm: /dev/sdd1 appears to be part of a raid array:
       level=raid6 devices=4 ctime=Tue Apr 21 17:54:25 2020
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: /dev/sde1 appears to be part of a raid array:
       level=raid6 devices=4 ctime=Tue Apr 21 17:54:25 2020
mdadm: size set to 20953088K
Continue creating array? y
mdadm: Fail create md0 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

2. View the two RAID 1 arrays' information

[root@localhost ~]# mdadm -D /dev/md1 
/dev/md1:
           Version : 1.2
     Creation Time : Wed Apr 22 00:48:19 2020
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)  
                          
*****first RAID 1: ~20 GB capacity*****
                           
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Wed Apr 22 00:50:21 2020
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost.localdomain:1  (local to host localhost.localdomain)
              UUID : 95cd9b90:8dcbbbef:7974f3aa:d38d7f5b
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1



[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Wed Apr 22 00:48:52 2020
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
        
*****second RAID 1: ~20 GB capacity*****
                  
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Wed Apr 22 00:50:44 2020
             State : clean, resyncing 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

     Resync Status : 96% complete

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : ae813945:1174d6cb:ad1e3a33:1303a7d3
            Events : 15

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       65        1      active sync   /dev/sde1

3. Create the RAID 1+0 array

[root@localhost ~]# mdadm -C -v /dev/md10 -l 0 -n 2 /dev/md1 /dev/md0 
mdadm: chunk size defaults to 512K
mdadm: Fail create md10 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md10 started.

4. View the RAID 1+0 array information

[root@localhost ~]# mdadm -D /dev/md10
/dev/md10:
           Version : 1.2
     Creation Time : Wed Apr 22 00:55:41 2020
        Raid Level : raid0
        Array Size : 41871360 (39.93 GiB 42.88 GB)
        
*****the resulting RAID 1+0: ~40 GB capacity*****
                     
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Wed Apr 22 00:55:41 2020
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 512K

Consistency Policy : none

              Name : localhost.localdomain:10  (local to host localhost.localdomain)
              UUID : 09a95fcb:c9a2ec94:4461c81e:a9a65c2f
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       9        1        0      active sync   /dev/md1
       1       9        0        1      active sync   /dev/md0
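
The nested array is then used like any other md device; a sketch of formatting and mounting it (/raid10 is an assumed mount-point name):

[root@localhost ~]# mkfs.xfs /dev/md10
[root@localhost ~]# mkdir /raid10         // assumed mount point
[root@localhost ~]# mount /dev/md10 /raid10/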