Linux System Administration: RAID Configuration Files and Related Commands

To configure RAID, copy the appropriate sample configuration file from the /usr/share/doc/raidtools-1.00.3/ directory to /etc, then edit it and rename it to /etc/raidtab (see the sketch after the listing below).

[root@localhost root]# cp /usr/share/doc/raidtools-1.00.3/raid*.conf.* /etc
[root@localhost root]# ls -l /etc/ |grep raid
-rw-r--r--    1 root     root          542 Mar 13 21:21 raid0.conf.sample
-rw-r--r--    1 root     root          179 Mar 13 21:21 raid1.conf.sample
-rw-r--r--    1 root     root          250 Mar 13 21:21 raid4.conf.sample
-rw-r--r--    1 root     root          462 Mar 13 21:21 raid5.conf.sample
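
To actually use one of these samples, it has to be put in place as /etc/raidtab and its device entries adjusted. A minimal sketch (the raid0 sample is only an example here; pick the sample that matches the RAID level you intend to build):

[root@localhost root]# cp /usr/share/doc/raidtools-1.00.3/raid0.conf.sample /etc/raidtab
[root@localhost root]# vi /etc/raidtab    <-------- change the device lines to your real partitions
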
                        ------------------- Configuring RAID 0 -------------------

[root@localhost root]# vi /etc/raid0.conf.sample      <-------- view the RAID 0 sample configuration
# Sample raid-0 configuration

raiddev                 /dev/md0      <-------- device name of the RAID array to create

raid-level              0    # it's not obvious but this *must* be   <-------- RAID level of the array
                             # right after raiddev

persistent-superblock   0    # set this to 1 if you want autostart,
                             # BUT SETTING TO 1 WILL DESTROY PREVIOUS
                             # CONTENTS if this is a RAID0 array created
                             # by older raidtools (0.40-0.51) or mdtools!

chunk-size              16            <-------- chunk size

nr-raid-disks           2             <-------- number of disks in the array (nr = number)
nr-spare-disks          0             <-------- number of spare disks

device                  /dev/hda1     <-------- change this to match your actual disks
raid-disk               0             <-------- RAID disk index

device                  /dev/hdb1     <-------- change this to match your actual disks
raid-disk               1             <-------- RAID disk index

A RAID 0 configuration stripes data across the member disks and provides no redundancy.
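
As a rough outline, a standalone RAID 0 array could be built from this sample with the same steps used for the other levels below. This is a minimal sketch only, assuming /dev/hda1 and /dev/hdb1 are unused partitions and /mnt/raid0 is an example mount point:

[root@localhost root]# cp /etc/raid0.conf.sample /etc/raidtab   <-------- use the sample as the live configuration
[root@localhost root]# vi /etc/raidtab                          <-------- point the device lines at real partitions
[root@localhost root]# mkraid /dev/md0                          <-------- build the array described in /etc/raidtab
[root@localhost root]# mkfs.ext3 /dev/md0                       <-------- create a filesystem on it
[root@localhost root]# mkdir -p /mnt/raid0
[root@localhost root]# mount /dev/md0 /mnt/raid0                <-------- mount and use it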

           ------------------- Configuring RAID 1 -------------------

[root@localhost root]# vi /etc/raid1.conf.sample   <-------- view the RAID 1 sample configuration
# Sample raid-1 configuration
raiddev                 /dev/md0    <-------- device name of the RAID array to create
raid-level              1           <-------- RAID level of the array
nr-raid-disks           2           <-------- number of disks in the array (nr = number)
nr-spare-disks          0           <-------- number of spare disks
chunk-size              4           <-------- chunk size

device                  /dev/hda1   <-------- change this to match your actual disks
raid-disk               0           <-------- RAID disk index

device                  /dev/hdb1   <-------- change this to match your actual disks
raid-disk               1           <-------- RAID disk index

A RAID 1 configuration provides redundancy by mirroring; it requires an even number of disks, with a minimum of two.
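
Before mkraid is run below, the edited RAID 1 sample must be in place as /etc/raidtab. A minimal sketch (the /dev/sdb1 and /dev/sdc1 partitions match the mkraid output that follows):

[root@localhost root]# cp /etc/raid1.conf.sample /etc/raidtab
[root@localhost root]# vi /etc/raidtab        <-------- change the device entries to /dev/sdb1 and /dev/sdc1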

[root@localhost root]# mkraid /dev/md0
handling MD device /dev/md0
analyzing super-block
disk 0: /dev/sdb1, 4192933kB, raid superblock at 4192832kB
disk 1: /dev/sdc1, 4192933kB, raid superblock at 4192832kB
 

[root@localhost root]# mkfs.ext3 /dev/md0
mke2fs 1.32 (09-Nov-2002)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
524288 inodes, 1048208 blocks
52410 blocks (5.00%) reserved for the super user
First data block=0
32 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 28 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@localhost root]# lsraid -A -a /dev/md0  <-------- check the status of md0
[dev   9,   0] /dev/md0         8A28FBC4.EA10ACB9.6BB5ABF9.A735D161 online
[dev   8,  17] /dev/sdb1        8A28FBC4.EA10ACB9.6BB5ABF9.A735D161 good
[dev   8,  33] /dev/sdc1        8A28FBC4.EA10ACB9.6BB5ABF9.A735D161 good

When the array is no longer in use, simply delete the /etc/raidtab file: # rm /etc/raidtab
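
If the array is still mounted and running, it should be stopped before the file is removed. A minimal sketch (raidstop is part of raidtools, and /opt is only an example mount point):

[root@localhost root]# umount /opt            <-------- unmount the filesystem first, if mounted
[root@localhost root]# raidstop /dev/md0      <-------- stop the md device
[root@localhost root]# rm /etc/raidtab        <-------- then remove the configuration file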

                     ------------------- Configuring RAID 5 -------------------

Because RAID 5 requires at least three disks, another disk is added for this RAID 5 experiment.

[root@localhost root]# fdisk -l   <-------- check the current disk information

Disk /dev/sda: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sda1   *         1       652   5237158+  83  Linux

Disk /dev/sdb: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sdb1             1       522   4192933+  83  Linux

Disk /dev/sdc: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sdc1             1       522   4192933+  83  Linux

Disk /dev/sdd: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
[root@localhost root]# fdisk /dev/sdd       <-------- partition the newly added disk /dev/sdd

Command (m for help): n      <-------- create a new partition
Command action
   e   extended
   p   primary partition (1-4)
p    <-------- make it a primary partition
Partition number (1-4): 1   <-------- enter the partition number
First cylinder (1-522, default 1):            <-------- press Enter to accept the default
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-522, default 522):         <-------- press Enter to accept the default
Using default value 522

Command (m for help): w    <-------- write the table and exit
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@localhost root]# fdisk -l      <-------- check the disk partition information again

Disk /dev/sda: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sda1   *         1       652   5237158+  83  Linux

Disk /dev/sdb: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sdb1             1       522   4192933+  83  Linux

Disk /dev/sdc: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sdc1             1       522   4192933+  83  Linux

Disk /dev/sdd: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sdd1             1       522   4192933+  83  Linux

[root@localhost root]# cp /usr/share/doc/raidtools-1.00.3/raid5.conf.sample  /etc/raidtab   <-------- copy the sample configuration to /etc/raidtab
[root@localhost root]# vi /etc/raidtab   <-------- edit the configuration file
# Sample raid-5 configuration
raiddev                 /dev/md0
raid-level               5
nr-raid-disks           3
chunk-size              4

# Parity placement algorithm

#parity-algorithm       left-asymmetric

#
# the best one for maximum performance:
#
parity-algorithm        left-symmetric

#parity-algorithm       right-asymmetric
#parity-algorithm       right-symmetric

# Spare disks for hot reconstruction
#nr-spare-disks         0

device                  /dev/sdb1           <-------- change this partition entry to match your disks
raid-disk               0

device                  /dev/sdc1           <-------- change this partition entry to match your disks
raid-disk               1

device                  /dev/sdd1           <-------- change this partition entry to match your disks
raid-disk               2

[root@localhost root]# mkraid /dev/md0     <-------- create the RAID device /dev/md0
handling MD device /dev/md0
analyzing super-block
disk 0: /dev/sdb1, 4192933kB, raid superblock at 4192832kB
/dev/sdb1 appears to be already part of a raid array -- use -f to    <-------- mkraid suggests the -f option
force the destruction of the old superblock
mkraid: aborted.
(In addition to the above messages, see the syslog and /proc/mdstat as well
for potential clues.)
[root@localhost root]# mkraid -f /dev/md0   <-------- retry with the -f option

WARNING!         <-------- the following warning is printed -------->

NOTE: if you are recovering a double-disk error or some other failure mode
that made your array unrunnable but data is still intact then it’s strongly
recommended to use the lsraid utility and to read the lsraid HOWTO.

If your RAID array holds useful and not yet backed up data then --force
and the hot-add/hot-remove functionality should be used with extreme care!
If your /etc/raidtab file is not in sync with the real array configuration,
then --force might DESTROY ALL YOUR DATA. It's especially dangerous to use
-f if the array is in degraded mode.

If your /etc/raidtab file matches the real layout of on-disk data then
recreating the array will not hurt your data, but be aware of the risks
of doing this anyway: freshly created RAID1 and RAID5 arrays do a full
resync of their mirror/parity blocks, which, if the raidtab is incorrect,
the resync will wipe out data irrecoverably. Also, if your array is in
degraded mode then the raidtab must match the degraded config exactly,
otherwise you’ll get the same kind of data destruction during resync.
(see the failed-disk raidtab option.) You have been warned!

 [ If your array holds no data, or you have it all backed up, or if you
know precisely what you are doing and you still want to proceed then use
the --really-force (or -R) flag. ]  <-------- mkraid suggests the -R flag to forcibly destroy the old superblocks and recreate the array
[root@localhost root]# mkraid -R /dev/md0  <-------- with the -R option, destroy the old superblocks and recreate the RAID device
DESTROYING the contents of /dev/md0 in 5 seconds, Ctrl-C if unsure!
handling MD device /dev/md0
analyzing super-block
disk 0: /dev/sdb1, 4192933kB, raid superblock at 4192832kB
disk 1: /dev/sdc1, 4192933kB, raid superblock at 4192832kB
disk 2: /dev/sdd1, 4192933kB, raid superblock at 4192832kB

[root@localhost root]# more /proc/mdstat     <-------- check the kernel md status
Personalities : [raid5]
read_ahead 1024 sectors
md0 : active raid5 sdd1[2] sdc1[1] sdb1[0]  
      8385664 blocks level 5, 4k chunk, algorithm 2 [3/3] [UUU]
      [==>..................]  resync = 12.1% (510136/4192832) finish=5.9min speed=10324K/sec
unused devices: <none>
[root@localhost root]# lsraid -A -a /dev/md0    <-------- check that the RAID array is healthy
[dev   9,   0] /dev/md0         83824C00.34A9A7ED.D8D5B7A8.4B582652 online
[dev   8,  17] /dev/sdb1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good
[dev   8,  33] /dev/sdc1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good
[dev   8,  49] /dev/sdd1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good

[root@localhost root]# mkfs.ext3 /dev/md0     <-------- create an ext3 filesystem on the RAID device
mke2fs 1.32 (09-Nov-2002)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
1048576 inodes, 2096416 blocks
104820 blocks (5.00%) reserved for the super user
First data block=0
64 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@localhost root]# mount /dev/md0 /opt     <-------- mount the RAID device on /opt
[root@localhost root]# df -lh           <-------- check the mounted filesystems
Filesystem            Size    Used    Avail   Use%   Mounted on
/dev/sda1             5.0G    1.1G    3.6G    24%    /
none                   78M       0     78M     0%    /dev/shm
/dev/md0              7.9G     33M    7.5G     1%    /opt
[root@localhost root]# mount      <-------- list current mounts
/dev/sda1 on / type ext3 (rw)
none on /proc type proc (rw)
usbdevfs on /proc/bus/usb type usbdevfs (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
none on /dev/shm type tmpfs (rw)
/dev/md0 on /opt type ext3 (rw)

                                     How do you recover a damaged RAID device?

[root@localhost root]# lsraid -A -a /dev/md0   <-------- check that the RAID array is healthy
[dev   9,   0] /dev/md0         83824C00.34A9A7ED.D8D5B7A8.4B582652 online
[dev   8,  17] /dev/sdb1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good
[dev   8,  33] /dev/sdc1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good
[dev   8,  49] /dev/sdd1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good

[root@localhost root]# raidsetfaulty --help    <-------- raidsetfaulty simulates a failed RAID member in software
usage: raidsetfaulty [--all] [--configfile] [--help] [--version] [-achv] </dev/md?>*
[root@localhost root]# raidsetfaulty /dev/md0 /dev/sdb1  <-------- mark one member partition of the array as failed
[root@localhost root]# lsraid -A -a /dev/md0   <-------- check the array members again
[dev   9,   0] /dev/md0         83824C00.34A9A7ED.D8D5B7A8.4B582652 online
[dev   8,  17] /dev/sdb1        83824C00.34A9A7ED.D8D5B7A8.4B582652 failed
[dev   8,  33] /dev/sdc1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good
[dev   8,  49] /dev/sdd1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good

[root@localhost root]# fdisk /dev/sde   <-------- partition a newly added disk /dev/sde to replace the failed /dev/sdb1

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-522, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-522, default 522):
Using default value 522

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@localhost root]# raidhot   <-------- type "raidhot" and press Tab twice; the shell lists the matching commands
raidhotadd     raidhotremove
[root@localhost root]# raidhotadd /dev/md0 /dev/sde1   <-------- hot-add the replacement partition /dev/sde1
[root@localhost root]# more /proc/mdstat     <-------- check the kernel md status; the array is rebuilding automatically
Personalities : [raid5]
read_ahead 1024 sectors
md0 : active raid5 sde1[3] sdd1[2] sdc1[1] sdb1[0](F)
      8385664 blocks level 5, 4k chunk, algorithm 2 [3/2] [_UU]
      [>....................]  recovery =  4.2% (176948/4192832) finish=6.4min speed=10408K/sec
unused devices: <none>
[root@localhost root]# more /proc/mdstat      <-------- check the kernel md status again; the rebuild is still in progress
Personalities : [raid5]
read_ahead 1024 sectors
md0 : active raid5 sde1[3] sdd1[2] sdc1[1] sdb1[0](F)
      8385664 blocks level 5, 4k chunk, algorithm 2 [3/2] [_UU]
      [============>........]  recovery = 62.3% (2615160/4192832) finish=2.5min speed=10315K/sec
unused devices: <none>
[root@localhost root]# lsraid -A -a /dev/md0   <-------- check the RAID member partitions
[dev   9,   0] /dev/md0         83824C00.34A9A7ED.D8D5B7A8.4B582652 online
[dev   8,  65] /dev/sdb1        83824C00.34A9A7ED.D8D5B7A8.4B582652 failed
[dev   8,  33] /dev/sdc1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good
[dev   8,  49] /dev/sdd1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good
[dev   8,  17] /dev/sde1        83824C00.34A9A7ED.D8D5B7A8.4B582652 spare
[root@localhost root]# more /proc/mdstat     <-------- check the kernel md status; the rebuild continues
Personalities : [raid5]
read_ahead 1024 sectors
md0 : active raid5 sde1[3] sdd1[2] sdc1[1] sdb1[0](F)
      8385664 blocks level 5, 4k chunk, algorithm 2 [3/2] [_UU]
      [===============>.....]  recovery = 75.0% (3149228/4192832) finish=1.6min speed=10303K/sec
unused devices: <none>
[root@localhost root]# more /proc/mdstat    <-------- check the kernel md status; the rebuild has finished
Personalities : [raid5]
read_ahead 1024 sectors
md0 : active raid5 sde1[3] sdd1[2] sdc1[1] sdb1[0](F)
      8385664 blocks level 5, 4k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>
[root@localhost root]# lsraid -A -a /dev/md0    <-------- check the RAID member partitions again
[dev   9,   0] /dev/md0         83824C00.34A9A7ED.D8D5B7A8.4B582652 online
[dev   8,  65] /dev/sde1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good
[dev   8,  33] /dev/sdc1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good
[dev   8,  49] /dev/sdd1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good

[root@localhost root]# raidhotremove /dev/md0 /dev/sdb1   <-------- remove the failed partition /dev/sdb1 from the array
[root@localhost root]# lsraid -A -a /dev/md0   <-------- check again that the array members are healthy
[dev   9,   0] /dev/md0         83824C00.34A9A7ED.D8D5B7A8.4B582652 online
[dev   8,  65] /dev/sde1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good
[dev   8,  33] /dev/sdc1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good
[dev   8,  49] /dev/sdd1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good

To have a spare disk rebuild the array automatically, add two lines to /etc/raidtab and change the nr-spare-disks value (a hot-add sketch follows the listing):

[root@localhost root]# cat /etc/raidtab
# Sample raid-5 configuration
raiddev                 /dev/md0
raid-level                5
nr-raid-disks            3
nr-spare-disks          1         <-------- change the number of spare disks here
chunk-size               4

# Parity placement algorithm

#parity-algorithm       left-asymmetric

#
# the best one for maximum performance:
#
parity-algorithm        left-symmetric

#parity-algorithm       right-asymmetric
#parity-algorithm       right-symmetric

# Spare disks for hot reconstruction

device                  /dev/sdb1
raid-disk               0

device                  /dev/sdc1
raid-disk               1

device                  /dev/sdd1
raid-disk               2

device                  /dev/sde1          <-------- add the spare device entry
spare-disk              0                  <-------- spare disks are numbered starting from 0
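
With the spare declared in /etc/raidtab, an already-running array can be given the physical spare using the hot-add command shown earlier; because all data slots are active, the new partition joins as a spare and takes over automatically at the next failure. A minimal sketch, reusing /dev/sde1 as the spare partition:

[root@localhost root]# raidhotadd /dev/md0 /dev/sde1    <-------- the partition joins md0 as a spare
[root@localhost root]# lsraid -A -a /dev/md0            <-------- the new member should be listed as "spare"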

Note: the spare-disk entries must not be placed before the raid-disk entries. The following arrangement will cause problems:

[root@localhost root]# cat /etc/raidtab
# Sample raid-5 configuration
raiddev                 /dev/md0
raid-level                5
nr-raid-disks            3
nr-spare-disks          1         <-------- change the number of spare disks here
chunk-size               4

# Parity placement algorithm

#parity-algorithm       left-asymmetric

#
# the best one for maximum performance:
#
parity-algorithm        left-symmetric

#parity-algorithm       right-asymmetric
#parity-algorithm       right-symmetric

# Spare disks for hot reconstruction

device                  /dev/sde1          <-------- spare device entry placed first (incorrect)
spare-disk              0                  <-------- spare disks are numbered starting from 0

device                  /dev/sdb1
raid-disk               0

device                  /dev/sdc1
raid-disk               1

device                  /dev/sdd1
raid-disk               2

With this ordering mkraid fails and /dev/md0 cannot be created, so always add the spare-disk entries after the raid-disk entries.

Finally, a word about RAID 0+1, also called RAID 10. When building RAID 0+1 it is best to create the RAID 0 array first and then build the RAID 1 array on top of it.

In the RAID 0 configuration file, add a RAID 1 array that uses the RAID 0 device as one of its members; the following configuration can be used as a reference:

[root@localhost root]# vi /etc/raidtab
# Sample raid-0 configuration

raiddev                 /dev/md0

raid-level              0    # it's not obvious but this *must* be
                              # right after raiddev

persistent-superblock   0    # set this to 1 if you want autostart,
                                    # BUT SETTING TO 1 WILL DESTROY PREVIOUS
                                  # CONTENTS if this is a RAID0 array created
                                # by older raidtools (0.40-0.51) or mdtools!

chunk-size              16

nr-raid-disks            2
nr-spare-disks          0

device                  /dev/sdb1
raid-disk               0

device                  /dev/sdc1
raid-disk               1

raiddev                 /dev/md1

raid-level               1

nr-raid-disks           2

chunk-size              4

device                  /dev/sdd1
raid-disk               0

device                  /dev/md0
raid-disk               1

[root@localhost root]# mkraid /dev/md0
handling MD device /dev/md0
analyzing super-block
[root@localhost root]# mkraid /dev/md1
handling MD device /dev/md1
analyzing super-block
disk 0: /dev/sdd1, 4192933kB, raid superblock at 4192832kB
/dev/sdd1 appears to be already part of a raid array -- use -f to
force the destruction of the old superblock
mkraid: aborted.
(In addition to the above messages, see the syslog and /proc/mdstat as well
for potential clues.)
[root@localhost root]# mkraid -R /dev/md1
DESTROYING the contents of /dev/md1 in 5 seconds, Ctrl-C if unsure!
handling MD device /dev/md1
analyzing super-block
disk 0: /dev/sdd1, 4192933kB, raid superblock at 4192832kB
disk 1: /dev/md0, 8385856kB, raid superblock at 8385792kB
[root@localhost root]# lsraid -A -a /dev/md1
[dev   9,   1] /dev/md1         0D874FBE.5DCF83BF.44319094.24463119 online
[dev   8,  49] /dev/sdd1        0D874FBE.5DCF83BF.44319094.24463119 good
[dev   9,   0] /dev/md0         0D874FBE.5DCF83BF.44319094.24463119 good

[root@localhost root]# lsraid -A -a /dev/md0
[dev   9,   0] /dev/md0         A5689BB7.0C86653E.5E760E64.CCC163AB online
[dev   8,  17] /dev/sdb1        A5689BB7.0C86653E.5E760E64.CCC163AB good
[dev   8,  33] /dev/sdc1        A5689BB7.0C86653E.5E760E64.CCC163AB good

[dev   9,   1] /dev/md1         0D874FBE.5DCF83BF.44319094.24463119 online
[dev   8,  49] /dev/sdd1        0D874FBE.5DCF83BF.44319094.24463119 good
[dev   9,   0] /dev/md0         0D874FBE.5DCF83BF.44319094.24463119 good
[root@localhost root]# more /proc/mdstat
Personalities : [raid0] [raid1] [raid5]
read_ahead 1024 sectors
md1 : active raid1 md0[1] sdd1[0]
      4192832 blocks [2/2] [UU]
      [>....................]  resync =  0.6% (26908/4192832) finish=261.3min speed=263K/sec
md0 : active raid0 sdc1[1] sdb1[0]
      8385856 blocks 16k chunks

unused devices: <none>
[root@localhost root]# mkfs.ext3 /dev/md1
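
Once the filesystem has been created, /dev/md1 (the RAID 1 built on top of the RAID 0) is the device that gets mounted and used, not /dev/md0. A minimal sketch, with /opt as an example mount point:

[root@localhost root]# mount /dev/md1 /opt     <-------- mount the nested array
[root@localhost root]# df -lh                  <-------- /dev/md1 should show roughly 4 GB of usable space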
