1. Concept
mdadm is short for "multiple devices admin". It is the standard software RAID management tool on Linux, written by Neil Brown.
2. Features
mdadm can diagnose and monitor arrays and gather detailed array information.
mdadm is a single integrated program rather than a collection of separate utilities, so its various RAID management commands share a common syntax.
mdadm can perform almost all of its functions without a configuration file (and it has no default configuration file).
3. Purpose (quoted)
On Linux, software RAID is currently implemented via the MD (Multiple Devices) virtual block device. It builds a new virtual device on top of several underlying block devices, uses striping to distribute data blocks evenly across the disks and so improve the virtual device's read/write performance, and uses various data-redundancy algorithms to keep user data from being lost entirely when one member device fails; after the failed device is replaced, the lost data can be rebuilt onto the new one.
MD currently supports linear, multipath, raid0 (striping), raid1 (mirroring), raid4, raid5, raid6, raid10 and other redundancy levels and grouping schemes, and RAID arrays can also be stacked to form layered types such as raid1+0 or raid5+1.
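As a toy illustration of the striping idea (this is not MD's actual on-disk layout, which also depends on chunk size and, for raid5/6, on the parity rotation), a simple round-robin mapping sends chunk k of the virtual device to member disk k mod n:

```shell
# Toy round-robin striping sketch: with an n-disk stripe set,
# chunk k of the virtual device lands on member disk (k mod n).
n=3
for k in 0 1 2 3 4 5; do
  echo "chunk $k -> disk $(( k % n ))"
done
```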
4. Experiment
Exercise: create four 1 GB partitions and build three of them into a RAID 5 array, with the fourth as a hot spare. Test that the hot spare replaces a failed array member and resynchronizes the data. Then remove the failed disk and add a new partition as the hot spare. Finally, have the array mounted automatically at boot.
4.1 Create the partitions
[root@xiao ~]# fdisk /dev/sda
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n
First cylinder (10486-13054, default 10486):
Using default value 10486
Last cylinder, +cylinders or +size{K,M,G} (10486-13054, default 13054): +1G
Command (m for help): n
First cylinder (10618-13054, default 10618):
Using default value 10618
Last cylinder, +cylinders or +size{K,M,G} (10618-13054, default 13054): +1G
Command (m for help): n
First cylinder (10750-13054, default 10750):
Using default value 10750
Last cylinder, +cylinders or +size{K,M,G} (10750-13054, default 13054): +1G
Command (m for help): n
First cylinder (10882-13054, default 10882):
Using default value 10882
Last cylinder, +cylinders or +size{K,M,G} (10882-13054, default 13054): +1G
Command (m for help): t
Partition number (1-8): 8
Hex code (type L to list codes): fd
Changed system type of partition 8 to fd (Linux raid autodetect)
Command (m for help): t
Partition number (1-8): 7
Hex code (type L to list codes): fd
Changed system type of partition 7 to fd (Linux raid autodetect)
Command (m for help): t
Partition number (1-8): 6
Hex code (type L to list codes): fd
Changed system type of partition 6 to fd (Linux raid autodetect)
Command (m for help): t
Partition number (1-8): 5
Hex code (type L to list codes): fd
Changed system type of partition 5 to fd (Linux raid autodetect)
Command (m for help): p
Disk /dev/sda: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0008ed57
Device Boot Start End Blocks Id System
/dev/sda1 * 1 26 204800 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 26 10225 81920000 83 Linux
/dev/sda3 10225 10486 2097152 82 Linux swap / Solaris
/dev/sda4 10486 13054 20633279 5 Extended
/dev/sda5 10486 10617 1058045 fd Linux raid autodetect
/dev/sda6 10618 10749 1060258+ fd Linux raid autodetect
/dev/sda7 10750 10881 1060258+ fd Linux raid autodetect
/dev/sda8 10882 11013 1060258+ fd Linux raid autodetect
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
4.2 Register the new partitions with the kernel
[root@xiao ~]# partx -a /dev/sda5 /dev/sda
[root@xiao ~]# partx -a /dev/sda6 /dev/sda
[root@xiao ~]# partx -a /dev/sda7 /dev/sda
[root@xiao ~]# partx -a /dev/sda8 /dev/sda
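Whether the kernel has actually picked up the new partitions can be checked by listing the partitions it currently knows about; the sda5–sda8 entries should appear:

```shell
# List the block devices registered with the kernel;
# sda5-sda8 should be present after the partx calls above.
cat /proc/partitions
```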
4.3 Create the RAID 5 array and its hot spare
[root@xiao ~]# mdadm -C /dev/md0 -l 5 -n 3 -x 1 /dev/sda{5,6,7,8}
mdadm: /dev/sda5 appears to be part of a raid array:
level=raid5 devices=3 ctime=Wed Dec 17 00:58:24 2014
mdadm: /dev/sda6 appears to be part of a raid array:
level=raid5 devices=3 ctime=Wed Dec 17 00:58:24 2014
mdadm: /dev/sda7 appears to be part of a raid array:
level=raid5 devices=3 ctime=Wed Dec 17 00:58:24 2014
mdadm: /dev/sda8 appears to be part of a raid array:
level=raid5 devices=3 ctime=Wed Dec 17 00:58:24 2014
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
4.4 The initialization time depends on the array size and on concurrent read/write activity. Use cat /proc/mdstat to check the array's current reconstruction speed and estimated completion time.
[root@xiao ~]# cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4]
md0 : active raid5 sda7[4] sda8[3](S) sda6[1] sda5[0]
2113536 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
[=========>...........] recovery = 45.5% (482048/1056768) finish=0.3min speed=30128K/sec
unused devices: <none>
[root@xiao ~]# cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4]
md0 : active raid5 sda7[4] sda8[3](S) sda6[1] sda5[0]
2113536 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
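The transcript goes straight from array creation to mounting, so a filesystem must have been created on /dev/md0 in between (the lost+found directory in the next step and the ext3 entry in /etc/fstab later both suggest an ext filesystem). Presumably a step along these lines was run but not captured:

```shell
# Assumed step, not shown in the original transcript:
# create an ext3 filesystem on the new array before mounting it.
mkfs.ext3 /dev/md0
```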
4.5 Mount the array on /mnt and check that it looks sane (a lost+found directory indicates a healthy ext filesystem)
[root@xiao ~]# mount /dev/md0 /mnt
[root@xiao ~]# ls /mnt
lost+found
4.6 View detailed information about the array
[root@xiao ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Wed Dec 17 03:38:08 2014
Raid Level : raid5
Array Size : 2113536 (2.02 GiB 2.16 GB)
Used Dev Size : 1056768 (1032.17 MiB 1082.13 MB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Wed Dec 17 03:55:11 2014
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Name : xiao:0 (local to host xiao)
UUID : bce110f2:34f3fbf1:8de472ed:633a374f
Events : 18
Number Major Minor RaidDevice State
0 8 5 0 active sync /dev/sda5
1 8 6 1 active sync /dev/sda6
4 8 7 2 active sync /dev/sda7
3 8 8 - spare /dev/sda8
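The Array Size reported above follows directly from the RAID 5 geometry: with n active devices, one device's worth of space is consumed by parity, so the usable capacity is (n − 1) times the per-device size. A quick check against the numbers in this output:

```shell
# RAID5 usable capacity = (members - 1) * per-device size.
# With 3 active devices of 1056768 KiB each (the Used Dev Size above):
members=3
dev_kib=1056768
echo $(( (members - 1) * dev_kib ))   # 2113536, matching the reported Array Size
```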
4.7 Simulate the failure of one member disk; here I pick /dev/sda6
[root@xiao ~]# mdadm /dev/md0 --fail /dev/sda6
mdadm: set /dev/sda6 faulty in /dev/md0
4.8 View the array details again: the spare /dev/sda8 has automatically replaced the failed /dev/sda6.
[root@xiao ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Wed Dec 17 03:38:08 2014
Raid Level : raid5
Array Size : 2113536 (2.02 GiB 2.16 GB)
Used Dev Size : 1056768 (1032.17 MiB 1082.13 MB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Wed Dec 17 04:13:59 2014
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 1
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Rebuild Status : 43% complete
Name : xiao:0 (local to host xiao)
UUID : bce110f2:34f3fbf1:8de472ed:633a374f
Events : 26
Number Major Minor RaidDevice State
0 8 5 0 active sync /dev/sda5
3 8 8 1 spare rebuilding /dev/sda8
4 8 7 2 active sync /dev/sda7
1 8 6 - faulty /dev/sda6
[root@xiao ~]# cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4]
md0 : active raid5 sda7[4] sda8[3] sda6[1](F) sda5[0]
2113536 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU] # a healthy array shows [UUU]; if the first disk had failed it would show [_UU].
4.9 Remove the failed disk
[root@xiao ~]# mdadm /dev/md0 -r /dev/sda6
mdadm: hot removed /dev/sda6 from /dev/md0
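The fail and remove steps can also be combined: mdadm applies its Manage-mode actions left to right on one command line, so a single invocation equivalent to sections 4.7 and 4.9 would look like:

```shell
# Mark the device faulty, then hot-remove it, in one command
mdadm /dev/md0 --fail /dev/sda6 --remove /dev/sda6
```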
4.10 Add a new partition as the hot spare
[root@xiao ~]# fdisk /dev/sda
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n
First cylinder (11014-13054, default 11014):
Using default value 11014
Last cylinder, +cylinders or +size{K,M,G} (11014-13054, default 13054): +1G
Command (m for help): t
Partition number (1-9): 9
Hex code (type L to list codes): fd
Changed system type of partition 9 to fd (Linux raid autodetect)
Command (m for help): p
Disk /dev/sda: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0008ed57
Device Boot Start End Blocks Id System
/dev/sda1 * 1 26 204800 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 26 10225 81920000 83 Linux
/dev/sda3 10225 10486 2097152 82 Linux swap / Solaris
/dev/sda4 10486 13054 20633279 5 Extended
/dev/sda5 10486 10617 1058045 fd Linux raid autodetect
/dev/sda6 10618 10749 1060258+ fd Linux raid autodetect
/dev/sda7 10750 10881 1060258+ fd Linux raid autodetect
/dev/sda8 10882 11013 1060258+ fd Linux raid autodetect
/dev/sda9 11014 11145 1060258+ fd Linux raid autodetect
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
[root@xiao ~]# partx -a /dev/sda9 /dev/sda
[root@xiao ~]# mdadm /dev/md0 --add /dev/sda9
mdadm: added /dev/sda9
[root@xiao ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Wed Dec 17 03:38:08 2014
Raid Level : raid5
Array Size : 2113536 (2.02 GiB 2.16 GB)
Used Dev Size : 1056768 (1032.17 MiB 1082.13 MB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Wed Dec 17 04:39:35 2014
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Name : xiao:0 (local to host xiao)
UUID : bce110f2:34f3fbf1:8de472ed:633a374f
Events : 41
Number Major Minor RaidDevice State
0 8 5 0 active sync /dev/sda5
3 8 8 1 active sync /dev/sda8
4 8 7 2 active sync /dev/sda7
5 8 9 - spare /dev/sda9
5. Automatic mounting at boot
Edit the /etc/fstab file and add:
/dev/md0 /mnt ext3 defaults 0 0
:wq
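For the fstab entry to work reliably across reboots, the array should also be recorded in mdadm's configuration file so that it is reassembled with a stable name at boot. A common approach (the path may be /etc/mdadm.conf or /etc/mdadm/mdadm.conf depending on the distribution) is:

```shell
# Record the running array so it is reassembled as /dev/md0 at boot
mdadm --detail --scan >> /etc/mdadm.conf
```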
6. mdadm man page summary (quoted, translated)
Basic syntax: mdadm [mode] [options]
There are 7 [mode]s:
Assemble: reassemble a previously created array and make it active.
Build: build a legacy array whose member devices carry no superblocks.
Create: create a new array, with a superblock on each device.
Manage: manage an array, e.g. add or remove devices.
Misc: operate on individual devices of an array, e.g. erase a superblock or stop an active array.
Follow or Monitor: monitor the status of raid 1, 4, 5, 6 and multipath arrays.
Grow: change the capacity of an array or the number of devices in it.
Available [options]:
-A, --assemble: assemble a previously defined array.
-B, --build: build a legacy array without superblocks.
-C, --create: create a new array.
-Q, --query: query a device to determine whether it is an md device or a component of an md array.
-D, --detail: print detailed information about one or more md devices.
-E, --examine: print the contents of the md superblock on a device.
-F, --follow, --monitor: select Monitor mode.
-G, --grow: change the size or shape of an active array.
-h, --help: show help; placed after one of the options above, it shows help for that option.
--help-options
-V, --version
-v, --verbose: be more verbose.
-b, --brief: be less verbose; used with --detail and --examine.
-f, --force
-c, --config=: specify the configuration file; the default is /etc/mdadm/mdadm.conf.
-s, --scan: scan the configuration file or /proc/mdstat for missing information; the configuration file is /etc/mdadm/mdadm.conf.
Options for create or build:
-c, --chunk=: specify the chunk size in kibibytes; the default is 64.
--rounding=: specify the rounding factor for a linear array (== chunk size).
-l, --level=: set the RAID level.
For --create: linear, raid0, 0, stripe, raid1, 1, mirror, raid4, 4, raid5, 5, raid6, 6, multipath, mp.
For --build: linear, raid0, 0, stripe.
-p, --parity=: set the raid5 parity algorithm: left-asymmetric, left-symmetric, right-asymmetric, right-symmetric, la, ra, ls, rs; the default is left-symmetric.
--layout=: same as --parity.
-n, --raid-devices=: specify the number of active devices in the array; this number can only be changed afterwards with --grow.
-x, --spare-devices=: specify the number of spare devices in the initial array.
-z, --size=: the amount of space to use from each device when building a RAID 1/4/5/6 array.
--assume-clean: currently only meaningful with --build.
-R, --run: normally mdadm asks for confirmation when part of the array already appears to belong to another array or filesystem; this option skips that confirmation.
-f, --force: normally mdadm refuses to create an array from a single device, and when creating a raid5 array it initially treats one device as a missing drive; this option overrides those behaviors.
-a, --auto{=no,yes,md,mdp,part,p}{NN}:
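As a quick reference, one example per commonly used mode, using the device names from the experiment above as placeholders:

```shell
mdadm -C /dev/md0 -l 5 -n 3 -x 1 /dev/sda{5,6,7,8}   # Create
mdadm -A /dev/md0 /dev/sda5 /dev/sda7 /dev/sda8       # Assemble an existing array
mdadm /dev/md0 -a /dev/sda9                           # Manage: add a device
mdadm -D /dev/md0                                     # Misc: detailed status
mdadm -F -m root -d 300 /dev/md0                      # Monitor, mail alerts, 300 s polling
mdadm -G /dev/md0 -n 4                                # Grow to 4 active devices
```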