First, add a few virtual disks under VMware. After booting the system:
fdisk -l          # list all disks
Create a RAID partition on each new disk:
fdisk /dev/sdb
1. n  add a new partition
2. p  choose primary
3. t  change the partition's system type
4. fd  set the type to Linux raid autodetect
5. w  save
Do the same for /dev/sdc.
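Typing the same five answers into fdisk for every disk gets tedious, so the dialog can be scripted. This sketch is not from the original: it only prints the keystroke sequence so you can inspect it before piping it into fdisk yourself (e.g. `printf '%s\n' n p 1 '' '' t fd w | fdisk /dev/sdb`).

```shell
# One answer per line, matching the steps above (assumption: the default
# first/last cylinder are accepted, i.e. the whole disk is used):
# n=new, p=primary, 1=partition number, two empty lines=default cylinders,
# t=change type, fd=Linux raid autodetect, w=write and quit.
printf '%s\n' n p 1 '' '' t fd w
```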
With the RAID partitions done, next create the RAID:
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
Create mdadm's config file:
cp /usr/share/doc/mdadm-2.5.4/mdadm.conf-example /etc/mdadm.conf
Then append to it:
echo DEVICE /dev/sd[bc] >>/etc/mdadm.conf
mdadm -Ds >> /etc/mdadm.conf
That completes this step.
Next, manage the array with LVM.
Create a PV:
pvcreate /dev/md1
Check it: pvdisplay
Create a VG:
vgcreate VgOnRaid /dev/md1 /dev/md2
Here /dev/md1 is the 2-disk RAID0 made above; /dev/md2 stands in for any other block device you might also add to the VG.
Check it: vgdisplay
Create an LV:
lvcreate -l xxxx VgOnRaid -n LvOnRaid
Check it: lvdisplay
mount -t ext3 /dev
Add the entry to fstab.
-----------------------------------------------------------------
[root@TSM54-Test ~]# fdisk -l
Disk /dev/sda: 15.0 GB, 15032385536 bytes
255 heads, 63 sectors/track, 1827 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 1827 14570955 8e Linux LVM
Disk /dev/sdb: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
Disk /dev/sdc: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
Disk /dev/sdd: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
Disk /dev/sde: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
I. Creating the soft RAID
1. Create RAID partitions
[root@TSM54-Test ~]# fdisk /dev/sdb
The number of cylinders for this disk is set to 1044.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1044, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1044, default 1044):
Using default value 1044
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): L
0 Empty 1e Hidden W95 FAT1 80 Old Minix be Solaris boot
1 FAT12 24 NEC DOS 81 Minix / old Lin bf Solaris
2 XENIX root 39 Plan 9 82 Linux swap / So c1 DRDOS/sec (FAT-
3 XENIX usr 3c PartitionMagic 83 Linux c4 DRDOS/sec (FAT-
4 FAT16 <32M 40 Venix 80286 84 OS/2 hidden C: c6 DRDOS/sec (FAT-
5 Extended 41 PPC PReP Boot 85 Linux extended c7 Syrinx
6 FAT16 42 SFS 86 NTFS volume set da Non-FS data
7 HPFS/NTFS 4d QNX4.x 87 NTFS volume set db CP/M / CTOS / .
8 AIX 4e QNX4.x 2nd part 88 Linux plaintext de Dell Utility
9 AIX bootable 4f QNX4.x 3rd part 8e Linux LVM df BootIt
a OS/2 Boot Manag 50 OnTrack DM 93 Amoeba e1 DOS access
b W95 FAT32 51 OnTrack DM6 Aux 94 Amoeba BBT e3 DOS R/O
c W95 FAT32 (LBA) 52 CP/M 9f BSD/OS e4 SpeedStor
e W95 FAT16 (LBA) 53 OnTrack DM6 Aux a0 IBM Thinkpad hi eb BeOS fs
f W95 Ext'd (LBA) 54 OnTrackDM6 a5 FreeBSD ee EFI GPT
10 OPUS 55 EZ-Drive a6 OpenBSD ef EFI (FAT-12/16/
11 Hidden FAT12 56 Golden Bow a7 NeXTSTEP f0 Linux/PA-RISC b
12 Compaq diagnost 5c Priam Edisk a8 Darwin UFS f1 SpeedStor
14 Hidden FAT16 <3 61 SpeedStor a9 NetBSD f4 SpeedStor
16 Hidden FAT16 63 GNU HURD or Sys ab Darwin boot f2 DOS secondary
17 Hidden HPFS/NTF 64 Novell Netware b7 BSDI fs fd Linux raid auto
18 AST SmartSleep 65 Novell Netware b8 BSDI swap fe LANstep
1b Hidden W95 FAT3 70 DiskSecure Mult bb Boot Wizard hid ff BBT
1c Hidden W95 FAT3 75 PC/IX
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): p
Disk /dev/sdb: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 1044 8385898+ fd Linux raid autodetect
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Create RAID partitions on /dev/sdc and /dev/sdd the same way. When done, check with fdisk -l:
[root@TSM54-Test ~]# fdisk -l
Disk /dev/sda: 15.0 GB, 15032385536 bytes
255 heads, 63 sectors/track, 1827 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 1827 14570955 8e Linux LVM
Disk /dev/sdb: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 1044 8385898+ fd Linux raid autodetect
Disk /dev/sdc: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdc1 1 1044 8385898+ fd Linux raid autodetect
Disk /dev/sdd: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdd1 1 1044 8385898+ fd Linux raid autodetect
Disk /dev/sde: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
2. Create the array
mdadm supports the LINEAR, RAID0 (striping), RAID1 (mirroring), RAID4, RAID5, RAID6, and MULTIPATH array modes.
Command format:
mdadm --create device --chunk=X --level=Y --raid-devices=Z devices
[root@TSM54-Test ~]# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
mdadm: array /dev/md0 started.
--level sets the array's RAID level; --raid-devices sets the number of disks taking part in the array.
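The level chosen with --level also determines the usable capacity. A quick sketch of the arithmetic (the three 8 GB disks match the example above; the formulas assume equal-sized members):

```shell
# Usable capacity for n equal disks (integer GB for simplicity):
# RAID0 = n*size, RAID1 = size, RAID5 = (n-1)*size.
n=3; size=8   # three 8 GB disks, as in the fdisk -l listing above
echo "raid0: $(( n * size )) GB"
echo "raid1: $(( size )) GB"
echo "raid5: $(( (n - 1) * size )) GB"
```

For RAID5 this gives 16 GB, which matches the "Array Size : 16.00 GiB" reported by mdadm -D later in this document.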
3. Configuration file
[root@TSM54-Test ~]#cp /usr/share/doc/mdadm-2.5.4/mdadm.conf-example /etc/mdadm.conf
[root@TSM54-Test ~]#echo DEVICE /dev/sd[bcd] >>/etc/mdadm.conf
[root@TSM54-Test ~]#mdadm -Ds >>/etc/mdadm.conf
4. Format the RAID
From here on, /dev/md0 can be used like any single disk device:
[root@TSM54-Test ~]# mkfs.ext3 /dev/md0
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
2097152 inodes, 4194272 blocks
209713 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
128 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@TSM54-Test ~]# mkdir /mnt/software
[root@TSM54-Test ~]# mount /dev/md0 /mnt/software
5. Mount automatically at boot
Edit /etc/fstab and add the line:
/dev/md0 /mnt/software ext3 defaults 0 0
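A malformed fstab line can stall the boot, so a quick sanity check is worthwhile. This sketch just counts the fields of the entry added above (an fstab record has six: device, mountpoint, fstype, options, dump, fsck-pass):

```shell
# Count the whitespace-separated fields of the new fstab entry.
line='/dev/md0 /mnt/software ext3 defaults 0 0'
echo "$line" | awk '{ print NF }'   # a well-formed entry prints 6
```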
II. Other operations
mdadm has 7 modes. The command format of each is listed below; for the full options, see the man page.
ASSEMBLE MODE: mdadm --assemble md-device options-and-component-devices
               mdadm --assemble --scan md-devices-and-options
               mdadm --assemble --scan options
BUILD MODE: mdadm --build device --chunk=X --level=Y --raid-devices=Z devices
CREATE MODE: mdadm --create device --chunk=X --level=Y --raid-devices=Z devices
MANAGE MODE: mdadm device options devices
MISC MODE: mdadm options ... devices ...
MONITOR MODE: mdadm --monitor options... devices...
GROW MODE:
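The original leaves GROW mode without an example. A common use (our assumption, not from the source) is reshaping a RAID5 onto a newly added disk with --grow --raid-devices. The sketch only prints the commands for review rather than running them:

```shell
# Grow the 3-disk RAID5 onto a 4th member (printed, not executed;
# the device names are taken from the example arrays in this document).
echo "mdadm /dev/md0 --add /dev/sde1"
echo "mdadm --grow /dev/md0 --raid-devices=4"
```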
1. View status
MISC mode
#mdadm --detail /dev/md0
#mdadm -D /dev/md0
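Besides mdadm -D, /proc/mdstat gives a one-glance view of every array. A sketch of pulling md0's status out of it; the sample content is embedded below as an assumption so the snippet runs without a live array (on a real system, read /proc/mdstat directly):

```shell
# Embedded sample of /proc/mdstat for the RAID5 built above
# (not live output; read the real /proc/mdstat on an actual system).
cat <<'EOF' > /tmp/mdstat.sample
Personalities : [raid5]
md0 : active raid5 sdd[2] sdc[1] sdb[0]
      16777088 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
EOF
# Show the md0 line and the line after it ([UUU] = all members up).
grep -A1 '^md0' /tmp/mdstat.sample
```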
2. Stop
MISC mode
#mdadm -S /dev/md0
3. Start
ASSEMBLE mode
#mdadm -A /dev/md0 /dev/sd[bcd]
This starts the specified RAID; think of it as re-assembling an existing array into the system.
If /etc/mdadm.conf was configured earlier, you can instead use:
#mdadm -As /dev/md0
4. Add and remove disks
In Manage mode, mdadm can add disks to and remove disks from a running array. This is commonly used to mark a failed disk, add a spare disk, or replace a disk.
[root@TSM54-Test ~]# mdadm /dev/md0 --fail /dev/sdd --remove /dev/sdd
mdadm: set /dev/sdd faulty in /dev/md0
mdadm: hot removed /dev/sdd
[root@TSM54-Test ~]# mdadm -D /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Fri Aug 1 21:35:31 2008
Raid Level : raid5
Array Size : 16777088 (16.00 GiB 17.18 GB)
Device Size : 8388544 (8.00 GiB 8.59 GB)
Raid Devices : 3
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Fri Aug 1 23:34:12 2008
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : 28a22990:eac5c231:3fe907f1:1145e264
Events : 0.6
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
2 0 0 2 removed
[root@TSM54-Test ~]# mdadm /dev/md0 --add /dev/sdd
mdadm: re-added /dev/sdd
[root@TSM54-Test ~]# mdadm -D /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Fri Aug 1 21:35:31 2008
Raid Level : raid5
Array Size : 16777088 (16.00 GiB 17.18 GB)
Device Size : 8388544 (8.00 GiB 8.59 GB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Fri Aug 1 23:34:12 2008
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
Rebuild Status : 0% complete
UUID : 28a22990:eac5c231:3fe907f1:1145e264
Events : 0.6
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
2 8 48 2 spare rebuilding /dev/sdd
--fail marks a disk as faulty; --remove removes it from the array.
Note that for some RAID levels, such as RAID0, --fail, --remove and --add cannot be used.
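After a --fail/--remove, the lines to watch in mdadm -D are "State" and "Rebuild Status". A sketch that filters them; the sample text is copied from the listing above and embedded here so the snippet is self-contained:

```shell
# On a live system: mdadm -D /dev/md0 | grep -E 'State :|Rebuild Status'
# Embedded sample lines from the degraded/recovering listing above:
cat <<'EOF' | grep -E 'State :|Rebuild Status'
          State : clean, degraded, recovering
 Rebuild Status : 0% complete
EOF
```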
5. Monitoring
MONITOR mode
# nohup mdadm --monitor --mail root --delay 200 /dev/md0 &
This checks the array every 200 seconds and sends mail to root when a RAID error occurs.
6. Add spare disks
Spare disks can be specified when the array is created:
#mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 -x1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
The -x (--spare-devices=) option sets the number of spare disks. For an array that is already full (for example a RAID1 that already has 2 disks), just use --add: mdadm will automatically treat the extra disk as a spare.
7. Delete the RAID
#mdadm -S /dev/md0
or
#rm /dev/md0
Delete the /etc/mdadm.conf file and remove the related lines from /etc/fstab.
Finally, repartition the disks with fdisk.
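One teardown step the list above omits: each member disk still carries an md superblock, and mdadm's --zero-superblock clears it so the disks are not auto-detected as RAID members later. The sketch prints the commands for review instead of running them (device names assumed from the example):

```shell
# Print the superblock-wipe command for each former member disk.
for d in /dev/sdb /dev/sdc /dev/sdd; do
  echo "mdadm --zero-superblock $d"
done
```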
Configuring software RAID on Linux (Part B)
III. LVM on top of RAID
1. /dev/sde has not been used yet; first turn it into a RAID1. (Note that both mirror halves will be partitions of the same disk, so this only demonstrates the commands; it gives no protection against a failure of sde itself.)
[root@TSM54-Test ~]# fdisk /dev/sde
The number of cylinders for this disk is set to 1044.
There is nothing wrong with that, but this is larger than 1024, and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): p
Disk /dev/sde: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1044, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1044, default 1044): +5000M
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (610-1044, default 610):
Using default value 610
Last cylinder or +size or +sizeM or +sizeK (610-1044, default 1044):
Using default value 1044
Command (m for help): p
Disk /dev/sde: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sde1 1 609 4891761 83 Linux
/dev/sde2 610 1044 3494137+ 83 Linux
Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): fd
Changed system type of partition 2 to fd (Linux raid autodetect)
Command (m for help): p
Disk /dev/sde: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sde1 1 609 4891761 fd Linux raid autodetect
/dev/sde2 610 1044 3494137+ fd Linux raid autodetect
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
After partitioning is complete, run:
[root@TSM54-Test dev]# cd /dev/
[root@TSM54-Test dev]# ls -l md0
brw-r----- 1 root disk 9, 0 Aug 1 21:58 md0
[root@TSM54-Test dev]# mknod md1 b 9 1
[root@TSM54-Test dev]# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sde1 /dev/sde2
mdadm: largest drive (/dev/sde1) exceed size (3494016K) by more than 1%
Continue creating array? y
mdadm: array /dev/md1 started.
The RAID1 is done as well; verify it:
[root@TSM54-Test dev]# mdadm -Ds
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=28a22990:eac5c231:3fe907f1:1145e264
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=57f24dd1:aed3606c:d467132e:a6b3a010
2. Build the LVM
First make sure /dev/md0 is unmounted: #umount /mnt/software
(1) Create the PVs
[root@TSM54-Test ~]# pvcreate /dev/md0
Physical volume "/dev/md0" successfully created
[root@TSM54-Test ~]# pvcreate /dev/md1
Physical volume "/dev/md1" successfully created
[root@TSM54-Test ~]# pvdisplay
--- Physical volume ---
PV Name /dev/sda2
VG Name VolGroup00
PV Size 13.90 GB / not usable 21.45 MB
Allocatable yes (but full)
PE Size (KByte) 32768
Total PE 444
Free PE 0
Allocated PE 444
PV UUID BntsgG-UJLv-agT2-lZ7C-dXY2-51FB-Jxd5tA
--- NEW Physical volume ---
PV Name /dev/md0
VG Name
PV Size 16.00 GB
Allocatable NO
PE Size (KByte) 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID GriJk2-wyfl-o0CI-NY7t-g75X-zIx3-FJHf1u
--- NEW Physical volume ---
PV Name /dev/md1
VG Name
PV Size 3.33 GB
Allocatable NO
PE Size (KByte) 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID SImCO1-RmvK-OgfZ-dCFZ-LJNC-8wun-Bd9qzS
(2) Create the VG
[root@TSM54-Test ~]# vgcreate LVMonRaid /dev/md0 /dev/md1
Volume group "LVMonRaid" successfully created
[root@TSM54-Test ~]# vgscan
Reading all physical volumes. This may take a while...
Found volume group "VolGroup00" using metadata type lvm2
Found volume group "LVMonRaid" using metadata type lvm2
This creates a volume group named LVMonRaid.
(3) Create the LVs
[root@TSM54-Test ~]# lvcreate --size 5000M --name LogicLV1 LVMonRaid
Logical volume "LogicLV1" created
[root@TSM54-Test ~]# lvcreate --size 5000M --name LogicLV2 LVMonRaid
Logical volume "LogicLV2" created
[root@TSM54-Test ~]# lvscan
ACTIVE '/dev/VolGroup00/LogVol00' [12.88 GB] inherit
ACTIVE '/dev/VolGroup00/LogVol01' [1.00 GB] inherit
ACTIVE '/dev/LVMonRaid/LogicLV1' [4.88 GB] inherit
ACTIVE '/dev/LVMonRaid/LogicLV2' [4.88 GB] inherit
Note: the first two entries above (VolGroup00) were created by default when the system was installed.
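Why does a --size 5000M request show up as 4.88 GB in lvscan? lvscan reports in GiB (1024 MiB), so the difference is just the unit. A one-line check:

```shell
# 5000 MiB expressed in GiB: 5000 / 1024
awk 'BEGIN { printf "%.2f\n", 5000 / 1024 }'
```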
(4) Create the filesystems, then mount and use them
[root@TSM54-Test ~]# mkfs.ext3 /dev/LVMonRaid/LogicLV1
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
640000 inodes, 1280000 blocks
64000 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1312817152
40 block groups
32768 blocks per group, 32768 fragments per group
16000 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@TSM54-Test ~]# mkfs.ext3 /dev/LVMonRaid/LogicLV2
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
640000 inodes, 1280000 blocks
64000 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1312817152
40 block groups
32768 blocks per group, 32768 fragments per group
16000 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 33 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@TSM54-Test ~]# mkdir /mnt/doc
[root@TSM54-Test ~]# mkdir /mnt/music
[root@TSM54-Test ~]# mount -t ext3 /dev/LVMonRaid/LogicLV1 /mnt/doc
[root@TSM54-Test ~]# mount -t ext3 /dev/LVMonRaid/LogicLV2 /mnt/music
To mount automatically at boot, edit /etc/fstab and add the following two lines:
/dev/LVMonRaid/LogicLV1 /mnt/doc ext3 defaults 0 0
/dev/LVMonRaid/LogicLV2 /mnt/music ext3 defaults 0 0