Beginner's notes: LVM logical volume management and software RAID arrays

Exercise 1: Create a volume group

Prepare three 10 GB free partitions and change their type ID to 8e (Linux LVM).

[root@localhost ~]# fdisk /dev/sdb

In interactive mode, create each partition with n → p → partition number → start position → end position (the partition size).

Then use t to change the type: enter the partition number and set the type to 8e.

Device Boot   Start    End     Blocks    Id  System

/dev/sdb1        1    1217    9775521   8e  Linux LVM

Press w to save and exit.
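For reference, the interactive dialogue looks roughly like this (a sketch only; the prompts and the +10G size notation vary with the fdisk version and are assumptions here):

[root@localhost ~]# fdisk /dev/sdb
Command (m for help): n              <- new partition
Command action: p                    <- primary partition
Partition number (1-4): 1
First cylinder: (press Enter to accept the default start)
Last cylinder or +size: +10G         <- partition size
Command (m for help): t              <- change partition type
Partition number (1-4): 1
Hex code (type L to list codes): 8e  <- Linux LVM
Command (m for help): w              <- write the table and exit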

Use two of these partitions to build a volume group named myvg, then view the volume group's information.

First, check which physical volumes exist:

[root@localhost ~]# pvscan

No matching physical volumes found

Convert the free partitions into physical volumes:

Example: [root@localhost ~]# pvcreate /dev/sdb1

Writing physical volume data to disk "/dev/sdb1"

Physical volume "/dev/sdb1" successfully created
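The second partition is converted the same way:

[root@localhost ~]# pvcreate /dev/sdb2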

Check the physical volumes again, and view the details of one of them:

[root@localhost ~]# pvscan

PV /dev/sdb1                     lvm2 [9.32 GB]

PV /dev/sdb2                     lvm2 [9.32 GB]

Total: 2 [18.65 GB] / in use: 0 [0   ] / in no VG: 2 [18.65 GB]

[root@localhost ~]# pvdisplay /dev/sdb1

"/dev/sdb1" is a new physical volume of "9.32 GB"

--- NEW Physical volume ---

PV Name               /dev/sdb1

VG Name

PV Size               9.32 GB

Allocatable           NO

PE Size (KByte)       0

Total PE              0

Free PE               0

Allocated PE          0

PV UUID               9QuHkE-pXKI-tlWM-vJdv-2qmt-Sd3A-p8Sbwq

 

First, check which volume groups exist:

[root@localhost ~]# vgdisplay

No volume groups found

Combine the two physical volumes into volume group myvg:

[root@localhost ~]# vgcreate myvg /dev/sdb1 /dev/sdb2

Volume group "myvg" successfully created

Check the volume groups again, and view the details of volume group myvg:

[root@localhost ~]# vgdisplay

--- Volume group ---

VG Name               myvg

System ID

Format                lvm2

Metadata Areas        2

Metadata Sequence No  1

VG Access             read/write

VG Status             resizable

MAX LV                0

Cur LV                0

Open LV               0

Max PV                0

Cur PV                2

Act PV                2

VG Size               18.64 GB

PE Size               4.00 MB

Total PE              4772

Alloc PE / Size       0 / 0

Free  PE / Size       4772 / 18.64 GB

VG UUID               oSPZlv-Gt6D-gTQA-Gmw6-OsRd-TRqD-gcfbr0

 

Exercise 2: Create, use, and extend a logical volume

Carve out a 16 GB logical volume named lvmox, then view the logical volume's information.

[root@localhost ~]# lvcreate -L 16G -n lvmox myvg

Logical volume "lvmox" created

 

[root@localhost ~]# lvdisplay

--- Logical volume ---

LV Name                /dev/myvg/lvmox

VG Name                myvg

LV UUID                r22EGe-Cvg5-D1Qf-Q6lt-s3SJ-XuL1-gIALQD

LV Write Access        read/write

LV Status              available

# open                 0

LV Size                16.00 GB

Current LE             4096

Segments               2

Allocation             inherit

Read ahead sectors     auto

- currently set to     256

Block device           253:0

Format this logical volume as an ext3 filesystem and mount it on the /mbox directory.

Format the logical volume:

[root@localhost ~]# mkfs.ext3 /dev/myvg/lvmox

Mount it:

[root@localhost ~]# mkdir /mbox

[root@localhost ~]# mount /dev/myvg/lvmox /mbox/

Check with the mount command:

/dev/mapper/myvg-lvmox on /mbox type ext3 (rw)

Enter the /mbox directory and test read and write operations.

Write: [root@localhost mbox]# ifconfig > 121.txt

[root@localhost mbox]# ls

121.txt  lost+found

Read: [root@localhost mbox]# cat 121.txt

eth0      Link encap:Ethernet  HWaddr 00:0C:29:19:BB:76

Extend the logical volume from 16 GB to 24 GB, making sure df reports the size accurately.

First extend the volume group (adding a 10 GB physical volume), then extend the logical volume:

[root@localhost mbox]# vgextend myvg /dev/sdb3

No physical volume label read from /dev/sdb3

Writing physical volume data to disk "/dev/sdb3"

Physical volume "/dev/sdb3" successfully created

Volume group "myvg" successfully extended

Extend the logical volume:

[root@localhost mbox]# lvextend -L +8G /dev/myvg/lvmox

Extending logical volume lvmox to 24.00 GB

Logical volume lvmox successfully resized

 

Run resize2fs so the filesystem recognizes its new size:

[root@localhost mbox]# resize2fs /dev/myvg/lvmox
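To confirm that df now reports the new size, check the mount point (a quick verification step; the output is not shown in the original notes):

[root@localhost mbox]# df -h /mbox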

Create a 250 MB logical volume named lvtest.

First change the PE size of the volume group to 1 MB:

[root@localhost mbox]# vgchange -s 1M myvg

Volume group "myvg" successfully changed

Check:

[root@localhost mbox]# vgdisplay

--- Volume group ---

VG Name               myvg

System ID

Format                lvm2

Metadata Areas        3

Metadata Sequence No  5

VG Access             read/write

VG Status             resizable

MAX LV                0

Cur LV                1

Open LV               1

Max PV                0

Cur PV                3

Act PV                3

VG Size               27.96 GB

PE Size               1.00 MB

Total PE              28632

Alloc PE / Size       24576 / 24.00 GB

Free  PE / Size       4056 / 3.96 GB

VG UUID               oSPZlv-Gt6D-gTQA-Gmw6-OsRd-TRqD-gcfbr0
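Why the PE size change matters: an LV must occupy a whole number of physical extents, so with the default 4 MB PE a 250 MB request would be rounded up to 252 MB; shrinking the PE to 1 MB lets the volume be exactly 250 MB. The notes omit the creation command itself; it would presumably be:

[root@localhost mbox]# lvcreate -L 250M -n lvtest myvg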

 

Exercise 3: Combined logical volume practice

Delete the volume group myvg created in the previous exercise.

Delete it only when nothing is in use or mounted:

[root@localhost ~]# vgremove myvg

Do you really want to remove volume group "myvg" containing 1 logical volumes? [y/n]: y

Do you really want to remove active logical volume lvmox? [y/n]: y

Logical volume "lvmox" successfully removed

Volume group "myvg" successfully removed
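Removing the volume group does not remove the underlying physical volumes; running pvscan again should now show all three PVs as unassigned:

[root@localhost ~]# pvscan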

Use two of the physical volumes to form volume group vgnsd, and the remaining physical volume to form volume group vgdata.

[root@localhost ~]# vgcreate vgnsd /dev/sdb1 /dev/sdb2

Volume group "vgnsd" successfully created

[root@localhost ~]# vgcreate vgdata /dev/sdb3

Volume group "vgdata" successfully created

Create a logical volume lvhome from volume group vgnsd. (The exercise asks for 20 GB, but vgnsd only holds about 18.6 GB, so 16 GB is created instead.)

[root@localhost ~]# lvcreate -L 16G -n lvhome vgnsd

Logical volume "lvhome" created

Create a 4 GB logical volume lvswap from volume group vgdata:

[root@localhost ~]# lvcreate -L 4G -n lvswap vgdata

Logical volume "lvswap" created

Migrate the /home directory onto the logical volume lvhome.

[root@localhost ~]# mkfs.ext3 /dev/vgnsd/lvhome

Temporarily move the existing data aside:

[root@localhost ~]# mkdir /1

[root@localhost ~]# mv /home/* /1

Mount the new volume on /home and check with mount:

[root@localhost ~]# mount /dev/vgnsd/lvhome /home

/dev/mapper/vgnsd-lvhome on /home type ext3 (rw)
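The notes stop after mounting; to complete the migration, the files parked in /1 presumably still need to be moved back:

[root@localhost ~]# mv /1/* /home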

Extend the swap space with the logical volume lvswap.

Format the logical volume lvswap:

[root@localhost ~]# mkswap /dev/vgdata/lvswap

Setting up swapspace version 1, size = 4294963 kB

Activate it and check:

[root@localhost ~]# swapon /dev/vgdata/lvswap

[root@localhost ~]# swapon -s

Filename                     Type        Size      Used   Priority
/dev/sda3                    partition   200804    0      -1
/dev/mapper/vgdata-lvswap    partition   4194296   0      -2

Configure the mounts from the two previous steps (the /home volume and the swap volume) to be activated automatically at boot, then reboot to verify.

Add the entries by editing /etc/fstab:

[root@localhost ~]# vim /etc/fstab
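A minimal sketch of the two entries (the exact field values are assumptions, not taken from the original):

/dev/vgnsd/lvhome    /home    ext3    defaults    0 0
/dev/vgdata/lvswap   swap     swap    defaults    0 0

mount -a and swapon -a can be used to exercise the entries before rebooting.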

Exercise 4: Create software RAID arrays

Add four empty disks, each 20 GB in size.

Partition the first and second disks as a single primary partition each.

Change the type ID of those partitions to fd:

Device Boot   Start     End      Blocks       Id  System

/dev/sdb1         1    2610    20964793+    fd  Linux raid autodetect

2) Array creation practice

a) Create RAID0 device /dev/md0 and RAID1 device /dev/md1

[root@localhost ~]# mdadm -C /dev/md0 -l0 -n2 /dev/sdb1 /dev/sdc1

mdadm: array /dev/md0 started.

[root@localhost ~]# mdadm -C /dev/md1 -l1 -n2 /dev/sdd /dev/sde

mdadm: array /dev/md1 started.

b) Check the capacity and the number of member disks of the two arrays (-Q, -D)

[root@localhost ~]# mdadm -D /dev/md0

/dev/md0:

Version : 0.90

Creation Time : Wed Jun  4 19:04:41 2014

Raid Level : raid0

Array Size : 41929344 (39.99 GiB 42.94 GB)

Raid Devices : 2

Total Devices : 2

Preferred Minor : 0

Persistence : Superblock is persistent

 

Update Time : Wed Jun  4 19:04:41 2014

State : clean

Active Devices : 2

Working Devices : 2

Failed Devices : 0

Spare Devices : 0

 

Chunk Size : 64K

 

UUID : 923d3722:10437de4:f871f97a:b358ef7b

Events : 0.1

 

Number   Major  Minor   RaidDevice State

0       8      17        0      active sync   /dev/sdb1

1       8      33        1      active sync   /dev/sdc1

 

[root@localhost ~]# mdadm -D /dev/md1

/dev/md1:

Version : 0.90

Creation Time : Wed Jun  4 19:05:15 2014

Raid Level : raid1

Array Size : 20971456 (20.00 GiB 21.47 GB)

Used Dev Size : 20971456 (20.00 GiB 21.47 GB)

Raid Devices : 2

Total Devices : 2

Preferred Minor : 1

Persistence : Superblock is persistent

 

Update Time : Wed Jun  4 19:06:59 2014

State : clean

Active Devices : 2

Working Devices : 2

Failed Devices : 0

Spare Devices : 0

 

UUID : 1a6e3772:e4b55604:dbe09f01:b78a3faa

Events : 0.4

 

Number   Major  Minor   RaidDevice State

0       8      48        0     active sync   /dev/sdd

1       8      64        1      active sync   /dev/sde

Stop and delete the array devices /dev/md0 and /dev/md1 (-S):

[root@localhost ~]# mdadm -S /dev/md0

mdadm: stopped /dev/md0

[root@localhost ~]# rm -rf /dev/md0
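The exercise also calls for dismantling /dev/md1, which would be done the same way:

[root@localhost ~]# mdadm -S /dev/md1
[root@localhost ~]# rm -rf /dev/md1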

Create a RAID5 software array device /dev/md0:

the first member is a partition;

the remaining three members are whole disks.

Use fdisk to check the partition tables of the first and second disks beforehand.

[root@localhost ~]# mdadm -C /dev/md0 -l5 -n4 /dev/sdb1 /dev/sd[c-e]

mdadm: /dev/sdb1 appears to be part of a raid array:

level=raid0 devices=2 ctime=Wed Jun  4 19:04:41 2014

mdadm: /dev/sdd appears to be part of a raid array:

level=raid1 devices=2 ctime=Wed Jun  4 19:05:15 2014

mdadm: /dev/sde appears to be part of a raid array:

level=raid1 devices=2 ctime=Wed Jun  4 19:05:15 2014

Continue creating array? y

mdadm: array /dev/md0 started.
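After creation, the RAID5 array builds its parity in the background; the progress can be watched before formatting (the same command is used in Exercise 6):

[root@localhost ~]# watch cat /proc/mdstat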

 

 

Exercise 5: Format and use the array

Format the RAID5 array /dev/md0 with an EXT3 filesystem:

[root@localhost ~]# mkfs.ext3 /dev/md0

Mount the array device /dev/md0 on the /mymd directory:

[root@localhost ~]# mkdir /mymd

[root@localhost ~]# mount /dev/md0 /mymd/

Check with mount:

/dev/md0 on /mymd type ext3 (rw)

Enter the /mymd directory and test reading and writing.

Write: [root@localhost mymd]# ls > 12.txt

[root@localhost mymd]# ls

12.txt  lost+found

Read: [root@localhost mymd]# cat 12.txt

12.txt

lost+found

 

 

Exercise 6: RAID5 array failure testing

1) In the VMware settings, pull the last member disk of array /dev/md0, then inspect the array:

[root@localhost mymd]# mdadm -D /dev/md0

/dev/md0:

Version : 0.90

Creation Time : Wed Jun  4 19:10:30 2014

Raid Level : raid5

Array Size : 62894016 (59.98 GiB 64.40 GB)

Used Dev Size : 20964672 (19.99 GiB 21.47 GB)

Raid Devices : 4

Total Devices : 4

Preferred Minor : 0

Persistence : Superblock is persistent

 

Update Time : Wed Jun  4 19:16:02 2014

State : clean, degraded

Active Devices : 3

Working Devices : 3

Failed Devices : 1

Spare Devices : 0

 

Layout : left-symmetric

Chunk Size : 64K

 

UUID : 8a0dd0eb:2fdf8913:00f9e8e9:972e8b80

Events : 0.14

 

Number   Major  Minor   RaidDevice State

0       8      17        0      active sync   /dev/sdb1

1       8      32        1      active sync   /dev/sdc

2       8      48        2      active sync   /dev/sdd

3       0       0        3      removed

 

4       8      64        -      faulty spare   /dev/sde

2) Access /mymd again and test reading and writing.

Reads and writes still work normally.

3) Replace the failed disk in the RAID5 array

Mark the failed member disk as faulty:

[root@localhost mymd]# mdadm /dev/md0 -f /dev/sde

mdadm: set /dev/sde faulty in /dev/md0

Remove the failed member disk:

[root@localhost mymd]# mdadm /dev/md0 -r /dev/sde

mdadm: hot removed /dev/sde

Add a good member disk back (the same size as the other members):

[root@localhost mymd]# mdadm /dev/md0 -a /dev/sde

mdadm: added /dev/sde

Watch the array status to follow the rebuild:

[root@localhost mymd]# watch cat /proc/mdstat

Every 2.0s: cat /proc/mdstat                 Wed Jun  4 19:22:10 2014

 

Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]

md0 : active raid5 sde[4] sdd[2] sdc[1] sdb1[0]

62894016 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]

[================>....]  recovery = 82.3% (17257344/20964672) finish=0.3min speed=196343K/sec

 

unused devices: <none>

 

 

Exercise 7: Save and reassemble the array

Query the configuration of the currently running array:

[root@localhost mymd]# mdadm -vDs

ARRAY /dev/md0 level=raid5 num-devices=4 metadata=0.90 UUID=8a0dd0eb:2fdf8913:00f9e8e9:972e8b80

devices=/dev/sdb1,/dev/sdc,/dev/sdd,/dev/sde

Save the running array configuration to /etc/mdadm.conf:

[root@localhost mymd]# mdadm -vDs > /etc/mdadm.conf

Stop and delete the array /dev/md0:

[root@localhost ~]# umount /dev/md0

[root@localhost ~]# mdadm -S /dev/md0

mdadm: stopped /dev/md0

[root@localhost ~]# rm -rf /dev/md0

 

Reassemble the array /dev/md0, then mount it to test:

[root@localhost ~]# mdadm -A /dev/md0

mdadm: /dev/md0 has been started with 4 drives.

[root@localhost ~]# mount /dev/md0 /mymd/

/dev/md0 on /mymd type ext3 (rw)
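mdadm -A /dev/md0 works here without listing members because /etc/mdadm.conf records them (the devices= line saved by -vDs). Without that file, the member devices would have to be named explicitly, for example:

[root@localhost ~]# mdadm -A /dev/md0 /dev/sdb1 /dev/sdc /dev/sdd /dev/sde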

 

 

 

