Growing a PV with pvresize after an LVM disk was extended, or after a disk larger than 2 TB was partitioned with fdisk

1. Background:

The project has a 4 TB disk, but it was partitioned with fdisk as part of a batch operation across multiple servers, so the partition is only 2 TB; it has already been set up as an LVM volume mounted at /data.

As shown below, vdb is 4 TB, but the partition vdb1 is only 2 TB:

[root@backup data]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda           253:0    0   50G  0 disk 
└─vda1        253:1    0   50G  0 part /
vdb           253:16   0    4T  0 disk 
└─vdb1        253:17   0    2T  0 part 
  └─ucap-ucap 252:0    0    2T  0 lvm  /data
loop0           7:0    0  9.6G  0 loop /CDROM
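The 2 TB ceiling is inherent to fdisk's default MBR partition table: MBR records partition start and length as 32-bit sector counts, so with 512-byte sectors nothing beyond 2 TiB is addressable. A quick sanity check of that limit:

```shell
# MBR stores partition offsets/lengths as 32-bit sector counts.
# With 512-byte sectors, the largest addressable size is:
echo $(( 2 ** 32 * 512 ))             # bytes -> 2199023255552
echo $(( 2 ** 32 * 512 / 2 ** 40 ))   # TiB   -> 2
```

This is why the disk has to be relabeled GPT before the partition can grow past 2 TB.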

2. Repartitioning:

In the session below you can see that the original partition table is msdos, i.e. MBR; mklabel replaces it with a gpt label.

After the partition is created, set 1 lvm on sets the lvm flag on partition 1.

[root@backup data]# parted /dev/vdb
GNU Parted 3.1
Using /dev/vdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p                                                                
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 4398GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  2199GB  2199GB  primary               lvm

(parted) mklabel                                                          
New disk label type? gpt                                                    
Warning: The existing disk label on /dev/vdb will be destroyed and all data on this disk will be lost. Do you want to
continue?
Yes/No? yes                                                         
Error: Partition(s) 1 on /dev/vdb have been written, but we have been unable to inform the kernel of the change,
probably because it/they are in use.  As a result, the old partition(s) will remain in use.  You should reboot now
before making further changes.                                                
Ignore/Cancel? Ignore                                                                                                            
(parted) p                                                                
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 4398GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start  End  Size  File system  Name  Flags

(parted) mkpart primary 1 100%                                                                                                         
(parted) p                                                                
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 4398GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  4398GB  4398GB               primary
                                                              
(parted) set 1 lvm on                                                     
(parted) p                                                                
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 4398GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  4398GB  4398GB               primary  lvm
                                                                 
(parted) quit                                                             
Information: You may need to update /etc/fstab.
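Since the original problem came from batch-provisioning many servers, it is worth noting that the same interactive session can be scripted with parted's -s (script) mode. The sketch below replays the identical steps against a throwaway image file rather than a real disk, so it is safe to try (disk.img is just a scratch file, not part of the setup above):

```shell
# Replay the mklabel/mkpart/set steps non-interactively against a
# scratch image file (parted operates on plain files as well as disks).
truncate -s 100M disk.img                      # sparse scratch "disk"
parted -s disk.img mklabel gpt                 # replace the label with GPT
parted -s disk.img mkpart primary 1MiB 100%    # one partition, whole disk
parted -s disk.img set 1 lvm on                # set the lvm flag, as above
parted -s disk.img print                       # shows Partition Table: gpt
rm -f disk.img
```

On a real device you would substitute /dev/vdb for disk.img and, as in the session above, expect a warning that the existing label will be destroyed.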

3. Growing the PV with pvresize

After repartitioning, the PV still shows 2 TB:

[root@backup data]# pvs
  PV         VG   Fmt  Attr PSize  PFree
  /dev/vdb1  ucap lvm2 a--  <2.00t    0 

Grow it with pvresize:

[root@backup /]# pvresize -v /dev/vdb1
    Archiving volume group "ucap" metadata (seqno 3).
    Resizing volume "/dev/vdb1" to 8589930496 sectors.
    Resizing physical volume /dev/vdb1 from 524287 to 1048575 extents.
    Updating physical volume "/dev/vdb1"
    Creating volume group backup "/etc/lvm/backup/ucap" (seqno 4).
  Physical volume "/dev/vdb1" changed
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized
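The extent counts in the pvresize output line up with the sizes pvs reports, assuming the LVM default extent size of 4 MiB (not shown in the log, but consistent with the numbers). 1048575 extents fall 4 MiB short of a full 4 TiB, which is why pvs prints "<4.00t":

```shell
# PV sizes in 4 MiB extents (the LVM default extent size):
echo $(( 524287 * 4 ))        # MiB before the resize -> 2097148 (just under 2 TiB)
echo $(( 1048575 * 4 ))       # MiB after the resize  -> 4194300 (just under 4 TiB)
echo $(( 4 * 1024 * 1024 ))   # 4 TiB in MiB, for comparison -> 4194304
```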

[root@backup /]# pvs
  PV         VG   Fmt  Attr PSize  PFree
  /dev/vdb1  ucap lvm2 a--  <4.00t 2.00t

[root@backup /]# vgs
  VG   #PV #LV #SN Attr   VSize  VFree
  ucap   1   1   0 wz--n- <4.00t 2.00t

[root@backup /]# lvs
  LV   VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  ucap ucap -wi-ao---- <2.00t

After the resize, both the PV and the VG are 4 TB, with 2 TB still unallocated.

4. Growing the LV

[root@backup /]# lvextend -l +524287 /dev/ucap/ucap
  Size of logical volume ucap/ucap changed from 2.00 TiB (524288 extents) to <4.00 TiB (1048575 extents).
  Logical volume ucap/ucap successfully resized.
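The "+524287" is simply the number of free extents left in the VG: 1048575 extents in total, minus the 524288 the LV already occupies:

```shell
# Free extents available to lvextend = VG total - already allocated:
echo $(( 1048575 - 524288 ))   # -> 524287
```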

[root@backup /]# lvs
  LV   VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  ucap ucap -wi-ao---- <4.00t

The same extension can also be done without counting extents: lvextend -l +100%FREE /dev/ucap/ucap

5. Growing the filesystem

[root@backup /]# df -hT
Filesystem            Type      Size  Used Avail Use% Mounted on
/dev/vda1             ext4       50G   13G   35G   27% /
devtmpfs              devtmpfs  3.9G     0  3.9G    0% /dev
tmpfs                 tmpfs     3.9G  320K  3.9G    1% /dev/shm
tmpfs                 tmpfs     3.9G  872K  3.9G    1% /run
tmpfs                 tmpfs     3.9G     0  3.9G    0% /sys/fs/cgroup
tmpfs                 tmpfs     783M     0  783M    0% /run/user/0
/dev/mapper/ucap-ucap xfs       2.0T   47G  2.0T    3% /data
/dev/loop0            iso9660   9.6G  9.6G     0  100% /CDROM
overlay               overlay   2.0T   47G  2.0T    3% /data/docker/overlay2/d9ef78da5a0f54291a51bb9067659c04e419da1ccdd7bf9e9b590432e2d78b10/merged
shm                   tmpfs      64M     0   64M    0% /data/docker/containers/df076fd9052890154c54f9d2525b361faafce7e02407d4e125346edce6c9fa20/mounts/shm

[root@backup /]# xfs_growfs /dev/ucap/ucap
meta-data=/dev/mapper/ucap-ucap  isize=512    agcount=4, agsize=134217472 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=536869888, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=262143, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 536869888 to 1073740800
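xfs_growfs reports sizes in 4096-byte filesystem blocks. Converting both block counts back to bytes shows the filesystem went from just under 2 TiB to just under 4 TiB, each exactly 4 MiB short, matching the "<" prefix lvs prints:

```shell
# Convert the xfs_growfs block counts (4096-byte blocks) to bytes:
echo $(( 536869888 * 4096 ))    # -> 2199019061248 (2 TiB minus 4 MiB)
echo $(( 1073740800 * 4096 ))   # -> 4398042316800 (4 TiB minus 4 MiB)
```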

[root@backup /]# df -hT
Filesystem            Type      Size  Used Avail Use% Mounted on
/dev/vda1             ext4       50G   13G   35G   27% /
devtmpfs              devtmpfs  3.9G     0  3.9G    0% /dev
tmpfs                 tmpfs     3.9G  320K  3.9G    1% /dev/shm
tmpfs                 tmpfs     3.9G  840K  3.9G    1% /run
tmpfs                 tmpfs     3.9G     0  3.9G    0% /sys/fs/cgroup
tmpfs                 tmpfs     783M     0  783M    0% /run/user/0
/dev/mapper/ucap-ucap xfs       4.0T   47G  4.0T    2% /data
/dev/loop0            iso9660   9.6G  9.6G     0  100% /CDROM
overlay               overlay   4.0T   47G  4.0T    2% /data/docker/overlay2/d9ef78da5a0f54291a51bb9067659c04e419da1ccdd7bf9e9b590432e2d78b10/merged
shm                   tmpfs      64M     0   64M    0% /data/docker/containers/df076fd9052890154c54f9d2525b361faafce7e02407d4e125346edce6c9fa20/mounts/shm

Check the filesystem type of the existing mount point first: use xfs_growfs for xfs filesystems, and resize2fs for ext* filesystems.
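A small sketch of that check (detection only; it prints which tool applies instead of running it, and uses the root filesystem here purely as an example mount point):

```shell
# Detect the filesystem type of a mount point and report the matching
# grow tool: xfs_growfs takes the mount point, resize2fs takes the device.
mountpoint=/   # example; in the setup above this would be /data
fstype=$(df --output=fstype "$mountpoint" | tail -n 1)
case "$fstype" in
  xfs)            echo "grow with: xfs_growfs $mountpoint" ;;
  ext2|ext3|ext4) echo "grow with: resize2fs <device>" ;;
  *)              echo "no rule for filesystem type: $fstype" ;;
esac
```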

 
