Configuring the cinder-volume Service to Use Ceph as Backend Storage

Run the following on the Ceph monitor

CINDER_PASSWD='cinder1234!'
controllerHost='controller'
RABBIT_PASSWD='0penstackRMQ'

1. Create the pool

Create a pool for the cinder-volume service (since I only have one OSD node, the replica count must be set to 1):
ceph osd pool create cinder-volumes 32
ceph osd pool set cinder-volumes size 1
ceph osd pool application enable  cinder-volumes rbd
ceph osd lspools
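
Optionally, you can read the settings back to confirm the replica count and PG count took effect:

# optional sanity check on the new pool
ceph osd pool get cinder-volumes size
ceph osd pool get cinder-volumes pg_num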

2. Check pool usage

ceph df

3. Create the account

ceph auth get-or-create client.cinder-volumes mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=cinder-volumes, allow rwx pool=glance-images' -o /etc/ceph/ceph.client.cinder-volumes.keyring
# verify
ceph auth ls | grep -EA3 'client.(cinder-volumes)'
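
You can also dump just this one entry, which shows the key and the full capability string in one place:

ceph auth get client.cinder-volumes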

4. Update the ceph.conf configuration file and sync it to all monitor nodes (do not skip this step)

su - cephd
cd ~/ceph-cluster/
cat <<EOF >> ceph.conf
[client.cinder-volumes]
keyring = /etc/ceph/ceph.client.cinder-volumes.keyring
EOF
ceph-deploy --overwrite-conf admin ceph-mon01
exit
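
As a quick sanity check (assuming the keyring ended up under /etc/ceph/ on the node), confirm the new account can actually authenticate against the cluster:

# should print the cluster status; a permission error means the caps or keyring path are wrong
ceph -s --name client.cinder-volumes --keyring /etc/ceph/ceph.client.cinder-volumes.keyring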

5. Install the cinder-volume component and the Ceph client (skip this step if the Ceph monitor runs on the controller node)

yum -y install openstack-cinder python-keystone ceph-common

6. Generate a UUID with uuidgen (the same UUID must be used in both cinder and libvirt)

uuidgen
Running uuidgen produces a UUID such as:

086037e4-ad59-4c61-82c9-86edc31b0bc0
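
The same value is reused below in rbd_secret_uuid and in the libvirt secret, so one way to avoid copy-paste slips is to keep it in a shell variable (CEPH_SECRET_UUID is just an illustrative name, not referenced elsewhere in this article):

CEPH_SECRET_UUID=$(uuidgen)
echo ${CEPH_SECRET_UUID}    # record this value; the compute nodes must use the same UUID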

7. Configure the cinder-volume service to talk to the cinder-api service

openstack-config --set  /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:${RABBIT_PASSWD}@${controllerHost}:5672
openstack-config --set /etc/cinder/cinder.conf cache backend  oslo_cache.memcache_pool
openstack-config --set /etc/cinder/cinder.conf cache enabled  true
openstack-config --set /etc/cinder/cinder.conf cache memcache_servers  ${controllerHost}:11211
openstack-config --set  /etc/cinder/cinder.conf DEFAULT auth_strategy  keystone
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken  auth_uri  http://${controllerHost}:5000
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken  auth_url  http://${controllerHost}:5000
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken  auth_type password
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken  project_domain_id  default
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken  user_domain_id  default
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken  project_name  service
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken  username  cinder
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken  password  ${CINDER_PASSWD}
openstack-config --set  /etc/cinder/cinder.conf oslo_concurrency lock_path  /var/lib/cinder/tmp
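
openstack-config also supports --get, so you can read individual keys back to confirm they landed in the right section, for example:

openstack-config --get /etc/cinder/cinder.conf DEFAULT transport_url
openstack-config --get /etc/cinder/cinder.conf keystone_authtoken username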

8. Set Ceph as the backend storage for the cinder-volume service

openstack-config --set /etc/cinder/cinder.conf  DEFAULT  enabled_backends  ceph

9. Configure the cinder-volume Ceph driver

openstack-config --set /etc/cinder/cinder.conf  ceph volume_driver  cinder.volume.drivers.rbd.RBDDriver
openstack-config --set /etc/cinder/cinder.conf  ceph rbd_pool  cinder-volumes
openstack-config --set /etc/cinder/cinder.conf  ceph rbd_user cinder-volumes
openstack-config --set /etc/cinder/cinder.conf  ceph rbd_ceph_conf  /etc/ceph/ceph.conf
openstack-config --set /etc/cinder/cinder.conf  ceph rbd_flatten_volume_from_snapshot  false
openstack-config --set /etc/cinder/cinder.conf  ceph rbd_max_clone_depth  5
openstack-config --set /etc/cinder/cinder.conf  ceph rbd_store_chunk_size  4
openstack-config --set /etc/cinder/cinder.conf  ceph rados_connect_timeout  -1
openstack-config --set /etc/cinder/cinder.conf  ceph glance_api_version 2
openstack-config --set /etc/cinder/cinder.conf  ceph rbd_secret_uuid  086037e4-ad59-4c61-82c9-86edc31b0bc0
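
After these commands the [ceph] section of /etc/cinder/cinder.conf should look roughly like this:

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = cinder-volumes
rbd_user = cinder-volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_secret_uuid = 086037e4-ad59-4c61-82c9-86edc31b0bc0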

10. Start the cinder-volume service

systemctl enable openstack-cinder-volume.service
systemctl start openstack-cinder-volume.service
systemctl status openstack-cinder-volume.service
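
If the service stays in a failed state, /var/log/cinder/volume.log is the first place to look; from a shell with admin credentials loaded you can also confirm the ceph backend registered itself:

tail -n 50 /var/log/cinder/volume.log
openstack volume service list    # cinder-volume on host <hostname>@ceph should show State "up"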

Run the following on every compute node that needs to attach Ceph volumes

1. Create the secret file (the UUID must match the one used in the cinder service)

cat << EOF > ~/secret.xml
<secret ephemeral='no' private='no'>
     <uuid>086037e4-ad59-4c61-82c9-86edc31b0bc0</uuid>
     <usage type='ceph'>
         <name>client.cinder-volumes secret</name>
     </usage>
</secret>
EOF

2. Fetch the key of the cinder-volumes account from the Ceph monitor

ceph auth get-key client.cinder-volumes
The output looks like this:
AQCxfDFdgp2qKRAAUY/vep29N39Qv7xWKYqMUw==
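
Optionally, save the key to a file so the next steps do not depend on copy-paste (~/client.cinder-volumes.key is an arbitrary file name used only here):

ceph auth get-key client.cinder-volumes > ~/client.cinder-volumes.key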

3. Register the UUID with libvirt

virsh secret-define --file ~/secret.xml

4. Associate the UUID with the cinder-volumes key in libvirt

virsh secret-set-value --secret 086037e4-ad59-4c61-82c9-86edc31b0bc0 --base64 AQCxfDFdgp2qKRAAUY/vep29N39Qv7xWKYqMUw==
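
If you saved the key to a file in step 2, the same command can read it from there instead of pasting the base64 string (a minimal variation assuming ~/client.cinder-volumes.key from above):

virsh secret-set-value --secret 086037e4-ad59-4c61-82c9-86edc31b0bc0 --base64 $(cat ~/client.cinder-volumes.key)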

5. List the UUIDs registered in libvirt

virsh secret-list

6. Restart libvirt

systemctl restart libvirtd.service
systemctl status libvirtd.service
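
As a rough end-to-end check (this assumes admin credentials are loaded on the controller, and that rbd is run where the cluster keyring is available), create a small test volume and confirm its backing image appears in the pool:

openstack volume create --size 1 test-ceph-volume
# on the Ceph monitor: a volume-<uuid> image should show up in the pool
rbd ls cinder-volumes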

Rollback steps if something goes wrong

1. Delete the pool

Pool deletion must first be enabled on all monitor nodes; only then can the pool be removed.
When deleting a pool, Ceph requires the pool name to be entered twice, together with the --yes-i-really-really-mean-it option.
echo '
[mon]
mon_allow_pool_delete = true
' >> /etc/ceph/ceph.conf
systemctl restart ceph-mon.target
ceph osd pool delete cinder-volumes cinder-volumes  --yes-i-really-really-mean-it
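
Confirm the pool is gone:

ceph osd lspools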

2. Delete the account

ceph auth del client.cinder-volumes

3. Remove the UUID and the cinder-volumes key registered in libvirt

View:
virsh secret-list
Delete (secret-undefine takes the UUID value):
virsh secret-undefine  086037e4-ad59-4c61-82c9-86edc31b0bc0
