I. OpenStack Overview
1. OpenStack is an IaaS platform management solution.
2. OpenStack is an open-source project launched jointly by the hosting provider Rackspace and NASA. Its goal is to define an open-source software standard so that any company or individual can build its own cloud computing environment (IaaS), breaking the monopoly previously held by Amazon and a few other companies.
3. OpenStack is composed of a series of sub-projects:
Identity (Keystone)
Compute (Nova)
Image (Glance)
Block Storage (Cinder)
Network (Neutron)
Object Storage (Swift)
Dashboard (Horizon)
Metering (Ceilometer)
Orchestration (Heat)
II. OpenStack Deployment
- Manual deployment
- Fuel: an enterprise-grade automated deployment tool from Mirantis
- RDO: Red Hat's OpenStack deployment method
- DevStack: a tool for quickly setting up a development environment
- OpenShit: a quick OpenStack deployment tool for Ubuntu 14.04
(1) Download the source
$:git clone https://github.com/windworst/openshit.git
(2) Install & configure OpenStack
Edit the configuration file:
zc@linux-B7102T76V12HR-2T-N:~/openshit$ cat setting.conf
# This is OpenShit configure file
# All of settings in this file
# Update to Openstack component configure file
# node ip
SET_CONTROLLER_IP=127.0.0.1
SET_COMPUTE_IP=127.0.0.1
SET_INTERFACE_NAME=eth1
#vnc
SET_VNC_IP=$SET_CONTROLLER_IP
SET_VNC_CONNECT_IP=$SET_CONTROLLER_IP
# mysql configure
SET_MYSQL_IP=$SET_CONTROLLER_IP
SET_MYSQL_USER=root
SET_MYSQL_PASS=smartcore
SET_MYSQL_PORT=3306
# rabbit password
SET_RABBITMQ_IP=$SET_CONTROLLER_IP
SET_RABBITMQ_PASS=smartcore
# keystone service configure
SET_KEYSTONE_IP=$SET_COMPUTE_IP
SET_KEYSTONE_AUTH_URL=http://$SET_KEYSTONE_IP:35357/v2.0
SET_KEYSTONE_AUTH_URL_PUBLIC=http://$SET_KEYSTONE_IP:5000/v2.0
SET_OS_SERVICE_TOKEN=admin
SET_KEYSTONE_ADMIN_TENANT=admin
SET_KEYSTONE_ADMIN_ROLE=admin
SET_KEYSTONE_ADMIN=admin
SET_KEYSTONE_DBPASS=smartcore
SET_KEYSTONE_ADMIN_PASS=smartcore
# glance service configure
SET_GLANCE_IP=$SET_CONTROLLER_IP
SET_GLANCE_DBPASS=smartcore
SET_GLANCE_PASS=smartcore
# nova service configure
SET_NOVA_IP=$SET_CONTROLLER_IP
SET_NOVA_DBPASS=smartcore
SET_NOVA_PASS=smartcore
# dashboard service configure
SET_DASH_DBPASS=smartcore
# cinder service configure
SET_CINDER_IP=$SET_CONTROLLER_IP
SET_CINDER_DBPASS=smartcore
SET_CINDER_PASS=smartcore
# neutron service configure
SET_NEUTRON_IP=$SET_CONTROLLER_IP
SET_NEUTRON_DBPASS=smartcore
SET_NEUTRON_PASS=smartcore
SET_NEUTRON_METADATA_SECRET=smartcore
# heat service configure
#SET_HEAT_DBPASS=
#SET_HEAT_PASS=
# ceilometer service configure
#SET_CEILOMETER_DBPASS=
#SET_CEILOMETER_PASS=
# trove service configure
#SET_TROVE_DBPASS=
#SET_TROVE_PASS=
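Note that later values in setting.conf reference earlier ones via shell `$VAR` expansion, so the file only makes sense when sourced as shell rather than parsed as plain key=value pairs. A minimal sketch of that behavior (the temp path and the three variables are illustrative):

```shell
# Build a tiny config in the same style as setting.conf (illustrative values).
cat > /tmp/mini-setting.conf <<'EOF'
SET_CONTROLLER_IP=127.0.0.1
SET_KEYSTONE_IP=$SET_CONTROLLER_IP
SET_KEYSTONE_AUTH_URL=http://$SET_KEYSTONE_IP:35357/v2.0
EOF

# Sourcing the file expands the $VAR references in order.
. /tmp/mini-setting.conf
echo "$SET_KEYSTONE_AUTH_URL"
```

This is also why reordering lines in setting.conf can silently break derived values such as the auth URLs.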
Install:
$:./openshit.sh --all install && ./openshit.sh --all config
Load the environment variables:
$:source admin-env.sh
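admin-env.sh simply exports the standard OpenStack client variables; assuming the defaults from setting.conf above, its contents would look roughly like this sketch (the exact file shipped by OpenShit may differ):

```shell
# Hypothetical admin-env.sh matching the setting.conf defaults above.
export OS_TENANT_NAME=admin                       # SET_KEYSTONE_ADMIN_TENANT
export OS_USERNAME=admin                          # SET_KEYSTONE_ADMIN
export OS_PASSWORD=smartcore                      # SET_KEYSTONE_ADMIN_PASS
export OS_AUTH_URL=http://127.0.0.1:35357/v2.0    # SET_KEYSTONE_AUTH_URL
```

Once these are sourced into the shell, clients such as nova and glance pick them up automatically.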
(3) Clean & uninstall
$:./openshit.sh --all clean && ./openshit.sh --all uninstall
III. Basic OpenStack Usage
1. Managing OpenStack services
Check service status:
$:nova service-list
Manage all services at once:
$: ./openshit.sh --all stop
$: ./openshit.sh --all start
Manage a single service:
$:service nova-cert status
$:service nova-cert start
$:service nova-cert stop
2. Uploading an image (a cirros image in this example)
$:glance image-create --name=cirros --disk-format=qcow2 --container-format=ovf --is-public=true < /home/cirros-0.3.0-x86_64-disk.img
$:glance image-list
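Glance records an md5 checksum for every uploaded image, so a quick integrity check is to compare it against the local file. A sketch (the file path and contents are illustrative; the image-show call is commented out because it needs a running cloud):

```shell
# Checksum of the local image file (path and contents illustrative).
printf 'fake image data' > /tmp/disk.img
local_md5=$(md5sum /tmp/disk.img | awk '{print $1}')
echo "local md5: $local_md5"

# Compare with what Glance reports for the uploaded image:
#   glance image-show cirros | grep checksum
```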
3. Creating a VM instance from the command line
Create a network:
$:nova network-create vmnet --fixed-range-v4=10.0.0.0/24 --bridge-interface=br100
List networks:
$:nova-manage network list
List flavors:
$:nova flavor-list
Boot a VM:
$:nova boot --flavor 1 --image cirros vm01
List VMs:
$:nova list
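`nova boot` returns immediately while the instance is still building, so scripts usually poll until the status becomes ACTIVE. A sketch of such a helper (`wait_active` is a hypothetical name; the real nova invocation is shown commented out, and the demo exercises the helper with a stub command):

```shell
# Hypothetical helper: run a status command repeatedly until it prints ACTIVE.
wait_active() {
  _cmd=$1
  _i=0
  while [ "$_i" -lt 30 ]; do
    _status=$(eval "$_cmd")
    if [ "$_status" = "ACTIVE" ]; then
      echo ACTIVE
      return 0
    fi
    _i=$((_i + 1))
    sleep 1
  done
  echo TIMEOUT
  return 1
}

# Real usage against a running cloud:
#   wait_active "nova show vm01 | awk '/ status /{print \$4}'"
# Stub demo so the helper can be exercised anywhere:
wait_active "echo ACTIVE"
```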
IV. Integrating Ceph with OpenStack
1. Install the Ceph client
$:apt-get install python-ceph
$:apt-get install ceph-common
2. Create a pool and a Ceph user
Create the pool:
$:ceph osd pool create datastore 512
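The 512 here is the placement-group count (pg_num) for the pool. A common rule of thumb is (number of OSDs × 100) / replica count, rounded up to the next power of two; a sketch of that arithmetic (the 12-OSD / 3-replica numbers are just an example that happens to land on 512):

```shell
# Rule-of-thumb pg_num: (osds * 100 / replicas), rounded up to a power of two.
pg_count() {
  osds=$1
  replicas=$2
  target=$((osds * 100 / replicas))
  pg=1
  while [ "$pg" -lt "$target" ]; do
    pg=$((pg * 2))
  done
  echo "$pg"
}

pg_count 12 3
```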
Create the user:
$:ceph auth get-or-create client.icehouse mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=datastore'
$:ceph auth get-or-create client.icehouse | ssh XX.XX.XX.XX sudo tee /etc/ceph/ceph.client.icehouse.keyring
$:ssh XX.XX.XX.XX sudo chmod +r /etc/ceph/ceph.client.icehouse.keyring
Copy /etc/ceph/ceph.conf to the OpenStack node:
$:ssh xx.xx.xx.xx sudo tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf
Ceph and Glance
Configure glance-api.conf:
[DEFAULT]
default_store = rbd
[glance_store]
store = rbd
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = icehouse
rbd_store_pool = datastore
show_image_direct_url = True
Restart the Glance services:
$:service glance-api restart
$:service glance-registry restart
Upload an image (see the Glance usage above) to verify that Ceph works as the Glance backend.
List the objects in the datastore pool:
$:rados --pool=datastore ls
Generate a UUID on the OpenStack compute node:
$:uuidgen
Create a temporary file, putting the UUID generated by uuidgen into the <uuid> element:
$:vim secret.xml
<secret ephemeral='no' private='no'>
<uuid>REPLACE-WITH-UUIDGEN-OUTPUT</uuid>
<usage type='ceph'>
<name>client.icehouse secret</name>
</usage>
</secret>
Define the libvirt secret from the secret.xml file just created:
$:virsh secret-define --file secret.xml
Set the secret value so libvirt can use the key above (pass the same UUID that was defined in secret.xml):
$:virsh secret-set-value --secret REPLACE-WITH-UUIDGEN-OUTPUT --base64 $(cat client.icehouse.key) && rm client.icehouse.key secret.xml
List secrets:
$:virsh secret-list
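The uuidgen / secret.xml / virsh steps above can be tied together in one script. A sketch (paths are illustrative; the virsh calls are commented out because they need a libvirt host and the Ceph key file):

```shell
# Generate the UUID and build secret.xml around it in one go.
# Fall back to the kernel's uuid source if uuidgen (util-linux) is missing.
UUID=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)
cat > /tmp/secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>$UUID</uuid>
  <usage type='ceph'>
    <name>client.icehouse secret</name>
  </usage>
</secret>
EOF

# On the compute node (requires libvirt and client.icehouse.key):
#   virsh secret-define --file /tmp/secret.xml
#   virsh secret-set-value --secret "$UUID" --base64 "$(cat client.icehouse.key)"

grep "<uuid>$UUID</uuid>" /tmp/secret.xml
```

The same $UUID is the value that cinder.conf and nova.conf later need as rbd_secret_uuid.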
Ceph and Cinder
Configure cinder.conf:
[DEFAULT]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = datastore
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_user = icehouse
glance_api_version = 2
# must match the libvirt secret UUID defined above
rbd_secret_uuid = REPLACE-WITH-UUIDGEN-OUTPUT
Restart the Cinder services:
$:service cinder-api restart
$:service cinder-scheduler restart
$:service cinder-volume restart
Verify that Cinder uses Ceph.
Create a volume named cephVolume:
$:cinder create --display-name cephVolume 1
Use cinder list together with rados --pool=datastore ls to confirm that cephVolume was actually created in the Ceph pool.
Ceph and Nova
Edit nova.conf on the compute node:
[libvirt]
images_type = rbd
images_rbd_pool = datastore
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = icehouse
# must match the libvirt secret UUID defined above
rbd_secret_uuid = REPLACE-WITH-UUIDGEN-OUTPUT
inject_password = false
inject_key = false
inject_partition = -2
Restart the Nova services:
$:./openshit.sh --all restart
Verify that Nova uses Ceph:
Boot a VM as shown above, then check:
$:nova list
$:rados --pool=datastore ls