OpenStack Grizzly Multihost Deployment Guide

This is a multihost deployment guide for the OpenStack Grizzly release. It draws on the deployment steps of several earlier write-ups, with some additional variable changes of my own. The goal is a usable production environment; the current version is fairly basic but will be refined over time.

The basic requirements for a production OpenStack deployment are stability, security, and scalability. The benefit of a multihost deployment is high availability for the network. With servers in short supply, mseknibilel's layout would waste resources on dedicated control and network nodes, so this guide follows Longgeek's article instead: only control and compute nodes, one control node with multiple compute nodes, and Quantum deployed on the compute nodes.

Environment Requirements

Install one control node and one compute node first. Compute nodes can be added dynamically; simply increment the IP addresses.

Node type      NIC configuration
Control node   eth0 (172.16.0.51), eth1 (59.65.233.231)
Compute node   eth0 (172.16.0.52), eth1 (10.10.10.52), eth2 (59.65.233.233)

The first time I set this up I ran into problems because the NIC ports and the network configuration were not mapped one-to-one. The Linux mii-tool command shows the link status of each port.
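For example, a quick check of which ports actually have a live link (a minimal sketch; the interface names are simply the ones used in this guide, and ethtool is an alternative if mii-tool does not support the NIC):

# show link status for each interface
mii-tool eth0 eth1 eth2
# or, per interface:
ethtool eth0 | grep "Link detected"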

Control Node

Basic environment variables

export YS_CON_MANAGE_IP=172.16.0.51
export YS_CON_MANAGE_NETMASK=255.255.0.0
export YS_CON_EXT_IP=59.65.233.231
export YS_CON_EXT_NETMASK=255.255.255.0
export YS_CON_EXT_GATEWAY=59.65.233.254
export YS_CON_EXT_DNS=202.204.65.5
export YS_CON_SERVICE_ENDPOINT_IP=172.16.0.51
export YS_CON_MYSQL_USER=root
export YS_CON_MYSQL_PASS=123qwe
export ADMIN_PASSWORD=123qwe
export ADMIN_TOKEN=ceit
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=$ADMIN_PASSWORD
export OS_AUTH_URL="http://${YS_CON_MANAGE_IP}:5000/v2.0/"
export OS_REGION_NAME=RegionOne
export SERVICE_TOKEN=${ADMIN_TOKEN}
export SERVICE_ENDPOINT=http://${YS_CON_MANAGE_IP}:35357/v2.0/

Network configuration

Configure the network interfaces:

cat > /etc/network/interfaces << _EOF_
auto eth0
iface eth0 inet static
address $YS_CON_MANAGE_IP
netmask $YS_CON_MANAGE_NETMASK

auto eth1
iface eth1 inet static
address $YS_CON_EXT_IP
netmask $YS_CON_EXT_NETMASK
gateway $YS_CON_EXT_GATEWAY
dns-nameservers $YS_CON_EXT_DNS
_EOF_

Restart the networking service:

/etc/init.d/networking restart

Add the package repository

Add the Ubuntu Grizzly repository and upgrade:

cat > /etc/apt/sources.list.d/grizzly.list << _EOF_
deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main
deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-proposed/grizzly main
_EOF_
apt-get update
apt-get -y upgrade --force-yes
apt-get install -y ubuntu-cloud-keyring --force-yes

Install MySQL and RabbitMQ

Install MySQL:

export DEBIAN_FRONTEND=noninteractive
apt-get install -q -y mysql-server python-mysqldb
mysqladmin -u $YS_CON_MYSQL_USER password $YS_CON_MYSQL_PASS

Edit /etc/mysql/my.cnf to change the bind address from 127.0.0.1 to 0.0.0.0 and to disable DNS resolution in MySQL (skip-name-resolve, which prevents connection errors), then restart the MySQL service:

sed -i -e 's/127.0.0.1/0.0.0.0/g' -e '/skip-external-locking/a skip-name-resolve' /etc/mysql/my.cnf
/etc/init.d/mysql restart
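To confirm MySQL is now listening on all interfaces rather than only on loopback (a quick check, assuming the default port 3306):

# should show a LISTEN socket on 0.0.0.0:3306
netstat -lntp | grep 3306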

Install RabbitMQ:

apt-get install -y rabbitmq-server --force-yes

Time synchronization (NTP)

Install NTP and configure the controller to serve its local clock as the time source that the compute nodes will sync against:

apt-get install -y ntp
sed -i 's/server ntp.ubuntu.com/server ntp.ubuntu.com\nserver 127.127.1.0\nfudge 127.127.1.0 stratum 10/g' /etc/ntp.conf
service ntp restart
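One way to confirm the daemon is up and to see which time sources it is using (it can take a few minutes before a source is selected):

# list the NTP peers/servers this node is using
ntpq -p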

Enable IP forwarding

Turn on IPv4 packet forwarding:

sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
sysctl net.ipv4.ip_forward=1

Install the identity service (Keystone)

Install Keystone:

apt-get install -y keystone --force-yes

Create the keystone database and grant access:

mysql -u$YS_CON_MYSQL_USER -p$YS_CON_MYSQL_PASS -e "
create database keystone;
grant all on keystone.* to 'keystone'@'%' identified by 'keystone';"

Edit the /etc/keystone/keystone.conf configuration file. The loop below takes each "key = value" line from the here-document and replaces the matching line in the file:

while read line;
do
pattern=`echo $line | awk '{printf "%s %s",$1,$2}'`
sed -i "/$pattern/c $line" /etc/keystone/keystone.conf;
done << _EOF_
admin_token = $ADMIN_TOKEN
token_format = UUID
debug = True
verbose = True
connection = mysql://keystone:keystone@${YS_CON_MANAGE_IP}/keystone
_EOF_

Restart Keystone, then sync the database:

service keystone restart
keystone-manage db_sync

Import the Keystone data. If your clipboard has a size limit, it is best to paste it in parts rather than all at once.

Part 1:

ADMIN_PASSWORD=${ADMIN_PASSWORD:-password}
SERVICE_PASSWORD=${ADMIN_PASSWORD:-password}
export SERVICE_TOKEN=$ADMIN_TOKEN
export SERVICE_ENDPOINT="http://${YS_CON_SERVICE_ENDPOINT_IP}:35357/v2.0"
SERVICE_TENANT_NAME=${SERVICE_TENANT_NAME:-service}
KEYSTONE_REGION=RegionOne
KEYSTONE_IP=$YS_CON_SERVICE_ENDPOINT_IP
SWIFT_IP=$YS_CON_SERVICE_ENDPOINT_IP
COMPUTE_IP=$KEYSTONE_IP
EC2_IP=$KEYSTONE_IP
GLANCE_IP=$KEYSTONE_IP
VOLUME_IP=$KEYSTONE_IP
QUANTUM_IP=$KEYSTONE_IP
get_id () {
echo `$@ | awk '/ id / { print $4 }'`
}
ADMIN_TENANT=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT tenant-create --name=admin)
SERVICE_TENANT=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT tenant-create --name=$SERVICE_TENANT_NAME)
DEMO_TENANT=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT tenant-create --name=demo)
INVIS_TENANT=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT tenant-create --name=invisible_to_admin)
ADMIN_USER=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-create --name=admin --pass="$ADMIN_PASSWORD" [email protected])
DEMO_USER=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-create --name=demo --pass="$ADMIN_PASSWORD" [email protected])
ADMIN_ROLE=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT role-create --name=admin)
KEYSTONEADMIN_ROLE=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT role-create --name=KeystoneAdmin)
KEYSTONESERVICE_ROLE=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT role-create --name=KeystoneServiceAdmin)
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --user-id $ADMIN_USER --role-id $ADMIN_ROLE --tenant-id $ADMIN_TENANT
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --user-id $ADMIN_USER --role-id $ADMIN_ROLE --tenant-id $DEMO_TENANT
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --user-id $ADMIN_USER --role-id $KEYSTONEADMIN_ROLE --tenant-id $ADMIN_TENANT
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --user-id $ADMIN_USER --role-id $KEYSTONESERVICE_ROLE --tenant-id $ADMIN_TENANT
MEMBER_ROLE=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT role-create --name=Member)
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --user-id $DEMO_USER --role-id $MEMBER_ROLE --tenant-id $DEMO_TENANT
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --user-id $DEMO_USER --role-id $MEMBER_ROLE --tenant-id $INVIS_TENANT
NOVA_USER=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-create --name=nova --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT [email protected])
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --tenant-id $SERVICE_TENANT --user-id $NOVA_USER --role-id $ADMIN_ROLE
GLANCE_USER=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-create --name=glance --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT [email protected])
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --tenant-id $SERVICE_TENANT --user-id $GLANCE_USER --role-id $ADMIN_ROLE
SWIFT_USER=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-create --name=swift --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT [email protected])
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --tenant-id $SERVICE_TENANT --user-id $SWIFT_USER --role-id $ADMIN_ROLE
RESELLER_ROLE=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT role-create --name=ResellerAdmin)

Part 2:

keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --tenant-id $SERVICE_TENANT --user-id $NOVA_USER --role-id $RESELLER_ROLE
QUANTUM_USER=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-create --name=quantum --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT [email protected])
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --tenant-id $SERVICE_TENANT --user-id $QUANTUM_USER --role-id $ADMIN_ROLE
CINDER_USER=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-create --name=cinder --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT [email protected])
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --tenant-id $SERVICE_TENANT --user-id $CINDER_USER --role-id ${ADMIN_ROLE}
KEYSTONE_ID=$(keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT service-create --name keystone --type identity --description 'OpenStack Identity'| awk '/ id / { print $4 }')
COMPUTE_ID=$(keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT service-create --name=nova --type=compute --description='OpenStack Compute Service'| awk '/ id / { print $4 }')
CINDER_ID=$(keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT service-create --name=cinder --type=volume --description='OpenStack Volume Service'| awk '/ id / { print $4 }')
GLANCE_ID=$(keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT service-create --name=glance --type=image --description='OpenStack Image Service'| awk '/ id / { print $4 }')
SWIFT_ID=$(keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT service-create --name=swift --type=object-store --description='OpenStack Storage Service' | awk '/ id / { print $4 }')
EC2_ID=$(keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT service-create --name=ec2 --type=ec2 --description='OpenStack EC2 service'| awk '/ id / { print $4 }')
QUANTUM_ID=$(keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT service-create --name=quantum --type=network --description='OpenStack Networking service'| awk '/ id / { print $4 }')

Part 3:

if [ "$KEYSTONE_WLAN_IP" != '' ];then
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$KEYSTONE_ID --publicurl http://"$KEYSTONE_WLAN_IP":5000/v2.0 --adminurl http://"$KEYSTONE_WLAN_IP":35357/v2.0 --internalurl http://"$KEYSTONE_WLAN_IP":5000/v2.0
fi
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$KEYSTONE_ID --publicurl http://"$KEYSTONE_IP":5000/v2.0 --adminurl http://"$KEYSTONE_IP":35357/v2.0 --internalurl http://"$KEYSTONE_IP":5000/v2.0
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$COMPUTE_ID --publicurl http://"$COMPUTE_IP":8774/v2/\$\(tenant_id\)s --adminurl http://"$COMPUTE_IP":8774/v2/\$\(tenant_id\)s --internalurl http://"$COMPUTE_IP":8774/v2/\$\(tenant_id\)s
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$CINDER_ID --publicurl http://"$VOLUME_IP":8776/v1/\$\(tenant_id\)s --adminurl http://"$VOLUME_IP":8776/v1/\$\(tenant_id\)s --internalurl http://"$VOLUME_IP":8776/v1/\$\(tenant_id\)s
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$GLANCE_ID --publicurl http://"$GLANCE_IP":9292/v2 --adminurl http://"$GLANCE_IP":9292/v2 --internalurl http://"$GLANCE_IP":9292/v2
if [ "$SWIFT_WLAN_IP" != '' ];then
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$SWIFT_ID --publicurl http://"$SWIFT_WLAN_IP":8080/v1/AUTH_\$\(tenant_id\)s --adminurl http://"$SWIFT_WLAN_IP":8080/v1 --internalurl http://"$SWIFT_WLAN_IP":8080/v1/AUTH_\$\(tenant_id\)s
fi
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$SWIFT_ID --publicurl http://"$SWIFT_IP":8080/v1/AUTH_\$\(tenant_id\)s --adminurl http://"$SWIFT_IP":8080/v1 --internalurl http://"$SWIFT_IP":8080/v1/AUTH_\$\(tenant_id\)s
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$EC2_ID --publicurl http://"$EC2_IP":8773/services/Cloud --adminurl http://"$EC2_IP":8773/services/Admin --internalurl http://"$EC2_IP":8773/services/Cloud
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$QUANTUM_ID --publicurl http://"$QUANTUM_IP":9696/ --adminurl http://"$QUANTUM_IP":9696/ --internalurl http://"$QUANTUM_IP":9696/

Save and load the environment variables:

cat > /root/tenantrc.sh << _EOF_
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=$ADMIN_PASSWORD
export OS_AUTH_URL="http://${YS_CON_MANAGE_IP}:5000/v2.0/"
export OS_REGION_NAME=RegionOne
export SERVICE_TOKEN=${ADMIN_TOKEN}
export SERVICE_ENDPOINT=http://${YS_CON_MANAGE_IP}:35357/v2.0/
_EOF_
echo 'source /root/tenantrc.sh' >> /root/.bashrc
source /root/tenantrc.sh

Verify Keystone:

keystone user-list

Install the image service (Glance)

Install Glance:

apt-get install -y glance --force-yes

Create the glance database and grant access:

mysql -u$YS_CON_MYSQL_USER -p$YS_CON_MYSQL_PASS -e "
create database glance;
grant all on glance.* to 'glance'@'%' identified by 'glance';"

Update the /etc/glance/glance-api.conf file:

while read line;
do
pattern=`echo $line | awk '{printf "%s %s",$1,$2}'`
sed -i "/$pattern/c $line" /etc/glance/glance-api.conf;
done << _EOF_
verbose = True
debug = True
sql_connection = mysql://glance:glance@${YS_CON_MANAGE_IP}/glance
workers = 4
registry_host = ${YS_CON_MANAGE_IP}
notifier_strategy = rabbit
rabbit_host = ${YS_CON_MANAGE_IP}
rabbit_userid = guest
rabbit_password = guest
auth_host = ${YS_CON_MANAGE_IP}
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = ${ADMIN_PASSWORD}
_EOF_
echo "config_file = /etc/glance/glance-api-paste.ini" >> /etc/glance/glance-api.conf
echo "flavor = keystone" >> /etc/glance/glance-api.conf

Update the /etc/glance/glance-registry.conf file:

while read line;
do
pattern=`echo $line | awk '{printf "%s %s",$1,$2}'`
sed -i "/$pattern/c $line" /etc/glance/glance-registry.conf;
done << _EOF_
verbose = True
debug = True
sql_connection = mysql://glance:glance@${YS_CON_MANAGE_IP}/glance
auth_host = ${YS_CON_MANAGE_IP}
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = ${ADMIN_PASSWORD}
_EOF_
echo "config_file = /etc/glance/glance-registry-paste.ini" >> /etc/glance/glance-registry.conf
echo "flavor = keystone" >> /etc/glance/glance-registry.conf

Restart the glance-api and glance-registry services and sync the database:

/etc/init.d/glance-api restart
/etc/init.d/glance-registry restart
glance-manage version_control 0
glance-manage db_sync

Test the Glance installation by uploading an image. Download the CirrOS image and upload it:

wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
glance image-create --name='cirros' --public --container-format=ovf --disk-format=qcow2 < ./cirros-0.3.0-x86_64-disk.img

List the uploaded image:

glance image-list

Install the block storage service (Cinder)

Install Cinder:

apt-get install -y cinder-api cinder-common cinder-scheduler cinder-volume python-cinderclient iscsitarget open-iscsi iscsitarget-dkms --force-yes

Configure iSCSI and start the services:

sed -i 's/false/true/g' /etc/default/iscsitarget
/etc/init.d/iscsitarget restart
/etc/init.d/open-iscsi restart

Create the cinder database and grant access:

mysql -u$YS_CON_MYSQL_USER -p$YS_CON_MYSQL_PASS -e "
create database cinder;
grant all on cinder.* to 'cinder'@'%' identified by 'cinder';"

Edit /etc/cinder/cinder.conf:

cat > /etc/cinder/cinder.conf << _EOF_
[DEFAULT]
verbose = True
debug = True
iscsi_helper = ietadm
auth_strategy = keystone
volume_group = cinder-volumes
volume_name_template = volume-%s
state_path = /var/lib/cinder
volumes_dir = /var/lib/cinder/volumes
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
rabbit_host = $YS_CON_MANAGE_IP
rabbit_password = guest
rpc_backend = cinder.openstack.common.rpc.impl_kombu
sql_connection = mysql://cinder:cinder@${YS_CON_MANAGE_IP}/cinder
osapi_volume_extension = cinder.api.contrib.standard_extensions
_EOF_

Update the filter:authtoken section at the end of /etc/cinder/api-paste.ini:

while read line;
do
pattern=`echo $line | awk '{printf "%s %s",$1,$2}'`
sed -i "/$pattern/c $line" /etc/cinder/api-paste.ini;
done << _EOF_
service_host = ${YS_CON_MANAGE_IP}
service_port = 5000
auth_host = ${YS_CON_MANAGE_IP}
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = ${ADMIN_PASSWORD}
signing_dir = /var/lib/cinder
_EOF_

Create a volume group named cinder-volumes:

dd if=/dev/zero of=/opt/cinder-volumes bs=1 count=0 seek=5G
losetup /dev/loop2 /opt/cinder-volumes
fdisk /dev/loop2
# at the fdisk prompts, enter the following in order:
n
p
1
ENTER
ENTER
t
8e
w

Now that the partition exists, create the physical volume and the volume group:

pvcreate /dev/loop2
vgcreate cinder-volumes /dev/loop2
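If you prefer to avoid the interactive fdisk session, LVM also accepts the raw loop device, so the same loopback volume group can be created non-interactively (a sketch equivalent to the steps above; the partition table is simply skipped):

# create a 5 GB sparse backing file and attach it to a free loop device
dd if=/dev/zero of=/opt/cinder-volumes bs=1 count=0 seek=5G
losetup /dev/loop2 /opt/cinder-volumes
# initialize LVM directly on the loop device and create the volume group
pvcreate /dev/loop2
vgcreate cinder-volumes /dev/loop2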

This volume group disappears after a reboot, so add the losetup call to rc.local:

echo 'losetup /dev/loop2 /opt/cinder-volumes' >> /etc/rc.local

Sync the database and restart the services:

cinder-manage db sync
/etc/init.d/cinder-api restart
/etc/init.d/cinder-scheduler restart
/etc/init.d/cinder-volume restart

Install the networking service (Quantum)

Install the Quantum server and the Open vSwitch plugin:

apt-get install -y quantum-server quantum-plugin-openvswitch --force-yes

Create the quantum database and grant access:

mysql -u$YS_CON_MYSQL_USER -p$YS_CON_MYSQL_PASS -e "
create database quantum;
grant all on quantum.* to 'quantum'@'%' identified by 'quantum';"

Edit the /etc/quantum/quantum.conf file:

while read line;
do
pattern=`echo $line | awk '{printf "%s %s",$1,$2}'`
sed -i "/$pattern/c $line" /etc/quantum/quantum.conf;
done << _EOF_
debug = True
verbose = True
rabbit_host = ${YS_CON_MANAGE_IP}
rabbit_password = guest
rabbit_port = 5672
rabbit_userid = guest
auth_host = ${YS_CON_MANAGE_IP}
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = ${ADMIN_PASSWORD}
signing_dir = /var/lib/quantum/keystone-signing
_EOF_

Edit the OVS plugin configuration file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini:

while read line;
do
pattern=`echo $line | awk '{printf "%s %s",$1,$2}'`
sed -i "/$pattern/c $line" /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini;
done << _EOF_
sql_connection = mysql://quantum:quantum@${YS_CON_MANAGE_IP}/quantum
tenant_network_type = gre
enable_tunneling = True
tunnel_id_ranges = 1:1000
_EOF_

Restart the Quantum server:

/etc/init.d/quantum-server restart
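As a quick sanity check (assuming the admin credentials from /root/tenantrc.sh are loaded and the python-quantumclient CLI is available), the server should answer with an empty network list at this point rather than an authentication or connection error:

source /root/tenantrc.sh
quantum net-list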

Install the compute service (Nova)

Install the Nova packages:

apt-get install -y nova-api nova-cert novnc nova-conductor nova-consoleauth nova-scheduler nova-novncproxy --force-yes

Create the nova database and grant the nova user access:

mysql -u$YS_CON_MYSQL_USER -p$YS_CON_MYSQL_PASS -e "
create database nova;
grant all on nova.* to 'nova'@'%' identified by 'nova';"

Update the authtoken section in /etc/nova/api-paste.ini:

while read line;
do
pattern=`echo $line | awk '{printf "%s %s",$1,$2}'`
sed -i "/$pattern/c $line" /etc/nova/api-paste.ini;
done << _EOF_
auth_host = ${YS_CON_MANAGE_IP}
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = ${ADMIN_PASSWORD}
signing_dir = /tmp/keystone-signing-nova
auth_version = v2.0
_EOF_

Edit /etc/nova/nova.conf so that it looks like this:

cat > /etc/nova/nova.conf << _EOD_
[DEFAULT]
# LOGS/STATE
debug = False
verbose = True
logdir = /var/log/nova
state_path = /var/lib/nova
lock_path = /var/lock/nova
rootwrap_config = /etc/nova/rootwrap.conf
dhcpbridge = /usr/bin/nova-dhcpbridge
# SCHEDULER
compute_scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
## VOLUMES
volume_api_class = nova.volume.cinder.API
# DATABASE
sql_connection = mysql://nova:nova@${YS_CON_MANAGE_IP}/nova
# COMPUTE
libvirt_type = kvm
compute_driver = libvirt.LibvirtDriver
instance_name_template = instance-%08x
api_paste_config = /etc/nova/api-paste.ini
# COMPUTE/APIS: if you have separate configs for separate services
# this flag is required for both nova-api and nova-compute
allow_resize_to_same_host = True
# APIS
osapi_compute_extension = nova.api.openstack.compute.contrib.standard_extensions
ec2_dmz_host = ${YS_CON_MANAGE_IP}
s3_host = ${YS_CON_MANAGE_IP}
metadata_host = ${YS_CON_MANAGE_IP}
metadata_listen = 0.0.0.0
# RABBITMQ
rabbit_host = ${YS_CON_MANAGE_IP}
rabbit_password = guest
# GLANCE
image_service = nova.image.glance.GlanceImageService
glance_api_servers = ${YS_CON_MANAGE_IP}:9292
# NETWORK
network_api_class = nova.network.quantumv2.api.API
quantum_url = http://${YS_CON_MANAGE_IP}:9696
quantum_auth_strategy = keystone
quantum_admin_tenant_name = service
quantum_admin_username = quantum
quantum_admin_password = ${ADMIN_PASSWORD}
quantum_admin_auth_url = http://${YS_CON_MANAGE_IP}:35357/v2.0
service_quantum_metadata_proxy = True
libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
# NOVNC CONSOLE
novncproxy_base_url = http://${YS_CON_EXT_IP}:6080/vnc_auto.html
# Change vncserver_proxyclient_address and vncserver_listen to match each compute host
vncserver_proxyclient_address = ${YS_CON_EXT_IP}
vncserver_listen = 0.0.0.0
# AUTHENTICATION
auth_strategy = keystone
[keystone_authtoken]
auth_host = $YS_CON_MANAGE_IP
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = ${ADMIN_PASSWORD}
signing_dir = /tmp/keystone-signing-nova
_EOD_

Sync the database and restart the Nova services:

nova-manage db sync
cd /etc/init.d/; for i in $( ls nova-* ); do sudo /etc/init.d/$i restart; done

Check that all Nova services report a smiley face (:-)):

nova-manage service list

Install the web dashboard (Horizon)

Install Horizon:

apt-get install -y openstack-dashboard memcached --force-yes

If you do not like the Ubuntu theme, you can remove it and use the default interface:

dpkg --purge openstack-dashboard-ubuntu-theme

Restart apache2 and memcached:

service apache2 restart; service memcached restart

Installation complete

You can now log in through a browser at http://YS_CON_EXT_IP/horizon with admin:ADMIN_PASSWORD.

All Compute Nodes

Basic environment variables

export YS_CON_MANAGE_IP=172.16.0.51
export YS_CON_EXT_IP=59.65.233.231
export YS_COM_MANAGE_IP=172.16.0.52
export YS_COM_MANAGE_NETMASK=255.255.0.0
export YS_COM_DATA_IP=10.10.10.52
export YS_COM_DATA_NETMASK=255.255.255.0
export YS_COM_EXT_IP=59.65.233.233
export YS_COM_EXT_NETMASK=255.255.255.0
export YS_COM_EXT_GATEWAY=59.65.233.254
export YS_COM_EXT_DNS=202.204.65.5
export YS_COM_SERVICE_ENDPOINT_IP=172.16.0.52
export YS_COM_MYSQL_USER=root
export YS_COM_MYSQL_PASS=123qwe
export ADMIN_PASSWORD=123qwe
export ADMIN_TOKEN=ceit

Network configuration

Configure the network interfaces:

cat > /etc/network/interfaces << _EOF_
auto eth0
iface eth0 inet static
address $YS_COM_MANAGE_IP
netmask $YS_COM_MANAGE_NETMASK

auto eth1
iface eth1 inet static
address $YS_COM_DATA_IP
netmask $YS_COM_DATA_NETMASK

auto eth2
iface eth2 inet static
address $YS_COM_EXT_IP
netmask $YS_COM_EXT_NETMASK
gateway $YS_COM_EXT_GATEWAY
dns-nameservers $YS_COM_EXT_DNS
_EOF_

Restart the networking service:

/etc/init.d/networking restart

Add the package repository

Add the Ubuntu Grizzly repository and upgrade:

cat > /etc/apt/sources.list.d/grizzly.list << _EOF_
deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main
deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-proposed/grizzly main
_EOF_
apt-get update
apt-get -y upgrade --force-yes
apt-get install -y ubuntu-cloud-keyring

Time synchronization (NTP)

Install NTP and configure the compute node to sync its clock from the control node:

apt-get install -y ntp
sed -i "s/server ntp.ubuntu.com/server ${YS_CON_MANAGE_IP}/g" /etc/ntp.conf
service ntp restart

Enable IP forwarding

Turn on IPv4 packet forwarding:

sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
sysctl net.ipv4.ip_forward=1

Install Open vSwitch

Install Open vSwitch; the packages must be installed in this order:

apt-get install -y openvswitch-datapath-source --force-yes
module-assistant auto-install openvswitch-datapath
apt-get install -y openvswitch-switch openvswitch-brcompat --force-yes

Enable ovs-brcompatd at startup:

sed -i 's/# BRCOMPAT=no/BRCOMPAT=yes/g' /etc/default/openvswitch-switch
echo 'brcompat' >> /etc/modules

Start openvswitch-switch:

/etc/init.d/openvswitch-switch restart

Restart it again until the ovs-brcompatd, ovs-vswitchd, and ovsdb-server services are all running:

/etc/init.d/openvswitch-switch restart

Keep checking until the following appears:

lsmod | grep brcompat
brcompat 13512 0
openvswitch 84038 7 brcompat

If it still will not start, use the following command:

/etc/init.d/openvswitch-switch force-reload-kmod

Create the bridges:

ovs-vsctl add-br br-int # br-int is used for VM integration
ovs-vsctl add-br br-ex # br-ex is used to reach VMs from the Internet
ovs-vsctl add-port br-ex eth2 # bridge br-ex onto eth2

After the steps above, an SSH session coming in over eth2 will certainly be cut off, so make the following changes from the machine's console:

ifconfig eth2 0
ifconfig br-ex ${YS_COM_EXT_IP}/24
route add default gw ${YS_COM_EXT_GATEWAY} dev br-ex
echo "nameserver ${YS_COM_EXT_DNS}" > /etc/resolv.conf
cat > /etc/network/interfaces << _EOF_
auto eth0
iface eth0 inet static
address $YS_COM_MANAGE_IP
netmask $YS_COM_MANAGE_NETMASK

auto eth1
iface eth1 inet static
address $YS_COM_DATA_IP
netmask $YS_COM_DATA_NETMASK

auto eth2
iface eth2 inet manual
up ifconfig \$IFACE 0.0.0.0 up
up ip link set \$IFACE promisc on
down ip link set \$IFACE promisc off
down ifconfig \$IFACE down

auto br-ex
iface br-ex inet static
address $YS_COM_EXT_IP
netmask $YS_COM_EXT_NETMASK
gateway $YS_COM_EXT_GATEWAY
dns-nameservers $YS_COM_EXT_DNS
_EOF_

Restarting the network may produce:

/etc/init.d/networking restart
RTNETLINK answers: File exists
Failed to bring up br-ex.

br-ex may end up with an IP address but no gateway or DNS; configure them by hand or reboot the machine. After a reboot everything works normally.
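A minimal sketch of the manual fix (the same route and nameserver commands used above, assuming the compute-node variables from the beginning of this section are still exported):

# restore the default route through br-ex and the DNS server
route add default gw ${YS_COM_EXT_GATEWAY} dev br-ex
echo "nameserver ${YS_COM_EXT_DNS}" > /etc/resolv.conf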

Documentation update: the eth2 NIC on the network node is not brought up after a reboot, so add this to rc.local:

echo 'ifconfig eth2 up' >> /etc/rc.local

View the bridge configuration:

ovs-vsctl list-br
ovs-vsctl show

Install the Quantum networking agents

Install the Quantum openvswitch agent, metadata agent, L3 agent, and DHCP agent:

apt-get install -y quantum-plugin-openvswitch-agent quantum-dhcp-agent quantum-l3-agent quantum-metadata-agent --force-yes

Edit the /etc/quantum/quantum.conf file:

while read line;
do
pattern=`echo $line | awk '{printf "%s %s",$1,$2}'`
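# NOTE: the remaining lines of this step are a reconstructed sketch; they are assumed
# to mirror the controller-side quantum.conf edit earlier in this guide, with RabbitMQ
# and Keystone both pointing at the controller's management IP.
sed -i "/$pattern/c $line" /etc/quantum/quantum.conf;
done << _EOF_
debug = True
verbose = True
rabbit_host = ${YS_CON_MANAGE_IP}
rabbit_password = guest
rabbit_port = 5672
rabbit_userid = guest
auth_host = ${YS_CON_MANAGE_IP}
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = ${ADMIN_PASSWORD}
signing_dir = /var/lib/quantum/keystone-signing
_EOF_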

Article source: http://www.openstack.cn/p203.html
