OpenStack Grizzly Multihost Deployment Guide
A multihost deployment guide for the OpenStack Grizzly release. It draws on the deployment steps of several earlier write-ups, with some variable handling added by me; the goal is a usable production environment that, while still basic, will be improved over time.
The core requirements for a production OpenStack deployment are stability, security, and scalability. A multihost layout keeps the network highly available, but with only a few servers to spare, mseknibilel's approach wastes resources on dedicated control and network nodes. This guide therefore follows Longgeek's article and uses only control and compute nodes: one control node with multiple compute nodes, and Quantum deployed on the compute nodes.
Environment Requirements
Start with one control node and one compute node; compute nodes can be added at any time by simply incrementing the IP addresses.
Node type | NIC configuration |
控制節點 | eth0 (172.16.0.51), eth1 (59.65.233.231) |
計算節點 | eth0 (172.16.0.52), eth1 (10.10.10.52), eth2 (59.65.233.233) |
During the first build there were some problems because the NIC ports were not mapped one-to-one to the network configuration; the Linux mii-tool command shows the link status of each port.
Control Node
export YS_CON_MANAGE_IP=172.16.0.51
export YS_CON_MANAGE_NETMASK=255.255.0.0
export YS_CON_EXT_IP=59.65.233.231
export YS_CON_EXT_NETMASK=255.255.255.0
export YS_CON_EXT_GATEWAY=59.65.233.254
export YS_CON_EXT_DNS=202.204.65.5
export YS_CON_SERVICE_ENDPOINT_IP=172.16.0.51
export YS_CON_MYSQL_USER=root
export YS_CON_MYSQL_PASS=123qwe
export ADMIN_PASSWORD=123qwe
export ADMIN_TOKEN=ceit
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=$ADMIN_PASSWORD
export OS_AUTH_URL="http://${YS_CON_MANAGE_IP}:5000/v2.0/"
export OS_REGION_NAME=RegionOne
export SERVICE_TOKEN=${ADMIN_TOKEN}
export SERVICE_ENDPOINT=http://${YS_CON_MANAGE_IP}:35357/v2.0/
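Before continuing, it can help to fail fast if any of the exports above are missing. A minimal, self-contained sanity check (the defaults below simply mirror the values above so the sketch runs on its own):

```shell
# Optional sanity check: abort early if a required variable is empty.
# Defaults mirror the exports above so this sketch is self-contained.
YS_CON_MANAGE_IP=${YS_CON_MANAGE_IP:-172.16.0.51}
YS_CON_EXT_IP=${YS_CON_EXT_IP:-59.65.233.231}
ADMIN_PASSWORD=${ADMIN_PASSWORD:-123qwe}
ADMIN_TOKEN=${ADMIN_TOKEN:-ceit}
missing=0
for v in YS_CON_MANAGE_IP YS_CON_EXT_IP ADMIN_PASSWORD ADMIN_TOKEN; do
    eval val=\$$v
    if [ -z "$val" ]; then
        echo "unset variable: $v"
        missing=1
    fi
done
[ "$missing" -eq 0 ] && echo "environment OK"
```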
Network configuration
Configure the network interfaces
cat > /etc/network/interfaces << _EOF_
auto eth0
iface eth0 inet static
address $YS_CON_MANAGE_IP
netmask $YS_CON_MANAGE_NETMASK
auto eth1
iface eth1 inet static
address $YS_CON_EXT_IP
netmask $YS_CON_EXT_NETMASK
gateway $YS_CON_EXT_GATEWAY
dns-nameservers $YS_CON_EXT_DNS
_EOF_
Restart the networking service
/etc/init.d/networking restart
Add package sources
Add the Ubuntu Grizzly repository, then upgrade
cat > /etc/apt/sources.list.d/grizzly.list << _EOF_
deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main
deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-proposed/grizzly main
_EOF_
apt-get update
apt-get -y upgrade --force-yes
apt-get install -y ubuntu-cloud-keyring --force-yes
Install MySQL and RabbitMQ
Install MySQL
export DEBIAN_FRONTEND=noninteractive
apt-get install -q -y mysql-server python-mysqldb
mysqladmin -u $YS_CON_MYSQL_USER password $YS_CON_MYSQL_PASS
Edit /etc/mysql/my.cnf to change the bind address from 127.0.0.1 to 0.0.0.0 and disable MySQL hostname resolution (to prevent connection errors), then restart the MySQL service.
sed -i -e 's/127.0.0.1/0.0.0.0/g' -e '/skip-external-locking/a skip-name-resolve' /etc/mysql/my.cnf
/etc/init.d/mysql restart
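The same two sed edits can be tried on a scratch copy of a my.cnf fragment first, to see the effect without touching a live server:

```shell
# Demo of the my.cnf edits on a throwaway file: rewrite the bind address
# and append skip-name-resolve after the skip-external-locking line.
cnf=$(mktemp)
printf 'bind-address = 127.0.0.1\nskip-external-locking\n' > "$cnf"
sed -i -e 's/127.0.0.1/0.0.0.0/g' \
       -e '/skip-external-locking/a skip-name-resolve' "$cnf"
mycnf=$(cat "$cnf")
rm -f "$cnf"
echo "$mycnf"
```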
Install RabbitMQ
apt-get install -y rabbitmq-server --force-yes
NTP time synchronization
Install NTP and configure the control node to serve time to the compute nodes (falling back to its local clock)
apt-get install -y ntp
sed -i 's/server ntp.ubuntu.com/server ntp.ubuntu.com\nserver 127.127.1.0\nfudge 127.127.1.0 stratum 10/g' /etc/ntp.conf
service ntp restart
Enable IP forwarding
Turn on IP forwarding
sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
sysctl net.ipv4.ip_forward=1
Install the Identity Service (Keystone)
Install Keystone
apt-get install -y keystone --force-yes
Create the keystone database and grant access
mysql -u$YS_CON_MYSQL_USER -p$YS_CON_MYSQL_PASS -e "
create database keystone;
grant all on keystone.* to 'keystone'@'%' identified by 'keystone';"
Update the /etc/keystone/keystone.conf configuration file
while read line;
do
pattern=`echo $line | awk '{printf "%s %s",$1,$2}'`
sed -i "/$pattern/c $line" /etc/keystone/keystone.conf;
done << _EOF_
admin_token = $ADMIN_TOKEN
token_format = UUID
debug = True
verbose = True
connection = mysql://keystone:keystone@${YS_CON_MANAGE_IP}/keystone
_EOF_
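This while/read loop is reused throughout the guide; a self-contained sketch of the same pattern against a scratch file (instead of keystone.conf) shows how each `key = value` line replaces the matching line via sed's `c` (change) command:

```shell
# Same edit-loop pattern as above, run against a throwaway file.
conf=$(mktemp)
printf 'debug = False\nverbose = False\n' > "$conf"
while read line; do
    # pattern is "key =", enough to locate the line to replace
    pattern=`echo $line | awk '{printf "%s %s",$1,$2}'`
    sed -i "/$pattern/c $line" "$conf"   # c = replace the whole line
done << _EOF_
debug = True
verbose = True
_EOF_
result=$(cat "$conf")
rm -f "$conf"
echo "$result"
```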
Restart Keystone, then sync the database
service keystone restart
keystone-manage db_sync
Load the Keystone data. If your clipboard has a size limit, paste it in parts.
Part 1:
ADMIN_PASSWORD=${ADMIN_PASSWORD:-password}
SERVICE_PASSWORD=${ADMIN_PASSWORD:-password}
export SERVICE_TOKEN=$ADMIN_TOKEN
export SERVICE_ENDPOINT="http://${YS_CON_SERVICE_ENDPOINT_IP}:35357/v2.0"
SERVICE_TENANT_NAME=${SERVICE_TENANT_NAME:-service}
KEYSTONE_REGION=RegionOne
KEYSTONE_IP=$YS_CON_SERVICE_ENDPOINT_IP
SWIFT_IP=$YS_CON_SERVICE_ENDPOINT_IP
COMPUTE_IP=$KEYSTONE_IP
EC2_IP=$KEYSTONE_IP
GLANCE_IP=$KEYSTONE_IP
VOLUME_IP=$KEYSTONE_IP
QUANTUM_IP=$KEYSTONE_IP
get_id () {
echo `$@ | awk '/ id / { print $4 }'`
}
ADMIN_TENANT=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT tenant-create --name=admin)
SERVICE_TENANT=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT tenant-create --name=$SERVICE_TENANT_NAME)
DEMO_TENANT=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT tenant-create --name=demo)
INVIS_TENANT=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT tenant-create --name=invisible_to_admin)
ADMIN_USER=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-create --name=admin --pass="$ADMIN_PASSWORD" [email protected])
DEMO_USER=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-create --name=demo --pass="$ADMIN_PASSWORD" [email protected])
ADMIN_ROLE=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT role-create --name=admin)
KEYSTONEADMIN_ROLE=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT role-create --name=KeystoneAdmin)
KEYSTONESERVICE_ROLE=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT role-create --name=KeystoneServiceAdmin)
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --user-id $ADMIN_USER --role-id $ADMIN_ROLE --tenant-id $ADMIN_TENANT
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --user-id $ADMIN_USER --role-id $ADMIN_ROLE --tenant-id $DEMO_TENANT
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --user-id $ADMIN_USER --role-id $KEYSTONEADMIN_ROLE --tenant-id $ADMIN_TENANT
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --user-id $ADMIN_USER --role-id $KEYSTONESERVICE_ROLE --tenant-id $ADMIN_TENANT
MEMBER_ROLE=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT role-create --name=Member)
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --user-id $DEMO_USER --role-id $MEMBER_ROLE --tenant-id $DEMO_TENANT
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --user-id $DEMO_USER --role-id $MEMBER_ROLE --tenant-id $INVIS_TENANT
NOVA_USER=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-create --name=nova --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT [email protected])
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --tenant-id $SERVICE_TENANT --user-id $NOVA_USER --role-id $ADMIN_ROLE
GLANCE_USER=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-create --name=glance --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT [email protected])
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --tenant-id $SERVICE_TENANT --user-id $GLANCE_USER --role-id $ADMIN_ROLE
SWIFT_USER=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-create --name=swift --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT [email protected])
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --tenant-id $SERVICE_TENANT --user-id $SWIFT_USER --role-id $ADMIN_ROLE
RESELLER_ROLE=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT role-create --name=ResellerAdmin)
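The get_id helper defined at the top of Part 1 simply scrapes the value out of the " id " row of keystone's table output. A self-contained sketch of the same parsing, with a hypothetical fake_keystone function standing in for a real keystone call:

```shell
# Sketch of what get_id does: extract field 4 ($4) from the table row
# whose second column is "id". fake_keystone mimics keystone's output.
get_id () {
    echo `$@ | awk '/ id / { print $4 }'`
}
fake_keystone () {
    printf '| enabled | True |\n'
    printf '| id | 0123456789abcdef |\n'
    printf '| name | admin |\n'
}
TENANT=$(get_id fake_keystone)
echo "$TENANT"
```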
Part 2:
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --tenant-id $SERVICE_TENANT --user-id $NOVA_USER --role-id $RESELLER_ROLE
QUANTUM_USER=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-create --name=quantum --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT [email protected])
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --tenant-id $SERVICE_TENANT --user-id $QUANTUM_USER --role-id $ADMIN_ROLE
CINDER_USER=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-create --name=cinder --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT [email protected])
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --tenant-id $SERVICE_TENANT --user-id $CINDER_USER --role-id ${ADMIN_ROLE}
KEYSTONE_ID=$(keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT service-create --name keystone --type identity --description 'OpenStack Identity'| awk '/ id / { print $4 }')
COMPUTE_ID=$(keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT service-create --name=nova --type=compute --description='OpenStack Compute Service'| awk '/ id / { print $4 }')
CINDER_ID=$(keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT service-create --name=cinder --type=volume --description='OpenStack Volume Service'| awk '/ id / { print $4 }')
GLANCE_ID=$(keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT service-create --name=glance --type=image --description='OpenStack Image Service'| awk '/ id / { print $4 }')
SWIFT_ID=$(keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT service-create --name=swift --type=object-store --description='OpenStack Storage Service' | awk '/ id / { print $4 }')
EC2_ID=$(keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT service-create --name=ec2 --type=ec2 --description='OpenStack EC2 service'| awk '/ id / { print $4 }')
QUANTUM_ID=$(keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT service-create --name=quantum --type=network --description='OpenStack Networking service'| awk '/ id / { print $4 }')
Part 3:
if [ "$KEYSTONE_WLAN_IP" != '' ];then
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$KEYSTONE_ID --publicurl http://"$KEYSTONE_WLAN_IP":5000/v2.0 --adminurl http://"$KEYSTONE_WLAN_IP":35357/v2.0 --internalurl http://"$KEYSTONE_WLAN_IP":5000/v2.0
fi
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$KEYSTONE_ID --publicurl http://"$KEYSTONE_IP":5000/v2.0 --adminurl http://"$KEYSTONE_IP":35357/v2.0 --internalurl http://"$KEYSTONE_IP":5000/v2.0
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$COMPUTE_ID --publicurl http://"$COMPUTE_IP":8774/v2/\$\(tenant_id\)s --adminurl http://"$COMPUTE_IP":8774/v2/\$\(tenant_id\)s --internalurl http://"$COMPUTE_IP":8774/v2/\$\(tenant_id\)s
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$CINDER_ID --publicurl http://"$VOLUME_IP":8776/v1/\$\(tenant_id\)s --adminurl http://"$VOLUME_IP":8776/v1/\$\(tenant_id\)s --internalurl http://"$VOLUME_IP":8776/v1/\$\(tenant_id\)s
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$GLANCE_ID --publicurl http://"$GLANCE_IP":9292/v2 --adminurl http://"$GLANCE_IP":9292/v2 --internalurl http://"$GLANCE_IP":9292/v2
if [ "$SWIFT_WLAN_IP" != '' ];then
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$SWIFT_ID --publicurl http://"$SWIFT_WLAN_IP":8080/v1/AUTH_\$\(tenant_id\)s --adminurl http://"$SWIFT_WLAN_IP":8080/v1 --internalurl http://"$SWIFT_WLAN_IP":8080/v1/AUTH_\$\(tenant_id\)s
fi
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$SWIFT_ID --publicurl http://"$SWIFT_IP":8080/v1/AUTH_\$\(tenant_id\)s --adminurl http://"$SWIFT_IP":8080/v1 --internalurl http://"$SWIFT_IP":8080/v1/AUTH_\$\(tenant_id\)s
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$EC2_ID --publicurl http://"$EC2_IP":8773/services/Cloud --adminurl http://"$EC2_IP":8773/services/Admin --internalurl http://"$EC2_IP":8773/services/Cloud
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$QUANTUM_ID --publicurl http://"$QUANTUM_IP":9696/ --adminurl http://"$QUANTUM_IP":9696/ --internalurl http://"$QUANTUM_IP":9696/
Export the environment variables
cat > /root/tenantrc.sh << _EOF_
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=$ADMIN_PASSWORD
export OS_AUTH_URL="http://${YS_CON_MANAGE_IP}:5000/v2.0/"
export OS_REGION_NAME=RegionOne
export SERVICE_TOKEN=${ADMIN_TOKEN}
export SERVICE_ENDPOINT=http://${YS_CON_MANAGE_IP}:35357/v2.0/
_EOF_
echo 'source /root/tenantrc.sh' >> /root/.bashrc
source /root/tenantrc.sh
Verify Keystone
keystone user-list
Install the Image Service (Glance)
Install Glance
apt-get install -y glance --force-yes
Create a glance database and grant access
mysql -u$YS_CON_MYSQL_USER -p$YS_CON_MYSQL_PASS -e "
create database glance;
grant all on glance.* to 'glance'@'%' identified by 'glance';"
Update the /etc/glance/glance-api.conf file
while read line;
do
pattern=`echo $line | awk '{printf "%s %s",$1,$2}'`
sed -i "/$pattern/c $line" /etc/glance/glance-api.conf;
done << _EOF_
verbose = True
debug = True
sql_connection = mysql://glance:glance@${YS_CON_MANAGE_IP}/glance
workers = 4
registry_host = ${YS_CON_MANAGE_IP}
notifier_strategy = rabbit
rabbit_host = ${YS_CON_MANAGE_IP}
rabbit_userid = guest
rabbit_password = guest
auth_host = ${YS_CON_MANAGE_IP}
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = ${ADMIN_PASSWORD}
_EOF_
echo "config_file = /etc/glance/glance-api-paste.ini" >> /etc/glance/glance-api.conf
echo "flavor = keystone" >> /etc/glance/glance-api.conf
Update the /etc/glance/glance-registry.conf file
while read line;
do
pattern=`echo $line | awk '{printf "%s %s",$1,$2}'`
sed -i "/$pattern/c $line" /etc/glance/glance-registry.conf;
done << _EOF_
verbose = True
debug = True
sql_connection = mysql://glance:glance@${YS_CON_MANAGE_IP}/glance
auth_host = ${YS_CON_MANAGE_IP}
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = ${ADMIN_PASSWORD}
_EOF_
echo "config_file = /etc/glance/glance-registry-paste.ini" >> /etc/glance/glance-registry.conf
echo "flavor = keystone" >> /etc/glance/glance-registry.conf
Start the glance-api and glance-registry services and sync the database:
/etc/init.d/glance-api restart
/etc/init.d/glance-registry restart
glance-manage version_control 0
glance-manage db_sync
Test the Glance installation by uploading an image. Download the CirrOS image and upload it:
wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
glance image-create --name='cirros' --public --container-format=ovf --disk-format=qcow2 < ./cirros-0.3.0-x86_64-disk.img
List the uploaded image
glance image-list
Install the Block Storage Service (Cinder)
Install Cinder
apt-get install -y cinder-api cinder-common cinder-scheduler cinder-volume python-cinderclient iscsitarget open-iscsi iscsitarget-dkms --force-yes
Configure iSCSI and start the services:
sed -i 's/false/true/g' /etc/default/iscsitarget
/etc/init.d/iscsitarget restart
/etc/init.d/open-iscsi restart
Create the cinder database and grant the user access:
mysql -u$YS_CON_MYSQL_USER -p$YS_CON_MYSQL_PASS -e "
create database cinder;
grant all on cinder.* to 'cinder'@'%' identified by 'cinder';"
Edit /etc/cinder/cinder.conf:
cat > /etc/cinder/cinder.conf << _EOF_
[DEFAULT]
verbose = True
debug = True
iscsi_helper = ietadm
auth_strategy = keystone
volume_group = cinder-volumes
volume_name_template = volume-%s
state_path = /var/lib/cinder
volumes_dir = /var/lib/cinder/volumes
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
rabbit_host = $YS_CON_MANAGE_IP
rabbit_password = guest
rpc_backend = cinder.openstack.common.rpc.impl_kombu
sql_connection = mysql://cinder:cinder@${YS_CON_MANAGE_IP}/cinder
osapi_volume_extension = cinder.api.contrib.standard_extensions
_EOF_
Update the filter:authtoken section at the end of /etc/cinder/api-paste.ini
while read line;
do
pattern=`echo $line | awk '{printf "%s %s",$1,$2}'`
sed -i "/$pattern/c $line" /etc/cinder/api-paste.ini;
done << _EOF_
service_host = ${YS_CON_MANAGE_IP}
service_port = 5000
auth_host = ${YS_CON_MANAGE_IP}
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = ${ADMIN_PASSWORD}
signing_dir = /var/lib/cinder
_EOF_
Create a volume group named cinder-volumes:
dd if=/dev/zero of=/opt/cinder-volumes bs=1 count=0 seek=5G
losetup /dev/loop2 /opt/cinder-volumes
fdisk /dev/loop2
#enter the following at the prompts
n
p
1
ENTER
ENTER
t
8e
w
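The interactive answers above can also be scripted. A sketch that only builds the keystroke string (piping it into fdisk requires root and the loop device from the previous step, so that line is left as a comment):

```shell
# Keystroke sequence for fdisk: new (n) primary (p) partition 1, accept
# the default first/last sectors (two blank lines), set type (t) to 8e
# (Linux LVM), then write (w).
FDISK_INPUT='n
p
1


t
8e
w
'
# To apply it (requires root and the loop device set up above):
#   printf '%s' "$FDISK_INPUT" | fdisk /dev/loop2
printf '%s' "$FDISK_INPUT" | wc -l
```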
The partition now exists; create the physical volume and the volume group
pvcreate /dev/loop2
vgcreate cinder-volumes /dev/loop2
The volume group does not survive a reboot, so add the loop device setup to rc.local:
echo 'losetup /dev/loop2 /opt/cinder-volumes' >> /etc/rc.local
Sync the database and restart the services:
cinder-manage db sync
/etc/init.d/cinder-api restart
/etc/init.d/cinder-scheduler restart
/etc/init.d/cinder-volume restart
Install the Networking Service (Quantum)
Install the Quantum server and Open vSwitch plugin packages
apt-get install -y quantum-server quantum-plugin-openvswitch --force-yes
Create the quantum database and grant the user access:
mysql -u$YS_CON_MYSQL_USER -p$YS_CON_MYSQL_PASS -e "
create database quantum;
grant all on quantum.* to 'quantum'@'%' identified by 'quantum';"
Edit the /etc/quantum/quantum.conf file
while read line;
do
pattern=`echo $line | awk '{printf "%s %s",$1,$2}'`
sed -i "/$pattern/c $line" /etc/quantum/quantum.conf;
done << _EOF_
debug = True
verbose = True
rabbit_host = ${YS_CON_MANAGE_IP}
rabbit_password = guest
rabbit_port = 5672
rabbit_userid = guest
auth_host = ${YS_CON_MANAGE_IP}
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = ${ADMIN_PASSWORD}
signing_dir = /var/lib/quantum/keystone-signing
_EOF_
Edit the OVS plugin configuration file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
while read line;
do
pattern=`echo $line | awk '{printf "%s %s",$1,$2}'`
sed -i "/$pattern/c $line" /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini;
done << _EOF_
sql_connection = mysql://quantum:quantum@${YS_CON_MANAGE_IP}/quantum
tenant_network_type = gre
enable_tunneling = True
tunnel_id_ranges = 1:1000
_EOF_
Start the quantum service:
/etc/init.d/quantum-server restart
Install the Compute Service (Nova)
Install the Nova packages
apt-get install -y nova-api nova-cert novnc nova-conductor nova-consoleauth nova-scheduler nova-novncproxy --force-yes
Create the nova database and grant the nova user access to it:
mysql -u$YS_CON_MYSQL_USER -p$YS_CON_MYSQL_PASS -e "
create database nova;
grant all on nova.* to 'nova'@'%' identified by 'nova';"
Update the authtoken section in /etc/nova/api-paste.ini:
while read line;
do
pattern=`echo $line | awk '{printf "%s %s",$1,$2}'`
sed -i "/$pattern/c $line" /etc/nova/api-paste.ini;
done << _EOF_
auth_host = ${YS_CON_MANAGE_IP}
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = ${ADMIN_PASSWORD}
signing_dir = /tmp/keystone-signing-nova
auth_version = v2.0
_EOF_
Edit /etc/nova/nova.conf along these lines:
cat > /etc/nova/nova.conf << _EOD_
[DEFAULT]
# LOGS/STATE
debug = False
verbose = True
logdir = /var/log/nova
state_path = /var/lib/nova
lock_path = /var/lock/nova
rootwrap_config = /etc/nova/rootwrap.conf
dhcpbridge = /usr/bin/nova-dhcpbridge
# SCHEDULER
compute_scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
## VOLUMES
volume_api_class = nova.volume.cinder.API
# DATABASE
sql_connection = mysql://nova:nova@${YS_CON_MANAGE_IP}/nova
# COMPUTE
libvirt_type = kvm
compute_driver = libvirt.LibvirtDriver
instance_name_template = instance-%08x
api_paste_config = /etc/nova/api-paste.ini
# COMPUTE/APIS: if you have separate configs for separate services
# this flag is required for both nova-api and nova-compute
allow_resize_to_same_host = True
# APIS
osapi_compute_extension = nova.api.openstack.compute.contrib.standard_extensions
ec2_dmz_host = ${YS_CON_MANAGE_IP}
s3_host = ${YS_CON_MANAGE_IP}
metadata_host = ${YS_CON_MANAGE_IP}
metadata_listen = 0.0.0.0
# RABBITMQ
rabbit_host = ${YS_CON_MANAGE_IP}
rabbit_password = guest
# GLANCE
image_service = nova.image.glance.GlanceImageService
glance_api_servers = ${YS_CON_MANAGE_IP}:9292
# NETWORK
network_api_class = nova.network.quantumv2.api.API
quantum_url = http://${YS_CON_MANAGE_IP}:9696
quantum_auth_strategy = keystone
quantum_admin_tenant_name = service
quantum_admin_username = quantum
quantum_admin_password = ${ADMIN_PASSWORD}
quantum_admin_auth_url = http://${YS_CON_MANAGE_IP}:35357/v2.0
service_quantum_metadata_proxy = True
libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
# NOVNC CONSOLE
novncproxy_base_url = http://${YS_CON_EXT_IP}:6080/vnc_auto.html
# Change vncserver_proxyclient_address and vncserver_listen to match each compute host
vncserver_proxyclient_address = ${YS_CON_EXT_IP}
vncserver_listen = 0.0.0.0
# AUTHENTICATION
auth_strategy = keystone
[keystone_authtoken]
auth_host = $YS_CON_MANAGE_IP
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = ${ADMIN_PASSWORD}
signing_dir = /tmp/keystone-signing-nova
_EOD_
Sync the database and start the Nova services:
nova-manage db sync
cd /etc/init.d/; for i in $( ls nova-* ); do sudo /etc/init.d/$i restart; done
Check that the Nova services all show a smiley (:-))
nova-manage service list
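nova-manage prints ":-)" for a live service and "XXX" for a dead one, so the check can be automated with a grep. A sketch over a hypothetical captured sample (the sample rows below are illustrative, not real output from this deployment):

```shell
# Count dead services in nova-manage output: ":-)" = alive, "XXX" = dead.
# The sample is a hypothetical stand-in for `nova-manage service list`.
sample='nova-scheduler  control   internal  enabled  :-)  2013-05-01
nova-conductor  control   internal  enabled  :-)  2013-05-01
nova-compute    compute1  nova      enabled  XXX  2013-04-30'
dead=$(echo "$sample" | grep -c 'XXX')
echo "$dead dead service(s)"
```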
Install the Dashboard (Horizon)
Install Horizon
apt-get install -y openstack-dashboard memcached --force-yes
If you dislike the Ubuntu theme, remove it to get the default interface:
dpkg --purge openstack-dashboard-ubuntu-theme
Restart apache2 and memcached
service apache2 restart; service memcached restart
Installation complete
You can now log in through a browser at http://${YS_CON_EXT_IP}/horizon with admin / ${ADMIN_PASSWORD}.
All Compute Nodes
Base environment variables
export YS_CON_MANAGE_IP=172.16.0.51
export YS_CON_EXT_IP=59.65.233.231
export YS_COM_MANAGE_IP=172.16.0.52
export YS_COM_MANAGE_NETMASK=255.255.0.0
export YS_COM_DATA_IP=10.10.10.52
export YS_COM_DATA_NETMASK=255.255.255.0
export YS_COM_EXT_IP=59.65.233.233
export YS_COM_EXT_NETMASK=255.255.255.0
export YS_COM_EXT_GATEWAY=59.65.233.254
export YS_COM_EXT_DNS=202.204.65.5
export YS_COM_SERVICE_ENDPOINT_IP=172.16.0.52
export YS_COM_MYSQL_USER=root
export YS_COM_MYSQL_PASS=123qwe
export ADMIN_PASSWORD=123qwe
export ADMIN_TOKEN=ceit
Network configuration
Configure the network interfaces
cat > /etc/network/interfaces << _EOF_
auto eth0
iface eth0 inet static
address $YS_COM_MANAGE_IP
netmask $YS_COM_MANAGE_NETMASK
auto eth1
iface eth1 inet static
address $YS_COM_DATA_IP
netmask $YS_COM_DATA_NETMASK
auto eth2
iface eth2 inet static
address $YS_COM_EXT_IP
netmask $YS_COM_EXT_NETMASK
gateway $YS_COM_EXT_GATEWAY
dns-nameservers $YS_COM_EXT_DNS
_EOF_
Restart the networking service
/etc/init.d/networking restart
Add package sources
Add the Ubuntu Grizzly repository, then upgrade
cat > /etc/apt/sources.list.d/grizzly.list << _EOF_
deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main
deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-proposed/grizzly main
_EOF_
apt-get update
apt-get -y upgrade --force-yes
apt-get install -y ubuntu-cloud-keyring
NTP time synchronization
Install NTP and configure the compute node to sync against the control node
apt-get install -y ntp
sed -i "s/server ntp.ubuntu.com/server ${YS_CON_MANAGE_IP}/g" /etc/ntp.conf
service ntp restart
Enable IP forwarding
Turn on IP forwarding
sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
sysctl net.ipv4.ip_forward=1
Install Open vSwitch
Install Open vSwitch; the packages must go in the following order
apt-get install -y openvswitch-datapath-source --force-yes
module-assistant auto-install openvswitch-datapath
apt-get install -y openvswitch-switch openvswitch-brcompat --force-yes
Enable ovs-brcompatd at startup:
sed -i 's/# BRCOMPAT=no/BRCOMPAT=yes/g' /etc/default/openvswitch-switch
echo 'brcompat' >> /etc/modules
Start openvswitch-switch:
/etc/init.d/openvswitch-switch restart
Restart it again until the ovs-brcompatd, ovs-vswitchd, and ovsdb-server services are all running:
/etc/init.d/openvswitch-switch restart
Repeat until the check shows:
lsmod | grep brcompat
brcompat 13512 0
openvswitch 84038 7 brcompat
If it still will not start, use:
/etc/init.d/openvswitch-switch force-reload-kmod
Create the bridges:
ovs-vsctl add-br br-int # br-int is the integration bridge for the VMs
ovs-vsctl add-br br-ex # br-ex provides access to the VMs from the internet
ovs-vsctl add-port br-ex eth2 # bridge br-ex onto eth2
After the above, any SSH session over eth2 will certainly drop; go to the machine's console and update the configuration:
ifconfig eth2 0
ifconfig br-ex ${YS_COM_EXT_IP}/24
route add default gw ${YS_COM_EXT_GATEWAY} dev br-ex
echo "nameserver ${YS_COM_EXT_DNS}" > /etc/resolv.conf
cat > /etc/network/interfaces << _EOF_
auto eth0
iface eth0 inet static
address $YS_COM_MANAGE_IP
netmask $YS_COM_MANAGE_NETMASK
auto eth1
iface eth1 inet static
address $YS_COM_DATA_IP
netmask $YS_COM_DATA_NETMASK
auto eth2
iface eth2 inet manual
up ifconfig \$IFACE 0.0.0.0 up
up ip link set \$IFACE promisc on
down ip link set \$IFACE promisc off
down ifconfig \$IFACE down
auto br-ex
iface br-ex inet static
address $YS_COM_EXT_IP
netmask $YS_COM_EXT_NETMASK
gateway $YS_COM_EXT_GATEWAY
dns-nameservers $YS_COM_EXT_DNS
_EOF_
Restarting the network may produce:
/etc/init.d/networking restart
RTNETLINK answers: File exists
Failed to bring up br-ex.
br-ex may get an IP address but no gateway or DNS; configure those by hand, or reboot the machine. After a reboot everything is normal.
Update: the eth2 NIC on the network node is not brought up after a reboot, so add it to rc.local:
echo 'ifconfig eth2 up' >> /etc/rc.local
View the bridged networks
ovs-vsctl list-br
ovs-vsctl show
Install the Networking components (Quantum)
Install the Quantum openvswitch agent, metadata agent, L3 agent, and DHCP agent
apt-get install -y quantum-plugin-openvswitch-agent quantum-dhcp-agent quantum-l3-agent quantum-metadata-agent --force-yes
Edit the /etc/quantum/quantum.conf file:
while read line;
do
pattern=`echo $line | awk '{printf "%s %s",$1,$2}'`
Source: http://www.openstack.cn/p203.html