Installing OpenStack Grizzly on Ubuntu 12.04 (single node, GRE mode)

Currently the best Chinese-language single-node OpenStack Grizzly installation guide, bar none.

Covers: Ubuntu 12.04 + Grizzly + single node + GRE mode (the article also includes a plain-language explanation of Quantum).

Original article: "OpenStack Grizzly-g3 Single-Node Installation on Ubuntu 12.04"

Original author: Geek

Author's blog: http://www.longgeek.com

Author's GitHub: https://github.com/longgeek


Original content:

Grizzly release date: 2013-04-04

The Grizzly version used in this article is 2013.01.g3.
This article installs Keystone, Glance, Quantum, Cinder, Nova and Horizon.
Quantum is deployed in GRE mode (click here for a detailed introduction to the Quantum modes). When this document was written there was no Grizzly installation material to be found online, so there may be mistakes; corrections are welcome.

Document updates:

2013-03-29: Tested the whole document end to end, found another Cinder bug and worked around it; Cinder now works normally. (This article was written before the Grizzly release; the bug has since been fixed upstream.)

2013-04-20: Updated the openvswitch installation so it works on both Ubuntu 12.04 and Ubuntu 12.04.2.

Contents

Network environment

A dedicated network node in GRE mode needs at least three NICs, but here all services are installed on a single node and there are no multiple Quantum agents, so two NICs are enough.

1. Management network: eth0 172.16.0.254/16, used for MySQL, AMQP and the APIs
2. External network: eth1 192.168.8.20/24, bridged to br-ex

NIC configuration

eth1 is used for Quantum's external network. Its IP address is not written into the config file for now; when OVS is configured later, a br-ex stanza will be added to the file.

# cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 172.16.0.254
        netmask 255.255.0.0

auto eth1
iface eth1 inet manual
# /etc/init.d/networking restart
# ifconfig eth1 192.168.8.20/24 up
# route add default gw 192.168.8.1 dev eth1
# echo 'nameserver 8.8.8.8' > /etc/resolv.conf
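
Before adding the repository, it is worth confirming that the box can actually reach the outside world through eth1:

# ping -c 3 8.8.8.8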

Add the Grizzly repository and update the packages

# cat > /etc/apt/sources.list.d/grizzly.list << _GEEK_
deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main
deb  http://ubuntu-cloud.archive.canonical.com/ubuntu precise-proposed/grizzly main
_GEEK_
# apt-get install ubuntu-cloud-keyring
# apt-get update
# apt-get upgrade

Install MySQL

# apt-get install python-mysqldb mysql-server

Use sed to edit /etc/mysql/my.cnf, changing the bind address from localhost (127.0.0.1) to 0.0.0.0.
Also disable MySQL hostname resolution, to avoid connection errors and slow remote connections.
Then restart the MySQL service.

# sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
# sed -i '44 i skip-name-resolve' /etc/mysql/my.cnf
# /etc/init.d/mysql restart
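
To confirm both changes took effect, check that mysqld is now listening on 0.0.0.0:3306 rather than 127.0.0.1:

# netstat -lntp | grep 3306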

Install RabbitMQ

Install the message queue server, RabbitMQ; alternatively you can install Apache Qpid.

# apt-get install rabbitmq-server

Install and configure Keystone

# apt-get install keystone

Remove Keystone's default SQLite DB file

# rm -f /var/lib/keystone/keystone.db
Create the keystone database

Create the keystone database in MySQL and grant the keystone user access to it:

# mysql -uroot -pmysql
mysql> create database keystone;
mysql> grant all on keystone.* to 'keystone'@'%' identified by 'keystone';
mysql> flush privileges; quit;
Edit /etc/keystone/keystone.conf

Change the following options in /etc/keystone/keystone.conf:

admin_token = www.longgeek.com
debug = True
verbose = True
[sql]
connection = mysql://keystone:[email protected]/keystone      # this line must be under [sql]
[signing]
token_format = UUID

Restart the keystone service:

 /etc/init.d/keystone restart

Sync the keystone tables into the database:

 keystone-manage db_sync
Populate the data with a script

Create the users, roles, tenants, services and endpoints:
Download the script:

# wget http://download.longgeek.com/openstack/grizzly/keystone.sh

Customize the variables in the script:

ADMIN_PASSWORD=${ADMIN_PASSWORD:-password}     # password for the admin tenant
SERVICE_PASSWORD=${SERVICE_PASSWORD:-password}              # password for nova, glance, cinder, quantum, swift
export SERVICE_TOKEN="www.longgeek.com"    # token
export SERVICE_ENDPOINT="http://172.16.0.254:35357/v2.0"
SERVICE_TENANT_NAME=${SERVICE_TENANT_NAME:-service}      # the service tenant, which holds the nova, glance, cinder, quantum, swift service users
KEYSTONE_REGION=RegionOne
KEYSTONE_IP="172.16.0.254"
#KEYSTONE_WLAN_IP="172.16.0.254"
SWIFT_IP="172.16.0.254"
#SWIFT_WLAN_IP="172.16.0.254"
COMPUTE_IP=$KEYSTONE_IP
EC2_IP=$KEYSTONE_IP
GLANCE_IP=$KEYSTONE_IP
VOLUME_IP=$KEYSTONE_IP
QUANTUM_IP=$KEYSTONE_IP

Run the script:

# sh keystone.sh
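
keystone.sh is essentially a series of keystone CLI calls driven by the variables above. A rough, abbreviated sketch of the kind of commands it runs (illustrative only; the real script creates a user, role and endpoint set for every service):

# keystone tenant-create --name admin --description "Admin Tenant"
# keystone user-create --name admin --pass $ADMIN_PASSWORD
# keystone role-create --name admin
# keystone user-role-add --user-id <admin-user-id> --role-id <admin-role-id> --tenant-id <admin-tenant-id>
# keystone service-create --name keystone --type identity --description "Keystone Identity Service"
# keystone endpoint-create --region $KEYSTONE_REGION --service-id <keystone-service-id> \
    --publicurl http://$KEYSTONE_IP:5000/v2.0 \
    --adminurl http://$KEYSTONE_IP:35357/v2.0 \
    --internalurl http://$KEYSTONE_IP:5000/v2.0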
Set the environment variables

These variables match the settings in keystone.sh:

# cat > /root/export.sh << _GEEK_
export OS_TENANT_NAME=admin      # if this is set to service, the other services will fail to authenticate
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://172.16.0.254:5000/v2.0/
export OS_REGION_NAME=RegionOne
export SERVICE_TOKEN=www.longgeek.com
export SERVICE_ENDPOINT=http://172.16.0.254:35357/v2.0/
_GEEK_
# echo 'source /root/export.sh' >> /root/.bashrc
# source /root/export.sh
Verify Keystone
keystone user-list
keystone role-list
keystone tenant-list
keystone endpoint-list
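
If the environment variables are correct, these commands list the users, roles, tenants and endpoints created by keystone.sh. Requesting a token is another quick sanity check:

# keystone token-get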
Troubleshooting Keystone

1. Check whether ports 5000 and 35357 are listening
2. Check /var/log/keystone/keystone.log for error messages
3. If the keystone.sh script fails, recover as follows (and check the variable settings in the script):

# mysql -uroot -pmysql
mysql> drop database keystone;
mysql> create database keystone; quit;
# keystone-manage db_sync
# sh keystone.sh

4. If step 6.5 fails, check the logs first, then verify that the environment variables from step 6.4 are set correctly

Install and configure Glance

Install glance
# apt-get install glance

Remove the glance SQLite file:

# rm -f /var/lib/glance/glance.sqlite
Create the glance database
# mysql -uroot -pmysql
mysql> create database glance;
mysql> grant all on glance.* to 'glance'@'%' identified by 'glance';
mysql> flush privileges;
Edit the glance configuration files
Edit /etc/glance/glance-api.conf

Change the options below and leave everything else at the defaults.

verbose = True
debug = True
sql_connection = mysql://glance:[email protected]/glance
workers = 4
registry_host = 172.16.0.254
notifier_strategy = rabbit
rabbit_host = 172.16.0.254
rabbit_userid = guest
rabbit_password = guest
[keystone_authtoken]
auth_host = 172.16.0.254
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = password
[paste_deploy]
config_file = /etc/glance/glance-api-paste.ini
flavor = keystone
Edit /etc/glance/glance-registry.conf

Change the options below and leave everything else at the defaults.

verbose = True
debug = True
sql_connection = mysql://glance:[email protected]/glance
[keystone_authtoken]
auth_host = 172.16.0.254
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = password
[paste_deploy]
config_file = /etc/glance/glance-registry-paste.ini
flavor = keystone

Start the glance services:

# /etc/init.d/glance-api restart
# /etc/init.d/glance-registry restart
Sync to the DB
# glance-manage version_control 0
# glance-manage db_sync
Check glance
# glance image-list
Upload an image

Download the CirrOS image for testing; it is only about 10 MB:

# wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
# glance image-create --name='cirros' --public --container-format=ovf --disk-format=qcow2 < ./cirros-0.3.0-x86_64-disk.img
Added new image with ID: f61ee640-82a7-4d6c-8816-608bb91dab7d

The CirrOS image can be logged into with a username and password as well as with an SSH key: user: cirros, password: cubswin:)

Troubleshooting Glance

1. Make sure the config files are correct and ports 9191 and 9292 are listening
2. Check the two log files under /var/log/glance/
3. Make sure OS_TENANT_NAME=admin is set in the environment, otherwise you will get a 401 error
4. Make sure the uploaded image's format matches the format specified on the command line

Install Open vSwitch

# apt-get install openvswitch-switch openvswitch-brcompat

Configure ovs-brcompatd to start:

# sed -i 's/# BRCOMPAT=no/BRCOMPAT=yes/g' /etc/default/openvswitch-switch

Start openvswitch-switch:

# /etc/init.d/openvswitch-switch restart
 * ovs-brcompatd is not running            # brcompatd did not start
 * ovs-vswitchd is not running
 * ovsdb-server is not running
 * Inserting openvswitch module
 * /etc/openvswitch/conf.db does not exist
 * Creating empty database /etc/openvswitch/conf.db
 * Starting ovsdb-server
 * Configuring Open vSwitch system IDs
 * Starting ovs-vswitchd
 * Enabling gre with iptables

Restart again until ovs-brcompatd, ovs-vswitchd and ovsdb-server are all running:

# /etc/init.d/openvswitch-switch restart
# lsmod | grep brcompat
brcompat               13512  0 
openvswitch            84038  7 brcompat

If they still will not start, use the following command:

/etc/init.d/openvswitch-switch force-reload-kmod
Add the bridges
Add the external network bridge br-ex

Use Open vSwitch to add the br-ex bridge and put the eth1 NIC into it:

# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex eth1

After the operations above, networking via eth1 stops working, so set the IP manually:

# ifconfig eth1 0
# ifconfig br-ex 192.168.8.20/24
# route add default gw 192.168.8.1 dev br-ex
# echo 'nameserver 8.8.8.8' > /etc/resolv.conf

Then write it into the NIC configuration file:

# cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 172.16.0.254
        netmask 255.255.0.0

auto eth1
iface eth1 inet manual
        up ifconfig $IFACE 0.0.0.0 up
        down ifconfig $IFACE down

auto br-ex
iface br-ex inet static
        address 192.168.8.20
        netmask 255.255.255.0
        gateway 192.168.8.1
        dns-nameservers 8.8.8.8

Restarting the network may produce:

RTNETLINK answers: File exists
Failed to bring up br-ex.

br-ex may come up with an IP address but no gateway or DNS; configure them by hand, or simply reboot the machine. After a reboot everything is normal.

Create the internal network bridge br-int
# ovs-vsctl add-br br-int
Check the bridges
# ovs-vsctl list-br
br-ex
br-int
# ovs-vsctl show
1a8d2081-4ba4-4cad-8020-ccac5772836a
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth1"
            Interface "eth1"
    ovs_version: "1.4.0+build0"

Install Quantum

Install the Quantum server and the client API:

apt-get install quantum-server python-cliff python-pyparsing python-quantumclient

Install the openvswitch plugin for OVS support:

apt-get install quantum-plugin-openvswitch
Create the Quantum DB
# mysql -uroot -pmysql
mysql> create database quantum;
mysql> grant all on quantum.* to 'quantum'@'%' identified by 'quantum';
mysql> flush privileges; quit;
Configure /etc/quantum/quantum.conf
# cat /etc/quantum/quantum.conf | grep -v ^$ | grep -v ^#
[DEFAULT]
debug = True
verbose = True
state_path = /var/lib/quantum
lock_path = $state_path/lock
bind_host = 0.0.0.0
bind_port = 9696
core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2
api_paste_config = /etc/quantum/api-paste.ini
control_exchange = quantum
rabbit_host = 172.16.0.254
rabbit_password = guest
rabbit_port = 5672
rabbit_userid = guest
notification_driver = quantum.openstack.common.notifier.rpc_notifier
default_notification_level = INFO
notification_topics = notifications
[QUOTAS]
[DEFAULT_SERVICETYPE]
[SECURITYGROUP]
[AGENT]
root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf
[keystone_authtoken]
auth_host = 172.16.0.254
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = password
signing_dir = /var/lib/quantum/keystone-signing
Configure the Open vSwitch plugin
# cat /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini | grep -v ^$ | grep -v ^#
[DATABASE]
sql_connection = mysql://quantum:[email protected]/quantum
reconnect_interval = 2
[OVS]
enable_tunneling = True
tenant_network_type = gre
tunnel_id_ranges = 1:1000
local_ip = 10.0.0.1
integration_bridge = br-int
tunnel_bridge = br-tun
[AGENT]
polling_interval = 2
[SECURITYGROUP]
Start the quantum server
# /etc/init.d/quantum-server restart
Install the OVS agent
# apt-get install quantum-plugin-openvswitch-agent

Before starting the OVS agent, make sure local_ip is set in ovs_quantum_plugin.ini and that the br-int bridge has been created.

# /etc/init.d/quantum-plugin-openvswitch-agent restart

After the OVS agent starts, it automatically creates a br-tun bridge based on the config file:

# ovs-vsctl list-br
br-ex
br-int
br-tun
# ovs-vsctl show
1a8d2081-4ba4-4cad-8020-ccac5772836a
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth1"
            Interface "eth1"
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "1.4.0+build0"
Install quantum-dhcp-agent
# apt-get install quantum-dhcp-agent

Configure quantum-dhcp-agent:

# cat /etc/quantum/dhcp_agent.ini | grep -v ^$ | grep -v ^#
[DEFAULT]
debug = True
verbose = True
use_namespaces = True
signing_dir = /var/cache/quantum
admin_tenant_name = service
admin_user = quantum
admin_password = password
auth_url = http://172.16.0.254:35357/v2.0
dhcp_agent_manager = quantum.agent.dhcp_agent.DhcpAgentWithStateReport
root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf
state_path = /var/lib/quantum
interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = quantum.agent.linux.dhcp.Dnsmasq

Start the service:

# /etc/init.d/quantum-dhcp-agent restart
Install the L3 agent
# apt-get install quantum-l3-agent

Configure the L3 agent:

# cat /etc/quantum/l3_agent.ini | grep -v ^$ | grep -v ^#
[DEFAULT]
debug = True
verbose = True
use_namespaces = True
external_network_bridge = br-ex
signing_dir = /var/cache/quantum
admin_tenant_name = service
admin_user = quantum
admin_password = password
auth_url = http://172.16.0.254:35357/v2.0
l3_agent_manager = quantum.agent.l3_agent.L3NATAgentWithStateReport
root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf
interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver

Start the L3 agent:

# /etc/init.d/quantum-l3-agent restart
Configure the metadata agent
# cat /etc/quantum/metadata_agent.ini | grep -v ^$ | grep -v ^#

[DEFAULT]
debug = True
auth_url = http://172.16.0.254:35357/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = quantum
admin_password = password
state_path = /var/lib/quantum
nova_metadata_ip = 172.16.0.254
nova_metadata_port = 8775

Start the metadata agent:

# /etc/init.d/quantum-metadata-agent restart
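
With the plugin agent, DHCP agent, L3 agent and metadata agent all running, Grizzly can show whether they have registered with the quantum server (assuming the agent management extension is enabled):

# quantum agent-list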
Troubleshooting Quantum

1. Make sure all config files are correct and port 9696 is listening
2. Check all the log files under /var/log/quantum/
3. Make sure br-ex and br-int were added beforehand
At the end of the document, the Quantum network is walked through using both CLI commands and the dashboard.

Install Cinder

Cinder has a bug in Grizzly; let's get it configured first:

# apt-get install cinder-api cinder-common cinder-scheduler cinder-volume python-cinderclient
Create the DB
# mysql -uroot -pmysql
mysql> create database cinder;
mysql> grant all on cinder.* to 'cinder'@'%' identified by 'cinder';
mysql> flush privileges; quit;
Create a logical volume group named cinder-volumes

Create an ordinary partition; here I use sdb and create one primary partition covering the whole disk

# fdisk /dev/sdb
n
p
1
Enter
Enter
t
8e
w
# partx -a /dev/sdb
# pvcreate /dev/sdb1
# vgcreate cinder-volumes /dev/sdb1
# vgs
  VG             #PV #LV #SN Attr   VSize   VFree
  cinder-volumes   1   0   0 wz--n- 150.00g 150.00g
  localhost        1   2   0 wz--n- 279.12g  12.00m
Edit the configuration files
Edit cinder.conf
# cat /etc/cinder/cinder.conf
[DEFAULT]
# LOG/STATE
verbose = True
debug = True
iscsi_helper = tgtadm
auth_strategy = keystone
volume_group = cinder-volumes
volume_name_template = volume-%s
state_path = /var/lib/cinder
volumes_dir = /var/lib/cinder/volumes
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
# RPC
rabbit_host = 172.16.0.254
rabbit_password = guest
rpc_backend = cinder.openstack.common.rpc.impl_kombu
# DATABASE
sql_connection = mysql://cinder:[email protected]/cinder
# API
osapi_volume_extension = cinder.api.contrib.standard_extensions
Edit api-paste.ini

Edit the [filter:authtoken] section at the end of the file:

paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
service_protocol = http
service_host = 172.16.0.254
service_port = 5000
auth_host = 172.16.0.254
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = password
signing_dir = /var/lib/cinder
Sync the DB and start the services

Sync to the DB:

# cinder-manage db sync
2013-03-11 13:41:57.885 30326 DEBUG cinder.utils [-] backend <module 'cinder.db.sqlalchemy.migration' from '/usr/lib/python2.7/dist-packages/cinder/db/sqlalchemy/migration.pyc'> __get_backend /usr/lib/python2.7/dist-packages/cinder/utils.py:561

Start the services:

# for serv in api scheduler volume
do
    /etc/init.d/cinder-$serv restart
done
# /etc/init.d/tgt restart
Check
# cinder list
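
As a quick functional test you can create a small volume and watch it appear as a logical volume in the cinder-volumes group (the name test-vol is arbitrary):

# cinder create --display-name test-vol 1
# cinder list
# lvs cinder-volumes          # the new LV shows up as volume-<uuid>
# cinder delete test-vol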
Troubleshooting Cinder

1. The services are running and port 8776 is listening
2. Check the log files in /var/log/cinder
3. The volume group named by volume_group = cinder-volumes in the config file exists
4. The tgt service is working normally.

Install the Nova controller

Install the compute service at the same time; in Grizzly nova-compute depends on nova-conductor (see the link here)

# apt-get install nova-api nova-novncproxy novnc nova-ajax-console-proxy nova-cert nova-consoleauth nova-doc nova-scheduler
# apt-get install nova-compute nova-conductor
Create the database
# mysql -uroot -pmysql
mysql> create database nova;
mysql> grant all on nova.* to 'nova'@'%' identified by 'nova';
mysql> flush privileges; quit;
Configuration
Configure nova.conf
# cat /etc/nova/nova.conf
[DEFAULT]
# LOGS/STATE
debug = True
verbose = True
logdir = /var/log/nova
state_path = /var/lib/nova
lock_path = /var/lock/nova
rootwrap_config = /etc/nova/rootwrap.conf
dhcpbridge = /usr/bin/nova-dhcpbridge
# SCHEDULER
compute_scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
## VOLUMES
volume_api_class = nova.volume.cinder.API
# DATABASE
sql_connection = mysql://nova:[email protected]/nova
# COMPUTE
libvirt_type = kvm
compute_driver = libvirt.LibvirtDriver
instance_name_template = instance-%08x
api_paste_config = /etc/nova/api-paste.ini
# COMPUTE/APIS: if you have separate configs for separate services
# this flag is required for both nova-api and nova-compute
allow_resize_to_same_host = True
# APIS
osapi_compute_extension = nova.api.openstack.compute.contrib.standard_extensions
ec2_dmz_host = 172.16.0.254
s3_host = 172.16.0.254
# RABBITMQ
rabbit_host = 172.16.0.254
rabbit_password = guest
# GLANCE
image_service = nova.image.glance.GlanceImageService
glance_api_servers = 172.16.0.254:9292
# NETWORK
network_api_class = nova.network.quantumv2.api.API
quantum_url = http://172.16.0.254:9696
quantum_auth_strategy = keystone
quantum_admin_tenant_name = service
quantum_admin_username = quantum
quantum_admin_password = password
quantum_admin_auth_url = http://172.16.0.254:35357/v2.0
libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
# NOVNC CONSOLE
novncproxy_base_url = http://192.168.8.20:6080/vnc_auto.html
# Change vncserver_proxyclient_address and vncserver_listen to match each compute host
vncserver_proxyclient_address = 172.16.0.254
vncserver_listen = 0.0.0.0
# AUTHENTICATION
auth_strategy = keystone
[keystone_authtoken]
auth_host = 172.16.0.254
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = password
signing_dir = /tmp/keystone-signing-nova
Configure api-paste.ini

Edit [filter:authtoken]:

# vim /etc/nova/api-paste.ini
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 172.16.0.254
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = password
signing_dir = /tmp/keystone-signing-nova
Start the services
# for serv in api cert scheduler consoleauth novncproxy conductor compute;
do
    /etc/init.d/nova-$serv restart
done
Sync the database and restart the services (!for re-runs the previous for loop)
# nova-manage db sync
# !for
Check the services

A smiley face means the corresponding service is healthy; if the state shows XX, check that service's log under /var/log/nova/:

# nova-manage service list 2> /dev/null
Binary           Host                                 Zone             Status     State Updated_At
nova-cert        localhost                            internal         enabled    :-)    2013-03-11 02:56:21
nova-scheduler   localhost                            internal         enabled    :-)    2013-03-11 02:56:22
nova-consoleauth localhost                            internal         enabled    :-)    2013-03-11 02:56:22
nova-conductor   localhost                            internal         enabled    :-)    2013-03-11 02:56:22
nova-compute     localhost                            nova             enabled    :-)    2013-03-11 02:56:23
Security groups

Add ICMP (ping) responses and the SSH port to the default security group:

# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
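
List the rules to confirm they were added:

# nova secgroup-list-rules default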
Troubleshooting Nova

1. Do the parameters in the config files match the actual environment?
2. Check the corresponding service's log in /var/log/nova/
3. Check the environment variables, the database connection, and that the ports are up
4. Does the hardware support virtualization? (see the quick check below)
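
For point 4, a quick check (kvm-ok comes from the cpu-checker package, which may need to be installed first):

# egrep -c '(vmx|svm)' /proc/cpuinfo     # non-zero means the CPU has VT-x/AMD-V
# kvm-ok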

Install Horizon

Install the OpenStack Dashboard, Apache and the WSGI module:

# apt-get install -y memcached libapache2-mod-wsgi openstack-dashboard

Configure the Dashboard and change the Memcached listen address.
Remove the Ubuntu theme:

# mv /etc/openstack-dashboard/ubuntu_theme.py /etc/openstack-dashboard/ubuntu_theme.py.bak
# vim /etc/openstack-dashboard/local_settings.py
DEBUG = True
CACHE_BACKEND = 'memcached://172.16.0.254:11211/'
OPENSTACK_HOST = "172.16.0.254"
# sed -i 's/127.0.0.1/172.16.0.254/g' /etc/memcached.conf

Restart Memcached and Apache:

# /etc/init.d/memcached restart
# /etc/init.d/apache2 restart

Open in a browser:

 http://172.16.0.254/horizon
User:     admin
Password: password
Troubleshooting Horizon

1. If you cannot log in, check /var/log/apache2/error.log and /var/log/keystone/keystone.log.
A 401 error usually comes from a configuration problem: the keystone
credentials in the quantum, cinder or nova config files are wrong.
2. If login fails with [Errno 111] Connection refused, cinder-api or nova-api is usually not running.

Configure the External network

Introduction

External is the outside network, the source of floating IPs. External traffic goes through br-ex, i.e. the physical eth1 NIC. We only need to create one External network, and all tenants share it to reach the outside world.
Once the administrator has created the External network, the rest is left to each tenant, who creates its own networks.
Quantum terminology:
Network: either External or Internal; think of it as a switch.
Subnet: which address range the network uses, and what its gateway and DNS are.
Router: a router, used to isolate the Internal networks that different tenants create.
Interface: the WAN and LAN ports on the router.
Port: a port on the switch; from it you can tell who is using it and what IP address it has.

Configuring a Quantum network is just like plugging in cables and wiring up routers yourself. For example, suppose a company reaches the Internet through a single ADSL uplink, so the whole company shares one internal LAN (the External network), but the company is made up of several departments (multiple tenants). Department A (a tenant) runs a lot of tests, and its IP addresses or DHCP server would clash with the other departments (other tenants), so you bring in a router (Router-1) to isolate department A's network from the rest. Department A's network cannot sit on the same subnet as Router-1's WAN port, because a router's WAN and LAN ports must be on different subnets, so department A defines its own private subnet on the router's LAN side (the tenant creates its own Network, Subnet and Router, adds an interface to the Router, sets the WAN side to an External IP and the LAN side to an address from its Subnet). Department A can then reach the outside world normally (its Ports go through Router-1's interface out to External). Likewise, when several departments all need isolated networks, you add more routers (Router-2, 3, 4, 5…) to isolate them.

Create an External network

Note the --router:external=True parameter; it marks this as an External network

EXTERNAL_NET_ID=$(quantum net-create external_net1 --router:external=True | awk '/ id / {print $4}')
Create a subnet

My quantum client is version 2.0 while the source has already moved on to 2.2, so the command arguments may change slightly in the future. My quantum command cannot set DNS or host routes directly. The 192.168.8.0/24 below is my external network's range; note that the gateway must fall inside the network you specify. For example, if you specify a CIDR of 192.168.8.32/24 with a gateway of 192.168.8.1, then 8.1 is not inside that CIDR.
Create the subnet that floating IP addresses will come from, with its DHCP service disabled:

SUBNET_ID=$(quantum subnet-create external_net1 192.168.8.0/24 --name=external_subnet1 --gateway_ip 192.168.8.1 --enable_dhcp=False | awk '/ id / {print $4}')

Create an Internal network

This one is created for the demo tenant, so we need demo's id:

# DEMO_ID=$(keystone tenant-list | awk '/ demo / {print $2}')
Create an Internal network for the demo tenant

To the demo tenant: I have planned and built a network for your department

# INTERNAL_NET_ID=$(quantum net-create demo_net1 --tenant_id $DEMO_ID | awk '/ id / {print $4}')
Create a subnet for the demo tenant

To the demo tenant: I have defined the 10.1.1.0/24 range for you, the gateway is 10.1.1.1, and DHCP is enabled by default

# DEMO_SUBNET_ID=$(quantum subnet-create demo_net1 10.1.1.0/24 --name=demo_subnet1 --gateway_ip 10.1.1.1 --tenant_id $DEMO_ID| awk '/ id / {print $4}')
Create a router for the demo tenant

And here is a router brought in for the demo tenant:

# DEMO_ROUTER_ID=$(quantum router-create --tenant_id $DEMO_ID demo_router1 | awk '/ id / {print $4}')
Attach the router to the subnet

Apply what we just told demo to the router we just brought in: its LAN port address is 10.1.1.1 on the 10.1.1.0/24 subnet:

# quantum router-interface-add  $DEMO_ROUTER_ID $DEMO_SUBNET_ID
Give the router an External IP

Now plug the cable to the outside world into the router's WAN port, and give that port an IP address taken from the External network:

# quantum router-gateway-set $DEMO_ROUTER_ID $EXTERNAL_NET_ID
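
To confirm the gateway was actually set, inspect the router; its external_gateway_info field should reference external_net1's network id:

# quantum router-show $DEMO_ROUTER_ID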

Launch a VM for the demo tenant

Create a port for the VM we are about to boot, specifying which subnet and network it uses and pinning a fixed IP address:

# quantum net-list
+--------------------------------------+---------------+--------------------------------------+
| id                                   | name          | subnets                              |
+--------------------------------------+---------------+--------------------------------------+
| 18ed98d5-9125-4b71-8a37-2c9e3b07b99d | demo_net1     | 75896360-61bb-406e-8c7d-ab53f0cd5b1b |
| 1d05130a-2b1c-4500-aa97-0857fcb3fa2b | external_net1 | 07ba5095-5fa0-4768-9bee-7d44d2a493cf |
+--------------------------------------+---------------+--------------------------------------+
# DEMO_PORT_ID=$(quantum port-create --tenant-id=$DEMO_ID --fixed-ip subnet_id=$DEMO_SUBNET_ID,ip_address=10.1.1.11 demo_net1 | awk '/ id / {print $4}')

Boot the VM as the demo tenant:

# glance image-list
+--------------------------------------+--------+-------------+------------------+---------+--------+
| ID                                   | Name   | Disk Format | Container Format | Size    | Status |
+--------------------------------------+--------+-------------+------------------+---------+--------+
| f61ee640-82a7-4d6c-8816-608bb91dab7d | cirros | qcow2       | ovf              | 9761280 | active |
+--------------------------------------+--------+-------------+------------------+---------+--------+
# nova --os-tenant-name demo boot --image cirros --flavor 2 --nic port-id=$DEMO_PORT_ID instance01

Add a floating IP to the demo tenant's VM

Once the VM is up you will find you cannot ping 10.1.1.11; of course you cannot, there is a router isolating it, but the VM itself can reach the outside. (Because of the quantum client version there is no DNS option, so the VM's DNS is wrong; fix the VM's resolv.conf yourself.) If you want to SSH into the VM, add a floating IP:
Get the id of the demo tenant's VM

# nova --os_tenant_name=demo list
+--------------------------------------+------------+--------+---------------------+
| ID                                   | Name       | Status | Networks            |
+--------------------------------------+------------+--------+---------------------+
| b0b7f0a1-c387-4853-a076-4b7ba2d32ed1 | instance01 | ACTIVE | demo_net1=10.1.1.11 |
+--------------------------------------+------------+--------+---------------------+

Get the VM's port id

# quantum port-list -- --device_id b0b7f0a1-c387-4853-a076-4b7ba2d32ed1
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                        |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
| 95602209-8088-4327-a77b-1a23b51237c2 |      | fa:16:3e:9d:41:df | {"subnet_id": "75896360-61bb-406e-8c7d-ab53f0cd5b1b", "ip_address": "10.1.1.11"} |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+

Create a floating IP
Note down the ids:

# quantum  --os_tenant_name=demo floatingip-create external_net1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.8.3                          |
| floating_network_id | 1d05130a-2b1c-4500-aa97-0857fcb3fa2b |
| id                  | f3670816-4d76-44e0-8831-5fe601f0cbe0 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | 83792f9193e1449bb90f78400974d533     |
+---------------------+--------------------------------------+

Associate the floating IP with the VM

# quantum --os_tenant_name=demo floatingip-associate f3670816-4d76-44e0-8831-5fe601f0cbe0 95602209-8088-4327-a77b-1a23b51237c2
Associated floatingip f3670816-4d76-44e0-8831-5fe601f0cbe0

View the floating IP we just associated

# quantum floatingip-show f3670816-4d76-44e0-8831-5fe601f0cbe0
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    | 10.1.1.11                            |
| floating_ip_address | 192.168.8.3                          |
| floating_network_id | 1d05130a-2b1c-4500-aa97-0857fcb3fa2b |
| id                  | f3670816-4d76-44e0-8831-5fe601f0cbe0 |
| port_id             | 95602209-8088-4327-a77b-1a23b51237c2 |
| router_id           | bf89066b-973d-416a-959a-1c2f9965e6d5 |
| tenant_id           | 83792f9193e1449bb90f78400974d533     |
+---------------------+--------------------------------------+
# ping 192.168.8.3
PING 192.168.8.3 (192.168.8.3) 56(84) bytes of data.
64 bytes from 192.168.8.3: icmp_req=1 ttl=63 time=32.0 ms
64 bytes from 192.168.8.3: icmp_req=2 ttl=63 time=0.340 ms
64 bytes from 192.168.8.3: icmp_req=3 ttl=63 time=0.335 ms
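
If SSH is allowed through (note that the secgroup-add-rule commands earlier were run as the admin tenant; the demo tenant's default group may need the same rules added with --os-tenant-name demo), you can log in to the instance over the floating IP with the CirrOS credentials given in the Glance section:

# nova --os-tenant-name demo secgroup-add-rule default tcp 22 22 0.0.0.0/0    # only if needed
# ssh [email protected]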

How does a tenant create networks in the dashboard?

Chrome works best here; in Firefox some buttons cannot be clicked.
Create a test tenant; I do it from the command line:

# TEST_TENANT_ID=$(keystone tenant-create --name test | awk '/ id / {print $4}')
# keystone user-create --name test --pass test --tenant-id $TEST_TENANT_ID
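
Depending on what roles keystone.sh created, the new user may also need a role on its tenant before it can log in to Horizon. If logging in as test fails with an authorization error, grant one explicitly (Member is an assumption here; check keystone role-list for the role names that actually exist):

# keystone user-role-add --user-id $(keystone user-list | awk '/ test / {print $2}') \
    --role-id $(keystone role-list | awk '/ Member / {print $2}') \
    --tenant-id $TEST_TENANT_ID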

Log in to the dashboard as the test tenant and create its own network:

Click Network Topology; you can see the External network created in section 13:
(screenshot: grizzly_test)


The dashboard operations that follow correspond to the steps in section 14.
1. Select the Networks button, then click Create Network and enter a network name:

(screenshot: grizzly_network)


Select Subnet and enter a name, the network address and the gateway:

(screenshot: grizzly_subnet)


Select Subnet Detail and enter the DHCP range and DNS addresses; you can also add a static route to reach other networks:

(screenshot: grizzly_dns)


Now the newly created network shows up in Network Topology:

(screenshot: grizzly_net_done)


2. Select Routers, click Create Router and enter a name:

(screenshot: grizzly_router)


Open the router by clicking the name of the newly created test_router1, go to the Interface page, click Add Interface (the LAN port), and select the test_subnet network created a moment ago:
(screenshot: grizzly_interface_add)


Now look at the topology again:
(screenshot: interface_add_topology)


Back on the Interface page, give the router's WAN port an IP taken from the External network by selecting Add Gateway Interface:
(screenshot: grizzly_interface_gateway)


And the resulting diagram:
(screenshot: interface_gateway_add)


The network topology after the test tenant launches a VM:
(screenshot: instance_topology)


Log in as the admin user to view the network topology; you can see the External network plus the demo and test tenants' networks:
(screenshot: admin_topology)


Quantum networking is really not complicated at all; once you map it onto everyday life it is easy to understand.

References

http://www.longgeek.com/2012/07/30/rhel-6-2-openstack-essex-install-only-one-node/
http://www.chenshake.com/openstack-folsom-guide-for-ubuntu-12-04/#i-21
http://liangbo.me/index.php/2012/10/07/openstack-folsom-quantum-openvswitch/
http://www.ibm.com/developerworks/cn/cloud/library/1209_zhanghua_openstacknetwork/
http://docs.openstack.org/folsom/openstack-network/admin/content/index.html
http://docs.openstack.org/trunk/openstack-network/admin/content/index.html
http://docs.openstack.org/trunk/openstack-compute/install/apt/content/index.html

