OpenStack Grizzly-g3 All-in-One Installation on Ubuntu 12.04

Network Environment
A dedicated network node in GRE mode needs at least three NICs. Here, however, all services are installed on a single node and there is no multi-agent Quantum setup, so two NICs are enough (the third interface listed below is only used for downloading packages).
1. Management network: eth0 192.16.0.254/16, used for MySQL, AMQP and the APIs
2. External network: eth1 192.168.137.154/24, bridged as br-ex
3. Network for downloading packages: eth2 192.168.137.155/24




NIC Configuration
eth1 is used for Quantum's external network. Its IP address is not written into the config file for now; a br-ex entry will be added to the file later when configuring OVS.
Edit the interfaces file as follows:
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 192.16.0.254
netmask 255.255.0.0
# If the network assigns IP addresses automatically, configure eth0 with DHCP instead:
#auto eth0
#iface eth0 inet dhcp
auto eth1
iface eth1 inet manual
Then restart networking:
# /etc/init.d/networking restart
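To confirm that the interfaces came back up as expected, a quick optional check:
# ip addr show eth0
# ip addr show eth1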



Add the Grizzly Repository and Update Packages
# cat > /etc/apt/sources.list.d/grizzly.list << _GEEK_
deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main
deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-proposed/grizzly main
_GEEK_
# apt-get install ubuntu-cloud-keyring python-software-properties software-properties-common python-keyring
# apt-get update
# apt-get upgrade
# apt-get dist-upgrade


Install MySQL
# apt-get install python-mysqldb mysql-server
Use sed to edit /etc/mysql/my.cnf, changing the bind address from localhost (127.0.0.1) to 0.0.0.0,
and disable MySQL hostname resolution to prevent connection errors and slow remote connections.
Then restart the MySQL service.
# sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
# sed -i '44 i skip-name-resolve' /etc/mysql/my.cnf
# /etc/init.d/mysql restart
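To verify the new bind address took effect, one quick check (a sketch; assumes MySQL's default port 3306):
# netstat -lntp | grep mysqld    # should show mysqld listening on 0.0.0.0:3306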






Install RabbitMQ
# apt-get install rabbitmq-server
Install and Configure Keystone
# apt-get install keystone
Remove Keystone's default SQLite database file:
# rm -f /var/lib/keystone/keystone.db
Create the Keystone database
Create the keystone database in MySQL and grant the keystone user access:
# mysql -uroot -pmysql
mysql> create database keystone;
mysql> grant all on keystone.* to 'keystone'@'%' identified by 'keystone';
mysql> flush privileges; quit;
Edit /etc/keystone/keystone.conf
# vi /etc/keystone/keystone.conf
admin_token = ADMIN
debug = True
verbose = True
[sql]
connection = mysql://keystone:[email protected]/keystone # this line must be under [sql]
[signing]
token_format = UUID
Start the Keystone service:
# /etc/init.d/keystone restart
Sync the Keystone tables into the database:
# keystone-manage db_sync
Import data with a script
Create the users, roles, tenants, services, and endpoints:
Download the script:
# wget http://download.longgeek.com/openstack/grizzly/keystone.sh
Edit the script:
ADMIN_PASSWORD=${ADMIN_PASSWORD:-password} # password for the admin tenant
SERVICE_PASSWORD=${SERVICE_PASSWORD:-password} # password for nova, glance, cinder, quantum, swift
export SERVICE_TOKEN="ADMIN" # token
export SERVICE_ENDPOINT="http://192.16.0.254:35357/v2.0"
SERVICE_TENANT_NAME=${SERVICE_TENANT_NAME:-service} # the service tenant, which contains the nova, glance, cinder, quantum, swift services
KEYSTONE_REGION=RegionOne
KEYSTONE_IP="192.16.0.254"
#KEYSTONE_WLAN_IP="192.16.0.254"
SWIFT_IP="192.16.0.254"
#SWIFT_WLAN_IP="192.16.0.254"
COMPUTE_IP=$KEYSTONE_IP
EC2_IP=$KEYSTONE_IP
GLANCE_IP=$KEYSTONE_IP
VOLUME_IP=$KEYSTONE_IP
QUANTUM_IP=$KEYSTONE_IP
Run the script:
# sh keystone.sh
Set environment variables
These variables correspond to the settings in keystone.sh:
# cat > /root/export.sh << _GEEK_
export OS_TENANT_NAME=admin # if this is set to service, the other services will fail to authenticate
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://192.16.0.254:5000/v2.0/
export OS_REGION_NAME=RegionOne
export SERVICE_TOKEN=ADMIN
export SERVICE_ENDPOINT=http://192.16.0.254:35357/v2.0/
_GEEK_
# echo 'source /root/export.sh' >> /root/.bashrc
# source /root/export.sh
Verify Keystone
# keystone user-list
# keystone role-list
# keystone tenant-list
# keystone endpoint-list

Troubleshooting Keystone
1. Check whether ports 5000 and 35357 are listening (see the check sketch after this list).
2. Check /var/log/keystone/keystone.log for error messages.
3. If keystone.sh fails, check the variable settings in the script, then rebuild the database and rerun it:
# mysql -uroot -pmysql
mysql> drop database keystone;
mysql> create database keystone; quit;
# keystone-manage db_sync
# sh keystone.sh
4. If verifying Keystone fails, check the log first, then confirm the environment variables are set correctly.
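A minimal sketch for item 1, checking that both Keystone ports are listening and that the service responds:
# netstat -lntp | grep -E ':5000|:35357'   # both should be in LISTEN state
# curl http://192.16.0.254:5000            # should return a JSON document describing the API versions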




Install and Configure Glance
Install Glance
# apt-get install glance
Remove the Glance SQLite file:
# rm -f /var/lib/glance/glance.sqlite
Create the Glance database
# mysql -uroot -pmysql
mysql> create database glance;
mysql> grant all on glance.* to 'glance'@'%' identified by 'glance';
mysql> flush privileges;
mysql> quit;
Edit the Glance config files
# vi /etc/glance/glance-api.conf
Change the following options; leave the rest at their defaults.
verbose = True
debug = True
sql_connection = mysql://glance:[email protected]/glance
workers = 4
registry_host = 192.16.0.254
notifier_strategy = rabbit
rabbit_host = 192.16.0.254
rabbit_userid = guest
rabbit_password = guest
[keystone_authtoken]
auth_host = 192.16.0.254
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = password
[paste_deploy]
config_file = /etc/glance/glance-api-paste.ini
flavor = keystone

# vi /etc/glance/glance-registry.conf
Change the following options; leave the rest at their defaults.
verbose = True
debug = True
sql_connection = mysql://glance:[email protected]/glance
[keystone_authtoken]
auth_host = 192.16.0.254
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = password
[paste_deploy]
config_file = /etc/glance/glance-registry-paste.ini
flavor = keystone

Start the Glance services:
# /etc/init.d/glance-api restart
# /etc/init.d/glance-registry restart
Sync to the database:
# glance-manage version_control 0
# glance-manage db_sync
Check Glance; the result should be empty, with nothing displayed:
# glance image-list

Upload an image
Download the CirrOS image for testing; it is only about 10 MB:
# wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
# glance image-create --name='cirros' --public --container-format=ovf --disk-format=qcow2 < ./cirros-0.3.0-x86_64-disk.img
Added new image with ID: xxxxxxxxxxx
The CirrOS image can be logged into with a username and password, or with an SSH key. user: cirros, password: cubswin:)
Troubleshooting Glance
1. Make sure the config files are correct and that ports 9191 and 9292 are listening (see the check sketch after this list).
2. Check the two log files under /var/log/glance/.
3. Make sure OS_TENANT_NAME=admin is set in the environment; otherwise you will get a 401 error.
4. Make sure the uploaded image's format matches the formats specified in the command.
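For item 1, a quick way to confirm both Glance services are listening:
# netstat -lntp | grep -E ':9191|:9292'   # glance-registry on 9191, glance-api on 9292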




Install Open vSwitch
# apt-get install openvswitch-datapath-source
# module-assistant auto-install openvswitch-datapath
# apt-get install openvswitch-switch openvswitch-brcompat
Enable ovs-brcompatd at startup:
# sed -i 's/# BRCOMPAT=no/BRCOMPAT=yes/g' /etc/default/openvswitch-switch
Start openvswitch-switch:
# /etc/init.d/openvswitch-switch restart
* ovs-brcompatd is not running # brcompatd did not start
* ovs-vswitchd is not running
* ovsdb-server is not running
* Inserting openvswitch module
* /etc/openvswitch/conf.db does not exist
* Creating empty database /etc/openvswitch/conf.db
* Starting ovsdb-server
* Configuring Open vSwitch system IDs
* Starting ovs-vswitchd
* Enabling gre with iptables
Restart again until the ovs-brcompatd, ovs-vswitchd and ovsdb-server services are all running:
# /etc/init.d/openvswitch-switch restart
# lsmod | grep brcompat
brcompat 13512 0 
openvswitch 84038 7 brcompat
If it still will not start, use the following command:
# /etc/init.d/openvswitch-switch force-reload-kmod
Add Bridges
Add the External network bridge br-ex
Use Open vSwitch to add the bridge br-ex and put the eth1 NIC into it:
# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex eth1


After the above operations the eth1 NIC no longer carries an address, so set the IP manually:
# ifconfig eth1 0
# ifconfig br-ex 192.168.137.154/24
# route add default gw 192.168.137.2 dev br-ex
# echo 'nameserver 8.8.8.8' > /etc/resolv.conf
Then write it into the interfaces file:
# vi /etc/network/interfaces
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 192.16.0.254
netmask 255.255.0.0
auto eth1
iface eth1 inet manual
up ifconfig $IFACE 0.0.0.0 up
down ifconfig $IFACE down
auto br-ex
iface br-ex inet static
address 192.168.137.154
netmask 255.255.255.0
gateway 192.168.137.2
dns-nameservers 8.8.8.8

# /etc/init.d/networking restart
Restarting the network may produce:
Failed to bring up br-ex.
br-ex may get an IP address but no gateway or DNS; configure them manually or reboot the machine. After a reboot everything works normally.
Add the integration bridge br-int:
# ovs-vsctl add-br br-int
Check the network
# ovs-vsctl list-br
br-ex
br-int
# ovs-vsctl show
1a8d2081-4ba4-4cad-8020-ccac5772836a
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth1"
            Interface "eth1"
    ovs_version: "1.4.0+build0"

Install Quantum
# apt-get install quantum-server python-cliff python-pyparsing python-quantumclient
Install the Open vSwitch plugin to support OVS:
# apt-get install quantum-plugin-openvswitch


Create the Quantum DB
# mysql -uroot -pmysql
mysql> create database quantum;
mysql> grant all on quantum.* to 'quantum'@'%' identified by 'quantum';
mysql> flush privileges; quit;
Configure /etc/quantum/quantum.conf
# vi /etc/quantum/quantum.conf
Change the following options and keep the rest; if an option below does not exist in the original config, add it:
[DEFAULT]
debug = True
verbose = True
state_path = /var/lib/quantum
lock_path = $state_path/lock
bind_host = 0.0.0.0
bind_port = 9696
core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2
api_paste_config = /etc/quantum/api-paste.ini
control_exchange = quantum
rabbit_host = 192.16.0.254
rabbit_password = guest
rabbit_port = 5672
rabbit_userid = guest
notification_driver = quantum.openstack.common.notifier.rpc_notifier
default_notification_level = INFO
notification_topics = notifications
[QUOTAS]
[DEFAULT_SERVICETYPE]
[SECURITYGROUP]
[AGENT]
root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf
[keystone_authtoken]
auth_host = 192.16.0.254
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = password
signing_dir = /var/lib/quantum/keystone-signing
Configure the Open vSwitch Plugin
# vi /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
[DATABASE]
sql_connection = mysql://quantum:[email protected]/quantum
reconnect_interval = 2
[OVS]
enable_tunneling = True
tenant_network_type = gre
tunnel_id_ranges = 1:1000
local_ip = 10.0.0.1
integration_bridge = br-int
tunnel_bridge = br-tun
[AGENT]
polling_interval = 2
[SECURITYGROUP]
Start the Quantum server
# /etc/init.d/quantum-server restart
Install the OVS agent
# apt-get install quantum-plugin-openvswitch-agent
Before starting the OVS agent, make sure local_ip is set in ovs_quantum_plugin.ini and that the br-int bridge has been created.
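A quick pre-flight check, as a sketch (the path matches the plugin config edited above):
# grep local_ip /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
# ovs-vsctl br-exists br-int && echo "br-int exists"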
# /etc/init.d/quantum-plugin-openvswitch-agent restart
After the OVS agent starts, it automatically creates a br-tun bridge based on the config file:
# ovs-vsctl list-br
br-ex
br-int
br-tun
# ovs-vsctl show
1a8d2081-4ba4-4cad-8020-ccac5772836a
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth1"
            Interface "eth1"
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "1.4.0+build0"


Install quantum-dhcp-agent
# apt-get install quantum-dhcp-agent
Configure quantum-dhcp-agent:
# vi /etc/quantum/dhcp_agent.ini
Change the following options and keep the rest; if an option below does not exist in the original config, add it:
[DEFAULT]
debug = True
verbose = True
use_namespaces = True
signing_dir = /var/cache/quantum
admin_tenant_name = service
admin_user = quantum
admin_password = password
auth_url = http://192.16.0.254:35357/v2.0
dhcp_agent_manager = quantum.agent.dhcp_agent.DhcpAgentWithStateReport
root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf
state_path = /var/lib/quantum
interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = quantum.agent.linux.dhcp.Dnsmasq
Start the service:
# /etc/init.d/quantum-dhcp-agent restart
Install the L3 Agent
# apt-get install quantum-l3-agent
Configure the L3 Agent:
# vi /etc/quantum/l3_agent.ini 
Change the following options and keep the rest; if an option below does not exist in the original config, add it:
[DEFAULT]
debug = True
verbose = True
use_namespaces = True
external_network_bridge = br-ex
signing_dir = /var/cache/quantum
admin_tenant_name = service
admin_user = quantum
admin_password = password
auth_url = http://192.16.0.254:35357/v2.0
l3_agent_manager = quantum.agent.l3_agent.L3NATAgentWithStateReport
root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf
interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
Start the L3 agent:
# /etc/init.d/quantum-l3-agent restart
Configure the Metadata agent
# vi /etc/quantum/metadata_agent.ini
Change the following options and keep the rest; if an option below does not exist in the original config, add it:
[DEFAULT]
debug = True
auth_url = http://192.16.0.254:35357/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = quantum
admin_password = password
state_path = /var/lib/quantum
nova_metadata_ip = 192.16.0.254
nova_metadata_port = 8775
Start the Metadata agent:
# /etc/init.d/quantum-metadata-agent restart
Troubleshooting Quantum
1. All config files are correct and port 9696 is listening (see the check sketch after this list).
2. Check all the log files under /var/log/quantum/.
3. br-ex and br-int were added in advance.
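For item 1, a quick check that quantum-server is listening and that the agents have registered (quantum agent-list is part of the Grizzly CLI; if your client version lacks it, fall back to the logs):
# netstat -lntp | grep 9696   # quantum-server
# quantum agent-list          # the DHCP, L3 and OVS agents should show alive = :-)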




Install Cinder
# apt-get install cinder-api cinder-common cinder-scheduler cinder-volume python-cinderclient
Create the DB
# mysql -uroot -pmysql
mysql> create database cinder;
mysql> grant all on cinder.* to 'cinder'@'%' identified by 'cinder';
mysql> flush privileges; quit;
Create an LVM volume group named cinder-volumes. There are two ways: create a primary partition on a physical disk, or simulate one with a file. Choose either.
Method 1: create an ordinary partition. Here /dev/sdb is used; create one primary partition spanning all the space.
# fdisk /dev/sdb
n
p
1
Enter
Enter
t
8e
w
# partx -a /dev/sdb
# pvcreate /dev/sdb1
# vgcreate cinder-volumes /dev/sdb1
# vgs
VG #PV #LV #SN Attr VSize VFree
cinder-volumes 1 0 0 wz--n- 150.00g 150.00g
localhost 1 2 0 wz--n- 279.12g 12.00m

Method 2: simulate with a file
# apt-get install iscsitarget open-iscsi iscsitarget-dkms
Configure the iSCSI service
# sed -i 's/false/true/g' /etc/default/iscsitarget
Restart the services
# service iscsitarget start
# service open-iscsi start
Create the volume group
# dd if=/dev/zero of=cinder-volumes bs=1 count=0 seek=2G
# losetup /dev/loop2 cinder-volumes
# fdisk /dev/loop2
# Type in the following:
n
p
1
ENTER
ENTER
t
8e
w
Create the physical volume and volume group:
# pvcreate /dev/loop2
# vgcreate cinder-volumes /dev/loop2
# vgs
VG #PV #LV #SN Attr VSize VFree
cinder-volumes 1 0 0 wz--n- 2.00g 2.00g
Edit cinder.conf
# vi /etc/cinder/cinder.conf
Change the following options and keep the rest; if an option below does not exist in the original config, add it:
[DEFAULT]
# LOG/STATE
verbose = True
debug = True
iscsi_helper = tgtadm
auth_strategy = keystone
volume_group = cinder-volumes
volume_name_template = volume-%s
state_path = /var/lib/cinder
volumes_dir = /var/lib/cinder/volumes
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
# RPC
rabbit_host = 192.16.0.254
rabbit_password = guest
rpc_backend = cinder.openstack.common.rpc.impl_kombu
# DATABASE
sql_connection = mysql://cinder:[email protected]/cinder
# API
osapi_volume_extension = cinder.api.contrib.standard_extensions
Edit api-paste.ini
# vi /etc/cinder/api-paste.ini
Modify the [filter:authtoken] section at the end of the file:
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
service_protocol = http
service_host = 192.16.0.254
service_port = 5000
auth_host = 192.16.0.254
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = password
signing_dir = /var/lib/cinder
Sync and start the services
Sync to the db:
# cinder-manage db sync
2013-03-11 13:41:57.885 30326 DEBUG cinder.utils [-] backend <module 'cinder.db.sqlalchemy.migration' from '/usr/lib/python2.7/dist-packages/cinder/db/sqlalchemy/migration.pyc'> __get_backend /usr/lib/python2.7/dist-packages/cinder/utils.py:561
Start the services:
# for serv in api scheduler volume
do
/etc/init.d/cinder-$serv restart
done
# /etc/init.d/tgt restart
Check:
# cinder list
Troubleshooting Cinder
1. The services are running and port 8776 is listening (see the check sketch after this list).
2. Check the log files in /var/log/cinder.
3. The volume group specified by volume_group = cinder-volumes in the config file exists.
4. The tgt service is running normally.
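For items 1, 3 and 4, one way to check (a sketch; the service names follow the packages installed above):
# netstat -lntp | grep 8776   # cinder-api
# vgs cinder-volumes          # the volume group must exist
# service tgt status          # tgt must be running for iSCSI exports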




Check whether KVM is usable and install the libvirt service
# apt-get install cpu-checker
# kvm-ok
If KVM acceleration is not available, confirm that the CPU supports virtualization; a quick check is sketched below.
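A quick way to check for the Intel (vmx) or AMD (svm) virtualization flags; if the flag is present but kvm-ok still fails, virtualization may be disabled in the BIOS:
# egrep -c '(vmx|svm)' /proc/cpuinfo   # a non-zero count means the CPU supports hardware virtualization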
# apt-get install -y kvm libvirt-bin pm-utils
Edit /etc/libvirt/qemu.conf:
# vi /etc/libvirt/qemu.conf
Change the following:
cgroup_device_acl = [
"/dev/null", "/dev/full", "/dev/zero",
"/dev/random", "/dev/urandom",
"/dev/ptmx", "/dev/kvm", "/dev/kqemu",
"/dev/rtc", "/dev/hpet","/dev/net/tun"
]

Update /etc/libvirt/libvirtd.conf
# vi /etc/libvirt/libvirtd.conf
Change the following:
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"

Edit /etc/init/libvirt-bin.conf
# vi /etc/init/libvirt-bin.conf
Change the following:
env libvirtd_opts="-d -l"
Edit /etc/default/libvirt-bin
# vi /etc/default/libvirt-bin
Change the following:
libvirtd_opts="-d -l"
Restart the libvirt service
# service libvirt-bin restart
or
# /etc/init.d/libvirt-bin restart






Install the Nova Controller
# apt-get install nova-api nova-novncproxy novnc nova-ajax-console-proxy nova-cert nova-consoleauth nova-doc nova-scheduler
# apt-get install nova-compute nova-conductor nova-compute-kvm
Create the database
# mysql -uroot -pmysql
mysql> create database nova;
mysql> grant all on nova.* to 'nova'@'%' identified by 'nova';
mysql> flush privileges; quit;
Configure Nova
# vi /etc/nova/nova.conf
Change the following options and keep the rest; if an option below does not exist in the original config, add it:
[DEFAULT]
# LOGS/STATE
debug = True
verbose = True
logdir = /var/log/nova
state_path = /var/lib/nova
lock_path = /var/lock/nova
rootwrap_config = /etc/nova/rootwrap.conf
dhcpbridge = /usr/bin/nova-dhcpbridge
# SCHEDULER
compute_scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
## VOLUMES
volume_api_class = nova.volume.cinder.API
# DATABASE
sql_connection = mysql://nova:[email protected]/nova
# COMPUTE
libvirt_type = kvm
compute_driver = libvirt.LibvirtDriver
instance_name_template = instance-%08x
api_paste_config = /etc/nova/api-paste.ini
# COMPUTE/APIS: if you have separate configs for separate services
# this flag is required for both nova-api and nova-compute
allow_resize_to_same_host = True
# APIS
osapi_compute_extension = nova.api.openstack.compute.contrib.standard_extensions
ec2_dmz_host = 192.16.0.254
s3_host = 192.16.0.254
# RABBITMQ
rabbit_host = 192.16.0.254
rabbit_password = guest
# GLANCE
image_service = nova.image.glance.GlanceImageService
glance_api_servers = 192.16.0.254:9292
# NETWORK
network_api_class = nova.network.quantumv2.api.API
quantum_url = http://192.16.0.254:9696
quantum_auth_strategy = keystone
quantum_admin_tenant_name = service
quantum_admin_username = quantum
quantum_admin_password = password
quantum_admin_auth_url = http://192.16.0.254:35357/v2.0
libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
# NOVNC CONSOLE
novncproxy_base_url = http://192.168.137.154:6080/vnc_auto.html
# Change vncserver_proxyclient_address and vncserver_listen to match each compute host
vncserver_proxyclient_address = 192.16.0.254
vncserver_listen = 0.0.0.0
# AUTHENTICATION
auth_strategy = keystone
[keystone_authtoken]
auth_host = 192.16.0.254
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = password
signing_dir = /tmp/keystone-signing-nova
Configure nova-compute to support OVS
# vi /etc/nova/nova-compute.conf
Add:
libvirt_ovs_bridge=br-int
libvirt_vif_type=ethernet
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
libvirt_use_virtio_for_bridges=True
Configure api-paste.ini
Modify [filter:authtoken]:
# vi /etc/nova/api-paste.ini
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 192.16.0.254
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = password
signing_dir = /tmp/keystone-signing-nova

Start the services
# for serv in api cert scheduler consoleauth novncproxy conductor compute;
do
/etc/init.d/nova-$serv restart
done


Sync the database and restart the services
# nova-manage db sync
# !for   # re-runs the previous for loop to restart the Nova services
Check the services
A smiling face :) means the corresponding service is healthy; if the state shows XXX, check that service's log under /var/log/nova/:
# nova-manage service list 2> /dev/null
Binary Host Zone Status State Updated_At
nova-cert localhost internal enabled :) 2013-03-11 02:56:21
nova-scheduler localhost internal enabled :) 2013-03-11 02:56:22
nova-consoleauth localhost internal enabled :) 2013-03-11 02:56:22
nova-conductor localhost internal enabled :) 2013-03-11 02:56:22
nova-compute localhost nova enabled :) 2013-03-11 02:56:23
Security group rules
Add ICMP (ping) responses and the SSH port to the default security group:
# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
Troubleshooting Nova
1. Do the parameters specified in the config files match the actual environment?
2. Check the log of the corresponding service in /var/log/nova/.
3. Check the required environment variables, the database connection, and that the ports are listening (see the check sketch after this list).
4. Check that the hardware supports virtualization, and so on.
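For item 3, a quick way to confirm the environment is loaded and the Nova endpoints are up (a sketch; the ports are Nova's defaults):
# source /root/export.sh
# netstat -lntp | grep -E ':8774|:8775|:6080'   # nova-api, metadata API, novncproxy
# nova-manage service list 2> /dev/null         # all services should show :)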




Install Horizon
Install the OpenStack Dashboard, Apache and the WSGI module:
# apt-get install -y memcached libapache2-mod-wsgi openstack-dashboard
Configure the Dashboard and change the Memcached listening address.
Remove the Ubuntu theme:
# mv /etc/openstack-dashboard/ubuntu_theme.py /etc/openstack-dashboard/ubuntu_theme.py.bak
# The following changes the listening addresses and can be skipped
# vim /etc/openstack-dashboard/local_settings.py
DEBUG = True
CACHE_BACKEND = 'memcached://192.16.0.254:11211/'
OPENSTACK_HOST = "192.16.0.254"
# sed -i 's/127.0.0.1/192.16.0.254/g' /etc/memcached.conf
Start Memcached and Apache:
# /etc/init.d/memcached restart
# /etc/init.d/apache2 restart
Access it from a browser:
 http://192.16.0.254/horizon
User: admin
Password: password
Troubleshooting Horizon
1. If you cannot log in, check /var/log/apache2/error.log and /var/log/keystone/keystone.log.
Usually a 401 error appears, which is mostly a config-file issue: the Keystone auth information in the quantum, cinder or nova config files is wrong.
2. If login fails with [Errno 111] Connection refused, it is usually because cinder-api and nova-api are not running; see the check sketch below.
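One way to confirm both APIs are up, and to restart them if not (a sketch):
# netstat -lntp | grep -E ':8774|:8776'   # nova-api on 8774, cinder-api on 8776
# /etc/init.d/nova-api restart
# /etc/init.d/cinder-api restart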




Configure the External Network
Introduction
Once we create an External network as the administrator, the rest is left to each tenant to create its own networks.
Understanding the terms in Quantum (the listing commands sketched after this list map each term to a real resource):
Network: either External or Internal; essentially a switch.
Subnet: which network segment the network lives on, and what its gateway and DNS are.
Router: a router, used to isolate the Internal networks that different tenants create.
Interface: the WAN and LAN ports on the router.
Port: a port on the switch; from a port you can see who uses it and its IP address information.
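As a sketch, each kind of object can be listed with the corresponding Quantum command once it exists:
# quantum net-list      # Networks (External and Internal)
# quantum subnet-list   # Subnets
# quantum router-list   # Routers
# quantum port-list     # Ports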
Create an External network
Note the router:external=True parameter; it marks this as an External network.
# EXTERNAL_NET_ID=$(quantum net-create external_net1 --router:external=True | awk '/ id / {print $4}')
Create a Subnet
Create the subnet that will hold the floating IP addresses; DHCP is disabled on this subnet:
# SUBNET_ID=$(quantum subnet-create external_net1 192.168.137.0/24 --name=external_subnet1 --gateway_ip 192.168.137.2 --enable_dhcp=False | awk '/ id / {print $4}')
Create an Internal network
# DEMO_ID=$(keystone tenant-list | awk '/ demo / {print $2}')
To the demo tenant: I have planned and created a network for your department.
# INTERNAL_NET_ID=$(quantum net-create demo_net1 --tenant_id $DEMO_ID | awk '/ id / {print $4}')
Create a Subnet for the demo tenant
To the demo tenant: I have defined the 10.1.1.0/24 segment for you, with gateway 10.1.1.1 and DHCP enabled by default.
# DEMO_SUBNET_ID=$(quantum subnet-create demo_net1 10.1.1.0/24 --name=demo_subnet1 --gateway_ip 10.1.1.1 --tenant_id $DEMO_ID| awk '/ id / {print $4}')
Create a Router for the demo tenant
And here is a router for the demo tenant:
# DEMO_ROUTER_ID=$(quantum router-create --tenant_id $DEMO_ID demo_router1 | awk '/ id / {print $4}')
Attach the Router to the Subnet
Apply what we just told demo to the router we just brought in: the router's LAN port address is 10.1.1.1, on the 10.1.1.0/24 segment:
# quantum router-interface-add $DEMO_ROUTER_ID $DEMO_SUBNET_ID
Give the Router an External IP
Now plug the external network cable into the router's WAN port and take an IP address from the External network to configure on it:
# quantum router-gateway-set $DEMO_ROUTER_ID $EXTERNAL_NET_ID
Create a virtual machine for the demo tenant
Create a Port for the VM we are about to boot, specifying which Subnet and Network it uses and a fixed IP address:
# quantum net-list
+--------------------------------------+---------------+-------------------------------------------------------+
| id | name | subnets |
+--------------------------------------+---------------+-------------------------------------------------------+
| a6a8482f-d189-4ced-8b27-cf59331f6ce7 | external_net1 | 5c6a675e-98e5-435d-8bbc-d262accd2286 192.168.137.0/24 |
| afced220-0e54-46f5-925b-095bc90e6010 | demo_net1 | 7a748eba-f553-4323-ae92-3e453292c38d 10.1.1.0/24 |
+--------------------------------------+---------------+-------------------------------------------------------+
# DEMO_PORT_ID=$(quantum port-create --tenant-id=$DEMO_ID --fixed-ip subnet_id=$DEMO_SUBNET_ID,ip_address=10.1.1.11 demo_net1 | awk '/ id / {print $4}')


Boot the VM as the demo tenant:
# glance image-list
+--------------------------------------+--------+-------------+------------------+---------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+--------------------------------------+--------+-------------+------------------+---------+--------+
| 11f91e16-cbed-4f60-bfb4-b9ae96651547 | cirros | qcow2 | ovf | 9761280 | active |
+--------------------------------------+--------+-------------+------------------+---------+--------+
# nova --os-tenant-name demo boot --image cirros --flavor 2 --nic port-id=$DEMO_PORT_ID instance01




Add a Floating IP to the demo tenant's VM
After the VM starts, you will find you cannot ping 10.1.1.11; with a router isolating it, of course you cannot, although the VM itself can reach the external network. (Because of the Quantum version there is no DNS parameter option, so the VM's DNS is wrong; fix the VM's resolv.conf yourself.) If you want to SSH into the VM, add a Floating IP:
Check the ID of the demo tenant's VM:
# nova --os_tenant_name=demo list
+--------------------------------------+------------+--------+---------------------+
| ID | Name | Status | Networks |
+--------------------------------------+------------+--------+---------------------+
| ea3c298c-9c30-4bf9-b6ca-37a719e01ff6 | instance01 | ACTIVE | demo_net1=10.1.1.11 |
+--------------------------------------+------------+--------+---------------------+

Problem: if the instance state is error and the instance log looks like this:
Unable to get log for instance "e4497913-5ada-4559-8d25-0cabea149acd".
and nova-compute.log shows errors such as:
1) ERROR nova.compute.manager Instance failed to spawn
2)libvirtError: internal error Process exited while reading console log output: char device
then the libvirtd service does not have enough permissions. Edit /etc/libvirt/qemu.conf:
# vi /etc/libvirt/qemu.conf
Change as follows:
user = "root"
group = "root"
dynamic_ownership = 1
Then restart the libvirt-bin service and the Nova services:
# /etc/init.d/libvirt-bin restart
# for serv in api cert scheduler consoleauth novncproxy conductor compute;
do
/etc/init.d/nova-$serv restart
done
Then create the instance again.
Get the VM's port ID:
# quantum port-list -- --device_id ea3c298c-9c30-4bf9-b6ca-37a719e01ff6
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
| id | name | mac_address | fixed_ips |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
| 701aafff-4e9e-48fe-891b-e576da51a2a0 | | fa:16:3e:16:2e:13 | {"subnet_id": "7a748eba-f553-4323-ae92-3e453292c38d", "ip_address": "10.1.1.11"} |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
Create a Floating IP
Note down the id:
# quantum --os_tenant_name=demo floatingip-create external_net1
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | |
| floating_ip_address | 192.168.137.4 |
| floating_network_id | a6a8482f-d189-4ced-8b27-cf59331f6ce7 |
| id | 3e728bc3-88b9-40b2-a743-e83e84d9aa69 |
| port_id | |
| router_id | |
| tenant_id | 13b1f513484d40afaec3fd0b382cac09 |
+---------------------+--------------------------------------+


Associate the floating IP with the VM
# quantum --os_tenant_name=demo floatingip-associate 3e728bc3-88b9-40b2-a743-e83e84d9aa69 701aafff-4e9e-48fe-891b-e576da51a2a0
Associated floatingip 3e728bc3-88b9-40b2-a743-e83e84d9aa69
Check the floating IP just associated:
# quantum floatingip-show 3e728bc3-88b9-40b2-a743-e83e84d9aa69
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | 10.1.1.11 |
| floating_ip_address | 192.168.137.4 |
| floating_network_id | a6a8482f-d189-4ced-8b27-cf59331f6ce7 |
| id | 3e728bc3-88b9-40b2-a743-e83e84d9aa69 |
| port_id | 701aafff-4e9e-48fe-891b-e576da51a2a0 |
| router_id | b9f116f5-8363-4fdf-b0d4-0ee239275395 |
| tenant_id | 13b1f513484d40afaec3fd0b382cac09 |
+---------------------+--------------------------------------+


Test:
# ping 192.168.137.4
PING 192.168.137.4 (192.168.137.4) 56(84) bytes of data.
64 bytes from 192.168.137.4: icmp_req=1 ttl=63 time=25.0 ms
64 bytes from 192.168.137.4: icmp_req=2 ttl=63 time=0.963 ms
64 bytes from 192.168.137.4: icmp_req=3 ttl=63 time=0.749 ms
64 bytes from 192.168.137.4: icmp_req=4 ttl=63 time=0.628 ms
64 bytes from 192.168.137.4: icmp_req=5 ttl=63 time=0.596 ms



References

http://longgeek.com/2013/03/11/openstack-grizzly-g3-for-ubuntu-12-04-all-in-one-installation/

https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/master/OpenStack_Grizzly_Install_Guide.rst/
