OpenStack (Icehouse) Installation — Unfinished Notes

OpenStack (Icehouse)

Instances were unable to obtain an IP address dynamically at creation time, most likely because of an earlier mistake in the neutron configuration. For lack of time, the OpenStack experiment is shelved here for now. Cinder (persistent block storage) is also still missing below.

Part 1: Controller Node

master1 serves as the controller node, master2 as the compute node, and master3 as the network node.

1. Environment preparation

1.0 OpenStack package repositories

It is best to experiment with a relatively recent release and follow the official documentation; the configuration steps differ between releases.
https://repos.fedorapeople.org/repos/openstack/

Older releases:
https://repos.fedorapeople.org/repos/openstack/EOL/

If you really want to test an old release, pair it with an older base repository or the matching CD/ISO repository:
http://vault.centos.org/

1.1 Node preparation

Give master1 (controller node) at least 2 GB of RAM and two NICs: eth0 for internal communication, eth1 for external network access.

Enable CPU virtualization on master2 (compute node) and give it at least 2 GB of RAM, plus two NICs: eth0 for internal communication, eth1 for the GRE tunnel to the network node.

Give master3 (network node) three NICs: eth0 for internal communication, eth1 for the GRE tunnel to the compute node, eth2 for external network access.

Remember to add the following to the NIC configuration files on every node (a sample ifcfg file follows):
NM_CONTROLLED='no'
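
For reference, a minimal ifcfg sketch for master1's eth0 (the device name and address are assumptions based on this lab's 192.168.1.0/24 plan):

[root@master1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.1
NETMASK=255.255.255.0
NM_CONTROLLED='no'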

Stop and disable NetworkManager on all nodes:
[root@master1 ~]# systemctl stop NetworkManager
[root@master1 ~]# systemctl disable NetworkManager

For convenience during the experiment, flush all firewall rules:
# iptables -F

Configure name resolution on all nodes, then set each host name:
[root@master1 ~]# vim /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.1 controller.com controller
192.168.1.2 compute1.com compute1
192.168.1.3 network1.com network1

[root@master1 ~]# scp /etc/hosts [email protected]:/etc/
[root@master1 ~]# scp /etc/hosts [email protected]:/etc/

[root@master1 ~]# hostnamectl set-hostname controller.com

[root@master2 ~]# hostnamectl set-hostname compute1.com

[root@master3 ~]# hostnamectl set-hostname network1.com

1.3 Configure NAT forwarding on master1 (so that master2 can reach the external network)

Stop and mask firewalld:
[root@master1 ~]# systemctl stop firewalld
[root@master1 ~]# systemctl disable firewalld
[root@master1 ~]# systemctl mask firewalld

Install and start iptables:
[root@master1 ~]# yum install iptables iptables-services
[root@master1 ~]# systemctl start iptables.service
Flush the default rules:
[root@master1 ~]# iptables -F

Configure NAT:
[root@master1 ~]# iptables -t nat -A POSTROUTING -s 192.168.1.0/24 ! -d 192.168.1.0/24 -j SNAT --to-source 10.201.106.131

View the NAT rules (persisting them is sketched below):
[root@master1 ~]# iptables -L -n -t nat
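
The SNAT rule added above is lost on reboot; with the iptables-services package it can be persisted (a sketch, the rule is reloaded from /etc/sysconfig/iptables on service start):

[root@master1 ~]# service iptables save
[root@master1 ~]# systemctl enable iptables.service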

Enable IP forwarding on master1:
[root@master1 ~]# vim /etc/sysctl.conf 

net.ipv4.ip_forward = 1

Apply immediately:
[root@master1 ~]# sysctl -p
net.ipv4.ip_forward = 1

Set master2's default gateway to master1's eth0 address so it can reach the external network (a persistent variant is sketched below):
[root@master2 ~]# route add default gw 192.168.1.1
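
route add only lasts until the next reboot. One way to make the gateway persistent (assuming eth0 carries master2's 192.168.1.2 address) is to put it in the NIC config:

[root@master2 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0

GATEWAY=192.168.1.1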

1.4 Host name mapping after renaming

After changing a host name with hostnamectl, log out and back in for the new name to take effect.
master1=controller
master2=compute1
master3=network1

1.5 Install MariaDB on master1

Install MariaDB on master1:
[root@master1 ~]# yum install mariadb-server

Create the database directory:
[root@controller ~]# mkdir -pv /mydata/data
Set ownership on the directory:
[root@controller ~]# chown -R mysql:mysql /mydata/data/

Configuration:
[root@controller ~]# vim /etc/my.cnf

[mysqld]
datadir=/mydata/data
default-storage-engine = innodb
character-set-server = utf8
innodb_file_per_table = on
skip_name_resolve = on

[mysql]
default-character-set=utf8  

Start the service:
[root@master1 ~]# systemctl start mariadb

Set the root password and allow remote access (use your own password):
MariaDB [(none)]> GRANT ALL ON *.* TO 'root'@'%' IDENTIFIED BY '<password>' WITH GRANT OPTION;
Flush the privilege tables:
MariaDB [(none)]> FLUSH PRIVILEGES;

2. Keystone

2.1 Set up the package repository

Icehouse repository URL:
https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-7/

[root@zz ~]# vim /etc/yum.repos.d/rdo-release.repo 

[openstack-I]
name=OpenStack I Repository
baseurl=https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-7/
gpgcheck=0
enabled=1

# yum clean all
# yum repolist

Install:
[root@controller ~]# yum install openstack-keystone python-keystoneclient openstack-utils

2.2 Create the keystone database in MySQL and grant access

Create the database:
MariaDB [(none)]> CREATE DATABASE keystone;

Grant privileges (the password must match the connection string configured in keystone.conf below):
MariaDB [(none)]> GRANT ALL ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
MariaDB [(none)]> GRANT ALL ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';

Flush the privilege tables:
MariaDB [(none)]> FLUSH PRIVILEGES;

Initialize and sync the keystone database:
[root@controller ~]# su -s /bin/sh -c 'keystone-manage db_sync' keystone

Check the database:
MariaDB [keystone]> SHOW DATABASES;
MariaDB [keystone]> USE keystone;
MariaDB [keystone]> SHOW tables;

Configure the database connection:
[root@controller ~]# vim /etc/keystone/keystone.conf 
[database]
connection=mysql://keystone:[email protected]/keystone

2.3 Other keystone initialization

Define a variable holding the admin token value:
[root@controller ~]# ADMIN_TOKEN=$(openssl rand -hex 10)
Save the value to a file:
[root@controller ~]# echo $ADMIN_TOKEN
2506715b010f7e9ea0e0
[root@controller ~]# echo $ADMIN_TOKEN > .admin_token.rc

Configure the admin token:
[root@controller ~]# vim /etc/keystone/keystone.conf
[DEFAULT]
admin_token=2506715b010f7e9ea0e0

Set up local PKI (certificates):
[root@controller ~]# keystone-manage pki_setup --keystone-user keystone --keystone-group keystone

Fix ownership and permissions on the PKI directory:
[root@controller ~]# chown -R keystone:keystone /etc/keystone/ssl/
[root@controller ~]# chmod -R o-rwx /etc/keystone/ssl/

Enable and start the keystone service:
[root@controller ~]# systemctl enable openstack-keystone
[root@controller ~]# systemctl start openstack-keystone

Export the keystone token variables, used for token-based authentication on the command line; with these set, keystone commands no longer need the --os-token parameter:
[root@controller ~]# export OS_SERVICE_TOKEN=$ADMIN_TOKEN
[root@controller ~]# echo $OS_SERVICE_TOKEN
2506715b010f7e9ea0e0

[root@controller ~]# export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0

List users (also works without the --os-token parameter):
[root@controller ~]# keystone --os-token $ADMIN_TOKEN user-list

[root@controller ~]# keystone user-list

[root@controller ~]# 

2.5 Create the admin user

Keystone command help:
[root@controller ~]# keystone help
[root@controller ~]# keystone help create-user
[root@controller ~]# keystone help role-create
[root@controller ~]# keystone help user-role-add

Create the admin user:
[root@controller ~]# keystone user-create --name=admin --pass=admin [email protected]
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |           [email protected]           |
| enabled  |               True               |
|    id    | 032b9e8e5722495c9a71c413fcb70e6e |
|   name   |              admin               |
| username |              admin               |
+----------+----------------------------------+

List users:
[root@controller ~]# keystone user-list
+----------------------------------+-------+---------+--------------+
|                id                |  name | enabled |    email     |
+----------------------------------+-------+---------+--------------+
| 032b9e8e5722495c9a71c413fcb70e6e | admin |   True  | [email protected] |
+----------------------------------+-------+---------+--------------+

Create a role with administrative privileges:
[root@controller ~]# keystone role-create --name=admin
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|    id    | 2b54b14daca041c2a3dc66325f5048ce |
|   name   |              admin               |
+----------+----------------------------------+

List roles:
[root@controller ~]# keystone role-list
+----------------------------------+----------+
|                id                |   name   |
+----------------------------------+----------+
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ |
| 2b54b14daca041c2a3dc66325f5048ce |  admin   |
+----------------------------------+----------+

Create the admin tenant:
[root@controller ~]# keystone tenant-create --name=admin --description="Admin Tenant"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |           Admin Tenant           |
|   enabled   |               True               |
|      id     | abfe5df994e54c6190e98e3f3f3dab38 |
|     name    |              admin               |
+-------------+----------------------------------+

Add the admin user just created to the admin role within the admin tenant:
[root@controller ~]# keystone user-role-add --user admin --role admin --tenant admin

Add the admin user to the _member_ role (required for web GUI access):
[root@controller ~]# keystone user-role-add --user admin --role _member_ --tenant admin

List the roles the user holds:
[root@controller ~]# keystone user-role-list --user admin --tenant admin
+----------------------------------+----------+----------------------------------+----------------------------------+
|                id                |   name   |             user_id              |            tenant_id             |
+----------------------------------+----------+----------------------------------+----------------------------------+
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ | 032b9e8e5722495c9a71c413fcb70e6e | abfe5df994e54c6190e98e3f3f3dab38 |
| 2b54b14daca041c2a3dc66325f5048ce |  admin   | 032b9e8e5722495c9a71c413fcb70e6e | abfe5df994e54c6190e98e3f3f3dab38 |
+----------------------------------+----------+----------------------------------+----------------------------------+

2.6 Create a regular user

[root@controller ~]# keystone user-create --name=demo --pass=demo [email protected]
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |           [email protected]            |
| enabled  |               True               |
|    id    | 472d9776f8984bb99a728985760ad5ba |
|   name   |               demo               |
| username |               demo               |
+----------+----------------------------------+

Create a test tenant:
[root@controller ~]# keystone tenant-create --name=demo --description="Demo Tenant"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |           Demo Tenant            |
|   enabled   |               True               |
|      id     | fbff77c905114d50b5be94ffd46203cd |
|     name    |               demo               |
+-------------+----------------------------------+

Add the user to the _member_ role:
[root@controller ~]# keystone user-role-add --user=demo --role=_member_ --tenant=demo

Check which roles the user belongs to:
[root@controller ~]# keystone user-role-list --tenant=demo --user=demo
+----------------------------------+----------+----------------------------------+----------------------------------+
|                id                |   name   |             user_id              |            tenant_id             |
+----------------------------------+----------+----------------------------------+----------------------------------+
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ | 472d9776f8984bb99a728985760ad5ba | fbff77c905114d50b5be94ffd46203cd |
+----------------------------------+----------+----------------------------------+----------------------------------+

2.7 Create the service tenant (services installed later are added to it and managed together)

A basic container tenant that holds the internal services.
[root@controller ~]# keystone tenant-create --name=service --description="Service Tenant"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |          Service Tenant          |
|   enabled   |               True               |
|      id     | f9f13bac5d6f40449b2e4560ab16536d |
|     name    |             service              |
+-------------+----------------------------------+

2.8 Define the service endpoint

Related command help:
[root@controller ~]# keystone help service-create

Add keystone to the service catalog:
[root@controller ~]# keystone service-create --name=keystone --type=identity --description="OpenStack Identity"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |        OpenStack Identity        |
|   enabled   |               True               |
|      id     | 882370ee97724a5a93dbde574b3f9dd9 |
|     name    |             keystone             |
|     type    |             identity             |
+-------------+----------------------------------+

List all current services:
[root@controller ~]# keystone service-list
+----------------------------------+----------+----------+--------------------+
|                id                |   name   |   type   |    description     |
+----------------------------------+----------+----------+--------------------+
| 882370ee97724a5a93dbde574b3f9dd9 | keystone | identity | OpenStack Identity |
+----------------------------------+----------+----------+--------------------+

[root@controller ~]# keystone service-list | grep -i keystone | awk '{print $2}'
882370ee97724a5a93dbde574b3f9dd9

Create the keystone endpoint:
[root@controller ~]# keystone endpoint-create --service-id=$(keystone service-list | awk '/identity/ {print $2}') \
> --publicurl=http://controller:5000/v2.0 \
> --internalurl=http://controller:5000/v2.0 \
> --adminurl=http://controller:35357/v2.0
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |   http://controller:35357/v2.0   |
|      id     | d85844bf91274283bc15e97a16cde9be |
| internalurl |   http://controller:5000/v2.0    |
|  publicurl  |   http://controller:5000/v2.0    |
|    region   |            regionOne             |
|  service_id | 882370ee97724a5a93dbde574b3f9dd9 |
+-------------+----------------------------------+
[root@controller ~]# 

View the endpoint information:
[root@controller ~]# keystone endpoint-list
+----------------------------------+-----------+-----------------------------+-----------------------------+------------------------------+----------------------------------+
|                id                |   region  |          publicurl          |         internalurl         |           adminurl           |            service_id            |
+----------------------------------+-----------+-----------------------------+-----------------------------+------------------------------+----------------------------------+
| d85844bf91274283bc15e97a16cde9be | regionOne | http://controller:5000/v2.0 | http://controller:5000/v2.0 | http://controller:35357/v2.0 | 882370ee97724a5a93dbde574b3f9dd9 |
+----------------------------------+-----------+-----------------------------+-----------------------------+------------------------------+----------------------------------+
[root@controller ~]# 

If a service was defined incorrectly, it can be deleted and recreated:
[root@controller ~]# keystone help | grep delete
    ec2-credentials-delete
    endpoint-delete     Delete a service endpoint.
    role-delete         Delete role.
    service-delete      Delete service from Service Catalog.
    tenant-delete       Delete tenant.
    user-delete         Delete user.

Check the log:
[root@controller ~]# tail -50 /var/log/keystone/keystone.log

2.9 Switch authentication to user name and password

Unset the token variables:
[root@controller ~]# unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT

Test access with a user name and password:
[root@controller ~]# keystone --os-username=admin --os-password=admin --os-auth-url=http://controller:35357/v2.0 token-get

Token information is returned successfully.

Declare environment variables for user/password authentication:
[root@controller ~]# vim ~/.admin-openrc.sh

export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0

[root@controller ~]# source ~/.admin-openrc.sh

Test again; without any user/password parameters the command now returns results directly:
[root@controller ~]# keystone user-list
+----------------------------------+-------+---------+--------------+
|                id                |  name | enabled |    email     |
+----------------------------------+-------+---------+--------------+
| 032b9e8e5722495c9a71c413fcb70e6e | admin |   True  | [email protected] |
| 472d9776f8984bb99a728985760ad5ba |  demo |   True  | [email protected]  |
+----------------------------------+-------+---------+--------------+

3. Glance

#### (Image Service: stores metadata and is used to register, discover, and retrieve VM image files in OpenStack) ####

3.1 Install the glance packages

[root@controller ~]# yum install openstack-glance python-glanceclient

List the files it installs:
[root@controller ~]# rpm -ql openstack-glance

3.2 Database configuration

Set up local database client credentials so commands connect without prompting for a password:
[root@controller ~]# vim .my.cnf 

[mysql]
user=root
password=<password>
host=localhost

Create the database:
MariaDB [(none)]> CREATE DATABASE glance CHARACTER SET utf8;

Grant privileges:
MariaDB [(none)]> GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';

MariaDB [(none)]> GRANT ALL ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';

MariaDB [(none)]> FLUSH PRIVILEGES;

Sync and initialize the database:
[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance

Verify:
MariaDB [(none)]> USE glance;
MariaDB [glance]> SHOW TABLES;
+------------------+
| Tables_in_glance |
+------------------+
| image_locations  |
| image_members    |
| image_properties |
…………

3.3 Edit the configuration files

Make backups first:
[root@controller ~]# cd /etc/glance/
[root@controller glance]# cp glance-api.conf{,.bak}
[root@controller glance]# cp glance-registry.conf{,.bak}

API configuration:
[root@controller ~]# vim /etc/glance/glance-api.conf

[database]
connection=mysql://glance:[email protected]/glance

Registry configuration:
[root@controller ~]# vim /etc/glance/glance-registry.conf

[database]
connection=mysql://glance:[email protected]/glance

Check the log for errors:
[root@controller ~]# tail -50 /var/log/glance/api.log

3.4 Add the glance user in keystone

[root@controller ~]# keystone user-create --name=glance --pass=glance [email protected]
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |          [email protected]           |
| enabled  |               True               |
|    id    | 232d3bfe23334050aa87bc4d7c6d491d |
|   name   |              glance              |
| username |              glance              |
+----------+----------------------------------+

Put the glance user in the service tenant with the admin role:
[root@controller ~]# keystone user-role-add --user=glance --tenant=service --role=admin

Verify:
[root@controller ~]# keystone user-role-list --user=glance --tenant=service
+----------------------------------+-------+----------------------------------+----------------------------------+
|                id                |  name |             user_id              |            tenant_id             |
+----------------------------------+-------+----------------------------------+----------------------------------+
| 2b54b14daca041c2a3dc66325f5048ce | admin | 232d3bfe23334050aa87bc4d7c6d491d | f9f13bac5d6f40449b2e4560ab16536d |
+----------------------------------+-------+----------------------------------+----------------------------------+

3.5 Continue editing the configuration files

API:
[root@controller ~]# vim /etc/glance/glance-api.conf

[keystone_authtoken]
auth_uri=http://controller:5000
auth_host=controller
auth_port=35357
auth_protocol=http
admin_tenant_name=service
admin_user=glance
admin_password=glance

[paste_deploy]
# authentication flavor
flavor=keystone

Registry (same settings as above; they can be pasted in directly, see the sketch below):
[root@controller ~]# vim /etc/glance/glance-registry.conf

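Since the registry uses the same authentication settings as the API service, a sketch of the corresponding glance-registry.conf sections:

[keystone_authtoken]
auth_uri=http://controller:5000
auth_host=controller
auth_port=35357
auth_protocol=http
admin_tenant_name=service
admin_user=glance
admin_password=glance

[paste_deploy]
# authentication flavor
flavor=keystone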

3.6 Add the endpoint

Create the service:
[root@controller ~]# keystone service-create --name=glance --type=image --description="OpenStack Image Service"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |     OpenStack Image Service      |
|   enabled   |               True               |
|      id     | a3e76b6f69014258be5bca4463de201b |
|     name    |              glance              |
|     type    |              image               |
+-------------+----------------------------------+

Create the endpoint for the service:
# keystone endpoint-create --service-id=$(keystone service-list | awk '/image/{print $2}') \
--publicurl=http://controller:9292 \
--internalurl=http://controller:9292 \
--adminurl=http://controller:9292

+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |      http://controller:9292      |
|      id     | 947dbd1ba3284e6d81f531b4ec8ecb39 |
| internalurl |      http://controller:9292      |
|  publicurl  |      http://controller:9292      |
|    region   |            regionOne             |
|  service_id | a3e76b6f69014258be5bca4463de201b |
+-------------+----------------------------------+

Start and enable the api and registry services:
[root@controller ~]# for svc in api registry;do systemctl start openstack-glance-$svc;systemctl enable openstack-glance-$svc;done

Both services are running normally.

Check the logs for errors:
[root@controller ~]# tail -50 /var/log/glance/api.log 
[root@controller ~]# tail -50 /var/log/glance/registry.log 

3.7 Upload and store disk image files

If glance commands fail and glance-api reports an authentication error, keep digging in the keystone log; if it is full of "could not find user / tenant / role" messages, try restarting the database service. In my case the keystone log reported many missing users, the database had hung and could not be stopped, and forcibly killing it and starting it again fixed the problem.

The errors from the logs:
[root@controller ~]# glance image-list
Request returned failure status.
Invalid OpenStack Identity credentials.

2018-04-30 00:41:13.258 11637 WARNING keystoneclient.middleware.auth_token [-] Verify error: Command 'openssl' returned non-zero exit status 4
2018-04-30 00:41:13.260 11637 WARNING keystoneclient.middleware.auth_token [-] Authorization failed for token
2018-04-30 00:41:13.262 11637 INFO keystoneclient.middleware.auth_token [-] Invalid user token - deferring reject downstream
2018-04-30 00:41:13.264 11637 INFO glance.wsgi.server [-] 10.201.106.131 - - [30/Apr/2018 00:41:13] "GET /v1/images/detail?sort_key=name&sort_dir=asc&limit=20 HTTP/1.

2018-04-30 00:38:19.654 11617 WARNING keystone.common.wsgi [-] Could not find user, glance.
2018-04-30 00:38:19.745 11617 WARNING keystone.common.wsgi [-] Could not find role, admin.
2018-04-30 00:38:19.862 11617 WARNING keystone.common.wsgi [-] Could not find project, service.

Fix:
[root@controller ~]# killall mariadb
[root@controller ~]# pkill mariadb
[root@controller ~]# systemctl start mariadb

[root@controller ~]# glance image-list
+----+------+-------------+------------------+------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+----+------+-------------+------------------+------+--------+
+----+------+-------------+------------------+------+--------+

Default image store directory:
filesystem_store_datadir=/var/lib/glance/images/
[root@controller ~]# ll -d /var/lib/glance/images/
drwxr-xr-x 2 glance glance 6 Apr 29 22:33 /var/lib/glance/images/

If you point it at another path, remember to fix the owner, group, and permissions (a sketch follows).
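
For example, if the store were moved to a hypothetical /data/glance/images, the steps would look roughly like this (the path is only an illustration):

[root@controller ~]# mkdir -pv /data/glance/images
[root@controller ~]# chown -R glance:glance /data/glance/images
[root@controller ~]# vim /etc/glance/glance-api.conf
filesystem_store_datadir=/data/glance/images/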

Create the disk image files.
Check the help:
[root@controller ~]# glance help image-create

Use image files downloaded from the internet:
[root@controller ~]# ls
cirros-no_cloud-0.3.0-i386-disk.img  cirros-no_cloud-0.3.0-x86_64-disk.img

Create and upload the images.
Install qemu-img to inspect the image format:
[root@controller ~]# yum install qemu-img

[root@controller ~]# qemu-img info cirros-no_cloud-0.3.0-i386-disk.img 
image: cirros-no_cloud-0.3.0-i386-disk.img
file format: qcow2

Upload:
[root@controller ~]# glance image-create --name=cirros-0.3.0-i386 --disk-format=qcow2 --container-format=bare --is-public=true < /root/cirros-no_cloud-0.3.0-i386-disk.img
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | ccdb7b71efb7cbae0ea4a437f55a5eb9     |
| container_format | bare                                 |
| created_at       | 2018-04-30T03:11:58                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | 18a0019f-48e5-4f78-9f13-1166b4d53a12 |
| is_public        | False                                |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros-0.3.0-i386                    |
| owner            | abfe5df994e54c6190e98e3f3f3dab38     |
| protected        | False                                |
| size             | 11010048                             |
| status           | active                               |
| updated_at       | 2018-04-30T03:11:58                  |
| virtual_size     | None                                 |
+------------------+--------------------------------------+

Upload another image, the 64-bit one:
[root@controller ~]# glance image-create --name=cirros-0.3.0-x86_64 --disk-format=qcow2 --container-format=bare --is-public=true < /root/cirros-no_cloud-0.3.0-x86_64-disk.img 
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 2b35be965df142f00026123a0fae4aa6     |
| container_format | bare                                 |
| created_at       | 2018-04-30T03:16:13                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | ca9993b8-91d5-44d1-889b-5496fd62114c |
| is_public        | False                                |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros-0.3.0-x86_64                  |
| owner            | abfe5df994e54c6190e98e3f3f3dab38     |
| protected        | False                                |
| size             | 11468800                             |
| status           | active                               |
| updated_at       | 2018-04-30T03:16:14                  |
| virtual_size     | None                                 |
+------------------+--------------------------------------+

List the image files:
[root@controller ~]# glance image-list
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| ID                                   | Name                | Disk Format | Container Format | Size     | Status |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| 18a0019f-48e5-4f78-9f13-1166b4d53a12 | cirros-0.3.0-i386   | qcow2       | bare             | 11010048 | active |
| ca9993b8-91d5-44d1-889b-5496fd62114c | cirros-0.3.0-x86_64 | qcow2       | bare             | 11468800 | active |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+

(The file names match the image IDs.)
[root@controller ~]# ls -lht /var/lib/glance/images/
total 22M
-rw-r----- 1 glance glance 11M Apr 30 11:16 ca9993b8-91d5-44d1-889b-5496fd62114c
-rw-r----- 1 glance glance 11M Apr 30 11:11 18a0019f-48e5-4f78-9f13-1166b4d53a12

3.8 Other glance commands

Show detailed information about an image:
[root@controller ~]# glance image-show cirros-0.3.0-i386

Download a disk image file:
[root@controller ~]# glance image-download --file=/tmp/cirros-0.3.0-i386.img --progress cirros-0.3.0-i386
[=============================>] 100%
[root@controller ~]# ls /tmp/cirros-0.3.0-i386.img 
/tmp/cirros-0.3.0-i386.img

4. Nova (controller node)

4.0 Install Qpid (message queue)

Install:
[root@controller ~]# yum install qpid-cpp-server

Disable authentication:
[root@controller ~]# vim /etc/qpid/qpidd.conf 

auth=no

Start and enable the Qpid service:
[root@controller ~]# systemctl start qpidd
[root@controller ~]# systemctl status qpidd
[root@controller ~]# systemctl enable qpidd

4.1 Install the related packages

[root@controller ~]# yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient

List the installed files:
[root@controller ~]# rpm -ql openstack-nova-api
[root@controller ~]# rpm -ql openstack-nova-console

4.2 Database configuration

Create the nova database:
MariaDB [(none)]> CREATE DATABASE nova CHARACTER SET 'utf8';

Grant privileges to the database user:
MariaDB [(none)]> GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
MariaDB [(none)]> GRANT ALL ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
MariaDB [(none)]> FLUSH PRIVILEGES;

Check the grants:
MariaDB [mysql]> USE mysql;
MariaDB [mysql]> SHOW GRANTS FOR 'nova';

Back up and edit the database connection configuration:
[root@controller ~]# cd /etc/nova/
[root@controller nova]# cp nova.conf{,.bak} 

[root@controller ~]# vim /etc/nova/nova.conf

[database]
connection=mysql://nova:[email protected]/nova

Other settings:
[root@controller ~]# vim /etc/nova/nova.conf
# set the RPC backend (message queue)
[DEFAULT]
qpid_hostname=controller    
rpc_backend=qpid

my_ip=192.168.1.1
vncserver_listen=192.168.1.1
vncserver_proxyclient_address=192.168.1.1

Initialize and sync the database:
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova

Check the tables (the Icehouse release has over 100 of them):
MariaDB [(none)]> USE nova;
MariaDB [nova]> SHOW TABLES;

Check the log for errors:
[root@controller ~]# tail -50 /var/log/nova/nova-manage.log

4.3 Create the nova user in keystone and assign its role

[root@controller ~]# keystone user-create --name=nova --pass=nova [email protected]
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |           [email protected]            |
| enabled  |               True               |
|    id    | 772409dae5af4d819bc87e3cc90634c0 |
|   name   |               nova               |
| username |               nova               |
+----------+----------------------------------+

List users:
[root@controller ~]# keystone user-list

Add the nova user to the admin role in the service tenant:
[root@controller ~]# keystone user-role-add --user=nova --role=admin --tenant=service
Verify:
[root@controller ~]# keystone user-role-list --user=nova --tenant=service
+----------------------------------+-------+----------------------------------+----------------------------------+
|                id                |  name |             user_id              |            tenant_id             |
+----------------------------------+-------+----------------------------------+----------------------------------+
| 2b54b14daca041c2a3dc66325f5048ce | admin | 772409dae5af4d819bc87e3cc90634c0 | f9f13bac5d6f40449b2e4560ab16536d |
+----------------------------------+-------+----------------------------------+----------------------------------+

4.4 Authentication and endpoint configuration

Nova authentication configuration:
[root@controller ~]# vim /etc/nova/nova.conf

[DEFAULT]
auth_strategy=keystone
[keystone_authtoken]
auth_uri=http://controller:5000
auth_host=controller
auth_protocol=http
auth_port=35357
auth_version=v2.0
admin_user=nova
admin_password=nova
admin_tenant_name=service

Create the service in keystone and add its endpoint:
[root@controller ~]# keystone service-create --name=nova --type=compute --description="OpenStack Compute"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |        OpenStack Compute         |
|   enabled   |               True               |
|      id     | a5600093b48145f1a8986481c0dd30ff |
|     name    |               nova               |
|     type    |             compute              |
+-------------+----------------------------------+

Add the endpoint:
keystone endpoint-create \
--service-id=$(keystone service-list | awk '/compute/{print $2}') \
--publicurl=http://controller:8774/v2/%\(tenant_id\)s \
--internalurl=http://controller:8774/v2/%\(tenant_id\)s \
--adminurl=http://controller:8774/v2/%\(tenant_id\)s

+-------------+-----------------------------------------+
|   Property  |                  Value                  |
+-------------+-----------------------------------------+
|   adminurl  | http://controller:8774/v2/%(tenant_id)s |
|      id     |     c4d8cc632b4541288a10fff74c6bc166    |
| internalurl | http://controller:8774/v2/%(tenant_id)s |
|  publicurl  | http://controller:8774/v2/%(tenant_id)s |
|    region   |                regionOne                |
|  service_id |     a5600093b48145f1a8986481c0dd30ff    |
+-------------+-----------------------------------------+

4.5 Start the nova services

[root@controller ~]# for svc in api cert consoleauth scheduler conductor novncproxy;do systemctl start openstack-nova-$svc;systemctl enable openstack-nova-$svc;done

novncproxy fails to start because the installed websockify version is too new; reference: https://www.unixhot.com/article/27

Error log:
Apr 30 22:28:14 controller systemd: Starting OpenStack Nova NoVNC Proxy Server...
Apr 30 22:28:18 controller python: detected unhandled Python exception in '/usr/bin/nova-novncproxy'
Apr 30 22:28:20 controller abrt-server: Package 'openstack-nova-novncproxy' isn't signed with proper key
Apr 30 22:28:20 controller abrt-server: 'post-create' on '/var/spool/abrt/Python-2018-04-30-22:28:19-16407' exited with 1
Apr 30 22:28:20 controller abrt-server: Deleting problem directory '/var/spool/abrt/Python-2018-04-30-22:28:19-16407'
Apr 30 22:28:20 controller nova-novncproxy: WARNING: no 'numpy' module, HyBi protocol will be slower
Apr 30 22:28:20 controller nova-novncproxy: Traceback (most recent call last):
Apr 30 22:28:20 controller nova-novncproxy: File "/usr/bin/nova-novncproxy", line 10, in <module>
Apr 30 22:28:20 controller nova-novncproxy: sys.exit(main())
Apr 30 22:28:20 controller nova-novncproxy: File "/usr/lib/python2.7/site-packages/nova/cmd/novncproxy.py", line 87, in main
Apr 30 22:28:20 controller nova-novncproxy: wrap_cmd=None)
Apr 30 22:28:20 controller nova-novncproxy: File "/usr/lib/python2.7/site-packages/nova/console/websocketproxy.py", line 47, in __init__
Apr 30 22:28:20 controller nova-novncproxy: ssl_target=None, *args, **kwargs)
Apr 30 22:28:20 controller nova-novncproxy: File "/usr/lib/python2.7/site-packages/websockify/websocketproxy.py", line 231, in __init__
Apr 30 22:28:20 controller nova-novncproxy: websocket.WebSocketServer.__init__(self, RequestHandlerClass, *args, **kwargs)
Apr 30 22:28:20 controller nova-novncproxy: TypeError: __init__() got an unexpected keyword argument 'no_parent'
Apr 30 22:28:20 controller systemd: openstack-nova-novncproxy.service: main process exited, code=exited, status=1/FAILURE
Apr 30 22:28:20 controller systemd: Unit openstack-nova-novncproxy.service entered failed state.

Fix:
[root@controller ~]# yum install python-pip
[root@controller ~]# /usr/bin/pip2.7 install websockify==0.5.1
[root@controller ~]# systemctl start openstack-nova-novncproxy

Check the processes (an extra service check follows the listing):
[root@controller ~]# ps aux | grep nova | grep -v grep
nova     16617  2.8  2.2 329508 66236 ?        Ss   22:55   1:30 /usr/bin/python /usr/bin/nova-api
nova     16627  0.3  2.5 425268 72820 ?        Ss   22:55   0:10 /usr/bin/python /usr/bin/nova-cert
nova     16634  0.3  2.5 425212 72848 ?        Ss   22:55   0:10 /usr/bin/python /usr/bin/nova-consoleauth
nova     16650  0.3  2.5 425772 73432 ?        Ss   22:55   0:10 /usr/bin/python /usr/bin/nova-scheduler
nova     16657  2.8  1.4 302184 42920 ?        Ss   22:55   1:28 /usr/bin/python /usr/bin/nova-conductor
nova     16669  0.1  1.1 369780 34520 ?        Ssl  22:55   0:04 /usr/bin/python /usr/bin/nova-novncproxy --web /usr/share/novnc/
nova     16681  0.0  1.4 308920 42972 ?        S    22:55   0:00 /usr/bin/python /usr/bin/nova-api
nova     16682  0.0  1.4 308920 42972 ?        S    22:55   0:00 /usr/bin/python /usr/bin/nova-api
nova     16683  0.2  2.4 425308 69912 ?        S    22:55   0:08 /usr/bin/python /usr/bin/nova-conductor
nova     16684  0.2  2.4 425300 69896 ?        S    22:55   0:07 /usr/bin/python /usr/bin/nova-conductor
nova     16696  0.0  2.1 329508 61448 ?        S    22:55   0:00 /usr/bin/python /usr/bin/nova-api
nova     16697  0.0  2.1 329508 61448 ?        S    22:55   0:00 /usr/bin/python /usr/bin/nova-api
nova     16709  0.0  2.1 329508 61440 ?        S    22:55   0:00 /usr/bin/python /usr/bin/nova-api
nova     16710  0.0  2.1 329508 61440 ?        S    22:55   0:00 /usr/bin/python /usr/bin/nova-api
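
As an optional extra check, the nova client can also list the registered control-plane services (with the admin credentials sourced); all of them should report state "up":

[root@controller ~]# source ~/.admin-openrc.sh
[root@controller ~]# nova service-list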

4.6 Another error

Log:
2018-05-01 10:15:21.566 17183 ERROR stevedore.extension [-] Could not load 'file': cannot import name util
2018-05-01 10:15:21.567 17183 ERROR stevedore.extension [-] cannot import name util
2018-05-01 10:15:21.567 17183 TRACE stevedore.extension Traceback (most recent call last):
2018-05-01 10:15:21.567 17183 TRACE stevedore.extension   File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 162, in _load_plugins
2018-05-01 10:15:21.567 17183 TRACE stevedore.extension     verify_requirements,
2018-05-01 10:15:21.567 17183 TRACE stevedore.extension   File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 177, in _load_one_plugin
2018-05-01 10:15:21.567 17183 TRACE stevedore.extension     plugin = ep.load(require=verify_requirements)
2018-05-01 10:15:21.567 17183 TRACE stevedore.extension   File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2260, in load
2018-05-01 10:15:21.567 17183 TRACE stevedore.extension     entry = __import__(self.module_name, globals(),globals(), ['__name__'])
2018-05-01 10:15:21.567 17183 TRACE stevedore.extension   File "/usr/lib/python2.7/site-packages/nova/image/download/file.py", line 23, in <module>
2018-05-01 10:15:21.567 17183 TRACE stevedore.extension     import nova.virt.libvirt.utils as lv_utils
2018-05-01 10:15:21.567 17183 TRACE stevedore.extension   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/__init__.py", line 15, in <module>
2018-05-01 10:15:21.567 17183 TRACE stevedore.extension     from nova.virt.libvirt import driver
2018-05-01 10:15:21.567 17183 TRACE stevedore.extension   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 59, in <module>
2018-05-01 10:15:21.567 17183 TRACE stevedore.extension     from eventlet import util as eventlet_util
2018-05-01 10:15:21.567 17183 TRACE stevedore.extension ImportError: cannot import name util

Reference: http://blog.sina.com.cn/s/blog_69a636860102v91c.html

Fix: reinstall the older eventlet version:
[root@controller ~]# pip install eventlet==0.15.2

Restart all the services again:
[root@controller ~]# for svc in api cert consoleauth scheduler conductor novncproxy;do systemctl restart openstack-nova-$svc;done

Check whether the logs still show errors (go through all of them, judging by the latest timestamps):
[root@controller ~]# date
Tue May  1 10:40:43 CST 2018
[root@controller ~]# tail -50 /var/log/nova/nova-
nova-api.log          nova-cert.log         nova-conductor.log    nova-consoleauth.log  nova-manage.log       nova-scheduler.log 

It is now past 10:40 and, after restarting the services, no new errors appear; the 10:35 trace below is from before the restart, and only deprecation warnings remain:
2018-05-01 10:35:51.000 17790 ERROR stevedore.extension [-] Could not load 'file': cannot import name util
2018-05-01 10:35:51.000 17790 ERROR stevedore.extension [-] cannot import name util
2018-05-01 10:35:51.000 17790 TRACE stevedore.extension Traceback (most recent call last):
2018-05-01 10:35:51.000 17790 TRACE stevedore.extension   File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 162, in _load_plugins
2018-05-01 10:35:51.000 17790 TRACE stevedore.extension     verify_requirements,
2018-05-01 10:35:51.000 17790 TRACE stevedore.extension   File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 177, in _load_one_plugin
2018-05-01 10:35:51.000 17790 TRACE stevedore.extension     plugin = ep.load(require=verify_requirements)
2018-05-01 10:35:51.000 17790 TRACE stevedore.extension   File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2260, in load
2018-05-01 10:35:51.000 17790 TRACE stevedore.extension     entry = __import__(self.module_name, globals(),globals(), ['__name__'])
2018-05-01 10:35:51.000 17790 TRACE stevedore.extension   File "/usr/lib/python2.7/site-packages/nova/image/download/file.py", line 23, in <module>
2018-05-01 10:35:51.000 17790 TRACE stevedore.extension     import nova.virt.libvirt.utils as lv_utils
2018-05-01 10:35:51.000 17790 TRACE stevedore.extension   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/__init__.py", line 15, in <module>
2018-05-01 10:35:51.000 17790 TRACE stevedore.extension     from nova.virt.libvirt import driver
2018-05-01 10:35:51.000 17790 TRACE stevedore.extension   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 59, in <module>
2018-05-01 10:35:51.000 17790 TRACE stevedore.extension     from eventlet import util as eventlet_util
2018-05-01 10:35:51.000 17790 TRACE stevedore.extension ImportError: cannot import name util
2018-05-01 10:35:51.000 17790 TRACE stevedore.extension 
2018-05-01 10:35:51.028 17790 WARNING keystoneclient.middleware.auth_token [-] Configuring admin URI using auth fragments. This is deprecated, use 'identity_uri' instead.
2018-05-01 10:40:26.549 17957 WARNING keystoneclient.middleware.auth_token [-] Configuring admin URI using auth fragments. This is deprecated, use 'identity_uri' instead.
2018-05-01 10:40:26.602 17957 WARNING keystoneclient.middleware.auth_token [-] Configuring admin URI using auth fragments. This is deprecated, use 'identity_uri' instead.
2018-05-01 10:40:27.488 17957 WARNING keystoneclient.middleware.auth_token [-] Configuring admin URI using auth fragments. This is deprecated, use 'identity_uri' instead.

4.7 Test with nova commands

List the disk images:
[root@controller ~]# nova image-list
+--------------------------------------+---------------------+--------+--------+
| ID                                   | Name                | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| 18a0019f-48e5-4f78-9f13-1166b4d53a12 | cirros-0.3.0-i386   | ACTIVE |        |
| ca9993b8-91d5-44d1-889b-5496fd62114c | cirros-0.3.0-x86_64 | ACTIVE |        |
+--------------------------------------+---------------------+--------+--------+

Part 2: compute1 Node

5. Nova (compute / hypervisor node)

5.1 Check for virtualization support

Check whether the CPU supports virtualization:
[root@compute1 ~]# egrep --color=auto -i "(svm|vmx)" /proc/cpuinfo

5.2 Install the compute packages

Configure the yum repository:
[root@compute1 ~]# vim /etc/yum.repos.d/C7-local.repo

[openstack-I]
name=OpenStack I Repository
baseurl=https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-7/
gpgcheck=0
enabled=1

Refresh yum:
# yum clean all
# yum repolist

Install:
[root@compute1 ~]# yum install openstack-nova-compute

Error:
Error: Package: python-nova-2014.1.5-1.el7.centos.noarch (Openstack-I)
           Requires: python-greenlet

The python-greenlet package is required but not available in EPEL, so it has to be downloaded manually:
[root@compute1 ~]# ls python-greenlet-0.4.2-4.el7.x86_64.rpm 
python-greenlet-0.4.2-4.el7.x86_64.rpm
[root@compute1 ~]# yum install -y python-greenlet-0.4.2-4.el7.x86_64.rpm

Install again:
[root@compute1 ~]# yum install openstack-nova-compute

5.3 Configuration

Back up the configuration:
[root@compute1 ~]# cd /etc/nova/
[root@compute1 nova]# cp nova.conf{,.bak}

Edit the configuration:
[root@compute1 ~]# vim /etc/nova/nova.conf

# database connection:
connection=mysql://nova:[email protected]/nova

# qpid settings:
qpid_hostname=192.168.1.1
rpc_backend=qpid

# authentication
[DEFAULT]
auth_strategy=keystone

[keystone_authtoken]
auth_uri=http://controller:5000
auth_host=controller
auth_protocol=http
auth_port=35357
auth_version=v2.0
admin_user=nova
admin_password=nova
admin_tenant_name=service

# glance settings
glance_host=controller

# vnc settings:
my_ip=192.168.1.2
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.1.2
vnc_enabled=true
novncproxy_base_url=http://controller:6080/vnc_auto.html

# virtualization type (requires KVM hardware support; otherwise change this to qemu)
virt_type=kvm

# timeout (seconds) for virtual network interface plugging before reporting an error
vif_plugging_timeout=10

# allow the instance to start even if plugging the network interface fails
vif_plugging_is_fatal=false

5.4 Start the services

[root@compute1 ~]# lsmod | grep kvm
kvm_intel             162153  0 
kvm                   525259  1 kvm_intel

Start the libvirtd service first:
[root@compute1 ~]# systemctl start libvirtd.service

Start messagebus (the D-Bus service):
[root@compute1 ~]# systemctl start messagebus

Start the openstack-nova-compute service:
[root@compute1 ~]# systemctl start openstack-nova-compute

The eventlet import error shows up again, and installing the openstack-nova-compute packages auto-updated other components and broke yum.

Error output:
[root@compute1 ~]# rpm
error: Failed to initialize NSS library

error: Failed to initialize NSS library
There was a problem importing one of the Python modules
required to run yum. The error leading to this problem was:

   cannot import name ts

Please install a package which provides this module, or
verify that the module is installed correctly.

It's possible that the above module doesn't match the
current version of Python, which is:
2.7.5 (default, Nov 20 2015, 02:00:19) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-4)]

Fix (reference: https://www.huangzz.xyz/jie-jue-failed-to-initialize-nss-library-de-wen-ti.html):
[root@compute1 ~]# wget https://www.huangzz.xyz/wp-content/uploads/MyUploads/libnspr4.so.tar
[root@compute1 ~]# tar xf libnspr4.so.tar
[root@compute1 ~]# mv libnspr4.so /usr/lib64/
mv: overwrite ‘/usr/lib64/libnspr4.so’? y
[root@compute1 ~]# yum install glibc.i686 nspr

Install the older eventlet version:
[root@compute1 ~]# yum install python-pip
[root@compute1 ~]# pip install eventlet==0.15.2

The openstack-nova-compute process can now be started:
[root@compute1 ~]# systemctl start openstack-nova-compute

5.5 Yet another error, libvirt related (a huge pitfall; back up the nova configuration file first)

This is a huge pitfall: back up the nova configuration file first, or even take a snapshot of the machine.

The installed libvirt version seems to be incompatible with this release of nova-compute.
Error logs:
May 01 14:50:47 compute1.com libvirtd[4336]: 2018-05-01 06:50:47.622+0000: 4341: error : virDBusCall:1570 : error from service: CheckAuthorization: Connection is closed
May 01 14:50:47 compute1.com libvirtd[4336]: 2018-05-01 06:50:47.622+0000: 4336: error : virNetSocketReadWire:1808 : End of file while reading data: Input/output error
May 01 14:50:55 compute1.com libvirtd[4336]: 2018-05-01 06:50:55.209+0000: 4340: error : virDBusCall:1570 : error from service: CheckAuthorization: Connection is closed
May 01 14:50:55 compute1.com libvirtd[4336]: 2018-05-01 06:50:55.617+0000: 4336: error : virNetSocketReadWire:1808 : End of file while reading data: Input/output error

2018-05-01 14:41:43.839 4248 INFO oslo.messaging._drivers.impl_qpid [-] Connected to AMQP server on 192.168.1.1:5672
2018-05-01 14:50:22.359 4248 WARNING nova.virt.libvirt.driver [-] Connection to libvirt lost: 0
2018-05-01 14:50:47.623 4248 ERROR nova.virt.libvirt.driver [-] Connection to libvirt failed: error from service: CheckAuthorization: Connection is closed
2018-05-01 14:50:47.623 4248 TRACE nova.virt.libvirt.driver Traceback (most recent call last):
2018-05-01 14:50:47.623 4248 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 789, in _connect
2018-05-01 14:50:47.623 4248 TRACE nova.virt.libvirt.driver     libvirt.openAuth, uri, auth, flags)
2018-05-01 14:50:47.623 4248 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 141, in proxy_call
2018-05-01 14:50:47.623 4248 TRACE nova.virt.libvirt.driver     rv = execute(f, *args, **kwargs)
2018-05-01 14:50:47.623 4248 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 122, in execute
2018-05-01 14:50:47.623 4248 TRACE nova.virt.libvirt.driver     six.reraise(c, e, tb)
2018-05-01 14:50:47.623 4248 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 80, in tworker
2018-05-01 14:50:47.623 4248 TRACE nova.virt.libvirt.driver     rv = meth(*args, **kwargs)
2018-05-01 14:50:47.623 4248 TRACE nova.virt.libvirt.driver   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 105, in openAuth
2018-05-01 14:50:47.623 4248 TRACE nova.virt.libvirt.driver     if ret is None:raise libvirtError('virConnectOpenAuth() failed')
2018-05-01 14:50:47.623 4248 TRACE nova.virt.libvirt.driver libvirtError: error from service: CheckAuthorization: Connection is closed
2018-05-01 14:50:47.623 4248 TRACE nova.virt.libvirt.driver 
2018-05-01 14:50:47.766 4248 ERROR nova.openstack.common.periodic_task [-] Error during ComputeManager.update_available_resource: Connection to the hypervisor is broken on host: compute1.com
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task Traceback (most recent call last):
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/site-packages/nova/openstack/common/periodic_task.py", line 182, in run_periodic_tasks
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task     task(self, context)
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5529, in update_available_resource
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task     rt.update_available_resource(context)
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py", line 249, in inner
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task     return f(*args, **kwargs)
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 293, in update_available_resource
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task     resources = self.driver.get_available_resource(self.nodename)
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4204, in get_available_resource
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task     stats = self.get_host_stats(refresh=True)
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4902, in get_host_stats
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task     return self.host_state.get_host_stats(refresh=refresh)
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5310, in get_host_stats
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task     self.update_status()
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5344, in update_status
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task     data["memory_mb"] = self.driver.get_memory_mb_total()
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3839, in get_memory_mb_total
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task     return self._conn.getInfo()[1]
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 723, in _get_connection
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task     wrapped_conn = self._get_new_connection()
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 676, in _get_new_connection
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task     wrapped_conn = self._connect(self.uri(), self.read_only)
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 798, in _connect
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task     raise exception.HypervisorUnavailable(host=CONF.host)
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task HypervisorUnavailable: Connection to the hypervisor is broken on host: compute1.com

Fix (the originally installed libvirt version was 3.2):
Switch the repository base URL to http://vault.centos.org/7.2.1511/os/x86_64/
or use the CentOS 7.2 CD/ISO repository; a sample repo file is sketched below.
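
A sketch of a repo file pointing at the vault tree (the exact point release is an assumption; adjust it to the CentOS version whose libvirt you need):

[root@compute1 ~]# vim /etc/yum.repos.d/c7-vault.repo

[c7-vault]
name=CentOS 7.2.1511 vault
baseurl=http://vault.centos.org/7.2.1511/os/x86_64/
gpgcheck=0
enabled=1
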
[root@compute1 ~]# yum clean all
[root@compute1 ~]# yum repolist

Remove the existing version:
[root@compute1 ~]# yum remove libvirt-daemon libvirt-libs libvirt-python

When installing libvirt 1.2.17, cyrus-sasl-lib was reported as too new, so force a downgrade:
[root@compute1 ~]# rpm -Uvh cyrus-sasl-lib-2.1.26-19.2.el7.x86_64.rpm --force --nodeps

Reinstall (remember to disable the other repositories, keeping only the CD repo and the OpenStack repo, so the libvirt 1.2 series gets installed):
[root@compute1 ~]# yum install openstack-nova-compute libvirt libvirt-python

Restart messagebus (the D-Bus service):
[root@compute1 ~]# systemctl restart messagebus

Start libvirtd:
[root@compute1 ~]# systemctl start libvirtd

It still logs errors; ignoring them for now.

Start openstack-nova-compute:
[root@compute1 ~]# systemctl start openstack-nova-compute.service

Enable the services at boot:
[root@compute1 ~]# systemctl enable libvirtd
[root@compute1 ~]# systemctl enable messagebus
[root@compute1 ~]# systemctl enable openstack-nova-compute

5.6 Verification

Back on the controller node, verify that the compute1 node is now visible:
[root@controller ~]# nova hypervisor-list
+----+---------------------+
| ID | Hypervisor hostname |
+----+---------------------+
| 1  | compute1.com        |
+----+---------------------+

Check compute1's resource statistics:
[root@controller ~]# nova hypervisor-stats
+----------------------+-------+
| Property             | Value |
+----------------------+-------+
| count                | 1     |
| current_workload     | 0     |
| disk_available_least | 77    |
| free_disk_gb         | 78    |
| free_ram_mb          | 2286  |
| local_gb             | 78    |
| local_gb_used        | 0     |
| memory_mb            | 2798  |
| memory_mb_used       | 512   |
| running_vms          | 0     |
| vcpus                | 2     |
| vcpus_used           | 0     |
+----------------------+-------+

Show detailed information about compute1:
[root@controller ~]# nova hypervisor-show compute1.com

Part 3: Network Configuration

6. Neutron Server (controller)

From here on the third node, network1, comes into play.

6.1 Configure the neutron database

Create the database:
MariaDB [(none)]> CREATE DATABASE neutron CHARACTER SET 'utf8';

Grant privileges:
MariaDB [(none)]> GRANT ALL ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
MariaDB [(none)]> GRANT ALL ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
MariaDB [(none)]> FLUSH PRIVILEGES;

6.2 Create the neutron user in keystone

[root@controller ~]# keystone user-create --name=neutron --pass=neutron [email protected]
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |          [email protected]          |
| enabled  |               True               |
|    id    | fa48f4bfed2746d2b2711c46da825407 |
|   name   |             neutron              |
| username |             neutron              |
+----------+----------------------------------+

Add the user to the admin role in the service tenant:
[root@controller ~]# keystone user-role-add --user=neutron --tenant=service --role=admin

Verify:
[root@controller ~]# keystone user-role-list --user=neutron --tenant=service
+----------------------------------+-------+----------------------------------+----------------------------------+
|                id                |  name |             user_id              |            tenant_id             |
+----------------------------------+-------+----------------------------------+----------------------------------+
| 2b54b14daca041c2a3dc66325f5048ce | admin | fa48f4bfed2746d2b2711c46da825407 | f9f13bac5d6f40449b2e4560ab16536d |
+----------------------------------+-------+----------------------------------+----------------------------------+

6.3 Create the neutron service and its endpoint

Add the service:
[root@controller ~]# keystone service-create --name neutron --type network --description "OpenStack Networking"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |       OpenStack Networking       |
|   enabled   |               True               |
|      id     | da34b8a9c89446c6901888e27db931e3 |
|     name    |             neutron              |
|     type    |             network              |
+-------------+----------------------------------+

Add the endpoint:
keystone endpoint-create \
--service-id $(keystone service-list | awk '/network/{print $2}') \
--publicurl http://controller:9696 \
--adminurl http://controller:9696 \
--internalurl http://controller:9696
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |      http://controller:9696      |
|      id     | 6f9f3b37e1d9451896a36bfed1ed1536 |
| internalurl |      http://controller:9696      |
|  publicurl  |      http://controller:9696      |
|    region   |            regionOne             |
|  service_id | da34b8a9c89446c6901888e27db931e3 |
+-------------+----------------------------------+

6.4 Install the neutron packages

[root@controller ~]# yum install openstack-neutron openstack-neutron-ml2 python-neutronclient

6.5 Configure neutron

Back up the configuration:
[root@controller neutron]# cd /etc/neutron/
[root@controller neutron]# cp neutron.conf neutron.conf.bak

Look up the ID of the service tenant (needed below; a one-liner for this follows the listing):
[root@controller ~]# keystone tenant-list
+----------------------------------+---------+---------+
|                id                |   name  | enabled |
+----------------------------------+---------+---------+
| abfe5df994e54c6190e98e3f3f3dab38 |  admin  |   True  |
| fbff77c905114d50b5be94ffd46203cd |   demo  |   True  |
| f9f13bac5d6f40449b2e4560ab16536d | service |   True  |
+----------------------------------+---------+---------+
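
The service tenant id that goes into nova_admin_tenant_id below can also be extracted non-interactively, in the same awk style used earlier:

[root@controller ~]# keystone tenant-list | awk '/ service /{print $2}'
f9f13bac5d6f40449b2e4560ab16536d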

Configuration (/etc/neutron/neutron.conf):
connection = mysql://neutron:[email protected]:3306/neutron

[DEFAULT]
# show verbose logs
verbose = True
auth_strategy =keystone

[keystone_authtoken]
auth_uri=http://controller:5000
auth_host=controller
auth_protocol=http
auth_port=35357
admin_user=neutron
admin_password=neutron
admin_tenant_name=service

rpc_backend=neutron.openstack.common.rpc.impl_qpid
qpid_hostname = 192.168.1.1

# notify nova of network topology changes
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2
nova_admin_username = nova
nova_admin_tenant_id = f9f13bac5d6f40449b2e4560ab16536d
nova_admin_password = nova
nova_admin_auth_url = http://controller:35357/v2.0

# core networking plugin:
core_plugin = ml2
service_plugins = router
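The same keys can also be written non-interactively with openstack-config from the openstack-utils package installed earlier; a minimal sketch covering only a few of the values above (the section names assume the standard Icehouse layout):

[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password neutron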

6.6 Configure the ML2 plugin

Back up:
[root@controller ~]# cd /etc/neutron/plugins/ml2/
[root@controller ml2]# cp ml2_conf.ini ml2_conf.ini.bak

Edit the configuration:
[root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

6.9 Configure nova to use neutron networking

Edit the configuration:
[root@controller ~]# vim /etc/nova/nova.conf

[DEFAULT]
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://controller:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=neutron
neutron_admin_auth_url=http://controller:35357/v2.0
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron

Create a symlink:
[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@controller ~]# ll /etc/neutron/plugin.ini
lrwxrwxrwx 1 root root 37 May  2 01:36 /etc/neutron/plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini

6.10 Restart the nova services

[root@controller ~]# systemctl restart openstack-nova-api
[root@controller ~]# systemctl restart openstack-nova-scheduler
[root@controller ~]# systemctl restart openstack-nova-conductor

6.11 Start the neutron server

[root@controller ~]# systemctl start neutron-server
[root@controller ~]# systemctl enable neutron-server

Check the service status (screenshot omitted).

Check the logs:
[root@controller ~]# tail -60 /var/log/neutron/server.log

Posts online say it is normal for the log to complain that it cannot find the plugin, so I will take their word for it (screenshot omitted).

[root@controller ~]# grep -i "error" /var/log/neutron/server.log 
[root@controller ~]# grep -i "fa" /var/log/neutron/server.log 

7. Network node (configuration on network1)

7.1 Configure the repository for the matching OpenStack release

7.2 Edit kernel parameters

[root@network1 ~]# vim /etc/sysctl.conf 

net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.bridge.bridge-nf-call-arptables=1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1

Apply:
[root@network1 ~]# sysctl -p
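If sysctl complains that the net.bridge.* keys do not exist (a possible pitfall on a minimal CentOS 7 install, not something hit in these notes), the bridge netfilter module is probably not loaded yet; a sketch of the workaround:

[root@network1 ~]# modprobe br_netfilter 2>/dev/null || modprobe bridge
[root@network1 ~]# sysctl -p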

7.3 Install the neutron packages (EPEL is needed for dependencies; just make sure the openvswitch package itself comes from the OpenStack repository)

[root@network1 ~]# yum install python-greenlet-0.4.2-4.el7.x86_64.rpm

[root@network1 ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch

7.4 Configure neutron

Back up:
[root@network1 ~]# cd /etc/neutron/
[root@network1 neutron]# cp neutron.conf neutron.conf.bak

Edit the configuration:
[root@network1 ~]# vim /etc/neutron/neutron.conf

[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri=http://controller:5000
auth_host=controller
auth_protocol=http
auth_port=35357
admin_user=neutron
admin_password=neutron
admin_tenant_name=service

#QPID
rpc_backend=neutron.openstack.common.rpc.impl_qpid

qpid_hostname = 192.168.1.1

core_plugin = ml2

service_plugins = router

7.5 Configure the L3 agent

[root@network1 ~]# cd /etc/neutron/
[root@network1 neutron]# cp l3_agent.ini l3_agent.ini.bak

[root@network1 ~]# vim /etc/neutron/l3_agent.ini 

[DEFAULT]
verbose = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True

7.6 Configure the DHCP agent

[root@network1 ~]# vim /etc/neutron/dhcp_agent.ini 

[DEFAULT]
verbose = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

Force a smaller MTU on the instances so their packets fit inside the GRE tunnel:
[root@network1 ~]# vim /etc/neutron/dnsmasq-neutron.conf

# DHCP option 26 = interface MTU; 1454 leaves room for the GRE/IP encapsulation overhead on a 1500-byte link
dhcp-option-force=26,1454

7.7 Configure the metadata agent

[root@network1 ~]# vim /etc/neutron/metadata_agent.ini 

verbose = True
auth_url = http://controller:5000/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = neutron
admin_password = neutron
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET
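METADATA_SECRET is only a placeholder; a random value can be generated once and pasted into both this file and nova.conf on the controller (my own habit, not part of the original notes):

[root@network1 ~]# openssl rand -hex 10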

7.8 Back on the controller node, configure metadata support

[root@controller ~]# vim /etc/nova/nova.conf

service_neutron_metadata_proxy=true
neutron_metadata_proxy_shared_secret=METADATA_SECRET

Restart the nova-api service:
[root@controller ~]# systemctl restart openstack-nova-api

7.9 Continue on network1: configure the L2 (ML2) plugin

[root@network1 ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini 

[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

# add this section yourself
[ovs]
local_ip = 192.168.2.254
tunnel_type = gre
enable_tunneling = True

7.10 Start and configure openvswitch on network1

Start openvswitch:
[root@network1 ~]# systemctl start openvswitch
[root@network1 ~]# systemctl enable openvswitch

Add an integration (internal) bridge:
[root@network1 ~]# ovs-vsctl add-br br-in

Add an external bridge:
[root@network1 ~]# ovs-vsctl add-br br-ex

Clear the address, gateway, and netmask from eth2, move the address onto br-ex, and add eth2 as a port of the external bridge:
[root@network1 ~]# ifconfig eth2 0; ifconfig br-ex 10.201.106.133/24 up; ovs-vsctl add-port br-ex eth2

Also remove the address, gateway, and netmask from the eth2 ifcfg file so the change survives a restart.

Configure the default route:
[root@network1 ~]# route add default gw 10.201.106.2

Verify:
[root@network1 ~]# ovs-vsctl show
c95ca634-3c90-4aca-ae62-21d4f740e3b5
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth2"
            Interface "eth2"

Set the external bridge ID:
[root@network1 ~]# ovs-vsctl br-set-external-id br-ex bridge-id br-ex

Disable GRO on the external NIC eth2 to improve network performance:
[root@network1 ~]# ethtool -K eth2 gro off
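To double-check the bridge layout and the moved address (assuming the interface names and address used above):

[root@network1 ~]# ovs-vsctl list-ports br-ex
[root@network1 ~]# ip addr show br-ex
[root@network1 ~]# ip route show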

7.11 Other configuration

Create a symlink for the L2 plugin configuration:
[root@network1 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Due to a packaging bug, the systemd unit file for the openvswitch agent has to be modified.
Back it up first:
[root@network1 ~]# rpm -ql openstack-neutron-openvswitch | grep agent.service
/usr/lib/systemd/system/neutron-openvswitch-agent.service

[root@network1 ~]# cp /usr/lib/systemd/system/neutron-openvswitch-agent.service{,.orig}

Replace the plugin path (vim substitution):
[root@network1 ~]# vim /usr/lib/systemd/system/neutron-openvswitch-agent.service 

:%s@plugins/openvswitch/ovs_neutron_plugin.ini@plugin.ini@ig
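Assuming the substitution above is the intended one, the same edit can be done non-interactively with sed, followed by a daemon reload:

[root@network1 ~]# sed -i 's@plugins/openvswitch/ovs_neutron_plugin.ini@plugin.ini@g' /usr/lib/systemd/system/neutron-openvswitch-agent.service
[root@network1 ~]# systemctl daemon-reload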

Re-create the service links so systemd picks up the edited unit:
[root@network1 ~]# systemctl disable neutron-openvswitch-agent
[root@network1 ~]# systemctl enable neutron-openvswitch-agent

7.12 Start the services on network1

[root@network1 ~]# for svc in openvswitch l3 dhcp metadata;do systemctl start neutron-${svc}-agent;systemctl enable neutron-${svc}-agent;done

Check their status:
[root@network1 ~]# for svc in openvswitch l3 dhcp metadata;do systemctl status neutron-${svc}-agent;done

Have a look at the logs (tab completion shows the agent log files):
[root@network1 ~]# tail -50 /var/log/neutron/
dhcp-agent.log         l3-agent.log           metadata-agent.log     openvswitch-agent.log 

8. Network configuration on the compute1 node

8.1 Enable the kernel networking parameters

[root@compute1 ~]# vim /etc/sysctl.conf 

net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.bridge.bridge-nf-call-arptables=1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1

Apply:
[root@compute1 ~]# sysctl -p

8.2 Install the packages (dependencies require EPEL)

[root@compute1 ~]# yum install openstack-neutron-ml2 openstack-neutron-openvswitch

8.3 Configure neutron

Back up, then edit /etc/neutron/neutron.conf:
[root@compute1 ~]# cp /etc/neutron/neutron.conf{,.bak}

[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri=http://controller:5000
auth_host=controller
auth_protocol=http
auth_port=35357
admin_user=neutron
admin_password=neutron
admin_tenant_name=service

# QPID
rpc_backend=neutron.openstack.common.rpc.impl_qpid
qpid_hostname = 192.168.1.1

core_plugin = ml2
service_plugins = router

8.4 ML2 configuration

Back up first:
[root@compute1 ~]# cp /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
[root@compute1 ~]# ls /etc/neutron/plugins/ml2/ml2_conf.ini*
/etc/neutron/plugins/ml2/ml2_conf.ini  /etc/neutron/plugins/ml2/ml2_conf.ini.bak

Copy the configuration over from network1:
[root@network1 ~]# scp /etc/neutron/plugins/ml2/ml2_conf.ini 192.168.1.2:/etc/neutron/plugins/ml2/

Fix the ownership (the agent runs as the neutron user):
[root@compute1 ~]# chown root:neutron /etc/neutron/plugins/ml2/ml2_conf.ini

Then adjust:
[root@compute1 ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ovs]
# change to this node's own eth1 (tunnel) address
local_ip = 192.168.2.2

8.5 Start and configure openvswitch

[root@compute1 ~]# systemctl start openvswitch
[root@compute1 ~]# systemctl enable openvswitch

Add the integration bridge:
[root@compute1 ~]# ovs-vsctl add-br br-in
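A quick ovs-vsctl show should now list br-in; once the agent is running it will also create br-tun for the GRE tunnels (my expectation, not captured in the original output):

[root@compute1 ~]# ovs-vsctl show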

8.6 Modify the nova configuration file

[root@compute1 ~]# vim /etc/nova/nova.conf

network_api_class=nova.network.neutronv2.api.API
neutron_url=http://controller:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=neutron
neutron_admin_auth_url=http://controller:5000/v2.0

linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron

8.7 Start the services

Create the L2 plugin configuration symlink:
[root@compute1 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Restart the nova compute service on compute1:
[root@compute1 ~]# systemctl restart openstack-nova-compute

Because of the same packaging bug, the openvswitch agent's systemd unit needs the same edit here as on network1 (see 7.11).
Back it up:
[root@compute1 ~]# rpm -ql openstack-neutron-openvswitch | grep agent.service
/usr/lib/systemd/system/neutron-openvswitch-agent.service

[root@compute1 ~]# cp /usr/lib/systemd/system/neutron-openvswitch-agent.service{,.orig}

Start the neutron openvswitch agent:
[root@compute1 ~]# systemctl start neutron-openvswitch-agent
[root@compute1 ~]# systemctl enable neutron-openvswitch-agent

If the agent will not start, check the owner and group of the configuration file. The service runs as the neutron user; if the file was copied over as root, both owner and group will be root and the agent cannot read it. The default ownership is root:neutron (group neutron):

[root@network1 ~]# ll /etc/neutron/plugins/ml2/ml2_conf.ini 
-rw-r----- 1 root neutron 2567 May  2 16:21 /etc/neutron/plugins/ml2/ml2_conf.ini

8.8 Verification

Run a networking command on the controller:
[root@controller ~]# neutron net-list

[root@controller ~]# 
No error is returned; by default the list is empty.

If an authentication error appears, change auth_uri to identity_uri in the neutron configuration and then restart all the neutron services:

[root@network1 ~]# for i in openvswitch l3 dhcp metadata;do systemctl restart neutron-${i}-agent;done

[root@compute1 ~]# systemctl restart neutron-openvswitch-agent

Verify:
Load the admin environment file on compute1 and network1:
vim ~/.admin-openrc.sh
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0

source ~/.admin-openrc.sh

Then run neutron net-list; as long as no error is reported, everything is fine.
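A more direct check (not shown in the original notes) is to list the agents from the controller and confirm the openvswitch, L3, DHCP, and metadata agents are all alive:

[root@controller ~]# neutron agent-list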

9. Configure networks with neutron

9.1 Create the external network

Create the external network:
[root@controller ~]# neutron net-create ext-net --shared --router:external=True 
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 765c7736-23e1-4628-a30f-8c7a6b3fb112 |
| name                      | ext-net                              |
| provider:network_type     | gre                                  |
| provider:physical_network |                                      |
| provider:segmentation_id  | 1                                    |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | abfe5df994e54c6190e98e3f3f3dab38     |
+---------------------------+--------------------------------------+

On top of this network, create a subnet (layer 3) with DHCP disabled, specifying the allocation range and the gateway:
[root@controller ~]# neutron subnet-create ext-net --name ext-subnet --allocation-pool start=10.201.106.150,end=10.201.106.180 --disable-dhcp --gateway 10.201.106.2 10.201.106.0/24
Created a new subnet:
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| allocation_pools | {"start": "10.201.106.150", "end": "10.201.106.180"} |
| cidr             | 10.201.106.0/24                                      |
| dns_nameservers  |                                                      |
| enable_dhcp      | False                                                |
| gateway_ip       | 10.201.106.2                                         |
| host_routes      |                                                      |
| id               | 4a1f4b34-c05d-4e0c-94a3-e793baf77903                 |
| ip_version       | 4                                                    |
| name             | ext-subnet                                           |
| network_id       | 8b1ce93e-d53f-4db8-9618-ea0e5a44c7e2                 |
| tenant_id        | abfe5df994e54c6190e98e3f3f3dab38                     |
+------------------+------------------------------------------------------+

    Because it was created as the admin user, the tenant_id is that of the admin tenant:
    [root@controller ~]# keystone tenant-list
    +----------------------------------+---------+---------+
    |                id                |   name  | enabled |
    +----------------------------------+---------+---------+
    | abfe5df994e54c6190e98e3f3f3dab38 |  admin  |   True  |
    | fbff77c905114d50b5be94ffd46203cd |   demo  |   True  |
    | f9f13bac5d6f40449b2e4560ab16536d | service |   True  |
    +----------------------------------+---------+---------+

9.2 Switch to the regular user demo to manage networks

[root@controller ~]# cp .admin-openrc.sh .demo-os.sh

Edit the demo credentials:
[root@controller ~]# vim .demo-os.sh 

export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_TENANT_NAME=demo
export OS_AUTH_URL=http://controller:35357/v2.0

Load the variables:
[root@controller ~]# source .demo-os.sh

List the networks:
[root@controller ~]# neutron net-list
+--------------------------------------+---------+------------------------------------------------------+
| id                                   | name    | subnets                                              |
+--------------------------------------+---------+------------------------------------------------------+
| 765c7736-23e1-4628-a30f-8c7a6b3fb112 | ext-net | 9750a55a-1993-4e6a-a972-e91ffb700c08 10.201.106.0/24 |
+--------------------------------------+---------+------------------------------------------------------+

9.3 Create the tenant network as the demo user

Create the layer-2 network (note: the output pasted below is from re-creating ext-net; for the demo tenant network the command should be neutron net-create demo-net):
[root@controller ~]# neutron net-create ext-net --shared --router:external=True 
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 8b1ce93e-d53f-4db8-9618-ea0e5a44c7e2 |
| name                      | ext-net                              |
| provider:network_type     | gre                                  |
| provider:physical_network |                                      |
| provider:segmentation_id  | 1                                    |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | abfe5df994e54c6190e98e3f3f3dab38     |
+---------------------------+--------------------------------------+

Create the layer-3 subnet on top of that network (DHCP is provided by default):
[root@controller ~]# neutron subnet-create demo-net --name demo-subnet --gateway 192.168.3.254 192.168.3.0/24
Created a new subnet:
+------------------+--------------------------------------------------+
| Field            | Value                                            |
+------------------+--------------------------------------------------+
| allocation_pools | {"start": "192.168.3.1", "end": "192.168.3.253"} |
| cidr             | 192.168.3.0/24                                   |
| dns_nameservers  |                                                  |
| enable_dhcp      | True                                             |
| gateway_ip       | 192.168.3.254                                    |
| host_routes      |                                                  |
| id               | 2e2902f4-a8e7-4468-b025-e3192c107c63             |
| ip_version       | 4                                                |
| name             | demo-subnet                                      |
| network_id       | 7f967a07-b98c-4684-ba2a-dd2ad4dc7171             |
| tenant_id        | fbff77c905114d50b5be94ffd46203cd                 |
+------------------+--------------------------------------------------+

9.3.1 Manually create a router

Check the help:
[root@controller ~]# neutron help router-create

Create the router:
[root@controller ~]# neutron router-create demo-router
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 2f000bf3-cbc3-49b5-8470-9f5b4db54aa5 |
| name                  | demo-router                          |
| status                | ACTIVE                               |
| tenant_id             | fbff77c905114d50b5be94ffd46203cd     |
+-----------------------+--------------------------------------+

Add an interface (the subnet gateway) to the router:
[root@controller ~]# neutron router-interface-add demo-router demo-subnet
Added interface bdefa77a-4a7b-4fdc-8196-cfdd7337a822 to router demo-router.
[root@controller ~]# neutron port-list
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                            |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| bdefa77a-4a7b-4fdc-8196-cfdd7337a822 |      | fa:16:3e:5a:12:cf | {"subnet_id": "2e2902f4-a8e7-4468-b025-e3192c107c63", "ip_address": "192.168.3.254"} |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+

Associate the router with the external network (mine seems to have a bug: neutron router-port-list does not show the allocated external IP and port, yet deleting ext-subnet fails with an error saying addresses are still allocated):
[root@controller ~]# neutron router-gateway-set demo-router ext-net
Set gateway for router demo-router


Check on the network1 node:
[root@network1 ~]# ip netns list
qrouter-2f000bf3-cbc3-49b5-8470-9f5b4db54aa5
qrouter-eb098169-d68a-400e-ac5d-4b95bc6229b1

The allocated external IP can be seen inside the namespace (screenshot omitted).
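The external address taken from ext-subnet can also be inspected from inside the router namespace, using the qrouter ID listed above:

[root@network1 ~]# ip netns exec qrouter-2f000bf3-cbc3-49b5-8470-9f5b4db54aa5 ip addr show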

10. Horizon (the dashboard web UI)

10.1 Install the packages

[root@controller ~]# yum install memcached python-memcached mod_wsgi openstack-dashboard

10.2 Start memcached

[root@controller ~]# systemctl enable memcached
[root@controller ~]# systemctl start memcached

[root@controller ~]# netstat -tanp | grep memcached
tcp        0      0 0.0.0.0:11211           0.0.0.0:*               LISTEN      25002/memcached     
tcp6       0      0 :::11211                :::*                    LISTEN      25002/memcached 
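As an extra connectivity check (not in the original notes, and assuming nc from nmap-ncat is installed), memcached can be queried directly on its listening port:

[root@controller ~]# printf 'stats\nquit\n' | nc 192.168.1.1 11211 | head -5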

10.3 Configure the dashboard

Back up:
[root@controller ~]# cp -p /etc/openstack-dashboard/local_settings{,.bak}

Edit the configuration:
[root@controller ~]# vim /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"
# allow all hosts to access:
ALLOWED_HOSTS = ['*']

# comment out the default local-memory cache:
#CACHES = {
#    'default': {
#        'BACKEND' : 'django.core.cache.backends.locmem.LocMemCache'
#    }
#}

# use memcached instead:
CACHES = {
    'default': {
        'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION' : '192.168.1.1:11211',
    }
}

# time zone
TIME_ZONE = "Asia/Chongqing"

Start the httpd service:
[root@controller ~]# systemctl start httpd
[root@controller ~]# systemctl enable httpd

10.4 Access test

Open http://10.201.106.131/dashboard in a browser.


Log in with a keystone user; here the admin user is used.

Interface after logging in (screenshots omitted).

Logging in as the demo user (screenshots omitted).

10.5 Fixing the dashboard error "Unable to connect to Neutron"


Reference: https://blog.csdn.net/wmj2004/article/details/53216024

Fix:
[root@controller ~]# vim /usr/share/openstack-dashboard/openstack_dashboard/api

    def is_simple_associate_supported(self):
        def is_supported(self):
            network_config = getattr(settings, 'OPENSTACK_NEUTRON_NETWORK', {})
            return network_config.get('enable_router', True)

Restart the web service:
[root@controller ~]# systemctl restart httpd

11. Create and manage an instance (virtual machine)

11.1 Generate a key pair for the demo user

Load the demo user's credentials:
[root@controller ~]# source .demo-os.sh

Generate a private/public key pair; one already exists from earlier, so do not overwrite it:
[root@controller ~]# ssh-keygen -t rsa -P ''
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
/root/.ssh/id_rsa already exists.
Overwrite (y/n)? n

Import the public key:
[root@controller ~]# nova keypair-add --pub-key ~/.ssh/id_rsa.pub demokey

List the key pairs:
[root@controller ~]# nova keypair-list
+---------+-------------------------------------------------+
| Name    | Fingerprint                                     |
+---------+-------------------------------------------------+
| demokey | 35:78:7f:bf:9f:75:d3:ef:7a:b1:ee:a2:7f:2f:e3:27 |
+---------+-------------------------------------------------+

11.2 Preparation before launching an instance

List the built-in flavors:
[root@controller ~]# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

Create a custom flavor.
Switch to the admin user:
[root@controller ~]# source .admin-openrc.sh
Check the help:
[root@controller ~]# nova help flavor-create

Create (positional arguments: name, ID, memory in MB, disk in GB, vCPUs):
[root@controller ~]# nova flavor-create --is-public true m1.cirros 6 128 1 1
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 6  | m1.cirros | 128       | 1    | 0         |      | 1     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

[root@controller ~]# nova flavor-list | grep cirros
| 6  | m1.cirros | 128       | 1    | 0         |      | 1     | 1.0         | True      |

Switch back to demo:
[root@controller ~]# source .demo-os.sh 

List the disk images available:
[root@controller ~]# nova image-list
+--------------------------------------+---------------------+--------+--------+
| ID                                   | Name                | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| 18a0019f-48e5-4f78-9f13-1166b4d53a12 | cirros-0.3.0-i386   | ACTIVE |        |
| ca9993b8-91d5-44d1-889b-5496fd62114c | cirros-0.3.0-x86_64 | ACTIVE |        |
+--------------------------------------+---------------------+--------+--------+

If a regular user cannot see the images, use the admin user to change the image visibility to public (when uploading from the CLI, remember to pass --is-public=true; annoyingly, I had previously typed ture instead of true). Screenshots omitted.

List the networks and subnets you can use:
[root@controller ~]# nova net-list
+--------------------------------------+----------+------+
| ID                                   | Label    | CIDR |
+--------------------------------------+----------+------+
| 8b1ce93e-d53f-4db8-9618-ea0e5a44c7e2 | ext-net  | -    |
| 7f967a07-b98c-4684-ba2a-dd2ad4dc7171 | demo-net | -    |
+--------------------------------------+----------+------+
[root@controller ~]# neutron subnet-list 
+--------------------------------------+-------------+-----------------+------------------------------------------------------+
| id                                   | name        | cidr            | allocation_pools                                     |
+--------------------------------------+-------------+-----------------+------------------------------------------------------+
| 4a1f4b34-c05d-4e0c-94a3-e793baf77903 | ext-subnet  | 10.201.106.0/24 | {"start": "10.201.106.150", "end": "10.201.106.180"} |
| 2e2902f4-a8e7-4468-b025-e3192c107c63 | demo-subnet | 192.168.3.0/24  | {"start": "192.168.3.1", "end": "192.168.3.253"}     |
+--------------------------------------+-------------+-----------------+------------------------------------------------------+

List the available security groups:
[root@controller ~]# nova secgroup-list
+--------------------------------------+---------+-------------+
| Id                                   | Name    | Description |
+--------------------------------------+---------+-------------+
| 3b28dbde-1640-4043-81c9-d01cb822020b | default | default     |
+--------------------------------------+---------+-------------+

List the rules in the default group:
[root@controller ~]# nova secgroup-list-rules default
+-------------+-----------+---------+----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+----------+--------------+
|             |           |         |          | default      |
|             |           |         |          | default      |
+-------------+-----------+---------+----------+--------------+

11.3 Launch the virtual machine instance (attaching to a network requires the network ID)

[root@controller ~]# nova boot --flavor m1.cirros --image cirros-0.3.0-i386 --key-name demokey --nic net-id=7f967a07-b98c-4684-ba2a-dd2ad4dc7171 --security-group default demo-0001
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                   |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| OS-EXT-STS:power_state               | 0                                                        |
| OS-EXT-STS:task_state                | scheduling                                               |
| OS-EXT-STS:vm_state                  | building                                                 |
| OS-SRV-USG:launched_at               | -                                                        |
| OS-SRV-USG:terminated_at             | -                                                        |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| adminPass                            | vNpTGgs6uzDq                                             |
| config_drive                         |                                                          |
| created                              | 2018-05-03T09:43:51Z                                     |
| flavor                               | m1.cirros (6)                                            |
| hostId                               |                                                          |
| id                                   | f5a4aec6-100e-482c-9944-b319189facba                     |
| image                                | cirros-0.3.0-i386 (18a0019f-48e5-4f78-9f13-1166b4d53a12) |
| key_name                             | demokey                                                  |
| metadata                             | {}                                                       |
| name                                 | demo-0001                                                |
| os-extended-volumes:volumes_attached | []                                                       |
| progress                             | 0                                                        |
| security_groups                      | default                                                  |
| status                               | BUILD                                                    |
| tenant_id                            | fbff77c905114d50b5be94ffd46203cd                         |
| updated                              | 2018-05-03T09:43:52Z                                     |
| user_id                              | 472d9776f8984bb99a728985760ad5ba                         |
+--------------------------------------+----------------------------------------------------------+

Check the instance status:
[root@controller ~]# nova list
+--------------------------------------+-----------+--------+------------+-------------+----------+
| ID                                   | Name      | Status | Task State | Power State | Networks |
+--------------------------------------+-----------+--------+------------+-------------+----------+
| f5a4aec6-100e-482c-9944-b319189facba | demo-0001 | BUILD  | spawning   | NOSTATE     |          |
+--------------------------------------+-----------+--------+------------+-------------+----------+

After a while, it is finally built:
[root@controller ~]# nova list
+--------------------------------------+-----------+--------+------------+-------------+----------------------+
| ID                                   | Name      | Status | Task State | Power State | Networks             |
+--------------------------------------+-----------+--------+------------+-------------+----------------------+
| f5a4aec6-100e-482c-9944-b319189facba | demo-0001 | ACTIVE | -          | Running     | demo-net=192.168.3.1 |
+--------------------------------------+-----------+--------+------------+-------------+----------------------+

If it fails to start, check whether neutron-openvswitch-agent is running on the compute1 and network1 nodes.

Get the VNC console URL:
[root@controller ~]# nova get-vnc-console demo-0001 novnc
+-------+---------------------------------------------------------------------------------+
| Type  | Url                                                                             |
+-------+---------------------------------------------------------------------------------+
| novnc | http://controller:6080/vnc_auto.html?token=c72f7bc4-e7f4-47b1-b4e5-362fcb9811c3 |
+-------+---------------------------------------------------------------------------------+

Open the URL in a browser: http://controller:6080/vnc_auto.html?token=c72f7bc4-e7f4-47b1-b4e5-362fcb9811c3


11.5 Security group settings

Allow ICMP (ping) in the default security group:
[root@controller ~]# source .admin-openrc.sh
[root@controller ~]# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
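To actually SSH into the instance with the demokey key pair, TCP port 22 also has to be opened; a sketch of the usual follow-up rule (not part of the original notes):

[root@controller ~]# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0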
