OpenStack Rocky: a detailed installation guide

Author: 40kuai

Blog: http://www.cnblogs.com/40kuai/

Personal blog: http://www.heleicool.cn/

For questions, reach me on QQ: 948793841

Lab environment

OS image: CentOS-7-x86_64-DVD-1804

Platform: VMware

hostname             ip              role
node1.heleicool.cn   172.16.175.11   management (controller) node
node2.heleicool.cn   172.16.175.12   compute node

Other details:

root password: 123123

Environment setup

Install the required utilities:

yum install -y vim net-tools wget telnet

Set the hostname on each node:
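
For example, using hostnamectl (run the matching command on each node; hostnames from the table above):

hostnamectl set-hostname node1.heleicool.cn   # on node1
hostnamectl set-hostname node2.heleicool.cn   # on node2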

Configure the NICs: the subnet is 172.16.175.0/24 and the gateway is 172.16.175.2.

NIC configuration for node1:

TYPE="Ethernet"
BOOTPROTO="static"
NAME="eth0"
DEVICE="eth0"
ONBOOT="yes"
IPADDR=172.16.175.11
NETMASK=255.255.255.0
GATEWAY=172.16.175.2

NIC configuration for node2:

TYPE="Ethernet"
BOOTPROTO="static"
NAME="eth0"
DEVICE="eth0"
ONBOOT="yes"
IPADDR=172.16.175.12
NETMASK=255.255.255.0
GATEWAY=172.16.175.2

Restart the network service:

systemctl restart network

Configure /etc/hosts on both nodes:

172.16.175.11	node1.heleicool.cn
172.16.175.12	node2.heleicool.cn

Configure /etc/resolv.conf on both nodes:

nameserver 8.8.8.8

Disable the firewall:

systemctl disable firewalld 
systemctl stop firewalld 

Disable SELinux (this step can probably be skipped):

setenforce 0
vim /etc/selinux/config
	SELINUX=disabled

Install the OpenStack packages

Install the repository for the matching release:

yum install centos-release-openstack-rocky -y

Install the OpenStack client:

yum install python-openstackclient -y

RHEL and CentOS enable SELinux by default. Install the openstack-selinux package to automatically manage security policies for the OpenStack services:

yum install openstack-selinux -y

Install the database

Install the packages:

yum install mariadb mariadb-server python2-PyMySQL -y

Create and edit /etc/my.cnf.d/openstack.cnf:

[mysqld]
bind-address = 172.16.175.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Start the database:

systemctl enable mariadb.service
systemctl start mariadb.service

Secure the database service by running the mysql_secure_installation script. In particular, choose a suitable password for the database root account:

mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] y  # set a root password
New password:   # enter the new root password twice
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
 ... Success!


By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y  # remove anonymous users
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] y # disallow remote root login
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] y # remove the test database
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y  # reload the privilege tables
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

Install the message queue

Install rabbitmq:

yum install rabbitmq-server -y

Start rabbitmq:

systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service

Add the openstack user:

# I use openstack for both the username and the password.
rabbitmqctl add_user openstack openstack

Grant the openstack user configure, write, and read access:

rabbitmqctl set_permissions openstack ".*" ".*" ".*"
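
A quick optional sanity check with standard rabbitmqctl subcommands:

rabbitmqctl list_users
rabbitmqctl list_permissions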

Install Memcached

Install the packages:

yum install memcached python-memcached -y

Edit /etc/sysconfig/memcached and update the listen addresses:

OPTIONS="-l 127.0.0.1,::1,172.16.175.11"

Start memcached:

systemctl enable memcached.service
systemctl start memcached.service

Listening ports so far:

# rabbitmq ports
tcp        0      0 0.0.0.0:25672           0.0.0.0:*               LISTEN      1690/beam
# mariadb-server port
tcp        0      0 172.16.175.11:3306      0.0.0.0:*               LISTEN      1506/mysqld
# memcached ports
tcp        0      0 172.16.175.11:11211     0.0.0.0:*               LISTEN      2236/memcached
tcp        0      0 127.0.0.1:11211         0.0.0.0:*               LISTEN      2236/memcached
tcp        0      0 0.0.0.0:4369            0.0.0.0:*               LISTEN      1/systemd
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      766/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1050/master
tcp6       0      0 :::5672                 :::*                    LISTEN      1690/beam
tcp6       0      0 ::1:11211               :::*                    LISTEN      2236/memcached
tcp6       0      0 :::22                   :::*                    LISTEN      766/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      1050/master

Installing the OpenStack services

Keystone service installation

Configure the keystone database:

Connect to the database server as root with the database client:

mysql -u root -p

Create the keystone database and grant it the appropriate access:

CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';

Install and configure keystone

Run the following command to install the packages:

yum install openstack-keystone httpd mod_wsgi -y

Edit /etc/keystone/keystone.conf and complete the following actions:

[database]
connection = mysql+pymysql://keystone:[email protected]/keystone
[token]
provider = fernet

Populate the Identity service database:

su -s /bin/sh -c "keystone-manage db_sync" keystone
# verify the tables were created
mysql -ukeystone -pkeystone -e "use keystone; show tables;"

Initialize the Fernet key repositories:

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

Bootstrap the Identity service:

# --bootstrap-password sets the admin user's password; here it is admin.
keystone-manage bootstrap --bootstrap-password admin \
  --bootstrap-admin-url http://172.16.175.11:5000/v3/ \
  --bootstrap-internal-url http://172.16.175.11:5000/v3/ \
  --bootstrap-public-url http://172.16.175.11:5000/v3/ \
  --bootstrap-region-id RegionOne

Configure the Apache HTTP service

Edit /etc/httpd/conf/httpd.conf:

ServerName 172.16.175.11

Create a link to the /usr/share/keystone/wsgi-keystone.conf file:

ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

Start the service

Start the Apache HTTP service and configure it to start at boot:

systemctl enable httpd.service
systemctl start httpd.service

Configure the administrative account:

export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://172.16.175.11:5000/v3
export OS_IDENTITY_API_VERSION=3

Create domains, projects, users, and roles

Although the "default" domain already exists from the keystone-manage bootstrap step above, the formal way to create a new domain would be:

# openstack domain create --description "An Example Domain" example

Using the default domain, create the service project, which is used for service accounts:

openstack project create --domain default \
  --description "Service Project" service

Create the myproject project: regular (non-admin) tasks should use an unprivileged project and user:

openstack project create --domain default \
  --description "Demo Project" myproject

Create the myuser user:

# you will be prompted for the new user's password
openstack user create --domain default \
  --password-prompt myuser

Create the myrole role:

openstack role create myrole

Add myuser to the myproject project and grant it the myrole role:

openstack role add --project myproject --user myuser myrole

Verify the users

Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables:

unset OS_AUTH_URL OS_PASSWORD

As the admin user, request an authentication token:

# you will be prompted for the admin password
openstack --os-auth-url http://172.16.175.11:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue

As the myuser user, request an authentication token:

# you will be prompted for the myuser password
openstack --os-auth-url http://172.16.175.11:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name myproject --os-username myuser token issue

Create OpenStack client environment scripts

The openstack client interacts with the Identity service either through command-line arguments or through environment variables. For efficiency, create environment scripts:

Create the admin user environment script, admin-openstack.sh:

export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://172.16.175.11:5000/v3
export OS_IDENTITY_API_VERSION=3

Create the myuser user environment script, demo-openstack.sh:

export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=myuser
export OS_AUTH_URL=http://172.16.175.11:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

To use a script:

source admin-openstack.sh
openstack token issue

Glance service installation

Configure the glance database:

Log in to the database as root:

mysql -u root -p

Create the glance database and grant access:

CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';

Create the glance service credentials, using the admin user:

source admin-openstack.sh

Create the glance user:

# you will be prompted for the glance user's password; mine is glance
openstack user create --domain default --password-prompt glance

Add the glance user to the service project and grant it the admin role:

openstack role add --project service --user glance admin

Create the glance service entity:

openstack service create --name glance \
  --description "OpenStack Image" image

Create the Image service API endpoints:

openstack endpoint create --region RegionOne image public http://172.16.175.11:9292
openstack endpoint create --region RegionOne image internal http://172.16.175.11:9292
openstack endpoint create --region RegionOne image admin http://172.16.175.11:9292
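
Optionally, you can double-check the endpoints just created; the catalog supports filtering by service:

openstack endpoint list --service image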

Install and configure glance

Install the packages:

yum install openstack-glance -y 

Edit /etc/glance/glance-api.conf and complete the following actions:

# configure database access:
[database]
connection = mysql+pymysql://glance:[email protected]/glance

# configure Identity service access:
[keystone_authtoken]
www_authenticate_uri  = http://172.16.175.11:5000
auth_url = http://172.16.175.11:5000
memcached_servers = 172.16.175.11:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance
[paste_deploy]
flavor = keystone

# configure local filesystem storage and the location of image files:
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

Edit /etc/glance/glance-registry.conf and complete the following actions:

# configure database access:
[database]
connection = mysql+pymysql://glance:[email protected]/glance

# configure Identity service access:
[keystone_authtoken]
www_authenticate_uri = http://172.16.175.11:5000
auth_url = http://172.16.175.11:5000
memcached_servers = 172.16.175.11:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance
[paste_deploy]
flavor = keystone

Populate the Image service database and verify:

su -s /bin/sh -c "glance-manage db_sync" glance
mysql -uglance -pglance -e "use glance; show tables;"

Start the services:

systemctl enable openstack-glance-api.service \
  openstack-glance-registry.service
systemctl start openstack-glance-api.service \
  openstack-glance-registry.service

Verify the service

Source the admin credentials to gain access to admin-only CLI commands:

source admin-openstack.sh

Download the source image:

wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img

Upload the image to the Image service using the QCOW2 disk format, bare container format, and public visibility, so that all projects can access it:

# make sure cirros-0.4.0-x86_64-disk.img is in the current directory
openstack image create "cirros" \
  --file cirros-0.4.0-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public

Confirm the upload and validate the image attributes:

openstack image list
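
To validate the attributes in more detail, an optional check:

openstack image show cirros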

Nova service installation

Nova controller node installation

Set up the nova databases:

mysql -u root -p

Create the nova_api, nova, nova_cell0, and placement databases:

CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
CREATE DATABASE placement;

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';

GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova';

GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'placement';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'placement';

Source the admin credentials:

source admin-openstack.sh

Create the nova user:

openstack user create --domain default --password-prompt nova

Add the admin role to the nova user:

openstack role add --project service --user nova admin

Create the nova service entity:

openstack service create --name nova --description "OpenStack Compute" compute

Create the Compute API service endpoints:

openstack endpoint create --region RegionOne compute public http://172.16.175.11:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://172.16.175.11:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://172.16.175.11:8774/v2.1

Create the placement user:

# you will be prompted for the user's password; mine is placement
openstack user create --domain default --password-prompt placement

Add the placement user to the service project with the admin role:

openstack role add --project service --user placement admin

Create the placement service entity:

openstack service create --name placement --description "Placement API" placement

Create the Placement API service endpoints:

openstack endpoint create --region RegionOne placement public http://172.16.175.11:8778
openstack endpoint create --region RegionOne placement internal http://172.16.175.11:8778
openstack endpoint create --region RegionOne placement admin http://172.16.175.11:8778

Install nova

yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy \
  openstack-nova-scheduler openstack-nova-placement-api -y

Edit /etc/nova/nova.conf and complete the following actions:

# enable only the compute and metadata APIs
[DEFAULT]
enabled_apis = osapi_compute,metadata


# configure database access
[api_database]
connection = mysql+pymysql://nova:[email protected]/nova_api

[database]
connection = mysql+pymysql://nova:[email protected]/nova

[placement_database]
connection = mysql+pymysql://placement:[email protected]/placement

# configure RabbitMQ message queue access
[DEFAULT]
transport_url = rabbit://openstack:[email protected]


# configure Identity service access
[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://172.16.175.11:5000/v3
memcached_servers = 172.16.175.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova

# enable support for the Networking service
[DEFAULT]
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

# configure the VNC proxy to use the controller node's management IP
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = 172.16.175.11

# configure the location of the Image service API
[glance]
api_servers = http://172.16.175.11:9292

# configure the lock path
[oslo_concurrency]
lock_path = /var/lib/nova/tmp

# configure the Placement API
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://172.16.175.11:5000/v3
username = placement
password = placement

Enable access to the Placement API by adding the following to /etc/httpd/conf.d/00-nova-placement-api.conf:

Append it to the end of the file:

<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>

Restart the httpd service:

systemctl restart httpd

Populate the nova-api and placement databases:

su -s /bin/sh -c "nova-manage api_db sync" nova

Register the cell0 database:

su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

Create the cell1 cell:

su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

Populate the nova database:

su -s /bin/sh -c "nova-manage db sync" nova

Verify that cell0 and cell1 are registered correctly:

su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

Verify the databases:

mysql -unova -pnova -e "use nova ; show tables;"
mysql -unova -pnova -e "use nova_api ; show tables;"
mysql -unova -pnova -e "use nova_cell0 ; show tables;"
mysql -uplacement -pplacement -e "use placement ; show tables;"

Start the nova controller node services:

systemctl enable openstack-nova-api.service \
  openstack-nova-scheduler.service openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service \
  openstack-nova-scheduler.service openstack-nova-conductor.service \
  openstack-nova-novncproxy.service

Nova compute node installation

Install the packages:

yum install openstack-nova-compute -y

Edit /etc/nova/nova.conf and complete the following actions:

# start from the controller node's nova.conf and modify it: delete the following database-access settings, which compute nodes do not need.
[api_database]
connection = mysql+pymysql://nova:[email protected]/nova_api

[database]
connection = mysql+pymysql://nova:[email protected]/nova

[placement_database]
connection = mysql+pymysql://placement:[email protected]/placement

# then add the following:
[vnc]
# use the compute node's IP here
server_proxyclient_address = 172.16.175.12
novncproxy_base_url = http://172.16.175.11:6080/vnc_auto.html

Determine whether your compute node supports hardware acceleration for virtual machines:

egrep -c '(vmx|svm)' /proc/cpuinfo

If this command returns a value of one or greater, your compute node supports hardware acceleration, which usually requires no additional configuration.

If it returns zero, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.

Edit the [libvirt] section of /etc/nova/nova.conf as follows:

[libvirt]
# ...
virt_type = kvm
# although the command returned one or greater here, virt_type = kvm left my instances unable to boot; switching it to qemu fixed things. Pointers from more experienced folks are welcome.

Start the nova compute node services:

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service

Add the compute node to the cell database (run on the controller node):

source admin-openstack.sh
# confirm the compute host is in the database
openstack compute service list --service nova-compute
# discover the compute host
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

When you add new compute nodes, you must run nova-manage cell_v2 discover_hosts on the controller node to register them. Alternatively, set an appropriate discovery interval in /etc/nova/nova.conf:

[scheduler]
discover_hosts_in_cells_interval = 300

Verify operation:

source admin-openstack.sh
# list the service components to verify each process launched and registered successfully; state should be up
openstack compute service list
# list API endpoints in the Identity service to verify connectivity to it
openstack catalog list
# list images in the Image service to verify connectivity to it:
openstack image list
# check that the cells and the Placement API are working:
nova-status upgrade check

One note: when you run openstack compute service list, the official docs show one more running service than you have so far; just start it. It is the console authentication service, and without it VNC remote login will not work:

systemctl enable openstack-nova-consoleauth
systemctl start openstack-nova-consoleauth

Neutron service installation

Neutron controller node installation

Create the database resources for the neutron service:

mysql -uroot -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';

Create the neutron user:

openstack user create --domain default --password-prompt neutron

Add the neutron user to the service project and grant it the admin role:

openstack role add --project service --user neutron admin

Create the neutron service entity:

openstack service create --name neutron --description "OpenStack Networking" network

Create the Networking service API endpoints:

openstack endpoint create --region RegionOne network public http://172.16.175.11:9696
openstack endpoint create --region RegionOne network internal http://172.16.175.11:9696
openstack endpoint create --region RegionOne network admin http://172.16.175.11:9696

Configure networking options

You can deploy the Networking service using one of two architectures: option 1 (provider networks) or option 2 (self-service networks).

Option 1 deploys the simplest architecture, which only supports attaching instances to provider (external) networks. There are no self-service (private) networks, routers, or floating IP addresses. Only the admin or other privileged users can manage provider networks.

Provider networks

Install the components:

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

Configure the server component

Edit /etc/neutron/neutron.conf and complete the following actions:

[DEFAULT]
# enable the Modular Layer 2 (ML2) plug-in and disable other plug-ins
core_plugin = ml2
service_plugins =

# notify Compute of network topology changes
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

# configure RabbitMQ message queue access
transport_url = rabbit://openstack:[email protected]
auth_strategy = keystone

[database]
# configure database access
connection = mysql+pymysql://neutron:[email protected]/neutron

[keystone_authtoken]
# configure Identity service access
www_authenticate_uri = http://172.16.175.11:5000
auth_url = http://172.16.175.11:5000
memcached_servers = 172.16.175.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

# configure Networking to notify Compute of network topology changes
[nova]
auth_url = http://172.16.175.11:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova

# configure the lock path
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

Configure the Modular Layer 2 (ML2) plug-in

The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging and switching) virtual networking infrastructure for instances.

Edit /etc/neutron/plugins/ml2/ml2_conf.ini and complete the following actions:

[ml2]
# enable flat and VLAN networks
type_drivers = flat,vlan
# disable self-service networks
tenant_network_types =
# enable the Linux bridge mechanism
mechanism_drivers = linuxbridge
# enable the port security extension driver
extension_drivers = port_security

[ml2_type_flat]
# configure the provider virtual network as a flat network
flat_networks = provider

[securitygroup]
# enable ipset to improve the efficiency of security group rules
enable_ipset = true

Configure the Linux bridge agent

The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.

Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini and complete the following actions:

[linux_bridge]
# map the provider virtual network to the provider physical network interface; here eth0 is the mapped NIC
physical_interface_mappings = provider:eth0

[vxlan]
# disable VXLAN overlay networks
enable_vxlan = false

[securitygroup]
# enable security groups and configure the Linux bridge iptables firewall driver:
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Make sure your Linux kernel supports network bridge filters by verifying that all of the following sysctl values are set to 1:

modprobe br_netfilter
ls /proc/sys/net/bridge

Add to /etc/sysctl.conf:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Apply the changes:

sysctl -p
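
A quick optional check that the values took effect:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables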

Configure the DHCP agent

The DHCP agent provides DHCP services for virtual networks.

Edit /etc/neutron/dhcp_agent.ini and complete the following actions:

[DEFAULT]
# configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so instances on provider networks can reach metadata over the network:
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

Self-service networks

Install the components:

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

Configure the server component

Edit /etc/neutron/neutron.conf and complete the following actions:

[DEFAULT]
# enable the Modular Layer 2 (ML2) plug-in, the router service, and overlapping IP addresses
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true

# configure RabbitMQ message queue access
transport_url = rabbit://openstack:[email protected]
auth_strategy = keystone

# notify Compute of network topology changes
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
# configure database access
connection = mysql+pymysql://neutron:[email protected]/neutron

[keystone_authtoken]
# configure Identity service access
www_authenticate_uri = http://172.16.175.11:5000
auth_url = http://172.16.175.11:5000
memcached_servers = 172.16.175.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

# configure Networking to notify Compute of network topology changes
[nova]
auth_url = http://172.16.175.11:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova

# configure the lock path
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

Configure the Modular Layer 2 (ML2) plug-in

The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging and switching) virtual networking infrastructure for instances.

Edit /etc/neutron/plugins/ml2/ml2_conf.ini and complete the following actions:

[ml2]
# enable flat, VLAN, and VXLAN networks
type_drivers = flat,vlan,vxlan
# enable VXLAN self-service networks
tenant_network_types = vxlan
# enable the Linux bridge and layer-2 population mechanisms
mechanism_drivers = linuxbridge,l2population
# enable the port security extension driver
extension_drivers = port_security

[ml2_type_flat]
# configure the provider virtual network as a flat network
flat_networks = provider

[ml2_type_vxlan]
# configure the VXLAN network identifier range for self-service networks
vni_ranges = 1:1000

[securitygroup]
# enable ipset to improve the efficiency of security group rules
enable_ipset = true

Configure the Linux bridge agent

The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.

Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini and complete the following actions:

[linux_bridge]
# map the provider virtual network to the provider physical network interface; here eth0 is the mapped NIC
physical_interface_mappings = provider:eth0

[vxlan]
# enable VXLAN overlay networks, set the IP of the physical interface that handles overlay traffic, and enable layer-2 population
enable_vxlan = true
local_ip = 172.16.175.11
l2_population = true

[securitygroup]
# enable security groups and configure the Linux bridge iptables firewall driver:
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Make sure your Linux kernel supports network bridge filters by verifying that all of the following sysctl values are set to 1:

modprobe br_netfilter
ls /proc/sys/net/bridge

Add to /etc/sysctl.conf:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Apply the changes:

sysctl -p

Configure the layer-3 agent

The layer-3 (L3) agent provides routing and NAT services for self-service virtual networks.

Edit /etc/neutron/l3_agent.ini and complete the following actions:

[DEFAULT]
# configure the Linux bridge interface driver and the external network bridge
interface_driver = linuxbridge

Configure the DHCP agent

The DHCP agent provides DHCP services for virtual networks.

Edit /etc/neutron/dhcp_agent.ini and complete the following actions:

[DEFAULT]
# configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so instances on provider networks can reach metadata over the network
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

Configure the metadata agent

The metadata service provides configuration information to virtual machines.

Edit /etc/neutron/metadata_agent.ini and complete the following actions:

[DEFAULT]
# configure the metadata host and the shared secret
nova_metadata_host = 172.16.175.11
metadata_proxy_shared_secret = heleicool
# heleicool is the shared secret used between neutron and nova

Configure the Compute (nova) service to use the Networking service

Edit /etc/nova/nova.conf and complete the following actions:

[neutron]
# configure access parameters, enable the metadata proxy, and set the shared secret:
url = http://172.16.175.11:9696
auth_url = http://172.16.175.11:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = heleicool

Finalize the installation

The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini. If this symlink does not exist, create it with the following command:

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Populate the database; this step uses both neutron.conf and ml2_conf.ini:

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
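
As with the other services, you can optionally verify the tables, following the same pattern used above:

mysql -uneutron -pneutron -e "use neutron; show tables;"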

Restart the Compute API service, since we changed its configuration:

systemctl restart openstack-nova-api.service

Start the Networking services and configure them to start at boot:

systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service

Neutron compute node installation

Install the components:

yum install openstack-neutron-linuxbridge ebtables ipset -y

Configure the common component

The Networking common component configuration includes the authentication mechanism, the message queue, and the plug-in.

Edit /etc/neutron/neutron.conf and complete the following actions:

Comment out any connection options, because compute nodes do not access the database directly.

[DEFAULT]
# configure RabbitMQ message queue access
transport_url = rabbit://openstack:[email protected]
# configure Identity service access
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://172.16.175.11:5000
auth_url = http://172.16.175.11:5000
memcached_servers = 172.16.175.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

[oslo_concurrency]
# configure the lock path
lock_path = /var/lib/neutron/tmp

Configure networking options

Choose the same networking option you chose for the controller node, and configure the services specific to it.

Provider networks

Configure the Linux bridge agent

The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.

Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini and complete the following actions:

[linux_bridge]
# map the provider virtual network to the provider physical network interface
physical_interface_mappings = provider:eth0

[vxlan]
# disable VXLAN overlay networks
enable_vxlan = false

[securitygroup]
# enable security groups and configure the Linux bridge iptables firewall driver
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Make sure your Linux kernel supports network bridge filters by verifying that all of the following sysctl values are set to 1:

modprobe br_netfilter
ls /proc/sys/net/bridge

Add to /etc/sysctl.conf:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Apply the changes:

sysctl -p

Self-service networks

Configure the Linux bridge agent

The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.

Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini and complete the following actions:

[linux_bridge]
# map the provider virtual network to the provider physical network interface
physical_interface_mappings = provider:eth0

[vxlan]
# enable VXLAN overlay networks, set local_ip to the IP of the physical interface that handles overlay traffic (this compute node's IP in this lab), and enable layer-2 population
enable_vxlan = true
local_ip = 172.16.175.12
l2_population = true

[securitygroup]
# enable security groups and configure the Linux bridge iptables firewall driver
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Make sure your Linux kernel supports network bridge filters by verifying that all of the following sysctl values are set to 1:

modprobe br_netfilter
ls /proc/sys/net/bridge

Add to /etc/sysctl.conf:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Apply the changes:

sysctl -p

Configure the Compute (nova) service to use the Networking service

Edit /etc/nova/nova.conf and complete the following actions:

[neutron]
# ...
url = http://172.16.175.11:9696
auth_url = http://172.16.175.11:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron

Finalize the installation

Restart the Compute service:

systemctl restart openstack-nova-compute.service

Start the Linux bridge agent and configure it to start at boot:

systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service

Verify operation

Provider networks

List the agents to verify that they registered with neutron successfully:

openstack network agent list

Self-service networks

List the agents to verify that they registered with neutron successfully:

# there should be four agents: metadata agent, Linux bridge agent, L3 agent, and DHCP agent
openstack network agent list

Launch an instance

Once all of the services above are working, you can create and launch virtual machines.

Create virtual networks

First create a virtual network, configured according to the networking option you chose when setting up Neutron.

Provider networks

Create the network:

source admin-openstack.sh
openstack network create  --share --external \
  --provider-physical-network provider \
  --provider-network-type flat public
# --share allows all projects to use the virtual network
# --external defines the virtual network as external; if you want an internal network, use --internal. The default is internal.
# --provider-physical-network refers to the flat_networks value configured in ml2_conf.ini.
# --provider-network-type flat sets the network type; the final argument, public, is the network name

Create a subnet on the network:

openstack subnet create --network public \
  --allocation-pool start=172.16.175.100,end=172.16.175.250 \
  --dns-nameserver 172.16.175.2 --gateway 172.16.175.2 \
  --subnet-range 172.16.175.0/24 public
# --subnet-range is the subnet providing the IPs, in CIDR notation
# start and end bound the IP allocation range for instances
# --dns-nameserver sets the DNS resolver IP
# --gateway sets the gateway address

Self-service networks

Create the self-service network:

source admin-openstack.sh
openstack network create selfservice

Create a subnet on the network:

openstack subnet create --network selfservice \
  --dns-nameserver 8.8.8.8 --gateway 192.168.1.1 \
  --subnet-range 192.168.1.0/24 selfservice

Create a router:

source demo-openstack.sh
openstack router create router

Add the self-service subnet as an interface on the router:

openstack router add subnet router selfservice

Set a gateway on the provider network for the router:

openstack router set router --external-gateway public

Verify operation

List the network namespaces. You should see one qrouter namespace and two qdhcp namespaces:

source demo-openstack.sh
ip netns

List the ports on the router to determine the gateway IP address on the provider network:

openstack port list --router router
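
From the controller you should be able to ping that gateway address; a sketch, with a placeholder for the IP reported by the previous command:

# replace GATEWAY_IP with the gateway IP from the port list above
ping -c 4 GATEWAY_IP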

Create an instance flavor

# create a flavor named m1.nano that allocates 1 vCPU, 64 MB RAM, and a 1 GB disk
openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano

Configure a key pair

# generate a key pair
ssh-keygen -q -N ""
# register the public key with openstack under the name mykey
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
# list key pairs
openstack keypair list

Add security group rules

By default, the default security group applies to all instances.

# allow ICMP
openstack security group rule create --proto icmp default
# allow TCP port 22
openstack security group rule create --proto tcp --dst-port 22 default

Launch an instance

Provider networks

Determine the instance options

List available flavors:

source  demo-openstack.sh
openstack flavor list

List available images:

openstack image list

List available networks:

openstack network list

List available security groups:

openstack security group list

Launch the instance:

openstack server create --flavor m1.nano --image cirros \
  --nic net-id=PROVIDER_NET_ID --security-group default \
  --key-name mykey provider-instance
# PROVIDER_NET_ID is the ID of the public network; if your environment contains only one network, you can omit --nic, since OpenStack automatically picks the only available network.

Check the instance status:

openstack server list

Get a virtual console URL for the instance:

openstack console url show provider-instance

Self-service networks

Determine the instance options

List available flavors:

source  demo-openstack.sh
openstack flavor list

List available images:

openstack image list

List available networks:

openstack network list

List available security groups:

openstack security group list

Launch the instance:

# replace SELFSERVICE_NET_ID with the ID of the selfservice network.
openstack server create --flavor m1.nano --image cirros \
  --nic net-id=SELFSERVICE_NET_ID --security-group default \
  --key-name mykey selfservice-instance

Check the instance status:

openstack server list

Get a virtual console URL for the instance:

openstack console url show selfservice-instance

Horizon service installation

Horizon depends on the Apache HTTP service and Memcached. I install it on the controller node, where those services are already running; if you deploy Horizon separately, you will need to install them as well.

Install and configure components

Install the packages:

yum install openstack-dashboard -y

Edit /etc/openstack-dashboard/local_settings and complete the following actions:

# configure the dashboard to use the OpenStack services on the controller node
OPENSTACK_HOST = "172.16.175.11"
# configure the list of hosts allowed to access the dashboard
ALLOWED_HOSTS = ['*', 'two.example.com']
# configure the memcached session storage service
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': '172.16.175.11:11211',
    }
}
# enable Identity API version 3
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
# enable support for domains
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
# configure the API versions
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
# configure Default as the default domain for users created via the dashboard
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
# configure myrole as the default role for users created via the dashboard
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "myrole"
# if you chose networking option 1, disable support for layer-3 networking services
OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
# configure the time zone
TIME_ZONE = "Asia/Shanghai"

If /etc/httpd/conf.d/openstack-dashboard.conf does not already include the following line, add it:

WSGIApplicationGroup %{GLOBAL}

Finalize the installation

Restart the web server and the memcached session storage service:

systemctl restart httpd.service memcached.service
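
The dashboard should now respond; a quick optional check from the controller, assuming the package-default /dashboard web root:

curl -I http://172.16.175.11/dashboard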

Cinder service installation

Cinder controller node installation

Component overview

cinder-api: accepts API requests and routes them to cinder-volume for action.

cinder-volume: provides the storage service; it can interact with a variety of storage providers through a driver architecture.

cinder-scheduler daemon: selects the optimal storage provider node on which to create a volume; a component similar to nova-scheduler.

cinder-backup daemon: provides backup of volumes of any type to a backup storage provider. Like cinder-volume, it can interact with a variety of storage providers through a driver architecture.

Install and configure the controller node

Create the cinder database and grant access:

mysql -u root -p
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';

Create the service credentials:

source admin-openstack.sh
# create the user; you will be prompted for a password, mine is cinder
openstack user create --domain default --password-prompt cinder
# add the cinder user to the service project and grant it the admin role
openstack role add --project service --user cinder admin
# create the cinderv2 and cinderv3 service entities
openstack service create --name cinderv2   --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3   --description "OpenStack Block Storage" volumev3

Create the Block Storage service endpoints:

openstack endpoint create --region RegionOne volumev2 public http://172.16.175.11:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://172.16.175.11:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://172.16.175.11:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://172.16.175.11:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://172.16.175.11:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://172.16.175.11:8776/v3/%\(project_id\)s

Install and configure components

Install the packages:

yum install openstack-cinder python-keystone -y

Edit /etc/cinder/cinder.conf and complete the following actions:

[DEFAULT]
# my_ip: notes to be added later; this setting can be left out
my_ip = 172.16.175.11

# configure the RabbitMQ message queue
transport_url = rabbit://openstack:[email protected]

# configure Identity service access
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://172.16.175.11:5000
auth_url = http://172.16.175.11:5000
memcached_servers = 172.16.175.11:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder

[database]
# configure database access
connection = mysql+pymysql://cinder:[email protected]/cinder

[oslo_concurrency]
# configure the lock path
lock_path = /var/lib/cinder/tmp

Populate the database and verify:

su -s /bin/sh -c "cinder-manage db sync" cinder
mysql -ucinder -pcinder -e "use cinder;show tables;"

Configure the Compute service to use Block Storage

Edit /etc/nova/nova.conf and add the following:

[cinder]
os_region_name = RegionOne

Finalize the installation

Restart the Compute API service:

systemctl restart openstack-nova-api.service

Start the Block Storage services and configure them to start at boot:

systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

Cinder storage node installation

Before installing and configuring the Block Storage service on the storage node, you must prepare the storage device.

Prepare the storage device

Install the LVM packages:

yum install lvm2 device-mapper-persistent-data -y

Start the LVM metadata service and configure it to start at boot:

systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service

Create the LVM physical volume /dev/sdb:

pvcreate /dev/sdb

Create the LVM volume group cinder-volumes:

vgcreate cinder-volumes /dev/sdb
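
Optionally confirm the volume group exists before continuing:

vgs cinder-volumes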

Only instances can access Block Storage volumes. However, the underlying operating system manages the devices associated with the volumes. By default, the LVM volume scanning tool scans the /dev directory for block storage devices that contain volumes. If projects use LVM on their volumes, the scanning tool detects these volumes and tries to cache them, which can cause a variety of problems with both the underlying operating system and project volumes. You must reconfigure LVM to scan only the devices that contain the cinder-volumes volume group. Edit /etc/lvm/lvm.conf and complete the following actions:

In the devices section, add a filter that accepts the /dev/sdb device and rejects all other devices:

devices {
...
filter = [ "a/sdb/", "r/.*/"]
}

Each item in the filter array begins with a (accept) or r (reject) and includes a regular expression for the device name. The array must end with r/.*/ to reject any remaining devices. You can use the vgs -vvvv command to test filters.
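
Note: if the storage node's operating system disk also sits on LVM (assuming /dev/sda here, for illustration), the filter must accept that device too, or the operating system may fail to boot:

filter = [ "a/sda/", "a/sdb/", "r/.*/"]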

Install and configure components

Install the packages:

yum install openstack-cinder targetcli python-keystone -y

Edit /etc/cinder/cinder.conf and complete the following actions:

[DEFAULT]
# my_ip: notes to be added later; this setting can be left out
my_ip = 172.16.175.12

# configure the location of the Image service API
glance_api_servers = http://172.16.175.11:9292

# configure the RabbitMQ message queue
transport_url = rabbit://openstack:[email protected]

# enable the LVM backend
enabled_backends = lvm

# configure Identity service access
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://172.16.175.11:5000
auth_url = http://172.16.175.11:5000
memcached_servers = 172.16.175.11:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder

[database]
# configure database access
connection = mysql+pymysql://cinder:[email protected]/cinder

[oslo_concurrency]
# configure the lock path
lock_path = /var/lib/cinder/tmp

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
volume_backend_name = ISCSI_Storage

Finalize the installation

Start the Block Storage volume service, including its dependencies, and configure them to start at boot:

systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service

Configure an NFS backend

Create the NFS share

Install the NFS packages and start the services:

yum install nfs-utils rpcbind -y
systemctl start nfs
systemctl start rpcbind

Configure the NFS export:

mkdir /data/nfs -p
# vim /etc/exports
/data/nfs *(rw,sync,no_root_squash)
# apply the export configuration
exportfs -r

Check the exports locally:

showmount -e 127.0.0.1

Configure Block Storage to use the NFS backend

Create the /etc/cinder/nfsshares file. Each entry represents an NFS share that the cinder volume service should use for backend storage, one share per line, in the following format:

# HOST是NFS服務器的IP地址或主機名。
# SHARE是現有和可訪問的NFS共享的絕對路徑。
# HOST:SHARE
172.16.175.12:/data/nfs

Set /etc/cinder/nfsshares to be owned by the root user and the cinder group:

chown root:cinder /etc/cinder/nfsshares

Make /etc/cinder/nfsshares readable by members of the cinder group:

chmod 0640 /etc/cinder/nfsshares

Then edit /etc/cinder/cinder.conf (not the nfsshares file) and add the following backend section:

[nfs]
nfs_shares_config = /etc/cinder/nfsshares
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_mount_point_base = $state_path/mnt
volume_backend_name = NFS_Storage
# enable the nfs backend (this goes in [DEFAULT]; use enabled_backends = lvm,nfs to keep both backends active)
enabled_backends = nfs

Managing multiple storage backends

Create volume types:

cinder type-create ISCSI
cinder type-create NFS

Associate each type with its backend name:

cinder type-key ISCSI set volume_backend_name=ISCSI_Storage
cinder type-key NFS set volume_backend_name=NFS_Storage
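
A usage sketch: create one small test volume on each backend (the volume names are arbitrary):

openstack volume create --type ISCSI --size 1 test-iscsi
openstack volume create --type NFS --size 1 test-nfs
openstack volume list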

Verify operation

List the service components to verify that each process started successfully:

source admin-openstack.sh
openstack volume service list

Miscellaneous

Building a CentOS base image

One approach is for the VM to fetch configuration data via the meta-data service, so that service deserves a quick introduction first.

Some background first: netns is the Linux network namespace facility, which can virtualize multiple network environments on a single host; netns is currently used by LXC containers to provide container networking.

On the server running neutron-dhcp-agent, run the following:

[root@controller ~]# ip netns list
qdhcp-750fb27a-4dad-4cb9-8ef6-6ac714532578 (id: 0)

Much like docker, this service created a virtual network. Let's look at some details of it:

[root@controller ~]# ip netns exec qdhcp-750fb27a-4dad-4cb9-8ef6-6ac714532578 ip add list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ns-d85f84fe-5d@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether fa:16:3e:35:d3:21 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 169.254.169.254/16 brd 169.254.255.255 scope global ns-d85f84fe-5d
       valid_lft forever preferred_lft forever
    inet 172.16.47.100/20 brd 172.16.47.255 scope global ns-d85f84fe-5d
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe35:d321/64 scope link
       valid_lft forever preferred_lft forever
[root@controller ~]# ip netns exec qdhcp-750fb27a-4dad-4cb9-8ef6-6ac714532578 netstat -nlpt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 169.254.169.254:53      0.0.0.0:*               LISTEN      27018/dnsmasq
tcp        0      0 172.16.47.100:53        0.0.0.0:*               LISTEN      27018/dnsmasq
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      27020/haproxy
tcp6       0      0 fe80::f816:3eff:fe35:53 :::*                    LISTEN      27018/dnsmasq

This virtual environment provides two services: the meta-data HTTP service and the DHCP service.

DHCP answers on 172.16.47.100, and meta-data answers on 169.254.169.254.

A new virtual machine obtains its IP configuration via DHCP, along with a route to 169.254.169.254. The VM then queries the meta-data service for information it uses to configure itself.
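
From inside a guest, you can query the service directly; for example, the same endpoints the init script below relies on:

curl http://169.254.169.254/latest/meta-data/hostname
curl http://169.254.169.254/latest/meta-data/local-ipv4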

Disable the zeroconf route

For instances to reach the meta-data service, you must disable the default zeroconf route:

# optional
echo "NOZEROCONF=yes" >> /etc/sysconfig/network

Configure the console

For the nova console-log command to work properly on CentOS 7, you may need to take the following steps:

Edit the GRUB_CMDLINE_LINUX option in /etc/default/grub: remove rhgb quiet and add console=tty0 console=ttyS0,115200n8.

For example:

GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=cl/root rd.lvm.lv=cl/swap console=tty0 console=ttyS0,115200n8"

Run the following command to save the changes:

grub2-mkconfig -o /boot/grub2/grub.cfg

OpenStack custom image initialization script (the meta-data init script):

cat /tmp/init.sh

#!/bin/bash
set_key(){
  if [ ! -d /root/.ssh ]; then
    mkdir -p /root/.ssh
    chmod 700 /root/.ssh
  fi

  # Fetch public key using HTTP
  ATTEMPTS=30
  FAILED=0
  while [ ! -f /root/.ssh/authorized_keys ]; do
    curl -f http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key \
      > /tmp/metadata-key 2>/dev/null
    if [ $? -eq 0 ]; then
      cat /tmp/metadata-key >> /root/.ssh/authorized_keys
      chmod 0600 /root/.ssh/authorized_keys
      restorecon /root/.ssh/authorized_keys
      rm -f /tmp/metadata-key
      echo "Successfully retrieved public key from instance metadata"
      echo "*****************"
      echo "AUTHORIZED KEYS"
      echo "*****************"
      cat /root/.ssh/authorized_keys
      echo "*****************"
    else
      # retry with a delay; give up after $ATTEMPTS failed fetches
      FAILED=$((FAILED + 1))
      if [ $FAILED -ge $ATTEMPTS ]; then
        echo "Failed to retrieve public key after $FAILED attempts, quitting"
        break
      fi
      sleep 5
    fi
  done
}

set_hostname(){
    PRE_HOSTNAME=$(curl -s http://169.254.169.254/latest/meta-data/hostname)
    DOMAIN_NAME=$(echo $PRE_HOSTNAME | awk -F '.' '{print $1}')
    hostnamectl set-hostname `echo ${DOMAIN_NAME}.example.com`
}

set_static_ip(){
	PRE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
	NET_FILE=/etc/sysconfig/network-scripts/ifcfg-eth0
	echo > $NET_FILE

	echo "TYPE=Ethernet" >> $NET_FILE
	echo "BOOTPROTO=static" >> $NET_FILE
	echo "NAME=eth0" >> $NET_FILE
	echo "DEVICE=eth0" >> $NET_FILE
	echo "ONBOOT=yes" >> $NET_FILE
	echo "IPADDR=${PRE_IP}" >> $NET_FILE
	echo "NETMASK=255.255.255.0" >> $NET_FILE
	echo "GATEWAY=172.16.175.2" >> $NET_FILE
}

main(){
	set_key;
	set_hostname;
	set_static_ip;
	systemctl restart network.service
	/bin/cp /tmp/rc.local /etc/rc.d/rc.local
}
main