OpenStack

Lab Environment
 
OS: CentOS-7-x86_64-DVD-1804
 
Hypervisor: VMware

hostname     IP               role
controller   192.168.80.100   management node
compute01    192.168.80.101   compute node

I. Environment Setup

1. Install the required utilities:
 
yum install -y vim net-tools wget telnet

2. Set the hostnames:
hostnamectl set-hostname controller   # on the controller node
hostnamectl set-hostname compute01    # on the compute node
3. Configure the NICs. The subnet is 192.168.80.0/24 and the gateway is 192.168.80.2:
 cd /etc/sysconfig/network-scripts/
 cp -p ifcfg-ens32 ifcfg-ens34

NIC configuration on node1 (controller):
 
BOOTPROTO="static"
ONBOOT="yes"
IPADDR=192.168.80.100
NETMASK=255.255.255.0
GATEWAY=192.168.80.2


NIC configuration on node2 (compute01):
 
BOOTPROTO="static"
ONBOOT="yes"
IPADDR=192.168.80.101
NETMASK=255.255.255.0
GATEWAY=192.168.80.2


Restart networking:
 
systemctl restart network
4. Configure the /etc/hosts file on both nodes:
vi /etc/hosts
 
192.168.80.100    controller
192.168.80.101    compute01
 
 
Configure the /etc/resolv.conf file on both nodes:
vi /etc/resolv.conf
nameserver 192.168.80.2
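 
A quick optional sanity check: confirm that the names just added resolve and the nodes can reach each other:
 
ping -c 2 controller
ping -c 2 compute01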


5. Disable the firewall:
 
systemctl disable firewalld    # do not start firewalld at boot

systemctl stop firewalld       # stop firewalld now
 
 
Disable SELinux (this step can probably be skipped):
 
setenforce 0    # disable SELinux immediately

vi /etc/selinux/config    # disable SELinux at boot
SELINUX=disabled
6. Time synchronization:
yum install ntpdate -y
ntpdate time1.aliyun.com
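 
ntpdate performs a one-shot sync. If you also want periodic resync, one simple option (my suggestion, not part of the original steps) is a cron entry:
 
(crontab -l 2>/dev/null; echo "*/30 * * * * /usr/sbin/ntpdate time1.aliyun.com") | crontab -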


7. Install the OpenStack packages
 
Install the EPEL repository for the matching release:
 
cd /etc/yum.repos.d
 
cp back/* ./    # restore the previously backed-up repo files
 
yum install centos-release-openstack-rocky -y


Install the OpenStack client:
 
yum install python-openstackclient -y


RHEL and CentOS enable SELinux by default. Install the openstack-selinux package to automatically manage security policies for the OpenStack services:
 
yum install openstack-selinux -y


8. Install the database
 
Install the packages:
 
yum install mariadb mariadb-server python2-PyMySQL -y
 
 
Create and edit the /etc/my.cnf.d/openstack.cnf configuration file:
 
[mysqld]
bind-address = 192.168.80.100
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8


Start the database:
 
systemctl enable mariadb.service    # start at boot
systemctl start mariadb.service     # start now
Secure the database service by running the mysql_secure_installation script. In particular, choose a suitable password for the database root account:
 
mysql_secure_installation
 
 
 
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!
 
In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.
 
Enter current password for root (enter for none):
OK, successfully used password, moving on...
 
Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.
 
Set root password? [Y/n] y  # set a root password?
New password:        # enter the root password twice
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
 ... Success!
 
 
By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.
 
Remove anonymous users? [Y/n] y  # remove anonymous users?
 ... Success!
 
Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.
 
Disallow root login remotely? [Y/n] y # disallow remote root login?
 ... Success!
 
By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.
 
Remove test database and access to it? [Y/n] y # remove the test database?
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!
 
Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.
 
Reload privilege tables now? [Y/n] y  # reload the privilege tables
 ... Success!
 
Cleaning up...
 
All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.
 
Thanks for using MariaDB!
9. Install the message queue
 
Install RabbitMQ:
 
yum install rabbitmq-server -y


Start RabbitMQ:
 
systemctl enable rabbitmq-server.service    # start at boot
systemctl start rabbitmq-server.service     # start now
 
Enable the web management interface with this plugin:
rabbitmq-plugins enable rabbitmq_management

Verify that it is listening:
netstat -anpt | grep 5672


Browse to: 192.168.80.100:15672


Username: guest
Password: guest

Add an openstack user:
 
# the username is openstack, and so is the password.
rabbitmqctl add_user openstack openstack
 
 
Grant the openstack user read and write permissions:
 
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
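 
To double-check the new account and its permissions (optional):
 
rabbitmqctl list_users
rabbitmqctl list_permissions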
10. Install Memcached
 
Install Memcached:
 
yum install memcached python-memcached -y


Edit /etc/sysconfig/memcached and change the configuration:
 
OPTIONS="-l 127.0.0.1,::1,192.168.80.100"


Start memcached:
 
systemctl enable memcached.service    # start at boot
systemctl start memcached.service     # start now
 
Verify that it is running:
netstat -anpt | grep memcache


The ports in use so far are as follows:
netstat
# rabbitmq port
tcp        0      0 0.0.0.0:25672           0.0.0.0:*               LISTEN      1690/beam
# mariadb-server port
tcp        0      0 192.168.80.100:3306      0.0.0.0:*               LISTEN      1506/mysqld
# memcached ports
tcp        0      0 192.168.80.100:11211     0.0.0.0:*               LISTEN      2236/memcached
tcp        0      0 127.0.0.1:11211         0.0.0.0:*               LISTEN      2236/memcached
tcp        0      0 0.0.0.0:4369            0.0.0.0:*               LISTEN      1/systemd
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      766/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1050/master
tcp6       0      0 :::5672                 :::*                    LISTEN      1690/beam
tcp6       0      0 ::1:11211               :::*                    LISTEN      2236/memcached
tcp6       0      0 :::22                   :::*                    LISTEN      766/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      1050/master
 

II. Installing the OpenStack Services

Installing the Keystone service
 
Configure the keystone database:
 
Use the database client to connect to the database server as the root user:
 
mysql -u root -p
 
 
Create the keystone database and grant appropriate access to it:
 
CREATE DATABASE keystone;

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
 
 
Install and configure keystone
 
Run the following command to install the packages:
 
yum install openstack-keystone httpd mod_wsgi -y


Edit the /etc/keystone/keystone.conf file and complete the following actions:
cd /etc/keystone/
cp keystone.conf keystone.conf.bak 
egrep -v "^#|^$" keystone.conf.bak > keystone.conf

vi keystone.conf

[database]
connection = mysql+pymysql://keystone:[email protected]/keystone

[token]
provider = fernet


Populate the Identity service database:
 
su -s /bin/sh -c "keystone-manage db_sync" keystone
# verify the database tables
mysql -ukeystone -pkeystone -e "use keystone; show tables;"


Initialize the Fernet key repositories:
 
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
 
 
Bootstrap the Identity service:
 
# "admin" is the password being set here for the admin user.
keystone-manage bootstrap --bootstrap-password admin \
  --bootstrap-admin-url http://192.168.80.100:5000/v3/ \
  --bootstrap-internal-url http://192.168.80.100:5000/v3/ \
  --bootstrap-public-url http://192.168.80.100:5000/v3/ \
  --bootstrap-region-id RegionOne
 
 
Configure the Apache HTTP server
 
Edit /etc/httpd/conf/httpd.conf:
 
ServerName 192.168.80.100


Create a link to the /usr/share/keystone/wsgi-keystone.conf file:
 
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
 
 
Start the service
 
Start the Apache HTTP service and configure it to start when the system boots:
 
systemctl enable httpd.service    # start at boot
systemctl start httpd.service     # start now
 
 
Configure the administrative account:
 
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://192.168.80.100:5000/v3
export OS_IDENTITY_API_VERSION=3
 
Check that the environment variables are in effect:
env|grep ^OS


Create domains, projects, users, and roles
 
Although the "default" domain already exists from the keystone-manage bootstrap step in this guide, the formal way to create a new domain is:
 
openstack domain create --description "An Example Domain" example


Using the default domain, create the service project, which is used for services:
 
openstack project create --domain default \
  --description "Service Project" service



Create the myproject project. Regular (non-admin) tasks should use an unprivileged project and user:
 
openstack project create --domain default \
  --description "Demo Project" myproject


Create the myuser user:
 
# you will be prompted for a password (this guide uses myuser)
openstack user create --domain default \
  --password-prompt myuser


Create the myrole role:
 
openstack role create myrole


Add the myuser user to the myproject project and give it the myrole role:
 
openstack role add --project myproject --user myuser myrole
 
 
Verify the users
 
Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables:
 
unset OS_AUTH_URL OS_PASSWORD
 
 
 
As the admin user, request an authentication token:
 
# you will be prompted for the admin password (admin)
openstack --os-auth-url http://192.168.80.100:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue
  


As the myuser user, request an authentication token (password myuser):
openstack --os-auth-url http://192.168.80.100:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name myproject --os-username myuser token issue


Create OpenStack client environment scripts
 
The openstack client interacts with the Identity service either through command-line arguments or through environment variables. For efficiency, create environment scripts:
 
Create the admin user environment script admin-openstack.sh:
 
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://192.168.80.100:5000/v3
export OS_IDENTITY_API_VERSION=3


Create the myuser user environment script demo-openstack.sh:
 
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=myuser
export OS_AUTH_URL=http://192.168.80.100:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2


Use the scripts:
 
source admin-openstack.sh

openstack token issue


III. Installing the Glance Service

Configure the glance database:
 
Log in to the database as root:
 
mysql -u root -p
 
Create the glance database and grant access:
 
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';
 
 
Create the glance service credentials, using the admin user:
 
source admin-openstack.sh
 

Create the glance user:
 
# you will be prompted for the glance user's password; this guide uses glance
openstack user create --domain default --password-prompt glance


Add the glance user to the service project and give it the admin role:
 
openstack role add --project service --user glance admin
 

Create the glance service entity:
 
openstack service create --name glance \
  --description "OpenStack Image" image


Create the Image service API endpoints:
 
openstack endpoint create --region RegionOne image public http://192.168.80.100:9292


openstack endpoint create --region RegionOne image internal http://192.168.80.100:9292


openstack endpoint create --region RegionOne image admin http://192.168.80.100:9292


Install and configure glance
 
Install the package:
 
yum install openstack-glance -y 


Edit the /etc/glance/glance-api.conf file and complete the following actions:
cd /etc/glance/
cp glance-api.conf glance-api.conf.bak
egrep -v "^#|^$" glance-api.conf.bak > glance-api.conf
# configure database access:
vi glance-api.conf

[database]
connection = mysql+pymysql://glance:[email protected]/glance
 
# configure Identity service access:
[keystone_authtoken]
www_authenticate_uri  = http://192.168.80.100:5000
auth_url = http://192.168.80.100:5000
memcached_servers = 192.168.80.100:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance
[paste_deploy]
flavor = keystone
 
# configure the local file system store and the location of image files:
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/


Edit the /etc/glance/glance-registry.conf file and complete the following actions:
 
cp glance-registry.conf glance-registry.conf.bak 

egrep -v "^#|^$" glance-registry.conf.bak > glance-registry.conf

vi glance-registry.conf
# configure database access:
[database]
connection = mysql+pymysql://glance:[email protected]/glance
 
# configure Identity service access:
[keystone_authtoken]
www_authenticate_uri = http://192.168.80.100:5000
auth_url = http://192.168.80.100:5000
memcached_servers = 192.168.80.100:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance

[paste_deploy]
flavor = keystone


Populate the Image service database and verify it:
 
su -s /bin/sh -c "glance-manage db_sync" glance

mysql -uglance -pglance -e "use glance; show tables;"


Start the services:
 
systemctl enable openstack-glance-api.service \
openstack-glance-registry.service

systemctl start openstack-glance-api.service \
openstack-glance-registry.service
 
Verify the service
 
Source the admin credentials to gain access to admin-only CLI commands:
 
source admin-openstack.sh
 
 
Download the source image:
 
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
 


Upload the image to the Image service using the QCOW2 disk format, bare container format, and public visibility so that all projects can access it:
 
# make sure the cirros-0.4.0-x86_64-disk.img file is in the current directory
openstack image create "cirros" \
  --file cirros-0.4.0-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public

Confirm the upload and verify the image attributes:
 
openstack image list
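 
Optionally, inspect the image to confirm the disk format, container format, and visibility set above:
 
openstack image show cirros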


IV. Installing the Nova Service

Install Nova on the controller node
 
Set up the nova database entries:
 
mysql -u root -p
 
 
Create the nova_api, nova, nova_cell0, and placement databases:
 
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
CREATE DATABASE placement;
 
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';
 
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
 
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova';
 
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'placement';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'placement';

Access with admin credentials:
 
source admin-openstack.sh
 
Create the nova user (the password used here is nova):
 
openstack user create --domain default --password-prompt nova


Add the admin role to the nova user:
 
openstack role add --project service --user nova admin
 
 
 
Create the nova service entity:
 
openstack service create --name nova --description "OpenStack Compute" compute
 
 


Create the Compute API service endpoints:
 
openstack endpoint create --region RegionOne compute public http://192.168.80.100:8774/v2.1


openstack endpoint create --region RegionOne compute internal http://192.168.80.100:8774/v2.1


openstack endpoint create --region RegionOne compute admin http://192.168.80.100:8774/v2.1


Create the placement user:
 
# you will be prompted for a password; this guide uses placement
openstack user create --domain default --password-prompt placement


Add the placement user to the service project with the admin role:
 
openstack role add --project service --user placement admin
 
 
 
Create the placement service entity:
 
openstack service create --name placement --description "Placement API" placement


Create the Placement API service endpoints:
 
openstack endpoint create --region RegionOne placement public http://192.168.80.100:8778


openstack endpoint create --region RegionOne placement internal http://192.168.80.100:8778


openstack endpoint create --region RegionOne placement admin http://192.168.80.100:8778


V. Installing Nova

yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy \
  openstack-nova-scheduler openstack-nova-placement-api -y


Edit the /etc/nova/nova.conf file and complete the following actions:
cd /etc/nova/
cp nova.conf nova.conf.bak

egrep -v "^$|^#" nova.conf.bak > nova.conf
vi nova.conf
# enable only the compute and metadata APIs
[DEFAULT]
enabled_apis = osapi_compute,metadata
 
 
# configure database access
[api_database]
connection = mysql+pymysql://nova:[email protected]/nova_api
 
[database]
connection = mysql+pymysql://nova:[email protected]/nova
 
[placement_database]
connection = mysql+pymysql://placement:[email protected]/placement

# configure RabbitMQ message queue access
[DEFAULT]
transport_url = rabbit://openstack:[email protected]
 
# configure Identity service access
[api]
auth_strategy = keystone
 
[keystone_authtoken]
auth_url = http://192.168.80.100:5000/v3
memcached_servers = 192.168.80.100:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
 
# enable support for the Networking service
[DEFAULT]
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
 
# configure the VNC proxy to use the controller node's management interface IP address
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = 192.168.80.100
 
# configure the location of the Image service API
[glance]
api_servers = http://192.168.80.100:9292
 
# configure the lock path
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
 
# configure the Placement API
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://192.168.80.100:5000/v3
username = placement
password = placement
 

Enable access to the Placement API by adding the following to /etc/httpd/conf.d/00-nova-placement-api.conf:
 
Append it to the end of the configuration file:
 
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
 
 
Restart the httpd service:
 
systemctl restart httpd
 
 
Populate the nova-api and placement databases:
 
su -s /bin/sh -c "nova-manage api_db sync" nova
 
 
 
Register the cell0 database:
 
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
 
 
 
Create the cell1 cell:
 
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
 
 
 
Populate the nova database:
 
su -s /bin/sh -c "nova-manage db sync" nova


Verify that nova cell0 and cell1 are registered correctly:
 
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova


Verify the databases:
 
mysql -unova -pnova -e "use nova ; show tables;"


mysql -unova -pnova -e "use nova_api ; show tables;"


mysql -unova -pnova -e "use nova_cell0 ; show tables;"


mysql -uplacement -pplacement -e "use placement ; show tables;"


Start the Nova controller node services:
 
systemctl enable openstack-nova-api.service \
  openstack-nova-scheduler.service openstack-nova-conductor.service \
  openstack-nova-novncproxy.service

systemctl start openstack-nova-api.service \
  openstack-nova-scheduler.service openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
 

nova service-list


Install Nova on the compute node
 
Install the package:
 
yum install openstack-nova-compute -y
 


Edit the /etc/nova/nova.conf file and complete the following actions:
scp /etc/nova/nova.conf [email protected]:/etc/nova/nova.conf
# start from the controller node's config and modify it. Delete the following sections, which configure database access:
[api_database]
connection = mysql+pymysql://nova:[email protected]/nova_api
 
[database]
connection = mysql+pymysql://nova:[email protected]/nova
 
[placement_database]
connection = mysql+pymysql://placement:[email protected]/placement
 
# add the following:
[vnc]
# change this to the compute node's IP
server_proxyclient_address = 192.168.80.101
novncproxy_base_url = http://192.168.80.100:6080/vnc_auto.html
 
 
Determine whether your compute node supports hardware acceleration for virtual machines:
 
egrep -c '(vmx|svm)' /proc/cpuinfo
 
If this command returns a value of one or greater, the compute node supports hardware acceleration, which typically requires no additional configuration.

If it returns zero, the compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.
 
Edit the [libvirt] section of /etc/nova/nova.conf as follows:
 
[libvirt]
# ...
virt_type = qemu
# Although the value returned here was greater than zero, instances failed to boot with kvm; switching to qemu fixed it. Pointers from anyone who knows why are welcome.
 
 
Start the Nova compute node services:
 
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
 
 
Add the compute node to the cell database (run on the controller node):
 
source admin-openstack.sh
# confirm there are compute hosts in the database
openstack compute service list --service nova-compute
# discover the compute hosts
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova


When you add new compute nodes, you must run this discovery command on the controller node to register them. Alternatively, set an appropriate interval in /etc/nova/nova.conf:
 
[scheduler]
discover_hosts_in_cells_interval = 300
 
 
Verify operation
 
source admin-openstack.sh
# list the service components to verify that each process started and registered successfully: state should be up
openstack compute service list


# list the API endpoints in the Identity service to verify connectivity to it
openstack catalog list


# list the images in the Image service to verify connectivity to it:
openstack image list


# check that the cells and the Placement API are working successfully:
nova-status upgrade check


A note here: when you check with openstack compute service list, the official docs show one more running service than this guide starts so far; just start it as well.
That service is the console authentication server; without it, VNC remote login to instances does not work.
 
systemctl enable openstack-nova-consoleauth
systemctl start openstack-nova-consoleauth
 
Check:
nova service-list 

VI. Installing the Neutron Service

Install Neutron on the controller node
 
Create the database entries for the neutron service:
 
mysql -uroot -p

CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
 
 
Create the neutron management user:
 
openstack user create --domain default --password-prompt neutron


Add the neutron user to the service project and give it the admin role:
 
openstack role add --project service --user neutron admin
 
 
 
Create the neutron service entity:
 
openstack service create --name neutron --description "OpenStack Networking" network


Create the Networking service API endpoints:
 
openstack endpoint create --region RegionOne network public http://192.168.80.100:9696


openstack endpoint create --region RegionOne network internal http://192.168.80.100:9696


openstack endpoint create --region RegionOne network admin http://192.168.80.100:9696


Configure networking options
 
You can deploy the Networking service using one of two architectures: option 1 (Provider networks) or option 2 (Self-service networks).
 
Option 1 deploys the simplest architecture, which only supports attaching instances to provider (external) networks. There are no self-service (private) networks, routers, or floating IP addresses. Only the admin or other privileged users can manage provider networks.
Provider Networks
 
Install the plugins:
 
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y


Configure the server component
 
Edit the /etc/neutron/neutron.conf file and complete the following actions:
cd /etc/neutron/
cp neutron.conf neutron.conf.bak 
egrep -v "^$|^#" neutron.conf.bak > neutron.conf
 
vi neutron.conf

[DEFAULT]
# enable the Modular Layer 2 (ML2) plugin and disable other plugins
core_plugin = ml2
service_plugins =
 
# notify Compute of network topology changes
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
 
# configure RabbitMQ message queue access
transport_url = rabbit://openstack:[email protected]
auth_strategy = keystone
 
[database]
# configure database access
connection = mysql+pymysql://neutron:[email protected]/neutron
 
[keystone_authtoken]
# configure Identity service access
www_authenticate_uri = http://192.168.80.100:5000
auth_url = http://192.168.80.100:5000
memcached_servers = 192.168.80.100:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
 
# configure Networking to notify Compute of network topology changes
[nova]
auth_url = http://192.168.80.100:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
 
# configure the lock path
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
 
 
Configure the Modular Layer 2 (ML2) plugin
 
The ML2 plugin uses the Linux bridge mechanism to build layer-2 (bridging and switching) virtual networking infrastructure for instances.
 
Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following actions:
cd /etc/neutron/plugins/ml2/
cp ml2_conf.ini ml2_conf.ini.bak
egrep -v "^$|^#" ml2_conf.ini.bak > ml2_conf.ini
 
vi ml2_conf.ini
[ml2]
# enable flat and VLAN networks
type_drivers = flat,vlan
# disable self-service networks
tenant_network_types =
# enable the Linux bridge mechanism
mechanism_drivers = linuxbridge
# enable the port security extension driver
extension_drivers = port_security
 
[ml2_type_flat]
# configure the provider virtual network as a flat network
flat_networks = provider
 
[securitygroup]
# enable ipset to increase the efficiency of security group rules
enable_ipset = true
 
Configure the Linux bridge agent
 
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.
 
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions:
cd /etc/neutron/plugins/ml2/
cp linuxbridge_agent.ini linuxbridge_agent.ini.bak
egrep -v "^$|^#" linuxbridge_agent.ini.bak >linuxbridge_agent.ini
vi linuxbridge_agent.ini
[linux_bridge]
# map the provider virtual network to the provider physical network interface; eth-0 here is the mapped NIC (substitute your node's actual interface name, e.g. ens32)
physical_interface_mappings = provider:eth-0
 
[vxlan]
# disable VXLAN overlay networks
enable_vxlan = false
 
[securitygroup]
# enable security groups and configure the Linux bridge iptables firewall driver:
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
 
 
Ensure your Linux operating system kernel supports network bridge filters by verifying that all of the following sysctl values are set to 1:
 
modprobe br_netfilter
ls /proc/sys/net/bridge
 
Add to /etc/sysctl.conf:
 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
 
 
Apply the settings:
 
sysctl -p
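 
To confirm the values took effect (optional):
 
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables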


Configure the DHCP agent
 
The DHCP agent provides DHCP services for virtual networks.
 
Edit the /etc/neutron/dhcp_agent.ini file and complete the following actions:
cd /etc/neutron/    
cp dhcp_agent.ini dhcp_agent.ini.bak
egrep -v "^$|^#" dhcp_agent.ini.bak > dhcp_agent.ini
vi dhcp_agent.ini

[DEFAULT]
# configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so that instances on provider networks can access metadata over the network:
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
 
Configure the metadata agent
 
The metadata service provides configuration information to virtual machines.
 
Edit the /etc/neutron/metadata_agent.ini file and complete the following actions:
 
cd /etc/neutron/
cp metadata_agent.ini metadata_agent.ini.bak
egrep -v "^$|^#" metadata_agent.ini.bak > metadata_agent.ini
vi metadata_agent.ini

[DEFAULT]
# configure the metadata host and the shared secret
nova_metadata_host = controller
metadata_proxy_shared_secret = meta
# meta is the shared secret used for communication between neutron and nova
 
Configure the Compute service (nova) to use the Networking service
 
Edit the /etc/nova/nova.conf file and perform the following actions:
 
cp /etc/nova/nova.conf /etc/nova/nova.conf.nova
 
vi /etc/nova/nova.conf
 
[neutron]
# configure access parameters, enable the metadata proxy, and configure the secret:
url = http://192.168.80.100:9696
auth_url = http://192.168.80.100:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = meta
 
 
Finish the installation
 
The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plugin configuration file /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it with the following command:
 
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
 
Populate the database; this step uses neutron.conf and ml2_conf.ini:
 
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
 


Restart the Nova API service, since its configuration file changed:
 

systemctl restart openstack-nova-api.service
 
 
 
Start the Networking services and configure them to start when the system boots:
 
systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service \
neutron-dhcp-agent.service \
neutron-metadata-agent.service

systemctl start neutron-server.service \
neutron-linuxbridge-agent.service \
neutron-dhcp-agent.service \
neutron-metadata-agent.service
 
Install Neutron on the compute node
Install the components:
 
yum install openstack-neutron-linuxbridge ebtables ipset -y


Configure the common components
 
The Networking common component configuration includes the authentication mechanism, message queue, and plugin.
 
Edit the /etc/neutron/neutron.conf file and complete the following actions:
 
Comment out any connection options, because compute nodes do not access the database directly.
cd /etc/neutron/
cp neutron.conf neutron.conf.bak
egrep -v "^$|^#" neutron.conf.bak > neutron.conf
vi neutron.conf
 
[DEFAULT]
# configure RabbitMQ message queue access
transport_url = rabbit://openstack:[email protected]
# configure Identity service access
auth_strategy = keystone
 
[keystone_authtoken]
www_authenticate_uri = http://192.168.80.100:5000
auth_url = http://192.168.80.100:5000
memcached_servers = 192.168.80.100:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
 
[oslo_concurrency]
# configure the lock path
lock_path = /var/lib/neutron/tmp
 
 
Configure networking options
 
Choose the same networking option you chose for the controller node and configure the services specific to it.
Provider Networks
 
Configure the Linux bridge agent
 
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.
 
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions:
cd /etc/neutron/plugins/ml2/

cp linuxbridge_agent.ini linuxbridge_agent.ini.bak

egrep -v "^$|^#" linuxbridge_agent.ini.bak >linuxbridge_agent.ini
vi linuxbridge_agent.ini

[linux_bridge]
# map the provider virtual network to the provider physical network interface
physical_interface_mappings = provider:eth0
 
[vxlan]
# disable VXLAN overlay networks
enable_vxlan = false
 
[securitygroup]
# enable security groups and configure the Linux bridge iptables firewall driver
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
 
 
Ensure your Linux operating system kernel supports network bridge filters by verifying that all of the following sysctl values are set to 1:
 
modprobe br_netfilter
ls /proc/sys/net/bridge
 
 
Add to /etc/sysctl.conf:
vi /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
 
 
Apply the settings:
 
sysctl -p
 

Configure the Compute service (nova) to use the Networking service
 
Edit the /etc/nova/nova.conf file and complete the following actions:
cd /etc/nova/
cp /etc/nova/nova.conf /etc/nova/nova.conf.nova 
vi /etc/nova/nova.conf

[neutron]
# ...
url = http://192.168.80.100:9696
auth_url = http://192.168.80.100:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
 
 
Finish the installation
    
Restart the Compute service:


systemctl restart openstack-nova-compute.service
 
Start the Linux bridge agent and configure it to start when the system boots:
 
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
 
 
Verify operation
Provider networks
 
List the agents to verify that they connected to neutron successfully:
 
# expect four entries: the Metadata agent, the DHCP agent, and a Linux bridge agent on each node
openstack network agent list


Launch an instance
 
Once all of the above services are healthy, you can create and boot a virtual machine.
Create the virtual network
 
# on the controller node
First create a virtual network, configured according to the networking option chosen when configuring Neutron.
Provider networks
 
Create the network:
 
source admin-openstack.sh
openstack network create  --share --external \
  --provider-physical-network provider \
  --provider-network-type flat provider


# --share allows all projects to use the virtual network
# --external defines the virtual network as external; use --internal if you want an internal network. The default is internal.
# --provider-physical-network must match the flat_networks value configured in ml2_conf.ini.
# --provider-network-type flat sets the network type; the final "provider" argument is the network name
 
 
Create a subnet on the network:
 
openstack subnet create --network provider \
  --allocation-pool start=192.168.80.150,end=192.168.80.250 \
  --dns-nameserver 192.168.80.2 --gateway 192.168.80.2 \
  --subnet-range 192.168.80.0/24 provider


#--subnet-range is the providing subnet in CIDR notation
#start and end delimit the range of IPs allocated to instances
#--dns-nameserver specifies the IP address of the DNS resolver
#--gateway is the gateway address
 
 
Verify operation
 
List the network namespaces; you should see a qdhcp namespace for the provider network:
 
source demo-openstack.sh
ip netns
 
List the ports to determine the gateway IP address on the provider network:
 
openstack port list 
Create a flavor
 
# create a flavor named m1.nano that allocates 1 vCPU and 64 MB of RAM
openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
Configure a key pair
 
# generate a key file
ssh-keygen -q -N ""
press Enter to accept the default path
# create a key pair named mykey in openstack
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
# list the key pairs
openstack keypair list
Add security group rules
 
By default, the default security group applies to all instances.
 
# allow icmp
openstack security group rule create --proto icmp default
# allow port 22
openstack security group rule create --proto tcp --dst-port 22 default
 
Launch an instance
Provider networks
Determine instance options
 
List the available flavors:
 
source demo-openstack.sh
openstack flavor list
 
List the available images:
 
openstack image list
List the available networks:
 
openstack network list
List the available security groups:
 
openstack security group list
Launch the instance:
 
openstack server create --flavor m1.nano --image cirros \
--nic net-id=provider --security-group default \
--key-name mykey provider-instance
# net-id is the provider network's ID; if your environment contains only one network, you can omit the --nic option, because OpenStack automatically chooses the only network available.
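 
If you prefer passing the network ID explicitly, you can look it up first; NET_ID below is just an illustrative helper variable:
 
NET_ID=$(openstack network list --name provider -f value -c ID)
openstack server create --flavor m1.nano --image cirros \
  --nic net-id=$NET_ID --security-group default \
  --key-name mykey provider-instance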
 
 
Check the status of the instance:
 
openstack server list
Access the instance using the virtual console:
 
openstack console url show provider-instance
 

VII. Installing the Horizon Service

The horizon service requires the Apache HTTP service and the Memcached service. I install horizon on the controller node, which already runs both, so those installs are skipped here; if you deploy horizon separately, you must install those services as well.
安装和配置组件
 
安装包
 
yum install openstack-dashboard -y


Edit the /etc/openstack-dashboard/local_settings file and complete the following actions:
cd /etc/openstack-dashboard
cp local_settings local_settings.bak 
vi local_settings
# configure the dashboard to use the OpenStack services on the controller node
OPENSTACK_HOST = "192.168.80.100"
# configure the list of hosts allowed to access the dashboard
ALLOWED_HOSTS = ['*', 'two.example.com']
# configure the memcached session storage service
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
 
CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': '192.168.80.100:11211',
    }
}
# enable Identity API version 3
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
# enable support for domains
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
# configure the API versions
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
# configure Default as the default domain for users created via the dashboard
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
# configure the default role for users created via the dashboard
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "myrole"
# if you chose networking option 1, disable support for layer-3 networking services
OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,  # the source showed 'enable_***'; the upstream setting name is enable_vpn
    'enable_fip_topology_check': False,
}
# configure the time zone
TIME_ZONE = "Asia/Shanghai"
 
 
If /etc/httpd/conf.d/openstack-dashboard.conf does not include the following line, add it:
 
WSGIApplicationGroup %{GLOBAL}
 
 
Finish the installation
 
Restart the web server and the session storage service:
 
systemctl restart httpd.service memcached.service
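 
The dashboard should now be reachable in a browser with the admin or myuser credentials. A quick reachability check; /dashboard is the default URL path for these packages, adjust it if your WEBROOT differs:
 
curl -sI http://192.168.80.100/dashboard | head -1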

VIII. Installing the Cinder Service

Install Cinder on the controller node
Component overview
 
cinder-api: accepts API requests and routes them to cinder-volume.
 
cinder-volume: the service that provides storage; it can interact with a variety of storage providers through a driver architecture.
 
cinder-scheduler daemon: selects the optimal storage provider node on which to create the volume. A component similar to nova-scheduler.
 
cinder-backup daemon: provides backup of any type of volume to a backup storage provider. Like cinder-volume, it can interact with a variety of storage providers through a driver architecture.
Install and configure the controller node
 
Create the cinder database and grant access:
 
mysql -u root -p
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';
 
 
Create the service credentials:
 
source admin-openstack.sh
# create the user; you will be prompted for a password, this guide uses cinder
openstack user create --domain default --password-prompt cinder
# add the cinder user to the service project and give it the admin role
openstack role add --project service --user cinder admin
# create the cinderv2 and cinderv3 service entities
openstack service create --name cinderv2   --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3   --description "OpenStack Block Storage" volumev3
 
 
Create the Block Storage service endpoints:
 
openstack endpoint create --region RegionOne volumev2 public http://192.168.80.100:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://192.168.80.100:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://192.168.80.100:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://192.168.80.100:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://192.168.80.100:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://192.168.80.100:8776/v3/%\(project_id\)s
 
 
Install and configure the components
 
Install the packages:
 
yum install openstack-cinder python-keystone -y
 
 
Edit the /etc/cinder/cinder.conf file and complete the following actions:
 
[DEFAULT]
# my_ip: explanation to be added later; this can be left unconfigured
my_ip = 192.168.80.100
 
# configure RabbitMQ message queue access
transport_url = rabbit://openstack:[email protected]
 
# configure Identity service access
auth_strategy = keystone
 
[keystone_authtoken]
auth_uri = http://192.168.80.100:5000
auth_url = http://192.168.80.100:5000
memcached_servers = 192.168.80.100:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder
 
[database]
# configure database access
connection = mysql+pymysql://cinder:[email protected]/cinder
 
[oslo_concurrency]
# configure the lock path
lock_path = /var/lib/cinder/tmp
 
 
Populate the database and verify:
 
su -s /bin/sh -c "cinder-manage db sync" cinder
mysql -ucinder -pcinder -e "use cinder;show tables;"
 
 
Configure the Compute service to use Block Storage
 
Edit the /etc/nova/nova.conf file and add the following to it:
 
[cinder]
os_region_name = RegionOne
 
 
Finish the installation
 
Restart the Compute API service:
 
systemctl restart openstack-nova-api.service
 
 
Start the Block Storage services and configure them to start when the system boots:
 
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
 
 
Install Cinder on the storage node
 
Before you install and configure the Block Storage service on the storage node, you must prepare the storage device.
Prepare the storage device
 
Install the LVM packages:
 
yum install lvm2 device-mapper-persistent-data -y
 
 
 
Start the LVM metadata service and configure it to start when the system boots:
 
systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service
 
 
Create the LVM physical volume /dev/sdb:
 
pvcreate /dev/sdb
 
 
 
Create the LVM volume group cinder-volumes:
 
vgcreate cinder-volumes /dev/sdb
 
 
 
Only instances can access Block Storage volumes. However, the underlying operating system manages the devices associated with the volumes. By default, the LVM volume scanning tool scans the /dev directory for block storage devices that contain volumes. If projects use LVM on their volumes, the scanning tool detects those volumes and tries to cache them, which can cause a variety of problems on both the underlying operating system and project volumes. You must reconfigure LVM to scan only the devices that contain the cinder-volumes volume group. Edit the /etc/lvm/lvm.conf file and complete the following actions:
 
In the devices section, add a filter that accepts the /dev/sdb device and rejects all other devices:
 
devices {
...
filter = [ "a/sdb/", "r/.*/"]
}
 
Each item in the filter array begins with a (accept) or r (reject) and includes a regular expression for the device name. The array must end with r/.*/ to reject any remaining devices. You can use the vgs -vvvv command to test the filter.
Install and configure the components
 
Install the packages:
 
yum install openstack-cinder targetcli python-keystone -y
 
 
 
Edit the /etc/cinder/cinder.conf file and complete the following actions:
 
[DEFAULT]
# my_ip: explanation to be added later; this can be left unconfigured
my_ip = 192.168.80.101
 
# configure the location of the Image service API
glance_api_servers = http://192.168.80.100:9292
 
# configure RabbitMQ message queue access
transport_url = rabbit://openstack:[email protected]
 
# enable the LVM backend
enabled_backends = lvm
 
# configure Identity service access
auth_strategy = keystone
 
[keystone_authtoken]
auth_uri = http://192.168.80.100:5000
auth_url = http://192.168.80.100:5000
memcached_servers = 192.168.80.100:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder
 
[database]
# configure database access
connection = mysql+pymysql://cinder:[email protected]/cinder
 
[oslo_concurrency]
# configure the lock path
lock_path = /var/lib/cinder/tmp
 
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
volume_backend_name = ISCSI_Storage
 
 
Finish the installation
 
Start the Block Storage volume service, including its dependencies, and configure them to start when the system boots:
 
systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service
 
 
Configure the NFS backend
Create an NFS share
 
Install the NFS packages and start the services:
 
yum install nfs-utils rpcbind -y
systemctl start nfs
systemctl start rpcbind
 
 
Configure the NFS export:
 
mkdir /data/nfs -p
# vim /etc/exports
/data/nfs *(rw,sync,no_root_squash)
# apply the export configuration
exportfs -r
 
 
Check the exports on the local machine:
 
showmount -e 127.0.0.1
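 
As an optional manual test of the export, mount it once on a scratch mount point and unmount it:
 
mount -t nfs 192.168.80.101:/data/nfs /mnt
umount /mnt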
 
 
 
Configure Block Storage to use the NFS storage backend
 
Create the /etc/cinder/nfsshares file; each entry represents an NFS share that the cinder volume service should use for backend storage. Each entry should be on its own line, using the following format:
 
# HOST is the IP address or hostname of the NFS server.
# SHARE is the absolute path of an existing, accessible NFS share.
# HOST:SHARE
192.168.80.101:/data/nfs
 

Set /etc/cinder/nfsshares to be owned by the root user and the cinder group:
 
chown root:cinder /etc/cinder/nfsshares
 

 
Set /etc/cinder/nfsshares to be readable by members of the cinder group:
 
chmod 0640 /etc/cinder/nfsshares
 

 
Edit the /etc/cinder/cinder.conf file and add the following configuration:
 
[nfs]
nfs_shares_config = /etc/cinder/nfsshares
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_mount_point_base = $state_path/mnt
volume_backend_name = NFS_Storage
# enable the nfs backend (to run both backends, set enabled_backends = lvm,nfs in [DEFAULT])
enabled_backends = nfs
 

Managing multiple storage backends
 
Create the volume types:
 
cinder type-create ISCSI
cinder type-create NFS
 

Associate the types with the backends:
 
cinder type-key ISCSI set volume_backend_name=ISCSI_Storage
cinder type-key NFS set volume_backend_name=NFS_Storage
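 
To exercise both backends end to end (the volume names here are arbitrary), create one volume of each type and list them:
 
openstack volume create --type ISCSI --size 1 test-iscsi
openstack volume create --type NFS --size 1 test-nfs
openstack volume list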
 
 
Verify operation
 
List the service components to verify that each process started successfully:
 
source admin-openstack.sh
openstack volume service list
 

Miscellaneous
Building a CentOS base image
 
One of the ways an instance configures itself is by fetching data from the meta-data service, so that service needs a brief introduction first.
 
Some background first: netns is a Linux network-virtualization feature; with network namespaces you can create multiple virtual network environments on one host. netns is currently used by LXC containers to provide container networking.
 
Run the following command on the server where neutron-dhcp-agent runs:
 
[root@controller ~]# ip netns list
qdhcp-750fb27a-4dad-4cb9-8ef6-6ac714532578 (id: 0)
 
This is similar to Docker: a virtual network has been created. Here is some more information about it:
 
[root@controller ~]# ip netns exec qdhcp-750fb27a-4dad-4cb9-8ef6-6ac714532578 ip add list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ns-d85f84fe-5d@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether fa:16:3e:35:d3:21 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 169.254.169.254/16 brd 169.254.255.255 scope global ns-d85f84fe-5d
       valid_lft forever preferred_lft forever
    inet 172.16.47.100/20 brd 172.16.47.255 scope global ns-d85f84fe-5d
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe35:d321/64 scope link
       valid_lft forever preferred_lft forever
[root@controller ~]# ip netns exec qdhcp-750fb27a-4dad-4cb9-8ef6-6ac714532578 netstat -nlpt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 169.254.169.254:53      0.0.0.0:*               LISTEN      27018/dnsmasq
tcp        0      0 172.16.47.100:53        0.0.0.0:*               LISTEN      27018/dnsmasq
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      27020/haproxy
tcp6       0      0 fe80::f816:3eff:fe35:53 :::*                    LISTEN      27018/dnsmasq
 
 
This virtual environment provides two services: the meta-data HTTP service and DHCP.
 
DHCP answers on 172.16.47.100, and meta-data on 169.254.169.254.
 
A newly created VM obtains its IP configuration via DHCP, along with a route to 169.254.169.254. The VM then queries the meta-data service for instance information and uses it to configure itself.
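 
From inside a running instance you can query the service directly, for example with the EC2-compatible paths that the init script below relies on:
 
curl http://169.254.169.254/latest/meta-data/
curl http://169.254.169.254/latest/meta-data/local-ipv4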
Disable the zeroconf route
 
For instances to reach the meta-data service, the default zeroconf route must be disabled:
 
# optional
echo "NOZEROCONF=yes" >> /etc/sysconfig/network
 
 
Configure the console
 
For the nova console-log command to work properly on CentOS 7, you may need to perform the following steps:
 
Edit the GRUB_CMDLINE_LINUX option in the /etc/default/grub file: remove rhgb quiet and append console=tty0 console=ttyS0,115200n8.
 
For example:
 
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=cl/root rd.lvm.lv=cl/swap console=tty0 console=ttyS0,115200n8"
 
 
Run the following command to save the changes:
 
grub2-mkconfig -o /boot/grub2/grub.cfg
 
 
Custom image initialization script for OpenStack (a meta-data init script):
 
cat /tmp/init.sh
 
#!/bin/bash
set_key(){
  if [ ! -d /root/.ssh ]; then
    mkdir -p /root/.ssh
    chmod 700 /root/.ssh
  fi
 
  # Fetch the public key from the metadata service over HTTP,
  # retrying up to $ATTEMPTS times before giving up.
  ATTEMPTS=30
  FAILED=0
  while [ ! -f /root/.ssh/authorized_keys ]; do
    curl -f http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key \
      > /tmp/metadata-key 2>/dev/null
    if [ $? -eq 0 ]; then
      cat /tmp/metadata-key >> /root/.ssh/authorized_keys
      chmod 0600 /root/.ssh/authorized_keys
      restorecon /root/.ssh/authorized_keys
      rm -f /tmp/metadata-key
      echo "Successfully retrieved public key from instance metadata"
      echo "*****************"
      echo "AUTHORIZED KEYS"
      echo "*****************"
      cat /root/.ssh/authorized_keys
      echo "*****************"
    else
      # the ATTEMPTS/FAILED counters bound the retries so the loop cannot spin forever
      FAILED=$((FAILED + 1))
      if [ $FAILED -ge $ATTEMPTS ]; then
        echo "Failed to retrieve public key after $ATTEMPTS attempts" >&2
        break
      fi
      sleep 1
    fi
  done
}
 
set_hostname(){
    # take the short name from the metadata hostname and append the domain
    PRE_HOSTNAME=$(curl -s http://169.254.169.254/latest/meta-data/hostname)
    SHORT_NAME=$(echo $PRE_HOSTNAME | awk -F '.' '{print $1}')
    hostnamectl set-hostname `echo ${SHORT_NAME}.example.com`
}
 
set_static_ip(){
# write a static ifcfg-eth0 using the IP the instance received via the metadata service
PRE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
NET_FILE=/etc/sysconfig/network-scripts/ifcfg-eth0
echo > $NET_FILE
 
echo "TYPE=Ethernet" >> $NET_FILE
echo "BOOTPROTO=static" >> $NET_FILE
echo "NAME=eth0" >> $NET_FILE
echo "DEVICE=eth0" >> $NET_FILE
echo "ONBOOT=yes" >> $NET_FILE
echo "IPADDR=${PRE_IP}" >> $NET_FILE
echo "NETMASK=255.255.255.0" >> $NET_FILE
echo "GATEWAY=192.168.80.2" >> $NET_FILE
}
 
main(){
set_key;
set_hostname;
set_static_ip;
systemctl restart network.service
# restore the pristine rc.local kept in /tmp so this script runs only on first boot
/bin/cp /tmp/rc.local /etc/rc.d/rc.local
}
main
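 
The script copies /tmp/rc.local over /etc/rc.d/rc.local at the end, which implies the image keeps a pristine copy there and hooks the script in through rc.local so it runs only on first boot. A plausible wiring inside the image before snapshotting (my assumption, not spelled out in the original):
 
cp /etc/rc.d/rc.local /tmp/rc.local     # pristine copy for the script to restore
echo "bash /tmp/init.sh" >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local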