⑥ OpenStack High Availability Cluster Deployment (Train) - Neutron

15. Neutron Controller Node Cluster Deployment

https://docs.openstack.org/neutron/train/install/install-rdo.html
Blog post on Neutron networking

Neutron's main functions are as follows:

  • Neutron provides networking for the entire OpenStack environment, including layer-2 switching, layer-3 routing, load balancing, firewalling, and VPN.
  • Neutron offers a flexible framework: through configuration, either open-source or commercial back ends can be used to implement these functions.

1. Create the neutron database (controller nodes)

Create the database on any controller node; it is replicated automatically across the cluster. controller01 is used as the example;

mysql -u root -pZxzn@2020
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'Zxzn@2020';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'Zxzn@2020';
flush privileges;
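
As an optional sanity check (a minimal sketch, not part of the original procedure), connect from another controller as the neutron user to confirm that the grants and the database have replicated; controller02 is used as an arbitrary example:

#Optional: verify the neutron user can reach the new database from a remote host
mysql -h controller02 -u neutron -pZxzn@2020 -e "SHOW DATABASES LIKE 'neutron';"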

2. Create the neutron service credentials (controller nodes)

Run on any controller node; controller01 is used as the example;

2.1 Create the neutron user

source admin-openrc
openstack user create --domain default --password Zxzn@2020 neutron

2.2 Grant the admin role to the neutron user

openstack role add --project service --user neutron admin

2.3 Create the neutron service entity

openstack service create --name neutron --description "OpenStack Networking" network

2.4 Create the neutron API service endpoints

The API addresses all use the VIP; if separate VIPs are designed for public/internal/admin, be sure to distinguish them;

--region must match the region generated when the admin user was initialized; the neutron API service type is network;

openstack endpoint create --region RegionOne network public http://10.15.253.88:9696
openstack endpoint create --region RegionOne network internal http://10.15.253.88:9696
openstack endpoint create --region RegionOne network admin http://10.15.253.88:9696
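
Optionally verify that the service entity and the three endpoints were registered (a quick check, assuming the admin credentials are still sourced):

#Optional verification of the service and endpoints
openstack service list | grep network
openstack endpoint list --service neutron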

3. Install Neutron server (controller nodes)

  • Provider networks
  • Self-service (tenant) networks

  • openstack-neutron: the neutron-server package
  • openstack-neutron-ml2: the ML2 plugin package
  • openstack-neutron-linuxbridge: packages for the Linux bridge network provider
  • ebtables: firewall-related package
  • conntrack-tools: provides stateful packet inspection for iptables

Here the neutron server and the neutron agents are deployed separately, hence this layout. In a conventional deployment, the controller nodes run all neutron components, agents included, and the compute nodes only need the Linux bridge agent and the nova configuration below; with this layout the three compute nodes effectively act as neutron (network) nodes.

Install the neutron services on all controller nodes; controller01 is used as the example;

#yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y

yum install openstack-neutron openstack-neutron-ml2 ebtables -y
yum install conntrack-tools -y

4. Deployment and configuration (controller nodes)

https://docs.openstack.org/neutron/train/install/controller-install-rdo.html

Configure the neutron services on all controller nodes; controller01 is used as the example;

4.1 Configure neutron.conf

Note the bind_host parameter; it must be modified per node. Also note the ownership of the neutron.conf file: root:neutron.

#Back up the configuration file /etc/neutron/neutron.conf
cp -a /etc/neutron/neutron.conf{,.bak}
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
openstack-config --set  /etc/neutron/neutron.conf DEFAULT bind_host 10.15.253.163

openstack-config --set  /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set  /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set  /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips true
#Connect directly to the rabbitmq cluster
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:Zxzn@2020@controller01:5672,openstack:Zxzn@2020@controller02:5672,openstack:Zxzn@2020@controller03:5672

openstack-config --set  /etc/neutron/neutron.conf DEFAULT auth_strategy  keystone
openstack-config --set  /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes  true
openstack-config --set  /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes  true

openstack-config --set  /etc/neutron/neutron.conf database connection  mysql+pymysql://neutron:Zxzn@2020@10.15.253.88/neutron

openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri  http://10.15.253.88:5000
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken auth_url  http://10.15.253.88:5000
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken memcached_servers  controller01:11211,controller02:11211,controller03:11211
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken auth_type  password
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken project_domain_name  default
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken user_domain_name  default
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken project_name  service
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken username  neutron
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken password  Zxzn@2020

openstack-config --set  /etc/neutron/neutron.conf nova  auth_url http://10.15.253.88:5000
openstack-config --set  /etc/neutron/neutron.conf nova  auth_type password
openstack-config --set  /etc/neutron/neutron.conf nova  project_domain_name default
openstack-config --set  /etc/neutron/neutron.conf nova  user_domain_name default
openstack-config --set  /etc/neutron/neutron.conf nova  region_name RegionOne
openstack-config --set  /etc/neutron/neutron.conf nova  project_name service
openstack-config --set  /etc/neutron/neutron.conf nova  username nova
openstack-config --set  /etc/neutron/neutron.conf nova  password Zxzn@2020

openstack-config --set  /etc/neutron/neutron.conf oslo_concurrency lock_path  /var/lib/neutron/tmp
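
Because openstack-config rewrites the file, it is worth re-checking the ownership noted above before copying it out. A minimal sketch (640 is assumed to be the mode shipped by the RDO packages):

#Restore the expected ownership/permissions on neutron.conf after editing
chown root:neutron /etc/neutron/neutron.conf
chmod 640 /etc/neutron/neutron.conf
ls -l /etc/neutron/neutron.conf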

Copy the neutron.conf configuration file to the other controller nodes:

scp -rp /etc/neutron/neutron.conf controller02:/etc/neutron/
scp -rp /etc/neutron/neutron.conf controller03:/etc/neutron/

##On controller02
sed -i "s#10.15.253.163#10.15.253.195#g" /etc/neutron/neutron.conf

##On controller03
sed -i "s#10.15.253.163#10.15.253.227#g" /etc/neutron/neutron.conf

4.2 Configure ml2_conf.ini

Run on all controller nodes; controller01 is used as the example;

#Back up the configuration file
cp -a /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/ml2_conf.ini.bak > /etc/neutron/plugins/ml2/ml2_conf.ini
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers  flat,vlan,vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers  linuxbridge,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers  port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks  provider
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset  true

Copy the ml2_conf.ini configuration file to the other controller nodes:

scp -rp /etc/neutron/plugins/ml2/ml2_conf.ini controller02:/etc/neutron/plugins/ml2/ml2_conf.ini
scp -rp /etc/neutron/plugins/ml2/ml2_conf.ini controller03:/etc/neutron/plugins/ml2/ml2_conf.ini

4.3 Configure the nova service to interact with the neutron service

Run on all controller nodes;

#Modify the configuration file /etc/nova/nova.conf
#On all controller nodes, configure the nova service to interact with the networking service
openstack-config --set  /etc/nova/nova.conf neutron url  http://10.15.253.88:9696
openstack-config --set  /etc/nova/nova.conf neutron auth_url  http://10.15.253.88:5000
openstack-config --set  /etc/nova/nova.conf neutron auth_type  password
openstack-config --set  /etc/nova/nova.conf neutron project_domain_name  default
openstack-config --set  /etc/nova/nova.conf neutron user_domain_name  default
openstack-config --set  /etc/nova/nova.conf neutron region_name  RegionOne
openstack-config --set  /etc/nova/nova.conf neutron project_name  service
openstack-config --set  /etc/nova/nova.conf neutron username  neutron
openstack-config --set  /etc/nova/nova.conf neutron password  Zxzn@2020
openstack-config --set  /etc/nova/nova.conf neutron service_metadata_proxy  true
openstack-config --set  /etc/nova/nova.conf neutron metadata_proxy_shared_secret  Zxzn@2020
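
To confirm the [neutron] section landed as intended, a simple check such as the following can be run on each controller (illustrative only; it just prints the section header plus the next 11 lines):

grep -A 11 '^\[neutron\]' /etc/nova/nova.conf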

4.4 Populate the neutron database and verify

Run on any controller node; populate the neutron database

[root@controller01 ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
...
  OK

Verify that the neutron database was populated correctly

mysql -h controller03 -u neutron -pZxzn@2020 -e "use neutron;show tables;"
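
As a rough sanity check (exact table counts vary by release, so this is only an expectation), the table list should be long rather than empty:

#Count the tables created by neutron-db-manage; a successful sync yields a long, non-empty list
mysql -h controller03 -u neutron -pZxzn@2020 -e "use neutron;show tables;" | wc -l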

4.5 Create a symbolic link pointing to the ML2 plugin configuration file

Run on all controller nodes;

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

4.6 Restart the nova-api and neutron-server services

Run on all controller nodes;

systemctl restart openstack-nova-api.service
systemctl status openstack-nova-api.service

systemctl enable neutron-server.service
systemctl restart neutron-server.service
systemctl status neutron-server.service
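
A quick way to confirm that neutron-server is answering locally and behind the VIP (a sketch; 9696 is the neutron API port used in the endpoints above):

#Check the local listener and the API root through the VIP
ss -tnlp | grep 9696
curl -s http://10.15.253.88:9696 | head -n 1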

16. Neutron Compute Node Cluster Deployment

1. Install Neutron agents (compute nodes = network nodes)

  • Since the neutron server and the neutron agents are deployed separately here, this layout is used; a conventional deployment installs all neutron components, both server and agents, on the controller nodes.

  • The compute nodes only need the neutron agents, the Linux bridge agent, and the nova configuration; alternatively, dedicated network nodes can be prepared to run the neutron agents;

Install on all compute nodes; compute01 is used as the example;

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y
#Back up the configuration file /etc/neutron/neutron.conf
cp -a /etc/neutron/neutron.conf{,.bak}
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
openstack-config --set  /etc/neutron/neutron.conf DEFAULT bind_host 10.15.253.162
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:Zxzn@2020@controller01:5672,openstack:Zxzn@2020@controller02:5672,openstack:Zxzn@2020@controller03:5672
openstack-config --set  /etc/neutron/neutron.conf DEFAULT auth_strategy keystone 

openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://10.15.253.88:5000
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken auth_url http://10.15.253.88:5000
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken password Zxzn@2020

openstack-config --set  /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

Copy the neutron.conf configuration file to the other compute nodes:

scp -rp /etc/neutron/neutron.conf compute02:/etc/neutron/
scp -rp /etc/neutron/neutron.conf compute03:/etc/neutron/

##On compute02
sed -i "s#10.15.253.162#10.15.253.194#g" /etc/neutron/neutron.conf

##On compute03
sed -i "s#10.15.253.162#10.15.253.226#g" /etc/neutron/neutron.conf

2. Deployment and configuration (compute nodes)

2.1 Configure nova.conf

Run on all compute nodes; the configuration only involves the [neutron] section of nova.conf

openstack-config --set  /etc/nova/nova.conf neutron url  http://10.15.253.88:9696
openstack-config --set  /etc/nova/nova.conf neutron auth_url http://10.15.253.88:5000
openstack-config --set  /etc/nova/nova.conf neutron auth_type password
openstack-config --set  /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set  /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set  /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set  /etc/nova/nova.conf neutron project_name service
openstack-config --set  /etc/nova/nova.conf neutron username neutron
openstack-config --set  /etc/nova/nova.conf neutron password Zxzn@2020

2.2 Configure ml2_conf.ini

Run on all compute nodes; compute01 is used as the example;

#Back up the configuration file
cp -a /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/ml2_conf.ini.bak > /etc/neutron/plugins/ml2/ml2_conf.ini
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers  flat,vlan,vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers  linuxbridge,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers  port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks  provider
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset  true

Copy the ml2_conf.ini configuration file to the other compute nodes:

scp -rp /etc/neutron/plugins/ml2/ml2_conf.ini compute02:/etc/neutron/plugins/ml2/ml2_conf.ini
scp -rp /etc/neutron/plugins/ml2/ml2_conf.ini compute03:/etc/neutron/plugins/ml2/ml2_conf.ini

2.3 Configure linuxbridge_agent.ini

  • Linux bridge agent
  • The Linux bridge agent builds the layer-2 (bridging and switching) virtual network infrastructure for instances and handles security groups
  • Network type names map to physical NICs: here the provider network maps to the planned ens192 NIC and the vlan tenant network to the planned ens224 NIC; when creating networks, the network name is used rather than the NIC name;
  • Physical NIC names are local to each host; use the NIC name actually in use on that host;

Run on all compute nodes; compute01 is used as the example;

#Back up the configuration file
cp -a /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
#This environment cannot provide four NICs; in production it is recommended to configure each network type on its own NIC
#The provider network maps to the planned ens192; the vlan tenant network also uses ens192 for now;
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings  provider:ens192,vlan:ens192

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan  true

#VTEP endpoint for the tunnel (vxlan) tenant network; this corresponds to the planned ens224 address and must be modified per node
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 10.15.253.162

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group  true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver  neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Copy the linuxbridge_agent.ini configuration file to the other compute nodes:

scp -rp /etc/neutron/plugins/ml2/linuxbridge_agent.ini  compute02:/etc/neutron/plugins/ml2/
scp -rp /etc/neutron/plugins/ml2/linuxbridge_agent.ini  compute03:/etc/neutron/plugins/ml2/

##On compute02
sed -i "s#10.15.253.162#10.15.253.194#g" /etc/neutron/plugins/ml2/linuxbridge_agent.ini 

##On compute03
sed -i "s#10.15.253.162#10.15.253.226#g" /etc/neutron/plugins/ml2/linuxbridge_agent.ini

2.4 Configure l3_agent.ini

  • The L3 agent provides routing and NAT services for self-service (tenant) virtual networks

Run on all compute nodes; compute01 is used as the example;

#Back up the configuration file
cp -a /etc/neutron/l3_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/l3_agent.ini.bak > /etc/neutron/l3_agent.ini
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver linuxbridge

2.5 Configure dhcp_agent.ini

  • DHCP agent: the DHCP agent provides DHCP services for virtual networks;
  • dnsmasq is used to provide the DHCP service;

Run on all compute nodes; compute01 is used as the example;

#Back up the configuration file
cp -a /etc/neutron/dhcp_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/dhcp_agent.ini.bak > /etc/neutron/dhcp_agent.ini
openstack-config --set  /etc/neutron/dhcp_agent.ini DEFAULT interface_driver linuxbridge
openstack-config --set  /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set  /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata true

2.6 Configure metadata_agent.ini

  • The metadata agent provides configuration information, such as credentials, to instances
  • The metadata_proxy_shared_secret password must match the one set in /etc/nova/nova.conf on the controller nodes;

Run on all compute nodes; compute01 is used as the example;

#Back up the configuration file
cp -a /etc/neutron/metadata_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/metadata_agent.ini.bak > /etc/neutron/metadata_agent.ini
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_host 10.15.253.88
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret Zxzn@2020
openstack-config --set /etc/neutron/metadata_agent.ini cache memcache_servers controller01:11211,controller02:11211,controller03:11211

2.7 Add Linux kernel parameter settings

  • Ensure that the Linux kernel supports bridge filtering by verifying that all of the following sysctl values are set to 1;

Configure on all controller and compute nodes;

echo 'net.bridge.bridge-nf-call-iptables=1' >>/etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-ip6tables=1'  >>/etc/sysctl.conf

#To enable bridge filtering support, the br_netfilter kernel module must be loaded; otherwise sysctl reports that the entries do not exist
modprobe br_netfilter
sysctl -p
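
Note that modprobe does not persist across reboots; on CentOS/RHEL 7 the module can be loaded automatically at boot via modules-load.d (an optional extra step, not in the original procedure), and the values can then be verified directly:

#Load br_netfilter automatically at boot
echo 'br_netfilter' > /etc/modules-load.d/br_netfilter.conf
#Verify both values are now 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables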

2.8 Restart the nova-api and neutron-agent services

On all controller nodes: restart the nova API and neutron-server services

systemctl restart openstack-nova-api.service
systemctl status openstack-nova-api.service

systemctl enable neutron-server.service
systemctl restart neutron-server.service
systemctl status neutron-server.service

On all compute nodes: restart the nova-compute service

systemctl restart openstack-nova-compute.service

On all compute nodes: enable and start the neutron agent services and the L3 service

systemctl enable neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent neutron-l3-agent
systemctl restart neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent neutron-l3-agent
systemctl status neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent neutron-l3-agent

3. Verify the neutron service (controller nodes)

#List the loaded extensions to verify that the neutron-server process started successfully
[root@controller01 ~]# openstack extension list --network

#List the agents to verify they registered successfully
[root@controller01 ~]# openstack network agent list
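
Every agent should report as alive; in the table output the Alive column shows ":-)" for healthy agents, so a quick count can be compared against the expected total (with four agents on each of three compute nodes, 12 would be expected in this layout, an assumption based on the deployment above):

#Count agents reporting alive; compare against the expected total for this layout
openstack network agent list | grep -c ':-)'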

4. Add pcs resources

  • Only neutron-server needs to be added; the other neutron agent services (neutron-linuxbridge-agent, neutron-l3-agent, neutron-dhcp-agent, neutron-metadata-agent) do not need to be added, because they are deployed on the compute nodes

Run on any controller node; add the neutron-server resource

#pcs resource create neutron-linuxbridge-agent systemd:neutron-linuxbridge-agent clone interleave=true
#pcs resource create neutron-l3-agent systemd:neutron-l3-agent clone interleave=true
#pcs resource create neutron-dhcp-agent systemd:neutron-dhcp-agent clone interleave=true
#pcs resource create neutron-metadata-agent systemd:neutron-metadata-agent clone interleave=true

pcs resource create neutron-server systemd:neutron-server clone interleave=true

View the resources

[root@controller01 ~]# pcs resource 