How can the OpenStack Kolla-Kubernetes project be used to deploy a containerized OpenStack on top of Kubernetes, ending with an All-In-One (AIO) OpenStack container cloud? Let's continue the deployment:
Deploying kolla-kubernetes
■ Override the default RBAC settings
Override the default RBAC settings with the kubectl replace command, as follows:
kubectl replace -f <(cat <<EOF
apiVersion: rbac.authorization.k8s.io/v1alpha1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: Group
  name: system:masters
- kind: Group
  name: system:authenticated
- kind: Group
  name: system:unauthenticated
EOF
)
■ Install and deploy Helm
Helm is the package manager for Kubernetes, comparable to the yum package manager: yum installs RPM packages, while Helm installs charts, which here play the role of RPM packages. Helm has a client side and a server side; the server side, called Tiller, runs inside Kubernetes as a Docker container. To smooth the Helm installation, the Tiller container image can be pulled to the local host in advance with the following commands:
docker pull warrior/kubernetes-helm:2.4.1
docker tag warrior/kubernetes-helm:2.4.1 gcr.io/kubernetes-helm/tiller:v2.4.1
The simplest way to install Helm is:
sudo curl -L https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
sudo chmod 700 get_helm.sh
sudo ./get_helm.sh
sudo helm init
After Helm is installed, a new pod in Running state, named tiller-deploy-xxx, appears in the kube-system namespace;
Once the installation succeeds, helm version shows both the client-side and server-side version information;
■ Install kolla-ansible and kolla-kubernetes
Clone the community Kolla-ansible and Kolla-kubernetes source code, as follows:
git clone http://github.com/openstack/kolla-ansible
git clone http://github.com/openstack/kolla-kubernetes
Install kolla-ansible and kolla-kubernetes, as follows:
sudo pip install -U kolla-ansible/ kolla-kubernetes/
Copy the default kolla configuration files to the /etc directory, as follows:
sudo cp -aR /usr/share/kolla-ansible/etc_examples/kolla /etc
Copy the kolla-kubernetes configuration files to the /etc directory, as follows:
sudo cp -aR kolla-kubernetes/etc/kolla-kubernetes /etc
Generate the password file for the OpenStack cluster's projects and users, as follows:
sudo kolla-kubernetes-genpwd
Create a dedicated namespace named kolla in Kubernetes, as follows:
kubectl create namespace kolla
Label the AIO node as both a controller node and a compute node, as follows:
kubectl label node $(hostname) kolla_compute=true
kubectl label node $(hostname) kolla_controller=true
Edit the /etc/kolla/globals.yml configuration file. Two variables must be set by the user: network_interface and neutron_external_interface. network_interface is the management interface (e.g. eth0) and, by default, also the interface on which the OpenStack service APIs listen; neutron_external_interface is the physical interface (e.g. eth1) that Neutron bridges to the external network, and no IP address should be manually configured on it.
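For example, on a host whose management NIC is ens34 and whose spare NIC is ens41 (interface names are illustrative; use the names on your own host), the two settings would look like:

```yaml
# /etc/kolla/globals.yml (fragment; interface names are examples)
network_interface: "ens34"            # management / API interface
neutron_external_interface: "ens41"   # bridged by Neutron; leave it without an IP address
```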
Append the service options to be enabled to the end of /etc/kolla/globals.yml, as follows:
cat <<EOF > add-to-globals.yml
kolla_install_type: "source"
tempest_image_alt_id: "{{ tempest_image_id }}"
tempest_flavor_ref_alt_id: "{{ tempest_flavor_ref_id }}"
neutron_plugin_agent: "openvswitch"
api_interface_address: 0.0.0.0
tunnel_interface_address: 0.0.0.0
orchestration_engine: KUBERNETES
memcached_servers: "memcached"
keystone_admin_url: "http://keystone-admin:35357/v3"
keystone_internal_url: "http://keystone-internal:5000/v3"
keystone_public_url: "http://keystone-public:5000/v3"
glance_registry_host: "glance-registry"
neutron_host: "neutron"
keystone_database_address: "mariadb"
glance_database_address: "mariadb"
nova_database_address: "mariadb"
nova_api_database_address: "mariadb"
neutron_database_address: "mariadb"
cinder_database_address: "mariadb"
ironic_database_address: "mariadb"
placement_database_address: "mariadb"
rabbitmq_servers: "rabbitmq"
openstack_logging_debug: "True"
enable_haproxy: "no"
enable_heat: "no"
enable_cinder: "yes"
enable_cinder_backend_lvm: "yes"
enable_cinder_backend_iscsi: "yes"
enable_cinder_backend_rbd: "no"
enable_ceph: "no"
enable_elasticsearch: "no"
enable_kibana: "no"
glance_backend_ceph: "no"
cinder_backend_ceph: "no"
nova_backend_ceph: "no"
EOF
cat ./add-to-globals.yml | sudo tee -a /etc/kolla/globals.yml
If deploying inside a virtual machine, Nova must use the qemu virtualization engine, as follows:
sudo mkdir /etc/kolla/config
sudo tee /etc/kolla/config/nova.conf<<EOF
[libvirt]
virt_type=qemu
cpu_mode=none
EOF
Generate the default configuration files for the OpenStack projects, as follows:
sudo kolla-ansible genconfig
Create the Kubernetes secrets for the OpenStack projects and register them with the Kubernetes cluster, as follows:
kolla-kubernetes/tools/secret-generator.py create
Create and register kolla's ConfigMaps, as follows:
kollakube res create configmap mariadb keystone horizon rabbitmq memcached nova-api nova-conductor nova-scheduler glance-api-haproxy glance-registry-haproxy glance-api glance-registry neutron-server neutron-dhcp-agent neutron-l3-agent neutron-metadata-agent neutron-openvswitch-agent openvswitch-db-server openvswitch-vswitchd nova-libvirt nova-compute nova-consoleauth nova-novncproxy nova-novncproxy-haproxy neutron-server-haproxy nova-api-haproxy cinder-api cinder-api-haproxy cinder-backup cinder-scheduler cinder-volume iscsid tgtd keepalived placement-api placement-api-haproxy
Enable the resolv.conf workaround, as follows:
kolla-kubernetes/tools/setup-resolv-conf.sh kolla
Build Helm's microcharts, service charts, and metacharts, as follows:
kolla-kubernetes/tools/helm_build_all.sh ./
The build takes some time; once it completes, the current directory will contain a large number of .tgz files, at least 150 of them.
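As a quick sanity check, the package count can be verified from the shell (a sketch; it assumes the charts were built into the directory passed as the argument):

```shell
# Count the chart packages (*.tgz) produced by helm_build_all.sh.
count_charts() {
  ls "$1"/*.tgz 2>/dev/null | wc -l | tr -d ' '
}

n=$(count_charts .)
if [ "$n" -ge 150 ]; then
  echo "chart build looks complete: $n packages"
else
  echo "only $n packages found; consider re-running helm_build_all.sh" >&2
fi
```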
Create a local cloud.yaml file, which will be used when installing the Helm charts, as follows:
global:
  kolla:
    all:
      docker_registry: 192.168.128.13:4000   # local registry address
      image_tag: "4.0.0"
      kube_logger: false
      external_vip: "192.168.128.13"
      base_distro: "centos"
      install_type: "source"
      tunnel_interface: "ens34"              # management interface
      resolve_conf_net_host_workaround: true
    keystone:
      all:
        admin_port_external: "true"
        dns_name: "192.168.128.13"
      public:
        all:
          port_external: "true"
    rabbitmq:
      all:
        cookie: 67
    glance:
      api:
        all:
          port_external: "true"
    cinder:
      api:
        all:
          port_external: "true"
      volume_lvm:
        all:
          element_name: cinder-volume
        daemonset:
          lvm_backends:
          - '192.168.128.13': 'cinder-volumes'   # name of the cinder backend VG
    ironic:
      conductor:
        daemonset:
          selector_key: "kolla_conductor"
    nova:
      placement_api:
        all:
          port_external: true
      novncproxy:
        all:
          port: 6080
          port_external: true
    openvwswitch:
      all:
        add_port: true
        ext_bridge_name: br-ex
        ext_interface_name: ens41              # interface bridged to the Neutron external network
        setup_bridge: true
    horizon:
      all:
        port_external: true
cloud.yaml must be adapted to each user's environment: in the file above, 192.168.128.13 is the IP address on the author's management interface ens34, and it must be changed accordingly.
■ Deploy OpenStack on Kubernetes with Helm
First, deploy MariaDB and wait for its pod to enter Running state, as follows:
helm install --debug kolla-kubernetes/helm/service/mariadb --namespace kolla --name mariadb --values ./cloud.yaml
Once the database has stabilized, deploy the remaining OpenStack services, as follows:
helm install --debug kolla-kubernetes/helm/service/rabbitmq --namespace kolla --name rabbitmq --values ./cloud.yaml
helm install --debug kolla-kubernetes/helm/service/memcached --namespace kolla --name memcached --values ./cloud.yaml
helm install --debug kolla-kubernetes/helm/service/keystone --namespace kolla --name keystone --values ./cloud.yaml
helm install --debug kolla-kubernetes/helm/service/glance --namespace kolla --name glance --values ./cloud.yaml
helm install --debug kolla-kubernetes/helm/service/cinder-control --namespace kolla --name cinder-control --values ./cloud.yaml
helm install --debug kolla-kubernetes/helm/service/horizon --namespace kolla --name horizon --values ./cloud.yaml
helm install --debug kolla-kubernetes/helm/service/openvswitch --namespace kolla --name openvswitch --values ./cloud.yaml
helm install --debug kolla-kubernetes/helm/service/neutron --namespace kolla --name neutron --values ./cloud.yaml
helm install --debug kolla-kubernetes/helm/service/nova-control --namespace kolla --name nova-control --values ./cloud.yaml
helm install --debug kolla-kubernetes/helm/service/nova-compute --namespace kolla --name nova-compute --values ./cloud.yaml
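The ten service installs above follow an identical pattern, so they can also be generated with a small loop (a sketch; it prints the commands, so pipe the output to sh, or drop the echo, to actually run them):

```shell
# Print the helm install command for each kolla-kubernetes service chart,
# in the same order as listed above (mariadb is deployed separately first).
gen_install_cmds() {
  for c in rabbitmq memcached keystone glance cinder-control horizon \
           openvswitch neutron nova-control nova-compute; do
    echo "helm install --debug kolla-kubernetes/helm/service/$c --namespace kolla --name $c --values ./cloud.yaml"
  done
}

gen_install_cmds
```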
When nova-compute reaches Running state, create the cell0 database, as follows:
helm install --debug kolla-kubernetes/helm/microservice/nova-cell0-create-db-job --namespace kolla --name nova-cell0-create-db-job --values ./cloud.yaml
helm install --debug kolla-kubernetes/helm/microservice/nova-api-create-simple-cell-job --namespace kolla --name nova-api-create-simple-cell --values ./cloud.yaml
Once all of the above pods are in Running state, deploy Cinder LVM. This assumes a volume group (VG) named cinder-volumes already exists on the system; if it does not, create the VG first, as follows:
pvcreate /dev/sdb /dev/sdc
vgcreate cinder-volumes /dev/sdb /dev/sdc
Install and deploy cinder-volume, as follows:
helm install --debug kolla-kubernetes/helm/service/cinder-volume-lvm --namespace kolla --name cinder-volume-lvm --values ./cloud.yaml
Note: to remove a chart deployed with Helm, such as cinder-volume-lvm, run:
helm delete cinder-volume-lvm --purge
This clears the cinder-volume pods from the Kubernetes cluster.
At this point, all OpenStack services have been deployed. Before operating the OpenStack cluster, wait until every pod in the Kubernetes cluster is in Running state;
Check all deployments in the Kubernetes cluster (kubectl get deployments -n kolla);
Check all services in the Kubernetes cluster (kubectl get svc -n kolla);
As can be seen, each service has automatically been assigned an IP address from the 10.3.3.0/24 subnet, together with its corresponding port. Once the Kubernetes API objects are confirmed to be running normally, the OpenStack cluster can be operated through the OpenStack command-line client. First, generate and source the openrc file, as follows:
kolla-kubernetes/tools/build_local_admin_keystonerc.sh ext
source ~/keystonerc_admin
Initialize OpenStack with the init-runonce script provided by kolla-ansible, and launch a VM, as follows:
kolla-ansible/tools/init-runonce
Create a floating IP address and add it to the VM, as follows:
openstack server add floating ip demo1 $(openstack floating ip create public1 -f value -c floating_ip_address)
Check the created VM;
Log in to the Dashboard (http://192.168.128.13);
View the created instance on the Dashboard;
Create a block storage volume and attach it to instance demo1;
With that, the Ocata release of OpenStack has been successfully deployed on a Kubernetes cluster. For a number of reasons, the Kolla-kubernetes project is not yet ready for production deployments, and the community currently supports only AIO deployments for development and experimentation. As Kubernetes continues to rise, however, attention to the Kolla-kubernetes project should grow with it, and deploying OpenStack container clouds through Kubernetes may well become a mainstream direction for OpenStack in the near future!
This article is reproduced from the K8S Technology Community tutorial: 教程get | K8S部署OpenStack容器雲(下).