Building Pike-Release OpenStack Docker Images with Kolla

Build environment:

  1. The host runs Windows 10 x64 with VMware Workstation 14.0.0; the host NIC IP is 192.168.195.1, and a Shadowsocks proxy listens on port 1080 (this port must be opened in the Windows Firewall advanced settings).
  2. The virtual machine runs CentOS Linux release 7.4.1708 with 2 CPU cores and 3 GB RAM. NIC 0 uses NAT networking with IP 192.168.195.131; NIC 1 uses a host-only network with IP 192.168.162.128.

Installing and Configuring the Docker Service

Installing the Docker Packages

  • If an older Docker is installed, remove it first to avoid compatibility problems:
$ yum remove -y docker docker-io docker-selinux python-docker-py
  • Add the Docker Yum repository:
$ vi /etc/yum.repos.d/docker.repo
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/$releasever/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
  • Install the Docker packages:
$ yum update
$ yum install -y epel-release
$ yum install -y docker-engine docker-engine-selinux

Configuring a Domestic (China) Registry Mirror

  • Use Alibaba Cloud's Docker registry mirror (you can also request a mirror URL of your own):
$ mkdir -p /etc/docker
$ vi /etc/docker/daemon.json
{
  "registry-mirrors": ["https://7g5a4z30.mirror.aliyuncs.com"]
}
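Since a malformed daemon.json stops the Docker daemon from starting at all, it is worth validating the file before the restart below. A minimal sketch (it writes the fragment to a temp file so it can run without root; on CentOS 7, python3 may need to be plain python):

```shell
# Validate a registry-mirrors fragment before installing it as
# /etc/docker/daemon.json; json.load fails loudly on any syntax error.
CONF="$(mktemp)"
cat > "$CONF" <<'EOF'
{
  "registry-mirrors": ["https://7g5a4z30.mirror.aliyuncs.com"]
}
EOF
MIRRORS=$(python3 -c "import json,sys; print(len(json.load(open(sys.argv[1]))['registry-mirrors']))" "$CONF")
echo "mirrors configured: $MIRRORS"
rm -f "$CONF"
```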
  • Restart the Docker service:
$ systemctl daemon-reload && systemctl enable docker && systemctl restart docker && systemctl status docker
  • Check that the registry mirror works:
$ docker run --rm hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://cloud.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

Starting a Local Registry Service

Here localhost is used as the registry address. To make the registry reachable from other hosts on the LAN, replace it with this host's IP or a resolvable hostname.

  • Run the registry container, mapped to port 4000:
$ docker run -d --name registry --restart=always -p 4000:5000 -v /opt/registry:/var/lib/registry registry:2
  • Modify the Docker service configuration to trust the local insecure registry:
$ vi /usr/lib/systemd/system/docker.service
...

#ExecStart=/usr/bin/dockerd
ExecStart=/usr/bin/dockerd --insecure-registry localhost:4000

...
  • Restart the Docker service:
$ systemctl daemon-reload && systemctl restart docker
  • Test that the registry service responds:
$ curl -X GET http://localhost:4000/v2/_catalog
{"repositories":[]}
  • Push an image to the registry (without the final push, the catalog below would still be empty):
$ docker pull centos:7
$ docker tag centos:7 localhost:4000/centos:7
$ docker push localhost:4000/centos:7
  • Verify that the pushed image appears in the catalog:
$ curl -X GET http://localhost:4000/v2/_catalog
{"repositories":["centos"]}
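Once more images are pushed, the catalog check can be scripted. A small helper as a sketch (the grep-based match is deliberately crude and assumes simple repository names like centos; a real JSON parser would be needed for anything fancier):

```shell
# in_catalog REPO JSON: succeed if REPO is listed in a /v2/_catalog response.
in_catalog() {
    printf '%s' "$2" | grep -q "\"$1\""
}

# Normally CATALOG would come from: curl -s http://localhost:4000/v2/_catalog
CATALOG='{"repositories":["centos"]}'
if in_catalog centos "$CATALOG"; then
    echo "centos is present in the registry"
fi
```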

Installing and Configuring Kolla

Fetching the Kolla Source

$ mkdir -pv /opt/kolla
$ cd /opt/kolla
$ git clone https://github.com/openstack/kolla
$ cd kolla
$ git checkout -b devel/pike remotes/origin/stable/pike

Installing Dependencies

$ pip install pyopenssl tox
$ pip install -r requirements.txt -r test-requirements.txt

Generating the Default Configuration

$ tox -e genconfig
$ mkdir -pv /etc/kolla/
$ cp -v etc/kolla/kolla-build.conf /etc/kolla/
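The generated /etc/kolla/kolla-build.conf can then be pointed at this environment. A sketch of the [DEFAULT] options relevant to this walkthrough (option names follow the file generated by genconfig; the registry address matches the local registry started earlier):

```ini
[DEFAULT]
# Build CentOS-based, source-type images (matches the build.py flags below).
base = centos
install_type = source
# Push built images straight into the local registry on port 4000.
registry = localhost:4000
push = true
tag = pike
```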

Generating the Dockerfiles

  • Generate source-type Dockerfiles from the default Pike configuration:
$ python tools/build.py -t source --template-only --work-dir=..
  • Inspect the base image's Dockerfile:
$ cat ../docker/base/Dockerfile

The part to focus on is the section between BEGIN REPO ENABLEMENT and END REPO ENABLEMENT:

FROM centos:7
LABEL maintainer="Kolla Project (https://launchpad.net/kolla)" name="base" build-date="20180327"

RUN groupadd --force --gid 42401 ansible \
    && useradd -M --shell /usr/sbin/nologin --uid 42401 --gid 42401 ansible \
    && groupadd --force --gid 42402 aodh \
    && useradd -M --shell /usr/sbin/nologin --uid 42402 --gid 42402 aodh \
    && groupadd --force --gid 42403 barbican \
    && useradd -M --shell /usr/sbin/nologin --uid 42403 --gid 42403 barbican \
    && groupadd --force --gid 42404 bifrost \
    && useradd -M --shell /usr/sbin/nologin --uid 42404 --gid 42404 bifrost \
    && groupadd --force --gid 42471 blazar \
    && useradd -M --shell /usr/sbin/nologin --uid 42471 --gid 42471 blazar \
    && groupadd --force --gid 42405 ceilometer \
    && useradd -M --shell /usr/sbin/nologin --uid 42405 --gid 42405 ceilometer \
    && groupadd --force --gid 64045 ceph \
    && useradd -M --shell /usr/sbin/nologin --uid 64045 --gid 64045 ceph \
    && groupadd --force --gid 42406 chrony \
    && useradd -M --shell /usr/sbin/nologin --uid 42406 --gid 42406 chrony \
    && groupadd --force --gid 42407 cinder \
    && useradd -M --shell /usr/sbin/nologin --uid 42407 --gid 42407 cinder \
    && groupadd --force --gid 42408 cloudkitty \
    && useradd -M --shell /usr/sbin/nologin --uid 42408 --gid 42408 cloudkitty \
    && groupadd --force --gid 42409 collectd \
    && useradd -M --shell /usr/sbin/nologin --uid 42409 --gid 42409 collectd \
    && groupadd --force --gid 42410 congress \
    && useradd -M --shell /usr/sbin/nologin --uid 42410 --gid 42410 congress \
    && groupadd --force --gid 42411 designate \
    && useradd -M --shell /usr/sbin/nologin --uid 42411 --gid 42411 designate \
    && groupadd --force --gid 42464 dragonflow \
    && useradd -M --shell /usr/sbin/nologin --uid 42464 --gid 42464 dragonflow \
    && groupadd --force --gid 42466 ec2api \
    && useradd -M --shell /usr/sbin/nologin --uid 42466 --gid 42466 ec2api \
    && groupadd --force --gid 42412 elasticsearch \
    && useradd -M --shell /usr/sbin/nologin --uid 42412 --gid 42412 elasticsearch \
    && groupadd --force --gid 42413 etcd \
    && useradd -M --shell /usr/sbin/nologin --uid 42413 --gid 42413 etcd \
    && groupadd --force --gid 42474 fluentd \
    && useradd -M --shell /usr/sbin/nologin --uid 42474 --gid 42474 fluentd \
    && groupadd --force --gid 42414 freezer \
    && useradd -M --shell /usr/sbin/nologin --uid 42414 --gid 42414 freezer \
    && groupadd --force --gid 42415 glance \
    && useradd -M --shell /usr/sbin/nologin --uid 42415 --gid 42415 glance \
    && groupadd --force --gid 42416 gnocchi \
    && useradd -M --shell /usr/sbin/nologin --uid 42416 --gid 42416 gnocchi \
    && groupadd --force --gid 42417 grafana \
    && useradd -M --shell /usr/sbin/nologin --uid 42417 --gid 42417 grafana \
    && groupadd --force --gid 42454 haproxy \
    && useradd -M --shell /usr/sbin/nologin --uid 42454 --gid 42454 haproxy \
    && groupadd --force --gid 42418 heat \
    && useradd -M --shell /usr/sbin/nologin --uid 42418 --gid 42418 heat \
    && groupadd --force --gid 42420 horizon \
    && useradd -M --shell /usr/sbin/nologin --uid 42420 --gid 42420 horizon \
    && groupadd --force --gid 42421 influxdb \
    && useradd -M --shell /usr/sbin/nologin --uid 42421 --gid 42421 influxdb \
    && groupadd --force --gid 42422 ironic \
    && useradd -M --shell /usr/sbin/nologin --uid 42422 --gid 42422 ironic \
    && groupadd --force --gid 42461 ironic-inspector \
    && useradd -M --shell /usr/sbin/nologin --uid 42461 --gid 42461 ironic-inspector \
    && groupadd --force --gid 42423 kafka \
    && useradd -M --shell /usr/sbin/nologin --uid 42423 --gid 42423 kafka \
    && groupadd --force --gid 42458 karbor \
    && useradd -M --shell /usr/sbin/nologin --uid 42458 --gid 42458 karbor \
    && groupadd --force --gid 42425 keystone \
    && useradd -M --shell /usr/sbin/nologin --uid 42425 --gid 42425 keystone \
    && groupadd --force --gid 42426 kibana \
    && useradd -M --shell /usr/sbin/nologin --uid 42426 --gid 42426 kibana \
    && groupadd --force --gid 42400 kolla \
    && useradd -M --shell /usr/sbin/nologin --uid 42400 --gid 42400 kolla \
    && groupadd --force --gid 42469 kuryr \
    && useradd -M --shell /usr/sbin/nologin --uid 42469 --gid 42469 kuryr \
    && groupadd --force --gid 42473 libvirt \
    && useradd -M --shell /usr/sbin/nologin --uid 42473 --gid 42473 libvirt \
    && groupadd --force --gid 42428 magnum \
    && useradd -M --shell /usr/sbin/nologin --uid 42428 --gid 42428 magnum \
    && groupadd --force --gid 42429 manila \
    && useradd -M --shell /usr/sbin/nologin --uid 42429 --gid 42429 manila \
    && groupadd --force --gid 42457 memcached \
    && useradd -M --shell /usr/sbin/nologin --uid 42457 --gid 42457 memcached \
    && groupadd --force --gid 42430 mistral \
    && useradd -M --shell /usr/sbin/nologin --uid 42430 --gid 42430 mistral \
    && groupadd --force --gid 42431 monasca \
    && useradd -M --shell /usr/sbin/nologin --uid 42431 --gid 42431 monasca \
    && groupadd --force --gid 65534 mongodb \
    && useradd -M --shell /usr/sbin/nologin --uid 42432 --gid 65534 mongodb \
    && groupadd --force --gid 42433 murano \
    && useradd -M --shell /usr/sbin/nologin --uid 42433 --gid 42433 murano \
    && groupadd --force --gid 42434 mysql \
    && useradd -M --shell /usr/sbin/nologin --uid 42434 --gid 42434 mysql \
    && groupadd --force --gid 42435 neutron \
    && useradd -M --shell /usr/sbin/nologin --uid 42435 --gid 42435 neutron \
    && groupadd --force --gid 42436 nova \
    && useradd -M --shell /usr/sbin/nologin --uid 42436 --gid 42436 nova \
    && groupadd --force --gid 42470 novajoin \
    && useradd -M --shell /usr/sbin/nologin --uid 42470 --gid 42470 novajoin \
    && groupadd --force --gid 42437 octavia \
    && useradd -M --shell /usr/sbin/nologin --uid 42437 --gid 42437 octavia \
    && groupadd --force --gid 42462 odl \
    && useradd -M --shell /usr/sbin/nologin --uid 42462 --gid 42462 odl \
    && groupadd --force --gid 42438 panko \
    && useradd -M --shell /usr/sbin/nologin --uid 42438 --gid 42438 panko \
    && groupadd --force --gid 42472 prometheus \
    && useradd -M --shell /usr/sbin/nologin --uid 42472 --gid 42472 prometheus \
    && groupadd --force --gid 42465 qdrouterd \
    && useradd -M --shell /usr/sbin/nologin --uid 42465 --gid 42465 qdrouterd \
    && groupadd --force --gid 42427 qemu \
    && useradd -M --shell /usr/sbin/nologin --uid 42427 --gid 42427 qemu \
    && groupadd --force --gid 42439 rabbitmq \
    && useradd -M --shell /usr/sbin/nologin --uid 42439 --gid 42439 rabbitmq \
    && groupadd --force --gid 42440 rally \
    && useradd -M --shell /usr/sbin/nologin --uid 42440 --gid 42440 rally \
    && groupadd --force --gid 42460 redis \
    && useradd -M --shell /usr/sbin/nologin --uid 42460 --gid 42460 redis \
    && groupadd --force --gid 42441 sahara \
    && useradd -M --shell /usr/sbin/nologin --uid 42441 --gid 42441 sahara \
    && groupadd --force --gid 42442 searchlight \
    && useradd -M --shell /usr/sbin/nologin --uid 42442 --gid 42442 searchlight \
    && groupadd --force --gid 42443 senlin \
    && useradd -M --shell /usr/sbin/nologin --uid 42443 --gid 42443 senlin \
    && groupadd --force --gid 42467 sensu \
    && useradd -M --shell /usr/sbin/nologin --uid 42467 --gid 42467 sensu \
    && groupadd --force --gid 42468 skydive \
    && useradd -M --shell /usr/sbin/nologin --uid 42468 --gid 42468 skydive \
    && groupadd --force --gid 42444 solum \
    && useradd -M --shell /usr/sbin/nologin --uid 42444 --gid 42444 solum \
    && groupadd --force --gid 42445 swift \
    && useradd -M --shell /usr/sbin/nologin --uid 42445 --gid 42445 swift \
    && groupadd --force --gid 42446 tacker \
    && useradd -M --shell /usr/sbin/nologin --uid 42446 --gid 42446 tacker \
    && groupadd --force --gid 42447 td-agent \
    && useradd -M --shell /usr/sbin/nologin --uid 42447 --gid 42447 td-agent \
    && groupadd --force --gid 42448 telegraf \
    && useradd -M --shell /usr/sbin/nologin --uid 42448 --gid 42448 telegraf \
    && groupadd --force --gid 42449 trove \
    && useradd -M --shell /usr/sbin/nologin --uid 42449 --gid 42449 trove \
    && groupadd --force --gid 42459 vitrage \
    && useradd -M --shell /usr/sbin/nologin --uid 42459 --gid 42459 vitrage \
    && groupadd --force --gid 42450 vmtp \
    && useradd -M --shell /usr/sbin/nologin --uid 42450 --gid 42450 vmtp \
    && groupadd --force --gid 42451 watcher \
    && useradd -M --shell /usr/sbin/nologin --uid 42451 --gid 42451 watcher \
    && groupadd --force --gid 42452 zaqar \
    && useradd -M --shell /usr/sbin/nologin --uid 42452 --gid 42452 zaqar \
    && groupadd --force --gid 42453 zookeeper \
    && useradd -M --shell /usr/sbin/nologin --uid 42453 --gid 42453 zookeeper \
    && groupadd --force --gid 42463 zun \
    && useradd -M --shell /usr/sbin/nologin --uid 42463 --gid 42463 zun

LABEL kolla_version="5.0.2"

ENV KOLLA_BASE_DISTRO=centos \
    KOLLA_INSTALL_TYPE=source \
    KOLLA_INSTALL_METATYPE=mixed

#### Customize PS1 to be used with bash shell
COPY kolla_bashrc /tmp/
RUN cat /tmp/kolla_bashrc >> /etc/skel/.bashrc \
    && cat /tmp/kolla_bashrc >> /root/.bashrc

# PS1 var when used /bin/sh shell
ENV PS1="$(tput bold)($(printenv KOLLA_SERVICE_NAME))$(tput sgr0)[$(id -un)@$(hostname -s) $(pwd)]$ "


# For RPM Variants, enable the correct repositories - this should all be done
# in the base image so repos are consistent throughout the system.  This also
# enables to provide repo overrides at a later date in a simple fashion if we
# desire such functionality.  I think we will :)

RUN CURRENT_DISTRO_RELEASE=$(awk '{match($0, /[0-9]+/,version)}END{print version[0]}' /etc/system-release); \
    if [  $CURRENT_DISTRO_RELEASE != "7" ]; then \
        echo "Only release '7' is supported on centos"; false; \
    fi \
    && cat /tmp/kolla_bashrc >> /etc/bashrc \
    && sed -i 's|^\(override_install_langs=.*\)|# \1|' /etc/yum.conf


COPY yum.conf /etc/yum.conf


#### BEGIN REPO ENABLEMENT

COPY elasticsearch.repo /etc/yum.repos.d/elasticsearch.repo
COPY grafana.repo /etc/yum.repos.d/grafana.repo
COPY influxdb.repo /etc/yum.repos.d/influxdb.repo
COPY kibana.yum.repo /etc/yum.repos.d/kibana.yum.repo
COPY MariaDB.repo /etc/yum.repos.d/MariaDB.repo
COPY opendaylight.repo /etc/yum.repos.d/opendaylight.repo
COPY td.repo /etc/yum.repos.d/td.repo
COPY zookeeper.repo /etc/yum.repos.d/zookeeper.repo

RUN yum -y install http://repo.percona.com/release/7/RPMS/x86_64/percona-release-0.1-4.noarch.rpm && yum clean all

RUN rpm --import http://yum.mariadb.org/RPM-GPG-KEY-MariaDB \
    && rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-Percona \
    && rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch \
    && rpm --import https://repos.influxdata.com/influxdb.key \
    && rpm --import https://packagecloud.io/gpg.key \
    && rpm --import https://grafanarel.s3.amazonaws.com/RPM-GPG-KEY-grafana \
    && rpm --import https://packages.treasuredata.com/GPG-KEY-td-agent

RUN rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

RUN yum -y install epel-release yum-plugin-priorities centos-release-ceph-jewel centos-release-openstack-pike centos-release-opstools centos-release-qemu-ev && yum clean all
RUN rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-OpsTools \
    && rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage \
    && rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Virtualization \
    && yum clean all


#### END REPO ENABLEMENT


# Update packages
RUN yum -y install curl iproute iscsi-initiator-utils lvm2 scsi-target-utils sudo tar which && yum clean all


COPY set_configs.py /usr/local/bin/kolla_set_configs
COPY start.sh /usr/local/bin/kolla_start
COPY sudoers /etc/sudoers
COPY curlrc /root/.curlrc


RUN curl -sSL https://github.com/Yelp/dumb-init/releases/download/v1.1.3/dumb-init_1.1.3_amd64 -o /usr/local/bin/dumb-init \
    && chmod +x /usr/local/bin/dumb-init \
    && sed -i 's|#!|#!/usr/local/bin/dumb-init |' /usr/local/bin/kolla_start


RUN touch /usr/local/bin/kolla_extend_start \
    && chmod 755 /usr/local/bin/kolla_start /usr/local/bin/kolla_extend_start /usr/local/bin/kolla_set_configs \
    && chmod 440 /etc/sudoers \
    && mkdir -p /var/log/kolla \
    && chown :kolla /var/log/kolla \
    && chmod 2775 /var/log/kolla \
    && rm -f /tmp/kolla_bashrc


CMD ["kolla_start"]

Creating a Local Yum Repository

Preparing a Sync Image

  • Start a CentOS container for syncing the remote repositories:
$ mkdir -pv /opt/yum/repo
$ docker run -it --name yum-sync -v /opt/:/opt/ centos:7 /bin/bash
  • Configure an HTTP/HTTPS proxy for Yum (for the obvious network reasons):
$ vi ~/set_proxy.sh
#!/bin/bash
export http_proxy=192.168.195.1:1080; export https_proxy=$http_proxy

$ chmod a+x ~/set_proxy.sh

$ . ~/set_proxy.sh
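Instead of a one-off script, the proxy can also be made toggleable with two small shell functions; a sketch (address and port match the host settings above):

```shell
# Toggle the HTTP/HTTPS proxy for the current shell session.
proxy_on()  { export http_proxy=192.168.195.1:1080 https_proxy=192.168.195.1:1080; }
proxy_off() { unset http_proxy https_proxy; }
```

With these in ~/.bashrc, running proxy_on before a sync and proxy_off afterwards replaces sourcing set_proxy.sh each time.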
  • Enter the base image's Dockerfile directory:
$ cd /opt/kolla/docker/base
  • Configure the default Yum settings, keeping the package cache:
$ cp -v yum.conf /etc/yum.conf
$ vi /etc/yum.conf
[main]
keepcache=1
cachedir=/var/yum/$basearch/$releasever
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=0
skip_missing_names_on_install=False
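The two settings that matter most here are keepcache=1 and the cachedir under /var/yum: they are what later allow the downloaded RPMs to be harvested from the container. A quick sanity-check sketch (run against a temp copy of the fragment so it works anywhere):

```shell
# Confirm the Yum configuration keeps its package cache under /var/yum;
# without this, RPMs fetched during the sync are discarded.
YUMCONF="$(mktemp)"
cat > "$YUMCONF" <<'EOF'
[main]
keepcache=1
cachedir=/var/yum/$basearch/$releasever
EOF
KEEP_OK=$(grep -c '^keepcache=1$' "$YUMCONF")
CACHE_OK=$(grep -c '^cachedir=/var/yum/' "$YUMCONF")
echo "keepcache ok: $KEEP_OK, cachedir ok: $CACHE_OK"
rm -f "$YUMCONF"
```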
  • Configure the Yum repositories and GPG keys:
$ cp -v elasticsearch.repo grafana.repo influxdb.repo kibana.yum.repo MariaDB.repo opendaylight.repo td.repo zookeeper.repo /etc/yum.repos.d/

$ yum -y install http://repo.percona.com/release/7/RPMS/x86_64/percona-release-0.1-4.noarch.rpm

$ yum -y install epel-release yum-plugin-priorities centos-release-ceph-jewel centos-release-openstack-pike centos-release-opstools centos-release-qemu-ev

$ rpm --import http://yum.mariadb.org/RPM-GPG-KEY-MariaDB \
    && rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-Percona \
    && rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch \
    && rpm --import https://repos.influxdata.com/influxdb.key \
    && rpm --import https://packagecloud.io/gpg.key \
    && rpm --import https://grafanarel.s3.amazonaws.com/RPM-GPG-KEY-grafana \
    && rpm --import https://packages.treasuredata.com/GPG-KEY-td-agent \
    && rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7 \
    && rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-OpsTools \
    && rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage \
    && rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Virtualization
  • Fetch a domestic CentOS mirror configuration (USTC):
$ yum install -y wget curl git

$ cd /etc/yum.repos.d/

$ wget -O CentOS-Base.repo https://lug.ustc.edu.cn/wiki/_export/code/mirrors/help/centos?codeblock=3
  • Build the Yum metadata cache:
$ yum makecache
  • Install the tool for creating Yum repositories:
$ yum install -y createrepo
  • Commit the container as the yum-sync Docker image, then remove the container:
$ docker commit -m "openstack pike base yum sync." -a "LastRitter<[email protected]>" yum-sync yum-sync:pike

$ docker rm yum-sync
  • Export the Yum sync Docker image (optional):
$ docker save -o openstack_pike_yum_sync_`date +%Y-%m-%d`.tar.gz yum-sync:pike
  • Import the Yum sync Docker image (optional):
$ docker load --input openstack_pike_yum_sync_`date +%Y-%m-%d`.tar.gz

Syncing the Remote Repositories

  • Start the sync image and set up the proxy server:
$ docker run --rm -it -v /opt:/opt/ yum-sync:pike /bin/bash
$ . ~/set_proxy.sh
  • Sync all remote repositories:
$ reposync -p /opt/yum/repo/
  • Or sync repositories individually (several containers can sync different repositories in parallel):
$ reposync -p /opt/yum/repo/ --repoid=base
$ reposync -p /opt/yum/repo/ --repoid=updates
$ reposync -p /opt/yum/repo/ --repoid=extras

$ reposync -p /opt/yum/repo/ --repoid=epel

$ reposync -p /opt/yum/repo/ --repoid=centos-ceph-jewel
$ reposync -p /opt/yum/repo/ --repoid=centos-openstack-pike
$ reposync -p /opt/yum/repo/ --repoid=centos-opstools-release
$ reposync -p /opt/yum/repo/ --repoid=centos-qemu-ev

$ reposync -p /opt/yum/repo/ --repoid=elasticsearch-2.x

$ reposync -p /opt/yum/repo/ --repoid=grafana

$ reposync -p /opt/yum/repo/ --repoid=influxdb

$ reposync -p /opt/yum/repo/ --repoid=kibana-4.6

$ reposync -p /opt/yum/repo/ --repoid=mariadb

$ reposync -p /opt/yum/repo/ --repoid=opendaylight

$ reposync -p /opt/yum/repo/ --repoid=percona-release-x86_64
$ reposync -p /opt/yum/repo/ --repoid=percona-release-noarch

$ reposync -p /opt/yum/repo/ --repoid=treasuredata

$ reposync -p /opt/yum/repo/ --repoid=iwienand-zookeeper-el7
  • Create the package index files for the local Yum repositories:
$ ls /opt/yum/repo/ | xargs -I {} createrepo -p /opt/yum/repo/{}
  • Back up the synced packages (optional):
$ cd /opt/yum/repo/
$ ls | xargs -I {} tar cJvf /path/to/backup/yum_repo_{}_`date +%Y-%m-%d`.tar.xz {}
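The `ls | xargs tar` one-liner above is terse but hard to resume and fragile with unusual directory names; as a sketch, the same backup can be written as a loop (paths are parameters, and /path/to/backup is a placeholder exactly as in the original):

```shell
# backup_repos SRC DST: archive each repository subdirectory of SRC into DST
# as yum_repo_<name>_<date>.tar.xz, mirroring the one-liner above.
backup_repos() {
    src="$1"; dst="$2"; stamp="$(date +%Y-%m-%d)"
    mkdir -p "$dst"
    for repo in "$src"/*/; do
        [ -d "$repo" ] || continue
        name="$(basename "$repo")"
        tar cJf "$dst/yum_repo_${name}_${stamp}.tar.xz" -C "$src" "$name"
        echo "archived $name"
    done
}
# Usage: backup_repos /opt/yum/repo /path/to/backup
```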

Backing Up the Repository Configuration

  • Start the sync image and enter the Kolla base image source directory:
$ docker run --rm -it -v /opt:/opt/ yum-sync:pike /bin/bash
$ mkdir -pv /opt/kolla/kolla/docker/base/cache
$ cd /opt/kolla/kolla/docker/base/cache
  • Save the repo files:
$ mkdir repo
$ cp -v /etc/yum.repos.d/* repo/
  • Save the RPM files:
$ mkdir rpms
$ cd rpms

$ wget http://repo.percona.com/release/7/RPMS/x86_64/percona-release-0.1-4.noarch.rpm

$ cp -v /var/yum/x86_64/7/{extras/packages/{epel-release-*,centos-release-*}.rpm,base/packages/yum-plugin-priorities-*.rpm} .

$ cd ..
  • Save the GPG key files:
$ mkdir keys
$ cd keys

$ wget http://yum.mariadb.org/RPM-GPG-KEY-MariaDB \
    && wget https://packages.elastic.co/GPG-KEY-elasticsearch \
    && wget https://repos.influxdata.com/influxdb.key \
    && wget https://packagecloud.io/gpg.key \
    && wget https://grafanarel.s3.amazonaws.com/RPM-GPG-KEY-grafana \
    && wget https://packages.treasuredata.com/GPG-KEY-td-agent

$ cd ..

Serving the Local Repository

  • Start an Nginx service exposing port 10022 to serve the Yum repository:
$ docker run --name=yum-server --restart=always -d -p 10022:80 -v /opt/yum/repo:/usr/share/nginx/html nginx
  • Test that the repository web service works:
$ curl -X GET http://192.168.195.131:10022/base/repodata/repomd.xml
<?xml version="1.0" encoding="UTF-8"?>
<repomd xmlns="http://linux.duke.edu/metadata/repo" xmlns:rpm="http://linux.duke.edu/metadata/rpm">
 <revision>1522137310</revision>
<data type="filelists">
  <checksum type="sha256">c1561546c684bd06b3a499c2babc35c761b37b2fc331677eca12f0c769b1bb37</checksum>
  <open-checksum type="sha256">99513068b73d614e3d76f22b892fe62bee6af26ed5640d70cb3744e8c57045b5</open-checksum>
  <location href="repodata/c1561546c684bd06b3a499c2babc35c761b37b2fc331677eca12f0c769b1bb37-filelists.xml.gz"/>
  <timestamp>1522137364</timestamp>
  <size>6936336</size>
  <open-size>97041754</open-size>
</data>
<data type="primary">
  <checksum type="sha256">1ce4baf2de7b0c88dced853cf47e70788cc69dc1db6b8f4be5d0d04b8690a488</checksum>
  <open-checksum type="sha256">7ab3d5121dd6c296665850a210f1258c857ddc20cdfa8990cab9ccf34acc12f8</open-checksum>
  <location href="repodata/1ce4baf2de7b0c88dced853cf47e70788cc69dc1db6b8f4be5d0d04b8690a488-primary.xml.gz"/>
  <timestamp>1522137364</timestamp>
  <size>2831814</size>
  <open-size>26353155</open-size>
</data>
<data type="primary_db">
  <checksum type="sha256">befe5add1fa3a44783fccf25fef6a787a81bcbdca4f19417cfe16e66c5e7f26b</checksum>
  <open-checksum type="sha256">938764645340e4863b503902c10ca326610c430c5e606c5a99461e890713e131</open-checksum>
  <location href="repodata/befe5add1fa3a44783fccf25fef6a787a81bcbdca4f19417cfe16e66c5e7f26b-primary.sqlite.bz2"/>
  <timestamp>1522137381</timestamp>
  <database_version>10</database_version>
  <size>6025221</size>
  <open-size>29564928</open-size>
</data>
<data type="other_db">
  <checksum type="sha256">cf0cc856d46b3095106da78256fb28f9d8defea4118d0e75eab07dc53b7d3f0d</checksum>
  <open-checksum type="sha256">dbb8218b01cc5d8159c7996cf2aa574aa881d837713f8fae06849b13d14d78a1</open-checksum>
  <location href="repodata/cf0cc856d46b3095106da78256fb28f9d8defea4118d0e75eab07dc53b7d3f0d-other.sqlite.bz2"/>
  <timestamp>1522137367</timestamp>
  <database_version>10</database_version>
  <size>2579184</size>
  <open-size>18237440</open-size>
</data>
<data type="other">
  <checksum type="sha256">a0af68e1057f6b03a36894d3a4f267bbe0590327423d0005d95566fb58cd7a29</checksum>
  <open-checksum type="sha256">967f79ee76ebc7bfe82d74e5aa20403751454f93a5d51ed26f3118e6fda29425</open-checksum>
  <location href="repodata/a0af68e1057f6b03a36894d3a4f267bbe0590327423d0005d95566fb58cd7a29-other.xml.gz"/>
  <timestamp>1522137364</timestamp>
  <size>1564207</size>
  <open-size>19593459</open-size>
</data>
<data type="filelists_db">
  <checksum type="sha256">6cd606547d4f569538d4090e9accdc3c69964de1116b9ab1e0a7864bb1f3ec98</checksum>
  <open-checksum type="sha256">8135f93597ef335a32817b598b45d9f48a1f10271d0ae4263c2860092aab8cba</open-checksum>
  <location href="repodata/6cd606547d4f569538d4090e9accdc3c69964de1116b9ab1e0a7864bb1f3ec98-filelists.sqlite.bz2"/>
  <timestamp>1522137376</timestamp>
  <database_version>10</database_version>
  <size>7019993</size>
  <open-size>45116416</open-size>
</data>
</repomd>

Using the Local Repository

  • Copy the previously saved repo configuration files:
$ cd /opt/kolla/kolla/docker/base/cache
$ cp -rv repo local
  • Change each repository base URL to the placeholder yum_local_repo_url_base, to be replaced with the real address at build time:
--- a/docker/base/cache/local/CentOS-Base.repo
+++ b/docker/base/cache/local/CentOS-Base.repo
@@ -13,7 +13,8 @@
 [base]
 name=CentOS-$releasever - Base
 #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
-baseurl=http://mirrors.ustc.edu.cn/centos/$releasever/os/$basearch/
+#baseurl=http://mirrors.ustc.edu.cn/centos/$releasever/os/$basearch/
+baseurl=http://yum_local_repo_url_base/base/
 gpgcheck=1
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
 
@@ -21,7 +22,8 @@ gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
 [updates]
 name=CentOS-$releasever - Updates
 # mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
-baseurl=http://mirrors.ustc.edu.cn/centos/$releasever/updates/$basearch/
+#baseurl=http://mirrors.ustc.edu.cn/centos/$releasever/updates/$basearch/
+baseurl=http://yum_local_repo_url_base/updates/
 gpgcheck=1
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
 
@@ -29,7 +31,8 @@ gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
 [extras]
 name=CentOS-$releasever - Extras
 # mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
-baseurl=http://mirrors.ustc.edu.cn/centos/$releasever/extras/$basearch/
+#baseurl=http://mirrors.ustc.edu.cn/centos/$releasever/extras/$basearch/
+baseurl=http://yum_local_repo_url_base/extras/
 gpgcheck=1
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
 
@@ -40,4 +43,4 @@ name=CentOS-$releasever - Plus
 baseurl=http://mirrors.ustc.edu.cn/centos/$releasever/centosplus/$basearch/
 gpgcheck=1
 enabled=0
-gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
\ No newline at end of file
+gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
--- a/docker/base/cache/local/CentOS-Ceph-Jewel.repo
+++ b/docker/base/cache/local/CentOS-Ceph-Jewel.repo
@@ -5,7 +5,8 @@
 
 [centos-ceph-jewel]
 name=CentOS-$releasever - Ceph Jewel
-baseurl=http://mirror.centos.org/centos/$releasever/storage/$basearch/ceph-jewel/
+#baseurl=http://mirror.centos.org/centos/$releasever/storage/$basearch/ceph-jewel/
+baseurl=http://yum_local_repo_url_base/centos-ceph-jewel/
 gpgcheck=1
 enabled=1
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
--- a/docker/base/cache/local/CentOS-OpenStack-pike.repo
+++ b/docker/base/cache/local/CentOS-OpenStack-pike.repo
@@ -5,7 +5,8 @@
 
 [centos-openstack-pike]
 name=CentOS-7 - OpenStack pike
-baseurl=http://mirror.centos.org/centos/7/cloud/$basearch/openstack-pike/
+#baseurl=http://mirror.centos.org/centos/7/cloud/$basearch/openstack-pike/
+baseurl=http://yum_local_repo_url_base/centos-openstack-pike/
 gpgcheck=1
 enabled=1
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud
--- a/docker/base/cache/local/CentOS-OpsTools.repo
+++ b/docker/base/cache/local/CentOS-OpsTools.repo
@@ -11,7 +11,8 @@ enabled=0
 
 [centos-opstools-release]
 name=CentOS-7 - OpsTools - release
-baseurl=http://mirror.centos.org/centos/$releasever/opstools/$basearch/
+#baseurl=http://mirror.centos.org/centos/$releasever/opstools/$basearch/
+baseurl=http://yum_local_repo_url_base/centos-opstools-release/
 gpgcheck=1
 enabled=1
 skip_if_unavailable=1
--- a/docker/base/cache/local/CentOS-QEMU-EV.repo
+++ b/docker/base/cache/local/CentOS-QEMU-EV.repo
@@ -5,7 +5,8 @@
 
 [centos-qemu-ev]
 name=CentOS-$releasever - QEMU EV
-baseurl=http://mirror.centos.org/centos/$releasever/virt/$basearch/kvm-common/
+#baseurl=http://mirror.centos.org/centos/$releasever/virt/$basearch/kvm-common/
+baseurl=http://yum_local_repo_url_base/centos-qemu-ev/
 gpgcheck=1
 enabled=1
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Virtualization
--- a/docker/base/cache/local/MariaDB.repo
+++ b/docker/base/cache/local/MariaDB.repo
@@ -1,5 +1,6 @@
 [mariadb]
 name = MariaDB
-baseurl = https://yum.mariadb.org/10.0/centos7-amd64
+#baseurl = https://yum.mariadb.org/10.0/centos7-amd64
+baseurl=http://yum_local_repo_url_base/mariadb/
 gpgkey = https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
 gpgcheck = 1
--- a/docker/base/cache/local/elasticsearch.repo
+++ b/docker/base/cache/local/elasticsearch.repo
@@ -1,6 +1,7 @@
 [elasticsearch-2.x]
 name=Elasticsearch repository for 2.x packages
-baseurl=https://packages.elastic.co/elasticsearch/2.x/centos
+#baseurl=https://packages.elastic.co/elasticsearch/2.x/centos
+baseurl=http://yum_local_repo_url_base/elasticsearch-2.x/
 gpgcheck=1
 gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
 enabled=1
--- a/docker/base/cache/local/epel.repo
+++ b/docker/base/cache/local/epel.repo
@@ -1,7 +1,8 @@
 [epel]
 name=Extra Packages for Enterprise Linux 7 - $basearch
 #baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch
-metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
+#metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
+baseurl=http://yum_local_repo_url_base/epel/
 failovermethod=priority
 enabled=1
 gpgcheck=1
--- a/docker/base/cache/local/grafana.repo
+++ b/docker/base/cache/local/grafana.repo
@@ -1,7 +1,8 @@
 [grafana]
 name=grafana
-baseurl=https://packagecloud.io/grafana/stable/el/7/$basearch
-repo_gpgcheck=1
+#baseurl=https://packagecloud.io/grafana/stable/el/7/$basearch
+baseurl=http://yum_local_repo_url_base/grafana/
+#repo_gpgcheck=1
 enabled=1
 gpgcheck=1
 gpgkey=https://packagecloud.io/gpg.key https://grafanarel.s3.amazonaws.com/RPM-GPG-KEY-grafana
--- a/docker/base/cache/local/influxdb.repo
+++ b/docker/base/cache/local/influxdb.repo
@@ -1,6 +1,7 @@
 [influxdb]
 name = InfluxDB Repository - RHEL $releasever
-baseurl = https://repos.influxdata.com/rhel/$releasever/$basearch/stable
+#baseurl = https://repos.influxdata.com/rhel/$releasever/$basearch/stable
+baseurl=http://yum_local_repo_url_base/influxdb/
 enabled = 1
 gpgcheck = 1
 gpgkey = https://repos.influxdata.com/influxdb.key
--- a/docker/base/cache/local/kibana.yum.repo
+++ b/docker/base/cache/local/kibana.yum.repo
@@ -1,6 +1,7 @@
 [kibana-4.6]
 name=Kibana repository for 4.6.x packages
-baseurl=https://packages.elastic.co/kibana/4.6/centos
+#baseurl=https://packages.elastic.co/kibana/4.6/centos
+baseurl=http://yum_local_repo_url_base/kibana-4.6/
 gpgcheck=1
 gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
 enabled=1
--- a/docker/base/cache/local/opendaylight.repo
+++ b/docker/base/cache/local/opendaylight.repo
@@ -1,5 +1,6 @@
 [opendaylight]
 name=CentOS CBS OpenDaylight Release Repository
-baseurl=http://cbs.centos.org/repos/nfv7-opendaylight-6-release/x86_64/os/
+#baseurl=http://cbs.centos.org/repos/nfv7-opendaylight-6-release/x86_64/os/
+baseurl=http://yum_local_repo_url_base/opendaylight/
 enabled=1
 gpgcheck=0
--- a/docker/base/cache/local/percona-release.repo
+++ b/docker/base/cache/local/percona-release.repo
@@ -3,14 +3,16 @@
 ########################################
 [percona-release-$basearch]
 name = Percona-Release YUM repository - $basearch
-baseurl = http://repo.percona.com/release/$releasever/RPMS/$basearch
+#baseurl = http://repo.percona.com/release/$releasever/RPMS/$basearch
+baseurl=http://yum_local_repo_url_base/percona-release-$basearch/
 enabled = 1
 gpgcheck = 1
 gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Percona
 
 [percona-release-noarch]
 name = Percona-Release YUM repository - noarch
-baseurl = http://repo.percona.com/release/$releasever/RPMS/noarch
+#baseurl = http://repo.percona.com/release/$releasever/RPMS/noarch
+baseurl=http://yum_local_repo_url_base/percona-release-noarch/
 enabled = 1
 gpgcheck = 1
 gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Percona 
--- a/docker/base/cache/local/td.repo
+++ b/docker/base/cache/local/td.repo
@@ -1,5 +1,6 @@
 [treasuredata]
 name=TreasureData
-baseurl=http://packages.treasuredata.com/2/redhat/\$releasever/\$basearch
+#baseurl=http://packages.treasuredata.com/2/redhat/\$releasever/\$basearch
+baseurl=http://yum_local_repo_url_base/treasuredata/
 gpgcheck=1
 gpgkey=https://packages.treasuredata.com/GPG-KEY-td-agent
--- a/docker/base/cache/local/zookeeper.repo
+++ b/docker/base/cache/local/zookeeper.repo
@@ -1,6 +1,7 @@
 [iwienand-zookeeper-el7]
 name=Copr repo for zookeeper-el7 owned by iwienand
-baseurl=https://copr-be.cloud.fedoraproject.org/results/iwienand/zookeeper-el7/epel-7-$basearch/
+#baseurl=https://copr-be.cloud.fedoraproject.org/results/iwienand/zookeeper-el7/epel-7-$basearch/
+baseurl=http://yum_local_repo_url_base/iwienand-zookeeper-el7/
 type=rpm-md
 skip_if_unavailable=True
 gpgcheck=1
  • Create the Yum repository test image:
$ docker run -it --name yum-client -v /opt/:/opt/ centos:7 /bin/bash

$ mkdir -pv /tmp/rpms && cd /tmp/rpms
$ cp -vf /opt/kolla/kolla/docker/base/cache/rpms/*.rpm .
$ rpm -ivh *.rpm
$ cd - && rm -rfv /tmp/rpms

$ mkdir -pv /tmp/keys && cd /tmp/keys
$ cp -vf /opt/kolla/kolla/docker/base/cache/keys/{RPM-GPG-KEY-MariaDB,GPG-KEY-elasticsearch,influxdb.key,gpg.key,RPM-GPG-KEY-grafana,GPG-KEY-td-agent} .
$ rpm --import /tmp/keys/RPM-GPG-KEY-MariaDB \
    && rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-Percona \
    && rpm --import /tmp/keys/GPG-KEY-elasticsearch \
    && rpm --import /tmp/keys/influxdb.key \
    && rpm --import /tmp/keys/gpg.key \
    && rpm --import /tmp/keys/RPM-GPG-KEY-grafana \
    && rpm --import /tmp/keys/GPG-KEY-td-agent \
    && rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7 \
    && rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-OpsTools \
    && rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage \
    && rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Virtualization
$ cd - && rm -rfv /tmp/keys

$ cd /etc/yum.repos.d/
$ rm -vf *.repo
$ cp -vf /opt/kolla/kolla/docker/base/cache/local/*.repo .

$ yum clean all && rm -rf /var/cache/yum
  • Commit the Yum repository test Docker image, then remove the container:
$ docker commit -m "openstack pike yum client." -a "LastRitter<[email protected]>" yum-client yum-client:pike

$ docker rm yum-client
  • Export the Yum repository client Docker image (optional):
$ docker save -o openstack_pike_yum_client_`date +%Y-%m-%d`.tar.gz yum-client:pike
  • Import the Yum repository client Docker image (optional):
$ docker load --input openstack_pike_yum_client_`date +%Y-%m-%d`.tar.gz
  • yum_local_repo_url_base替換爲實際使用的地址192.168.195.131:10022,然後測試Yum倉庫:
$ docker run --rm -it yum-client:pike /bin/bash
$ ls /etc/yum.repos.d/*.repo | xargs sed -i 's/yum_local_repo_url_base/192.168.195.131:10022/g'
$ yum repolist
Loaded plugins: fastestmirror, ovl, priorities
base                                                                  | 2.9 kB  00:00:00     
centos-ceph-jewel                                                     | 2.9 kB  00:00:00     
centos-openstack-pike                                                 | 2.9 kB  00:00:00     
centos-opstools-release                                               | 2.9 kB  00:00:00     
centos-qemu-ev                                                        | 2.9 kB  00:00:00     
elasticsearch-2.x                                                     | 2.9 kB  00:00:00     
epel                                                                  | 2.9 kB  00:00:00     
extras                                                                | 2.9 kB  00:00:00     
grafana                                                               | 2.9 kB  00:00:00     
influxdb                                                              | 2.9 kB  00:00:00     
iwienand-zookeeper-el7                                                | 2.9 kB  00:00:00     
kibana-4.6                                                            | 2.9 kB  00:00:00     
mariadb                                                               | 2.9 kB  00:00:00     
opendaylight                                                          | 2.9 kB  00:00:00     
percona-release-noarch                                                | 2.9 kB  00:00:00     
percona-release-x86_64                                                | 2.9 kB  00:00:00     
treasuredata                                                          | 2.9 kB  00:00:00     
updates                                                               | 2.9 kB  00:00:00     
(1/18): centos-ceph-jewel/primary_db                                  |  62 kB  00:00:00     
(2/18): elasticsearch-2.x/primary_db                                  | 9.4 kB  00:00:00     
(3/18): centos-opstools-release/primary_db                            | 155 kB  00:00:00     
(4/18): centos-qemu-ev/primary_db                                     |  34 kB  00:00:00     
(5/18): grafana/primary_db                                            |  12 kB  00:00:00     
(6/18): centos-openstack-pike/primary_db                              | 933 kB  00:00:00     
(7/18): influxdb/primary_db                                           |  29 kB  00:00:00     
(8/18): kibana-4.6/primary_db                                         |  42 kB  00:00:00     
(9/18): iwienand-zookeeper-el7/primary_db                             | 2.4 kB  00:00:00     
(10/18): extras/primary_db                                            | 184 kB  00:00:00     
(11/18): epel/primary_db                                              | 6.2 MB  00:00:00     
(12/18): base/primary_db                                              | 5.7 MB  00:00:00     
(13/18): mariadb/primary_db                                           |  21 kB  00:00:00     
(14/18): percona-release-noarch/primary_db                            |  15 kB  00:00:00     
(15/18): opendaylight/primary_db                                      | 2.4 kB  00:00:00     
(16/18): percona-release-x86_64/x86_64/primary_db                     |  40 kB  00:00:00     
(17/18): treasuredata/primary_db                                      |  47 kB  00:00:00     
(18/18): updates/primary_db                                           | 6.9 MB  00:00:00     
Determining fastest mirrors
repo id                            repo name                                           status
base                               CentOS-7 - Base                                      9591
centos-ceph-jewel                  CentOS-7 - Ceph Jewel                                  92
centos-openstack-pike              CentOS-7 - OpenStack pike                            2389
centos-opstools-release            CentOS-7 - OpsTools - release                         427
centos-qemu-ev                     CentOS-7 - QEMU EV                                     47
elasticsearch-2.x                  Elasticsearch repository for 2.x packages              22
epel                               Extra Packages for Enterprise Linux 7 - x86_64      12439
extras                             CentOS-7 - Extras                                     444
grafana                            grafana                                                33
influxdb                           InfluxDB Repository - RHEL 7                          104
iwienand-zookeeper-el7             Copr repo for zookeeper-el7 owned by iwienand           1
kibana-4.6                         Kibana repository for 4.6.x packages                   14
mariadb                            MariaDB                                                15
opendaylight                       CentOS CBS OpenDaylight Release Repository              1
percona-release-noarch             Percona-Release YUM repository - noarch                26
percona-release-x86_64/x86_64      Percona-Release YUM repository - x86_64                70
treasuredata                       TreasureData                                           15
updates                            CentOS-7 - Updates                                   2411
repolist: 28141

Creating a Local Pip Repository

Initializing the Pip Container

  • Create the pip-server container:
$ mkdir -pv /opt/pip
$ docker run -it --name pip-server -v /opt/:/opt/ centos:7 /bin/bash
  • Install the basic packages:
$ yum install -y epel-release
$ yum install -y python-pip httpd-tools

$ pip install --upgrade pip
$ pip install pypiserver pip2pi passlib

Starting the Pip Service

  • Set a password and start the pypi-server:
$ htpasswd -sc ~/.htaccess admin
New password: 123456
Re-type new password: 123456
Adding password for user admin

$ pypi-server -p 3141 -P ~/.htaccess /opt/pip
  • pip-server容器中另外開啓一個Shell,測試服務是否正常:
$ docker exec -it pip-server bash

$ curl -X GET http://localhost:3141
<html><head><title>Welcome to pypiserver!</title></head><body>
<h1>Welcome to pypiserver!</h1>
<p>This is a PyPI compatible package index serving 473 packages.</p>

<p> To use this server with pip, run the the following command:
<blockquote><pre>
pip install --extra-index-url http://localhost:3141/ PACKAGE [PACKAGE2...]
</pre></blockquote></p>

<p> To use this server with easy_install, run the the following command:
<blockquote><pre>
easy_install -i http://localhost:3141/simple/ PACKAGE
</pre></blockquote></p>

<p>The complete list of all packages can be found <a href="/packages/">here</a>
or via the <a href="/simple/">simple</a> index.</p>

<p>This instance is running version 1.2.1 of the
  <a href="https://pypi.python.org/pypi/pypiserver">pypiserver</a> software.</p>
</body></html>

Backing Up the Pip Image

  • Commit the pip-server image, then remove the container:
$ docker commit -m "openstack pike pip server." -a "LastRitter<[email protected]>" pip-server pip-server:pike

$ docker rm pip-server
  • Export the pip-server image (optional):
$ docker save -o openstack_pike_pip_server_`date +%Y-%m-%d`.tar.gz pip-server:pike
  • Import the pip-server image (optional):
$ docker load --input openstack_pike_pip_server_`date +%Y-%m-%d`.tar.gz

Testing the Pip Image

  • Start a pip-server container:
$ docker run --name=pip-server --restart=always -d -p 3141:3141 -v /opt/:/opt/ pip-server:pike pypi-server -p 3141 -P ~/.htaccess /opt/pip
  • From the host, check that pip-server is working:
$ curl -X GET http://192.168.195.131:3141
<html><head><title>Welcome to pypiserver!</title></head><body>
<h1>Welcome to pypiserver!</h1>
<p>This is a PyPI compatible package index serving 996 packages.</p>

<p> To use this server with pip, run the the following command:
<blockquote><pre>
pip install --extra-index-url http://192.168.195.131:3141/ PACKAGE [PACKAGE2...]
</pre></blockquote></p>

<p> To use this server with easy_install, run the the following command:
<blockquote><pre>
easy_install -i http://192.168.195.131:3141/simple/ PACKAGE
</pre></blockquote></p>

<p>The complete list of all packages can be found <a href="/packages/">here</a>
or via the <a href="/simple/">simple</a> index.</p>

<p>This instance is running version 1.2.1 of the
  <a href="https://pypi.python.org/pypi/pypiserver">pypiserver</a> software.</p>
</body></html>

Downloading Packages

  • Download a single package:
$ docker exec -it pip-server bash

$ cd /opt/pip/ && pip download "tox==2.9.1"
  • Download packages in batches (which packages to download can be determined from the Dockerfile.j2 of each image being built):
$ docker exec -it pip-server bash

$ vi /opt/requirements.txt
pytest===3.1.3
tox===2.9.1

$ cd /opt/pip/ && pip download -r /opt/requirements.txt
  • Build the index:
$ docker exec -it pip-server dir2pi --normalize-package-names /opt/pip/
# Or
$ docker run --rm -it -v /opt/:/opt/ pip-server:pike dir2pi --normalize-package-names /opt/pip/
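For reference, --normalize-package-names folds project names into the lowercase, hyphenated form used by the /simple/ index. A rough stand-alone sketch of that folding (Flask_RESTful is just an example name, and this approximation does not collapse repeated separators the way a full PEP 503 normalizer would):

```shell
# Rough PEP 503-style normalization: lowercase, then map '_' and '.' to '-'
# (an approximation; dir2pi handles more cases).
name="Flask_RESTful"
normalized=$(echo "$name" | tr '[:upper:]' '[:lower:]' | tr '_.' '-')
echo "$normalized"
```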

Using the Pip Repository

  • Configure the host to trust pip-server:
$ mkdir -pv ~/.pip/
$ vi ~/.pip/pip.conf
[global]
trusted-host = 192.168.195.131
index-url = http://192.168.195.131:3141/simple
  • Install a previously cached package:
# pip install -i http://192.168.195.131:3141/simple/ tox
$ pip install tox
Collecting tox
  Downloading http://192.168.195.131:3141/packages/simple/tox/tox-2.9.1-py2.py3-none-any.whl (73kB)
    100% |████████████████████████████████| 81kB 49.7MB/s 
Requirement already satisfied: virtualenv>=1.11.2; python_version != "3.2" in /usr/lib/python2.7/site-packages (from tox)
Requirement already satisfied: pluggy<1.0,>=0.3.0 in /usr/lib/python2.7/site-packages (from tox)
Requirement already satisfied: six in /usr/lib/python2.7/site-packages (from tox)
Requirement already satisfied: py>=1.4.17 in /usr/lib/python2.7/site-packages (from tox)
Installing collected packages: tox
Successfully installed tox-2.9.1

$ docker logs pip-server
172.17.0.1 - - [31/Mar/2018 15:47:35] "GET / HTTP/1.1" 200 796
192.168.195.131 - - [31/Mar/2018 15:51:27] "GET / HTTP/1.1" 200 808
192.168.195.131 - - [31/Mar/2018 15:57:25] "GET /simple/tox/ HTTP/1.1" 200 464
192.168.195.131 - - [31/Mar/2018 15:57:25] "GET /packages/simple/tox/tox-2.9.1-py2.py3-none-any.whl HTTP/1.1" 200 73454
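The pip.conf above hard-codes the host; when scripting the setup it can be derived from a single "ip:port" value instead. A minimal sketch, writing to a temporary directory rather than ~/.pip (the address is this article's example):

```shell
# Sketch: derive trusted-host from an "ip:port" repo address and render
# pip.conf into a temp dir (in practice, write to ~/.pip/pip.conf).
repo="192.168.195.131:3141"
dir=$(mktemp -d)
cat > "$dir/pip.conf" <<EOF
[global]
trusted-host = ${repo%%:*}
index-url = http://$repo/simple
EOF
cat "$dir/pip.conf"
```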

Creating a Local Git Repository

Configuring the MySQL Service

  • Start the mysql-server container with the password set to 123456:
$ docker run -d --name mysql-server -p 13306:3306 -e MYSQL_ROOT_PASSWORD=123456 mysql
  • Enter the mysql-server container and install the basic packages:
$ docker exec -it mysql-server bash
$ apt-get update
$ apt-get install vim
  • Configure the mysql-server container to add UTF-8 support:
$ vi /etc/mysql/mysql.conf.d/mysqld.cnf
[mysqld]
pid-file    = /var/run/mysqld/mysqld.pid
socket      = /var/run/mysqld/mysqld.sock
datadir     = /var/lib/mysql
character-set-server = utf8
init_connect = 'SET NAMES utf8'

Backing Up the MySQL Image

  • Commit the mysql-server image, then remove the container:
$ docker commit -m "mysql server." -a "LastRitter<[email protected]>" mysql-server mysql-server

$ docker rm mysql-server
  • Export the mysql-server image (optional):
$ docker save -o openstack_pike_mysql_server_`date +%Y-%m-%d`.tar.gz mysql-server
  • Import the mysql-server image (optional):
$ docker load --input openstack_pike_mysql_server_`date +%Y-%m-%d`.tar.gz

Starting the MySQL Service

  • Start the mysql-server container:
$ mkdir -pv /opt/mysql
$ docker run -d --name mysql-server --restart=always -p 13306:3306 -e MYSQL_ROOT_PASSWORD=123456 -v /opt/mysql:/var/lib/mysql -v /etc/localtime:/etc/localtime mysql-server
  • From the host, check that the mysql-server service is working:
$ yum install -y mysql
$ mysql -h 192.168.195.131 -P 13306 -uroot -p123456
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.7.21 MySQL Community Server (GPL)

Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> exit
Bye

Creating the Gogs Database

Enter the mysql-server container and create the Gogs database:

$ docker exec -it mysql-server bash
$ mysql -h 127.0.0.1 -uroot -p123456
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.7.21 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> create database gogs default character set utf8 collate utf8_general_ci;
Query OK, 1 row affected (0.00 sec)

mysql> exit

Starting the Gogs Service

  • Start the gogs-server container:
$ mkdir -pv /opt/gogs
$ docker run -d --name=gogs-server --restart=always -p 15555:22 -p 13000:3000 -v /opt/gogs:/data -v /etc/localtime:/etc/localtime gogs/gogs
  • Open the Gogs web page at http://192.168.195.131:13000, complete the initial setup, create an administrator account, and finally configure an SSH key:
Database host: 192.168.195.131:13306
Database user: root
Database password: 123456

Domain: 192.168.195.131
SSH port: 15555

HTTP port: 13000
Application URL: http://192.168.195.131:13000/

Syncing the Nova Source Code

  • Clone the official source:
$ git clone https://git.openstack.org/openstack/nova
# Or
$ git clone https://github.com/openstack/nova.git
  • Gogs中創建Nova項目,然後將其添加爲遠程倉庫:
$ cd nova
$ git remote add local ssh://[email protected]:15555/lastritter/nova.git

$ git remote -v          
local	ssh://[email protected]:15555/lastritter/nova.git (fetch)
local	ssh://[email protected]:15555/lastritter/nova.git (push)
origin	https://github.com/openstack/nova.git (fetch)
origin	https://github.com/openstack/nova.git (push)
  • Push the master and stable/pike branches to the local Gogs repository:
$ git push local master:master
$ git checkout -b pike/origin remotes/origin/stable/pike
$ git push local pike/origin:pike/origin
  • Inspect the tag information for the Nova 16.1.0 release:
$ git show 16.1.0
tag 16.1.0
Tagger: OpenStack Release Bot <[email protected]>
Date:   Thu Feb 15 23:53:11 2018 +0000

nova 16.1.0 release

meta:version: 16.1.0
meta:diff-start: -
meta:series: pike
meta:release-type: release
meta:pypi: no
meta:first: no
meta:release:Author: Matt Riedemann <[email protected]>
meta:release:Commit: Matt Riedemann <[email protected]>
meta:release:Change-Id: I0c4d2dfc306d711b1f649d94782e8ae40475c43f
meta:release:Code-Review+1: Lee Yarwood <[email protected]>
meta:release:Code-Review+2: Sean McGinnis <[email protected]>
meta:release:Code-Review+2: Tony Breeds <[email protected]>
meta:release:Workflow+1: Tony Breeds <[email protected]>
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQEcBAABAgAGBQJahh1nAAoJEEIeZHKBH5qBF70H/ibKNK+jHbcJrqc49ZDC8bU5
86539Pa0QwPbrREwFFzGbt9w9I6Grh9gXa4BpsQAHGM+mRi0RBqCY2WdJ2PoKwEx
fH3bCUaYvS4JFUZgQGmpidWM4RPmhOZ4wmdkbqy4soBrncmMsBxnlJ/q91DlPWUd
KMH4LGInZ0xq3APvYTNP/H8nJttrIQbgy8hgVPrQ+SLw/1hqW9zSkRqHIBGjlcec
EvoQD+2CBQ8Cthn7lsB+5h7x+efYgv+3kAwzvBslMLDp6y+x9VEzkhQIDX4xaD7j
9dLv+0p/OevA7gmC54rTs15R00qf2JzaMqaQ2tUg7HER2S33OfEOfbkcIqd3B78=
=yqzR
-----END PGP SIGNATURE-----

commit 806eda3da84d6f9b47c036ff138415458b837536
Merge: 6d06aa4 6d1877b
Author: Zuul <[email protected]>
Date:   Tue Feb 13 17:31:57 2018 +0000

    Merge "Query all cells for service version in _validate_bdm" into stable/pike
  • Create a development branch from Nova 16.1.0 and push it to the local Git repository:
$ git checkout -b pike/devel
$ git reset --hard 16.1.0
HEAD is now at 806eda3 Merge "Query all cells for service version in _validate_bdm" into stable/pike

$ git push -u local pike/devel:pike/devel
Total 0 (delta 0), reused 0 (delta 0)
To ssh://[email protected]:15555/lastritter/nova.git
 * [new branch]      pike/devel -> pike/devel
Branch pike/devel set up to track remote branch pike/devel from local.
  • Check the branch status:
$ git branch -av
  master                       c683518 Merge "Fix allocation_candidates not to ignore shared RPs"
* pike/devel                   806eda3 Merge "Query all cells for service version in _validate_bdm" into stable/pike
  pike/origin                  708342f Merge "compute: Cleans up allocations after failed resize" into stable/pike
  remotes/origin/HEAD          -> origin/master
  remotes/origin/master        c683518 Merge "Fix allocation_candidates not to ignore shared RPs"
  remotes/origin/stable/ocata  781e7b3 Merge "Don't try to delete build request during a reschedule" into stable/ocata
  remotes/origin/stable/pike   708342f Merge "compute: Cleans up allocations after failed resize" into stable/pike
  remotes/origin/stable/queens 307382f Use ksa session for cinder microversion check
  remotes/work/master          c683518 Merge "Fix allocation_candidates not to ignore shared RPs"
  remotes/work/pike/devel      806eda3 Merge "Query all cells for service version in _validate_bdm" into stable/pike
  remotes/work/pike/origin     708342f Merge "compute: Cleans up allocations after failed resize" into stable/pike
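The remote-and-push workflow above can be rehearsed without the Gogs server: a local bare repository stands in for the ssh://git@192.168.195.131:15555/... remote. A minimal sketch with throwaway paths and an example commit:

```shell
# Sketch: a bare repo stands in for the Gogs remote; push a branch to it
# with the same "git remote add local … && git push local …" pattern as above.
set -e
work=$(mktemp -d)
git init -q --bare "$work/gogs.git"        # stand-in for the Gogs server
git init -q "$work/nova"
cd "$work/nova"
git config user.email "demo@example.com"   # throwaway identity for the demo commit
git config user.name "demo"
echo demo > README
git add README
git commit -qm "init"
git branch -M master                       # ensure the branch is named master
git remote add local "$work/gogs.git"
git push -q local master:master
git ls-remote --heads local                # the pushed branch is now on the remote
```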

Building the Images

Modifying the Image Templates

  • Edit the base image's Dockerfile template (docker/base/Dockerfile.j2), replacing the section between BEGIN REPO ENABLEMENT and END REPO ENABLEMENT with the following commands:
RUN mkdir -pv /tmp/rpms /tmp/keys

COPY cache/rpms/* /tmp/rpms/
COPY cache/keys/* /tmp/keys/
COPY cache/local_repo.conf /tmp/local_repo.conf

RUN rpm -ivh /tmp/rpms/*.rpm

RUN rpm --import /tmp/keys/RPM-GPG-KEY-MariaDB \
    && rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-Percona \
    && rpm --import /tmp/keys/GPG-KEY-elasticsearch \
    && rpm --import /tmp/keys/influxdb.key \
    && rpm --import /tmp/keys/gpg.key \
    && rpm --import /tmp/keys/RPM-GPG-KEY-grafana \
    && rpm --import /tmp/keys/GPG-KEY-td-agent \
    && rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7 \
    && rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-OpsTools \
    && rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage \
    && rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Virtualization

RUN rm -rfv /tmp/rpms /tmp/keys /etc/yum.repos.d/*

COPY cache/local/* /etc/yum.repos.d/
RUN yum_local_repo_url_base=`cat /tmp/local_repo.conf`;ls /etc/yum.repos.d/*.repo | xargs sed -i 's/yum_local_repo_url_base/'$yum_local_repo_url_base'/g'

RUN rm -rfv /tmp/local_repo.conf
  • Add a configuration file with the local repo address:
$ vi docker/base/cache/local_repo.conf
192.168.195.131:10022
  • Download the dumb-init binary:
$ export http_proxy=192.168.195.1:1080; export https_proxy=$http_proxy

$ curl -sSL https://github.com/Yelp/dumb-init/releases/download/v1.1.3/dumb-init_1.1.3_amd64 -o /opt/kolla/kolla/docker/base/cache/dumb-init
  • Use the local dumb-init binary:
--- a/docker/base/Dockerfile.j2
+++ b/docker/base/Dockerfile.j2
@@ -253,8 +253,8 @@ COPY curlrc /root/.curlrc
 
 {% if base_arch == 'x86_64' %}
 
-RUN curl -sSL https://github.com/Yelp/dumb-init/releases/download/v1.1.3/dumb-init_1.1.3_amd64 -o /usr/local/bin/dumb-init \
-    && chmod +x /usr/local/bin/dumb-init \
+COPY cache/dumb-init /usr/local/bin/dumb-init
+RUN  chmod +x /usr/local/bin/dumb-init \
     && sed -i 's|#!|#!/usr/local/bin/dumb-init |' /usr/local/bin/kolla_start
 
 {% else %}
  • Download the get-pip.py and requirements-stable-pike.tar.gz files:
$ mkdir -pv /opt/kolla/kolla/docker/openstack-base/cache
$ curl https://bootstrap.pypa.io/get-pip.py -o /opt/kolla/kolla/docker/openstack-base/cache/get-pip.py

$ curl http://tarballs.openstack.org/requirements/requirements-stable-pike.tar.gz -o /opt/kolla/kolla/docker/openstack-base/cache/requirements-stable-pike.tar.gz
  • Modify the corresponding Dockerfile template:
--- a/docker/openstack-base/Dockerfile.j2
+++ b/docker/openstack-base/Dockerfile.j2
@@ -276,8 +276,8 @@ ENV DEBIAN_FRONTEND noninteractive
 {{ macros.install_packages(openstack_base_packages | customizable("packages")) }}
 
 {% block source_install_python_pip %}
-RUN curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py \
-    && python get-pip.py \
+COPY cache/get-pip.py get-pip.py
+RUN python get-pip.py \
     && rm get-pip.py
 {% endblock %}

@@ -393,7 +393,8 @@ RUN python get-pip.py \
     ]
 %}
 
-ADD openstack-base-archive /openstack-base-source
+COPY cache/requirements-stable-pike.tar.gz /tmp/requirements-stable-pike.tar.gz
+RUN mkdir -pv /openstack-base-source && tar xvf /tmp/requirements-stable-pike.tar.gz -C /openstack-base-source && rm -rfv /tmp/requirements-stable-pike.tar.gz
 RUN ln -s openstack-base-source/* /requirements \
     && mkdir -p /var/lib/kolla \
     && {{ macros.install_pip(['virtualenv'], constraints = false)}} \
  • Use the local Pip service:
--- a/docker/openstack-base/Dockerfile.j2
+++ b/docker/openstack-base/Dockerfile.j2
@@ -280,7 +280,10 @@ ENV DEBIAN_FRONTEND noninteractive
 #COPY cache/get-pip.py get-pip.py
 #RUN python get-pip.py \
 #    && rm get-pip.py
-RUN pip install --upgrade pip
+RUN mkdir -pv ~/.pip/
+COPY cache/local_repo.conf /tmp/local_repo.conf
+COPY cache/pip.conf /root/.pip/pip.conf
+RUN pip_ip=`cat /tmp/local_repo.conf | awk -F : '{print $1}'`; pip_port=`cat /tmp/local_repo.conf | awk -F : '{print $2}'`; sed -i 's/pip_local_repo_ip/'$pip_ip'/g' /root/.pip/pip.conf; sed -i 's/pip_local_repo_port/'$pip_port'/g' /root/.pip/pip.conf; pip install --upgrade pip && rm -rfv /tmp/local_repo.conf
 {% endblock %}
 
 {% set openstack_base_pip_packages = [
--- /dev/null
+++ b/docker/openstack-base/cache/local_repo.conf
@@ -0,0 +1 @@
+192.168.195.131:3141
--- /dev/null
+++ b/docker/openstack-base/cache/pip.conf
@@ -0,0 +1,3 @@
+[global]
+trusted-host = pip_local_repo_ip
+index-url = http://pip_local_repo_ip:pip_local_repo_port/simple
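Both RUN blocks above follow the same pattern: a one-line conf file carries the address, and sed rewrites a placeholder at build time. A stand-alone sketch of the base image's substitution, run against a throwaway .repo file (192.168.195.131:10022 is this article's example mirror address):

```shell
# Replicates the base image's RUN line outside Docker: read the address
# from local_repo.conf, then sed every *.repo file in place.
dir=$(mktemp -d)
echo "192.168.195.131:10022" > "$dir/local_repo.conf"
printf '[opendaylight]\nbaseurl=http://yum_local_repo_url_base/opendaylight/\n' > "$dir/test.repo"

yum_local_repo_url_base=$(cat "$dir/local_repo.conf")
ls "$dir"/*.repo | xargs sed -i 's/yum_local_repo_url_base/'$yum_local_repo_url_base'/g'
grep baseurl "$dir/test.repo"
```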

Basic Usage

To view the command help:

$ python tools/build.py --help         
usage: kolla-build [-h] [--base BASE] [--base-arch BASE_ARCH]
                   [--base-image BASE_IMAGE] [--base-tag BASE_TAG]
                   [--build-args BUILD_ARGS] [--cache] [--config-dir DIR]
                   [--config-file PATH] [--debug] [--docker-dir DOCKER_DIR]
                   [--format FORMAT] [--keep] [--list-dependencies]
                   [--list-images] [--logs-dir LOGS_DIR]
                   [--namespace NAMESPACE] [--nocache] [--nodebug] [--nokeep]
                   [--nolist-dependencies] [--nolist-images] [--nopull]
                   [--nopush] [--noskip-existing] [--noskip-parents]
                   [--notemplate-only] [--profile PROFILE] [--pull] [--push]
                   [--push-threads PUSH_THREADS] [--registry REGISTRY]
                   [--retries RETRIES] [--save-dependency SAVE_DEPENDENCY]
                   [--skip-existing] [--skip-parents] [--tag TAG]
                   [--tarballs-base TARBALLS_BASE] [--template-only]
                   [--template-override TEMPLATE_OVERRIDE] [--threads THREADS]
                   [--timeout TIMEOUT] [--type INSTALL_TYPE] [--version]
                   [--work-dir WORK_DIR]
                   [regex [regex ...]]

positional arguments:
  regex                 Build only images matching regex and its dependencies

optional arguments:
  -h, --help            show this help message and exit
  --base BASE, -b BASE  The distro type of the base image. Allowed values are
                        centos, rhel, ubuntu, oraclelinux, debian Allowed
                        values: centos, rhel, ubuntu, oraclelinux, debian
  --base-arch BASE_ARCH
                        The base architecture. Default is same as host Allowed
                        values: x86_64, ppc64le, aarch64
  --base-image BASE_IMAGE
                        The base image name. Default is the same with base.
                        For non-x86 architectures use full name like
                        "aarch64/debian".
  --base-tag BASE_TAG   The base distro image tag
  --build-args BUILD_ARGS
                        Set docker build time variables
  --cache               Use the Docker cache when building
  --config-dir DIR      Path to a config directory to pull `*.conf` files
                        from. This file set is sorted, so as to provide a
                        predictable parse order if individual options are
                        over-ridden. The set is parsed after the file(s)
                        specified via previous --config-file, arguments hence
                        over-ridden options in the directory take precedence.
  --config-file PATH    Path to a config file to use. Multiple config files
                        can be specified, with values in later files taking
                        precedence. Defaults to None.
  --debug, -d           Turn on debugging log level
  --docker-dir DOCKER_DIR, -D DOCKER_DIR
                        Path to additional docker file template directory
  --format FORMAT, -f FORMAT
                        Format to write the final results in Allowed values:
                        json, none
  --keep                Keep failed intermediate containers
  --list-dependencies, -l
                        Show image dependencies (filtering supported)
  --list-images         Show all available images (filtering supported)
  --logs-dir LOGS_DIR   Path to logs directory
  --namespace NAMESPACE, -n NAMESPACE
                        The Docker namespace name
  --nocache             The inverse of --cache
  --nodebug             The inverse of --debug
  --nokeep              The inverse of --keep
  --nolist-dependencies
                        The inverse of --list-dependencies
  --nolist-images       The inverse of --list-images
  --nopull              The inverse of --pull
  --nopush              The inverse of --push
  --noskip-existing     The inverse of --skip-existing
  --noskip-parents      The inverse of --skip-parents
  --notemplate-only     The inverse of --template-only
  --profile PROFILE, -p PROFILE
                        Build a pre-defined set of images, see [profiles]
                        section in config. The default profiles are: infra,
                        main, aux, default, gate
  --pull                Attempt to pull a newer version of the base image
  --push                Push images after building
  --push-threads PUSH_THREADS
                        The number of threads to user while pushing Images.
                        Note: Docker can not handle threading push properly
  --registry REGISTRY   The docker registry host. The default registry host is
                        Docker Hub
  --retries RETRIES, -r RETRIES
                        The number of times to retry while building
  --save-dependency SAVE_DEPENDENCY
                        Path to the file to store the docker image dependency
                        in Graphviz dot format
  --skip-existing       Do not rebuild images present in the docker cache
  --skip-parents        Do not rebuild parents of matched images
  --tag TAG             The Docker tag
  --tarballs-base TARBALLS_BASE
                        Base url to OpenStack tarballs
  --template-only       Don't build images. Generate Dockerfile only
  --template-override TEMPLATE_OVERRIDE
                        Path to template override file
  --threads THREADS, -T THREADS
                        The number of threads to use while building. (Note:
                        setting to one will allow real time logging)
  --timeout TIMEOUT     Time in seconds after which any operation times out
  --type INSTALL_TYPE, -t INSTALL_TYPE
                        The method of the OpenStack install. Allowed values
                        are binary, source, rdo, rhos Allowed values: binary,
                        source, rdo, rhos
  --version             show program's version number and exit
  --work-dir WORK_DIR   Path to be used as working directory.By default, a
                        temporary dir is created
  • --base BASE, -b BASE specifies the distro of the base image; the default is centos, and the allowed values are centos, rhel, ubuntu, oraclelinux, and debian;
  • --base-arch BASE_ARCH specifies the architecture; the default is the same as the host, and the allowed values are x86_64, ppc64le, and aarch64;
  • --base-image BASE_IMAGE specifies the base image name; the default is the same as --base. For non-x86 architectures, use the full name, such as aarch64/debian;
  • --base-tag BASE_TAG specifies the tag of the base distro image;
  • --build-args BUILD_ARGS sets Docker build-time variables;
  • --cache uses the Docker cache when building;
  • --config-dir DIR specifies a directory of *.conf configuration files; they are parsed in sorted order, after any files given via --config-file, so options set there take precedence;
  • --config-file PATH specifies a configuration file to use; it can be given multiple times, with later files taking precedence;
  • --debug, -d turns on debug-level logging;
  • --docker-dir DOCKER_DIR, -D DOCKER_DIR specifies an additional Dockerfile template directory;
  • --format FORMAT, -f FORMAT specifies the format of the final results; the allowed values are json and none;
  • --keep keeps failed intermediate containers;
  • --list-dependencies, -l shows image dependencies (filtering supported);
  • --list-images shows all available images (filtering supported);
  • --logs-dir LOGS_DIR specifies the logs directory;
  • --namespace NAMESPACE, -n NAMESPACE specifies the Docker namespace of the images;
  • --nocache disables the build cache;
  • --nodebug disables debug output;
  • --nokeep does not keep failed intermediate containers;
  • --nolist-dependencies does not show image dependencies;
  • --nolist-images does not show the available images;
  • --nopull does not attempt to pull a newer base image;
  • --nopush does not push images after building;
  • --noskip-existing does not skip existing images when building;
  • --noskip-parents does not skip parent images when building;
  • --notemplate-only generates the Dockerfiles and also builds the images;
  • --profile PROFILE, -p PROFILE builds a set of images predefined in the [profiles] section of the configuration file; the default profiles are infra, main, aux, default, and gate;
  • --pull attempts to pull the latest base image when building;
  • --push pushes images after building;
  • --push-threads PUSH_THREADS specifies the number of threads to use while pushing;
  • --registry REGISTRY specifies the Docker registry to push to; the default is Docker Hub;
  • --retries RETRIES, -r RETRIES specifies the number of times to retry while building;
  • --save-dependency SAVE_DEPENDENCY saves the image dependency graph in Graphviz dot format to the given path;
  • --skip-existing does not rebuild images present in the Docker cache;
  • --skip-parents does not rebuild the parents of matched images;
  • --tag TAG specifies the tag of the generated images;
  • --tarballs-base TARBALLS_BASE specifies the base URL for the OpenStack tarballs;
  • --template-only only generates the Dockerfiles, without building images;
  • --template-override TEMPLATE_OVERRIDE specifies a template override file;
  • --threads THREADS, -T THREADS specifies the number of build threads (setting it to one allows real-time logging);
  • --timeout TIMEOUT sets the operation timeout in seconds;
  • --type INSTALL_TYPE, -t INSTALL_TYPE sets the OpenStack install type; the allowed values are binary, source, rdo, and rhos;
  • --version shows the program version;
  • --work-dir WORK_DIR specifies the working directory; by default a temporary directory is created.

Running the Build

  • Build all source images with the default configuration:
$ python tools/build.py -t source
  • Build openstack-base and the source images it depends on, using the default configuration without the cache:
$ python tools/build.py -t source --nocache openstack-base
  • Build the openstack-base image using a profile:
$ cp -v /opt/kolla/kolla/etc/kolla/kolla-build.conf /opt/kolla/

$ vi /opt/kolla/kolla-build.conf
[profiles]
myprofile=openstack-base

$ python tools/build.py -t source --debug --nocache --nopull --work-dir /opt/kolla/ --config-file /opt/kolla/kolla-build.conf --profile myprofile
  • Build the nova-base source image with the default configuration, without using the cache and without building or pulling its parent images:
$ python tools/build.py -t source --nocache --skip-parents --nopull nova-base
  • Build the nova-base image from local source:
$ mkdir -pv docker/nova/nova-base/cache

$ wget http://tarballs.openstack.org/nova/nova-16.1.0.tar.gz -O /opt/kolla/kolla/docker/nova/nova-base/cache/nova-16.1.0.tar.gz

$ wget http://tarballs.openstack.org/blazar/blazar-0.3.0.tar.gz -O /opt/kolla/kolla/docker/nova/nova-base/cache/blazar-0.3.0.tar.gz


$ vi /opt/kolla/kolla-build.conf
[nova-base]
type = local
location = /opt/kolla/kolla/docker/nova/nova-base/cache/nova-16.1.0.tar.gz

[nova-base-plugin-blazar]
type = local
location = /opt/kolla/kolla/docker/nova/nova-base/cache/blazar-0.3.0.tar.gz


$ python tools/build.py -t source --nocache --skip-parents --nopull --config-file /opt/kolla/kolla-build.conf nova-base
  • Build the image from a Git source (even without the --nocache parameter, the source is updated correctly):
$ vi /opt/kolla/kolla-build.conf
[nova-base]
type = git
location = http://192.168.195.131:13000/lastritter/nova.git
reference = pike/devel

[nova-base-plugin-blazar]
type = local
location = /opt/kolla/kolla/docker/nova/nova-base/cache/blazar-0.3.0.tar.gz


$ python tools/build.py -t source --skip-parents --nopull --config-file /opt/kolla/kolla-build.conf nova-base

Creating the build image

  • Start the Docker-in-Docker daemon container:
$ docker run --privileged --name dind --restart=always -d docker:stable-dind

$ docker exec -it dind sh  
$ vi /etc/docker/daemon.json
{
  "registry-mirrors": ["https://7g5a4z30.mirror.aliyuncs.com"]
}

$ docker restart dind
  • Start a Docker-in-Docker client container and install the basic packages:
$ docker run -it --name kolla-build --link dind:docker docker:edge sh

$ apk update
$ apk add git py-pip gcc python-dev linux-headers libffi-dev musl-dev openssl-dev perl python sshpass

$ pip install --upgrade pip
$ pip install pyopenssl tox
  • Fetch the Kolla source code:
$ git clone https://gitee.com/lastritter/kolla.git
$ cd kolla
$ git checkout -b devel/pike remotes/origin/devel/pike
  • Generate the configuration file:
$ pip install -r requirements.txt -r test-requirements.txt
$ tox -e genconfig

$ mkdir -pv /etc/kolla/
$ cp -v etc/kolla/kolla-build.conf /etc/kolla/
  • Commit the container as the Kolla build Docker image, then remove the container:
$ docker commit -m "openstack pike kolla build." -a "LastRitter<[email protected]>" kolla-build kolla-build:pike

$ docker rm kolla-build
  • Export the Kolla build Docker image (optional):
$ docker save -o openstack_pike_kolla_build_`date +%Y-%m-%d`.tar.gz kolla-build:pike
  • Import the Kolla build Docker image (optional):
$ docker load --input openstack_pike_kolla_build_`date +%Y-%m-%d`.tar.gz

Quickly deploying the environment

Using the Docker images generated earlier together with the modified Kolla source code, a new build environment can be deployed quickly; alternatively, deploy only the Kolla-Build image and keep using the existing services for the rest. The host IP of the deployment environment is 172.29.101.166.

Deploying the Yum service

  • Import the Yum sync image:
$ mkdir -pv /opt/yum/repo
$ docker load --input /path/to/openstack_pike_yum_sync_2018-03-27.tar.gz
  • Sync the remote Yum repositories:
$ docker run --rm -it -v /opt:/opt/ yum-sync:pike reposync -p /opt/yum/repo/
# Or
$ docker run --rm -it -v /opt:/opt/ yum-sync:pike reposync -p /opt/yum/repo/ --repoid=base
  • Create the package index files for the local Yum repositories:
$ yum install -y createrepo
$ ls /opt/yum/repo/ | xargs -I {} createrepo -p /opt/yum/repo/{}
  • Start the Yum repository service, exposing port 12222:
$ docker run --name=yum-server --restart=always -d -p 12222:80 -v /opt/yum/repo:/usr/share/nginx/html nginx
  • Check that the Yum repository service is working:
$ curl -X GET http://172.29.101.166:12222/base/repodata/repomd.xml
  • Import the Yum client test Docker image:
$ docker load --input /path/to/openstack_pike_yum_client_2018-03-30.tar.gz
  • Verify the Yum repositories from a client:
$ docker run --rm -it yum-client:pike /bin/bash

$ ls /etc/yum.repos.d/*.repo | xargs sed -i 's/yum_local_repo_url_base/172.29.101.166:12222/g'

$ yum repolist

Deploying the Pip service

  • Import the pip-server image:
$ docker load --input openstack_pike_pip_server_2018-03-31.tar.gz
  • Start a pip-server container:
$ mkdir -pv /opt/pip
$ docker run --name=pip-server --restart=always -d -p 3141:3141 -v /opt/:/opt/ pip-server:pike pypi-server -p 3141 -P ~/.htaccess /opt/pip
  • Download packages in batch:
$ vi /opt/requirements.txt

$ docker run --rm -it -v /opt/:/opt/ pip-server:pike bash
$ cd /opt/pip/ && pip download -r /opt/requirements.txt
  • Build the index:
$ docker run --rm -it -v /opt/:/opt/ pip-server:pike dir2pi --normalize-package-names /opt/pip/

Deploying the Kolla environment

  • Import the Kolla-Build image:
$ docker load --input openstack_pike_kolla_build_2018-03-30.tar.gz
  • Start the Kolla-Build container:
$ docker run --privileged --name dind --restart=always -d docker:stable-dind

$ docker exec -it dind sh  
$ vi /etc/docker/daemon.json
{
  "registry-mirrors": ["https://7g5a4z30.mirror.aliyuncs.com"]
}

$ docker restart dind

$ docker run -it --name kolla-build --link dind:docker kolla-build:pike sh
  • Replace the IP addresses of the local Yum and Pip services:
$ cd kolla && git pull
$ vi docker/base/cache/local_repo.conf
172.29.101.166:12222
$ vi docker/openstack-base/cache/local_repo.conf
172.29.101.166:3141

Starting the build

  • Build the openstack-base image:
$ docker pull centos:7

$ python tools/build.py -t source --nocache openstack-base
# Or
$ python tools/build.py -t source --skip-parents openstack-base
  • List the successfully built images:
$ docker run -it --rm --link dind:docker kolla-build:pike docker images

Image startup analysis

For easier understanding, the analysis below uses the Dockerfiles generated by the following command:

$ python tools/build.py -t source --template-only --work-dir=..
  • In the base image's Dockerfile, the container uses dumb-init at startup to create the init process environment and then runs the kolla_start command (installed from the start.sh script) (../docker/base/Dockerfile):
COPY set_configs.py /usr/local/bin/kolla_set_configs
COPY start.sh /usr/local/bin/kolla_start
COPY sudoers /etc/sudoers
COPY curlrc /root/.curlrc

COPY cache/dumb-init /usr/local/bin/dumb-init
RUN  chmod +x /usr/local/bin/dumb-init \
    && sed -i 's|#!|#!/usr/local/bin/dumb-init |' /usr/local/bin/kolla_start

RUN touch /usr/local/bin/kolla_extend_start \
    && chmod 755 /usr/local/bin/kolla_start /usr/local/bin/kolla_extend_start /usr/local/bin/kolla_set_configs \
    && chmod 440 /etc/sudoers \
    && mkdir -p /var/log/kolla \
    && chown :kolla /var/log/kolla \
    && chmod 2775 /var/log/kolla \
    && rm -f /tmp/kolla_bashrc

CMD ["kolla_start"]
  • The start.sh script first runs kolla_set_configs (installed from docker/base/set_configs.py) to initialize the configuration, then runs the extended startup hook kolla_extend_start (provided by derived images), and finally executes the command stored in /run_command to start the service (docker/base/start.sh):
#!/bin/bash
set -o errexit

# Processing /var/lib/kolla/config_files/config.json as root.  This is necessary
# to permit certain files to be controlled by the root user which should
# not be writable by the dropped-privileged user, especially /run_command
sudo -E kolla_set_configs
CMD=$(cat /run_command)
ARGS=""

if [[ ! "${!KOLLA_SKIP_EXTEND_START[@]}" ]]; then
    # Run additional commands if present
    . kolla_extend_start
fi

echo "Running command: '${CMD}${ARGS:+ $ARGS}'"
exec ${CMD} ${ARGS}
  • The main function of kolla_set_configs (docker/base/set_configs.py):
def main():
    try:
        parser = argparse.ArgumentParser()
        parser.add_argument('--check',
                            action='store_true',
                            required=False,
                            help='Check whether the configs changed')
        args = parser.parse_args()
        config = load_config()

        if args.check:
            execute_config_check(config)
        else:
            execute_config_strategy(config)
    except ExitingException as e:
        LOG.error("%s: %s", e.__class__.__name__, e)
        return e.exit_code
    except Exception:
        LOG.exception('Unexpected error:')
        return 2
    return 0


if __name__ == "__main__":
    sys.exit(main())
  • kolla_set_configs first loads the configuration: if the KOLLA_CONFIG environment variable is not defined, it falls back to the JSON configuration file /var/lib/kolla/config_files/config.json (docker/base/set_configs.py):
def load_config():
    def load_from_env():
        config_raw = os.environ.get("KOLLA_CONFIG")
        if config_raw is None:
            return None

        # Attempt to read config
        try:
            return json.loads(config_raw)
        except ValueError:
            raise InvalidConfig('Invalid json for Kolla config')

    def load_from_file():
        config_file = os.environ.get("KOLLA_CONFIG_FILE")
        if not config_file:
            config_file = '/var/lib/kolla/config_files/config.json'
        LOG.info("Loading config file at %s", config_file)

        # Attempt to read config file
        with open(config_file) as f:
            try:
                return json.load(f)
            except ValueError:
                raise InvalidConfig(
                    "Invalid json file found at %s" % config_file)
            except IOError as e:
                raise InvalidConfig(
                    "Could not read file %s: %r" % (config_file, e))

    config = load_from_env()
    if config is None:
        config = load_from_file()

    LOG.info('Validating config file')
    validate_config(config)
    return config
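The precedence between the two sources can be sketched outside the container. Here load_kolla_config is a simplified, hypothetical stand-in for load_config: validation is omitted, and file reading is injectable purely for illustration:

```python
import json


def load_kolla_config(environ, read_file=None):
    """Simplified stand-in for kolla's load_config: the KOLLA_CONFIG
    environment variable wins; otherwise the (possibly overridden)
    config file path is consulted."""
    raw = environ.get("KOLLA_CONFIG")
    if raw is not None:
        return json.loads(raw)
    path = environ.get("KOLLA_CONFIG_FILE",
                       "/var/lib/kolla/config_files/config.json")
    return read_file(path)


# The inline JSON takes precedence, so no file is ever read:
cfg = load_kolla_config({"KOLLA_CONFIG": '{"command": "nova-compute"}'})
print(cfg["command"])  # nova-compute
```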
  • kolla_set_configs finally copies the OpenStack components' configuration files according to the configuration file and environment variables, and writes the startup command into the /run_command file (docker/base/set_configs.py):
def execute_config_strategy(config):
    config_strategy = os.environ.get("KOLLA_CONFIG_STRATEGY")
    LOG.info("Kolla config strategy set to: %s", config_strategy)
    if config_strategy == "COPY_ALWAYS":
        copy_config(config)
        handle_permissions(config)
    elif config_strategy == "COPY_ONCE":
        if os.path.exists('/configured'):
            raise ImmutableConfig(
                "The config strategy prevents copying new configs",
                exit_code=0)
        else:
            copy_config(config)
            handle_permissions(config)
            os.mknod('/configured')
    else:
        raise InvalidConfig('KOLLA_CONFIG_STRATEGY is not set properly')

def copy_config(config):
    if 'config_files' in config:
        LOG.info('Copying service configuration files')
        for data in config['config_files']:
            config_file = ConfigFile(**data)
            config_file.copy()
    else:
        LOG.debug('No files to copy found in config')

    LOG.info('Writing out command to execute')
    LOG.debug("Command is: %s", config['command'])
    # The value from the 'command' key will be written to '/run_command'
    with open('/run_command', 'w+') as f:
        f.write(config['command'])
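The difference between the two strategies can be modelled with the /configured marker reduced to a sentinel file in a scratch directory. This is a toy sketch, not kolla's code; apply_strategy is a hypothetical helper:

```python
import os
import tempfile


def apply_strategy(strategy, state_dir, copy_fn):
    """Toy model of execute_config_strategy: COPY_ALWAYS copies on every
    start, COPY_ONCE copies only until the sentinel file exists."""
    sentinel = os.path.join(state_dir, "configured")
    if strategy == "COPY_ALWAYS":
        copy_fn()
        return "copied"
    if strategy == "COPY_ONCE":
        if os.path.exists(sentinel):
            return "skipped"
        copy_fn()
        open(sentinel, "w").close()
        return "copied"
    raise ValueError("KOLLA_CONFIG_STRATEGY is not set properly")


with tempfile.TemporaryDirectory() as d:
    calls = []
    print(apply_strategy("COPY_ONCE", d, lambda: calls.append(1)))  # copied
    print(apply_strategy("COPY_ONCE", d, lambda: calls.append(1)))  # skipped
    print(len(calls))  # 1
```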
  • Inspect the environment variables and configuration in the nova_compute container:
$ docker exec -it nova_compute bash

$ env | grep KOLLA
KOLLA_CONFIG_STRATEGY=COPY_ALWAYS
KOLLA_BASE_DISTRO=centos
KOLLA_INSTALL_TYPE=source
PS1=$(tput bold)($(printenv KOLLA_SERVICE_NAME))$(tput sgr0)[$(id -un)@$(hostname -s) $(pwd)]$ 
KOLLA_SERVICE_NAME=nova-compute
KOLLA_INSTALL_METATYPE=mixed

$ cat /run_command
nova-compute

$ cat /var/lib/kolla/config_files/config.json
{
    "command": "nova-compute",
    "config_files": [
        {
            "source": "/var/lib/kolla/config_files/nova.conf",
            "dest": "/etc/nova/nova.conf",
            "owner": "nova",
            "perm": "0600"
        },
        {
            "source": "/var/lib/kolla/config_files/policy.json",
            "dest": "/etc/nova/policy.json",
            "owner": "nova",
            "perm": "0600",
            "optional": true
        }
    ],
    "permissions": [
        {
            "path": "/var/log/kolla/nova",
            "owner": "nova:nova",
            "recurse": true
        },
        {
            "path": "/var/lib/nova",
            "owner": "nova:nova",
            "recurse": true
        }
    ]
}
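Driven by a config.json like the one above, the copy step amounts to roughly the following. copy_config_files is a hypothetical simplification: owner handling is skipped, and a root prefix stands in for the container filesystem:

```python
import os
import shutil
import tempfile


def copy_config_files(config, root=""):
    """Sketch of the copy step for a nova_compute-style config.json:
    required sources must exist, missing optional sources are skipped,
    and 'perm' is applied as an octal mode."""
    for item in config.get("config_files", []):
        src = root + item["source"]
        dest = root + item["dest"]
        if not os.path.exists(src):
            if item.get("optional"):
                continue
            raise RuntimeError("missing required source: %s" % src)
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        shutil.copy(src, dest)
        if "perm" in item:
            os.chmod(dest, int(item["perm"], 8))


config = {"config_files": [
    {"source": "/src/nova.conf", "dest": "/etc/nova/nova.conf",
     "perm": "0600"},
    {"source": "/src/policy.json", "dest": "/etc/nova/policy.json",
     "optional": True},
]}

with tempfile.TemporaryDirectory() as d:
    os.makedirs(d + "/src")
    with open(d + "/src/nova.conf", "w") as f:
        f.write("[DEFAULT]\n")
    copy_config_files(config, root=d)
    print(os.path.exists(d + "/etc/nova/nova.conf"))  # True
```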

Kolla source code analysis

Command overview

  • The build.py command is a symbolic link:
$ ll tools/build.py
lrwxrwxrwx 1 root root 21 Mar 20 13:30 tools/build.py -> ../kolla/cmd/build.py
  • The command entry point invokes the run_build function of the kolla.image.build module (kolla/cmd/build.py):
import os
import sys

# NOTE(SamYaple): Update the search path to prefer PROJECT_ROOT as the source
#                 of packages to import if we are using local tools instead of
#                 pip installed kolla tools
PROJECT_ROOT = os.path.abspath(os.path.join(
    os.path.dirname(os.path.realpath(__file__)), '../..'))
if PROJECT_ROOT not in sys.path:
    sys.path.insert(0, PROJECT_ROOT)

from kolla.image import build


def main():
    statuses = build.run_build()
    if statuses:
        (bad_results, good_results, unmatched_results,
         skipped_results) = statuses
        if bad_results:
            return 1
    return 0


if __name__ == '__main__':
    sys.exit(main())
  • The command's main function (kolla/image/build.py):
def run_build():
    """Build container images.

    :return: A 4-tuple containing bad, good, unmatched, and skipped container
    image status dicts, or None if no images were built.
    """
    conf = cfg.ConfigOpts()
    common_config.parse(conf, sys.argv[1:], prog='kolla-build')

    if conf.debug:
        LOG.setLevel(logging.DEBUG)

    kolla = KollaWorker(conf)
    kolla.setup_working_dir()
    kolla.find_dockerfiles()
    kolla.create_dockerfiles()

    if conf.template_only:
        LOG.info('Dockerfiles are generated in %s', kolla.working_dir)
        return

    # We set the atime and mtime to the 0 epoch to allow the Docker cache
    # to work like we want. A different size or hash will still force a rebuild
    kolla.set_time()

    if conf.save_dependency:
        kolla.build_image_list()
        kolla.find_parents()
        kolla.filter_images()
        kolla.save_dependency(conf.save_dependency)
        LOG.info('Docker images dependency are saved in %s',
                 conf.save_dependency)
        return
    if conf.list_images:
        kolla.build_image_list()
        kolla.find_parents()
        kolla.filter_images()
        kolla.list_images()
        return
    if conf.list_dependencies:
        kolla.build_image_list()
        kolla.find_parents()
        kolla.filter_images()
        kolla.list_dependencies()
        return

    push_queue = six.moves.queue.Queue()
    queue = kolla.build_queue(push_queue)
    workers = []

    with join_many(workers):
        try:
            for x in six.moves.range(conf.threads):
                worker = WorkerThread(conf, queue)
                worker.setDaemon(True)
                worker.start()
                workers.append(worker)

            for x in six.moves.range(conf.push_threads):
                worker = WorkerThread(conf, push_queue)
                worker.setDaemon(True)
                worker.start()
                workers.append(worker)

            # sleep until queue is empty
            while queue.unfinished_tasks or push_queue.unfinished_tasks:
                time.sleep(3)

            # ensure all threads exited happily
            push_queue.put(WorkerThread.tombstone)
            queue.put(WorkerThread.tombstone)
        except KeyboardInterrupt:
            for w in workers:
                w.should_stop = True
            push_queue.put(WorkerThread.tombstone)
            queue.put(WorkerThread.tombstone)
            raise

    results = kolla.summary()
    kolla.cleanup()
    if conf.format == 'json':
        print(json.dumps(results))
    return kolla.get_image_statuses()

Argument parsing

  • Parse the configuration file and command-line arguments (kolla/image/build.py):
from oslo_config import cfg

#...

def run_build():

#...

    conf = cfg.ConfigOpts()
    common_config.parse(conf, sys.argv[1:], prog='kolla-build')

#...
  • The argument parser (kolla/common/config.py):
def parse(conf, args, usage=None, prog=None,
          default_config_files=None):
    conf.register_cli_opts(_CLI_OPTS)
    conf.register_opts(_BASE_OPTS)
    conf.register_opts(_PROFILE_OPTS, group='profiles')
    for name, opts in gen_all_source_opts():
        conf.register_opts(opts, name)
    for name, opts in gen_all_user_opts():
        conf.register_opts(opts, name)

    conf(args=args,
         project='kolla',
         usage=usage,
         prog=prog,
         version=version.cached_version_string(),
         default_config_files=default_config_files)

    # NOTE(jeffrey4l): set the default base tag based on the
    # base option
    conf.set_default('base_tag', DEFAULT_BASE_TAGS.get(conf.base))

    if not conf.base_image:
        conf.base_image = conf.base
  • The default and optional parameter settings (kolla/common/config.py):
#...

BASE_OS_DISTRO = ['centos', 'rhel', 'ubuntu', 'oraclelinux', 'debian']
BASE_ARCH = ['x86_64', 'ppc64le', 'aarch64']
DEFAULT_BASE_TAGS = {
    'centos': '7',
    'rhel': '7',
    'oraclelinux': '7-slim',
    'debian': 'stretch',
    'ubuntu': '16.04',
}
DISTRO_RELEASE = {
    'centos': '7',
    'rhel': '7',
    'oraclelinux': '7',
    'debian': 'stretch',
    'ubuntu': '16.04',
}

# This is noarch repository so we will use it on all architectures
DELOREAN = \
    "https://trunk.rdoproject.org/centos7/current-passed-ci/delorean.repo"

# TODO(hrw): with move to Pike+1 we need to make sure that aarch64 repo
#            gets updated (docker/base/aarch64-cbs.repo file)
#            there is ongoing work to sort that out
DELOREAN_DEPS = {
    'x86_64': "https://trunk.rdoproject.org/centos7/delorean-deps.repo",
    'aarch64': "",
    'ppc64le': ""
}

INSTALL_TYPE_CHOICES = ['binary', 'source', 'rdo', 'rhos']

TARBALLS_BASE = "http://tarballs.openstack.org"

#...

SOURCES = {
    'openstack-base': {
        'type': 'url',
        'location': ('$tarballs_base/requirements/'
                     'requirements-stable-pike.tar.gz')},
    'aodh-base': {
        'type': 'url',
        'location': ('$tarballs_base/aodh/'
                     'aodh-5.1.0.tar.gz')},
    'barbican-base': {
        'type': 'url',
        'location': ('$tarballs_base/barbican/'
                     'barbican-5.0.0.tar.gz')},
    'bifrost-base': {
        'type': 'url',
        'location': ('$tarballs_base/bifrost/'
                     'bifrost-4.0.1.tar.gz')},

#...

Preparing the build environment

  • Create the KollaWorker object (kolla/image/build.py):
#...

def run_build():

#...

    kolla = KollaWorker(conf)

#...
  • Initialize the variables used during the build (kolla/image/build.py):
class KollaWorker(object):

    def __init__(self, conf):
        self.conf = conf
        self.images_dir = self._get_images_dir()
        self.registry = conf.registry
        if self.registry:
            self.namespace = self.registry + '/' + conf.namespace
        else:
            self.namespace = conf.namespace
        self.base = conf.base
        self.base_tag = conf.base_tag
        self.install_type = conf.install_type
        self.tag = conf.tag
        self.base_arch = conf.base_arch
        self.images = list()
        rpm_setup_config = ([repo_file for repo_file in
                             conf.rpm_setup_config if repo_file is not None])
        self.rpm_setup = self.build_rpm_setup(rpm_setup_config)

        rh_base = ['centos', 'oraclelinux', 'rhel']
        rh_type = ['source', 'binary', 'rdo', 'rhos']
        deb_base = ['ubuntu', 'debian']
        deb_type = ['source', 'binary']

        if not ((self.base in rh_base and self.install_type in rh_type) or
                (self.base in deb_base and self.install_type in deb_type)):
            raise exception.KollaMismatchBaseTypeException(
                '{} is unavailable for {}'.format(self.install_type, self.base)
            )

        if self.install_type == 'binary':
            self.install_metatype = 'rdo'
        elif self.install_type == 'source':
            self.install_metatype = 'mixed'
        elif self.install_type == 'rdo':
            self.install_type = 'binary'
            self.install_metatype = 'rdo'
        elif self.install_type == 'rhos':
            self.install_type = 'binary'
            self.install_metatype = 'rhos'
        else:
            raise exception.KollaUnknownBuildTypeException(
                'Unknown install type'
            )

        self.image_prefix = self.base + '-' + self.install_type + '-'

        self.regex = conf.regex
        self.image_statuses_bad = dict()
        self.image_statuses_good = dict()
        self.image_statuses_unmatched = dict()
        self.image_statuses_skipped = dict()
        self.maintainer = conf.maintainer

        docker_kwargs = docker.utils.kwargs_from_env()
        self.dc = docker.APIClient(version='auto', **docker_kwargs)

Setting up the working directory

  • Initialize the working directory with the KollaWorker object (kolla/image/build.py):
#...

def run_build():

#...

    kolla.setup_working_dir()

#...
  • If work_dir is set, a docker directory is created under the given path and used as the working directory; otherwise a temporary directory named with a timestamp is created. Finally, all files under the source docker directory are copied into the working directory (kolla/image/build.py):
class KollaWorker(object):

#...

    def setup_working_dir(self):
        """Creates a working directory for use while building."""
        if self.conf.work_dir:
            self.working_dir = os.path.join(self.conf.work_dir, 'docker')
        else:
            ts = time.time()
            ts = datetime.datetime.fromtimestamp(ts).strftime(
                '%Y-%m-%d_%H-%M-%S_')
            self.temp_dir = tempfile.mkdtemp(prefix='kolla-' + ts)
            self.working_dir = os.path.join(self.temp_dir, 'docker')
        self.copy_dir(self.images_dir, self.working_dir)
        for dir in self.conf.docker_dir:
            self.copy_dir(dir, self.working_dir)
        self.copy_apt_files()
        LOG.debug('Created working dir: %s', self.working_dir)

Finding the Dockerfile.j2 files

  • Use the KollaWorker object to find all Dockerfile.j2 files (kolla/image/build.py):
#...

def run_build():

#...
    kolla.find_dockerfiles()

#...
  • Build the list of all Dockerfile.j2 files (kolla/image/build.py):
class KollaWorker(object):

#...

    def find_dockerfiles(self):
        """Recursive search for Dockerfiles in the working directory."""
        self.docker_build_paths = list()
        path = self.working_dir
        filename = 'Dockerfile.j2'

        for root, dirs, names in os.walk(path):
            if filename in names:
                self.docker_build_paths.append(root)
                LOG.debug('Found %s', root.split(self.working_dir)[1])

        LOG.debug('Found %d Dockerfiles', len(self.docker_build_paths))
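The search can be reproduced standalone. Here find_dockerfiles is a stripped-down copy of the method above, run against a throwaway directory tree:

```python
import os
import tempfile


def find_dockerfiles(path, filename="Dockerfile.j2"):
    """Recursive search for template files, as in
    KollaWorker.find_dockerfiles."""
    found = []
    for root, dirs, names in os.walk(path):
        if filename in names:
            found.append(root)
    return found


with tempfile.TemporaryDirectory() as d:
    os.makedirs(os.path.join(d, "nova", "nova-base"))
    os.makedirs(os.path.join(d, "base"))
    open(os.path.join(d, "nova", "nova-base", "Dockerfile.j2"), "w").close()
    open(os.path.join(d, "base", "Dockerfile.j2"), "w").close()
    print(len(find_dockerfiles(d)))  # 2
```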

Generating the Dockerfiles

  • Use the KollaWorker object to generate the Dockerfiles (kolla/image/build.py):
#...

def run_build():

#...
    kolla.create_dockerfiles()

#...
  • First assemble the dictionary of template variables, then render each Dockerfile from its Dockerfile.j2 template file (kolla/image/build.py):
class KollaWorker(object):

#...

    def create_dockerfiles(self):
        kolla_version = version.version_info.cached_version_string()
        supported_distro_release = common_config.DISTRO_RELEASE.get(
            self.base)
        for path in self.docker_build_paths:
            template_name = "Dockerfile.j2"
            image_name = path.split("/")[-1]
            ts = time.time()
            build_date = datetime.datetime.fromtimestamp(ts).strftime(
                '%Y%m%d')
            values = {'base_distro': self.base,
                      'base_image': self.conf.base_image,
                      'base_distro_tag': self.base_tag,
                      'base_arch': self.base_arch,
                      'supported_distro_release': supported_distro_release,
                      'install_metatype': self.install_metatype,
                      'image_prefix': self.image_prefix,
                      'install_type': self.install_type,
                      'namespace': self.namespace,
                      'tag': self.tag,
                      'maintainer': self.maintainer,
                      'kolla_version': kolla_version,
                      'image_name': image_name,
                      'users': self.get_users(),
                      'rpm_setup': self.rpm_setup,
                      'build_date': build_date}
            env = jinja2.Environment(  # nosec: not used to render HTML
                loader=jinja2.FileSystemLoader(self.working_dir))
            env.filters.update(self._get_filters())
            env.globals.update(self._get_methods())
            tpl_path = os.path.join(
                os.path.relpath(path, self.working_dir),
                template_name)

            template = env.get_template(tpl_path)
            if self.conf.template_override:
                tpl_dict = self._merge_overrides(self.conf.template_override)
                template_name = os.path.basename(tpl_dict.keys()[0])
                values['parent_template'] = template
                env = jinja2.Environment(  # nosec: not used to render HTML
                    loader=jinja2.DictLoader(tpl_dict))
                env.filters.update(self._get_filters())
                env.globals.update(self._get_methods())
                template = env.get_template(template_name)
            content = template.render(values)
            content_path = os.path.join(path, 'Dockerfile')
            with open(content_path, 'w') as f:
                LOG.debug("Rendered %s into:", tpl_path)
                LOG.debug(content)
                f.write(content)
                LOG.debug("Wrote it to %s", content_path)

(kolla/image/build.py):

from kolla.template import filters as jinja_filters

#...

class KollaWorker(object):

#...

    def _get_filters(self):
        filters = {
            'customizable': jinja_filters.customizable,
        }
        return filters

    def _get_methods(self):
        """Mapping of available Jinja methods.

        return a dictionary that maps available function names and their
        corresponding python methods to make them available in jinja templates
        """

        return {
            'debian_package_install': jinja_methods.debian_package_install,
        }

(kolla/template/filters.py):

from jinja2 import contextfilter


@contextfilter
def customizable(context, val_list, call_type):
    name = context['image_name'].replace("-", "_") + "_" + call_type + "_"
    if name + "override" in context:
        return context[name + "override"]
    if name + "append" in context:
        val_list.extend(context[name + "append"])
    if name + "remove" in context:
        for removal in context[name + "remove"]:
            if removal in val_list:
                val_list.remove(removal)
    return val_list
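The override/append/remove semantics of this filter can be exercised outside Jinja by driving the same logic with a plain dict in place of the template context (the context keys below are illustrative):

```python
# Dict-driven reimplementation of kolla's `customizable` filter: an
# *_override key replaces the list wholesale; otherwise *_append entries
# are added and *_remove entries are dropped.
def customizable(context, val_list, call_type):
    name = context['image_name'].replace("-", "_") + "_" + call_type + "_"
    if name + "override" in context:
        return context[name + "override"]
    if name + "append" in context:
        val_list.extend(context[name + "append"])
    if name + "remove" in context:
        for removal in context[name + "remove"]:
            if removal in val_list:
                val_list.remove(removal)
    return val_list


ctx = {'image_name': 'nova-base',
       'nova_base_packages_append': ['vim'],
       'nova_base_packages_remove': ['tox']}
print(customizable(ctx, ['git', 'tox'], 'packages'))  # ['git', 'vim']
```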

Non-build functions

  • For a template-only build (generating just the Dockerfiles), saving the dependency graph, listing the images, or listing the image dependencies, the function returns directly after the corresponding processing (kolla/image/build.py):
def run_build():

#...

    if conf.template_only:
        LOG.info('Dockerfiles are generated in %s', kolla.working_dir)
        return

    # We set the atime and mtime to the 0 epoch to allow the Docker cache
    # to work like we want. A different size or hash will still force a rebuild
    kolla.set_time()

    if conf.save_dependency:
        kolla.build_image_list()
        kolla.find_parents()
        kolla.filter_images()
        kolla.save_dependency(conf.save_dependency)
        LOG.info('Docker images dependency are saved in %s',
                 conf.save_dependency)
        return
    if conf.list_images:
        kolla.build_image_list()
        kolla.find_parents()
        kolla.filter_images()
        kolla.list_images()
        return
    if conf.list_dependencies:
        kolla.build_image_list()
        kolla.find_parents()
        kolla.filter_images()
        kolla.list_dependencies()
        return
  • Build the image list:
    def build_image_list(self):
        def process_source_installation(image, section):
            installation = dict()
            # NOTE(jeffrey4l): source is not needed when the type is None
            if self.conf._get('type', self.conf._get_group(section)) is None:
                if image.parent_name is None:
                    LOG.debug('No source location found in section %s',
                              section)
            else:
                installation['type'] = self.conf[section]['type']
                installation['source'] = self.conf[section]['location']
                installation['name'] = section
                if installation['type'] == 'git':
                    installation['reference'] = self.conf[section]['reference']
            return installation

        all_sections = (set(six.iterkeys(self.conf._groups)) |
                        set(self.conf.list_all_sections()))

        for path in self.docker_build_paths:
            # Reading parent image name
            with open(os.path.join(path, 'Dockerfile')) as f:
                content = f.read()

            image_name = os.path.basename(path)
            canonical_name = (self.namespace + '/' + self.image_prefix +
                              image_name + ':' + self.tag)
            parent_search_pattern = re.compile(r'^FROM.*$', re.MULTILINE)
            match = re.search(parent_search_pattern, content)
            if match:
                parent_name = match.group(0).split(' ')[1]
            else:
                parent_name = ''
            del match
            image = Image(image_name, canonical_name, path,
                          parent_name=parent_name,
                          logger=make_a_logger(self.conf, image_name),
                          docker_client=self.dc)

            if self.install_type == 'source':
                # NOTE(jeffrey4l): register the opts if the section didn't
                # register in the kolla/common/config.py file
                if image.name not in self.conf._groups:
                    self.conf.register_opts(common_config.get_source_opts(),
                                            image.name)
                image.source = process_source_installation(image, image.name)
                for plugin in [match.group(0) for match in
                               (re.search('^{}-plugin-.+'.format(image.name),
                                          section) for section in
                                all_sections) if match]:
                    try:
                        self.conf.register_opts(
                            common_config.get_source_opts(),
                            plugin
                        )
                    except cfg.DuplicateOptError:
                        LOG.debug('Plugin %s already registered in config',
                                  plugin)
                    image.plugins.append(
                        process_source_installation(image, plugin))
                for addition in [
                    match.group(0) for match in
                    (re.search('^{}-additions-.+'.format(image.name),
                     section) for section in all_sections) if match]:
                    try:
                        self.conf.register_opts(
                            common_config.get_source_opts(),
                            addition
                        )
                    except cfg.DuplicateOptError:
                        LOG.debug('Addition %s already registered in config',
                                  addition)
                    image.additions.append(
                        process_source_installation(image, addition))

            self.images.append(image)
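The parent-image extraction that build_image_list performs on each rendered Dockerfile can be isolated as follows (parent_from_dockerfile is a hypothetical helper using the same ^FROM.*$ pattern; the sample Dockerfile content is made up):

```python
import re


def parent_from_dockerfile(content):
    """Take the second token of the first FROM line, as
    KollaWorker.build_image_list does; '' if there is no FROM."""
    match = re.search(re.compile(r'^FROM.*$', re.MULTILINE), content)
    return match.group(0).split(' ')[1] if match else ''


dockerfile = "FROM kolla/centos-source-base:5.0.1\nLABEL maintainer=kolla\n"
print(parent_from_dockerfile(dockerfile))  # kolla/centos-source-base:5.0.1
```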
  • Find the dependency relationships of each image:
    def find_parents(self):
        """Associate all images with parents and children."""
        sort_images = dict()

        for image in self.images:
            sort_images[image.canonical_name] = image

        for parent_name, parent in sort_images.items():
            for image in sort_images.values():
                if image.parent_name == parent_name:
                    parent.children.append(image)
                    image.parent = parent
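The double loop in find_parents links the flat image list into a tree by matching parent_name against canonical names. A minimal stand-alone model (the Image class here is a stub carrying only the fields the loop touches):

```python
class Image(object):
    """Stub of kolla's Image with only the linking fields."""
    def __init__(self, canonical_name, parent_name=''):
        self.canonical_name = canonical_name
        self.parent_name = parent_name
        self.parent = None
        self.children = []


def find_parents(images):
    # Same pairing as KollaWorker.find_parents, keyed by canonical name.
    sort_images = {i.canonical_name: i for i in images}
    for parent_name, parent in sort_images.items():
        for image in sort_images.values():
            if image.parent_name == parent_name:
                parent.children.append(image)
                image.parent = parent


base = Image('kolla/centos-source-base:5.0.1')
nova = Image('kolla/centos-source-nova-base:5.0.1',
             parent_name='kolla/centos-source-base:5.0.1')
find_parents([base, nova])
print(nova.parent is base)  # True
```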
  • Filter out the images that do not need to be built or processed, based on the command-line arguments or a profile:
    def filter_images(self):
        """Filter which images to build."""
        filter_ = list()

        if self.regex:
            filter_ += self.regex
        elif self.conf.profile:
            for profile in self.conf.profile:
                if profile not in self.conf.profiles:
                    self.conf.register_opt(cfg.ListOpt(profile,
                                                       default=[]),
                                           'profiles')
                if len(self.conf.profiles[profile]) == 0:
                    msg = 'Profile: {} does not exist'.format(profile)
                    raise ValueError(msg)
                else:
                    filter_ += self.conf.profiles[profile]

        if filter_:
            patterns = re.compile(r"|".join(filter_).join('()'))
            for image in self.images:
                if image.status in (STATUS_MATCHED, STATUS_SKIPPED):
                    continue
                if re.search(patterns, image.name):
                    image.status = STATUS_MATCHED
                    while (image.parent is not None and
                           image.parent.status not in (STATUS_MATCHED,
                                                       STATUS_SKIPPED)):
                        image = image.parent
                        if self.conf.skip_parents:
                            image.status = STATUS_SKIPPED
                        elif (self.conf.skip_existing and
                              image.in_docker_cache()):
                            image.status = STATUS_SKIPPED
                        else:
                            image.status = STATUS_MATCHED
                        LOG.debug('Image %s matched regex', image.name)
                else:
                    image.status = STATUS_UNMATCHED
        else:
            for image in self.images:
                image.status = STATUS_MATCHED
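The matching step ORs the regex filters together; `"|".join(filter_).join('()')` is a terse way of wrapping the alternation in a single group, e.g. `(nova|glance)`. A sketch of just that predicate (matches is a hypothetical helper):

```python
import re


def matches(filters, image_name):
    """True if the image name matches any of the regex filters,
    using the same pattern construction as filter_images."""
    patterns = re.compile(r"|".join(filters).join('()'))
    return re.search(patterns, image_name) is not None


print(matches(['nova', 'glance'], 'nova-base'))  # True
print(matches(['nova', 'glance'], 'keystone'))   # False
```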
  • Save the dependency graph:
    def save_dependency(self, to_file):
        try:
            import graphviz
        except ImportError:
            LOG.error('"graphviz" is required for save dependency')
            raise
        dot = graphviz.Digraph(comment='Docker Images Dependency')
        dot.body.extend(['rankdir=LR'])
        for image in self.images:
            if image.status not in [STATUS_MATCHED]:
                continue
            dot.node(image.name)
            if image.parent is not None:
                dot.edge(image.parent.name, image.name)

        with open(to_file, 'w') as f:
            f.write(dot.source)
  • List the images:
    def list_images(self):
        for count, image in enumerate([
            image for image in self.images if image.status == STATUS_MATCHED
        ]):
            print(count + 1, ':', image.name)
  • Showing image dependencies:
    def list_dependencies(self):
        match = False
        for image in self.images:
            if image.status in [STATUS_MATCHED]:
                match = True
            if image.parent is None:
                base = image
        if not match:
            print('Nothing matched!')
            return
        def list_children(images, ancestry):
            children = six.next(iter(ancestry.values()))
            for image in images:
                if image.status not in [STATUS_MATCHED]:
                    continue
                if not image.children:
                    children.append(image.name)
                else:
                    newparent = {image.name: []}
                    children.append(newparent)
                    list_children(image.children, newparent)

        ancestry = {base.name: []}
        list_children(base.children, ancestry)
        json.dump(ancestry, sys.stdout, indent=2)
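`list_dependencies` renders the matched images as nested JSON: leaves appear as plain names, inner nodes as `{name: [children]}`. A simplified sketch of that recursion over a hypothetical dependency tree:

```python
import json

def ancestry_of(tree):
    """Leaves become plain names, inner nodes become {name: [children]},
    mirroring the shape list_dependencies prints."""
    name, children = tree
    if not children:
        return name
    return {name: [ancestry_of(c) for c in children]}

# Hypothetical tree: base -> openstack-base -> {nova-base, glance-base}
tree = ("base", [("openstack-base", [("nova-base", []), ("glance-base", [])])])
out = ancestry_of(tree)
print(json.dumps(out, indent=2))
```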

Creating the task queue

  • Use KollaWorker to build a task queue, then spawn worker threads according to the configured numbers of build and push threads to execute the queued tasks (kolla/image/build.py):
def run_build():

#...

    push_queue = six.moves.queue.Queue()
    queue = kolla.build_queue(push_queue)
    workers = []

    with join_many(workers):
        try:
            for x in six.moves.range(conf.threads):
                worker = WorkerThread(conf, queue)
                worker.setDaemon(True)
                worker.start()
                workers.append(worker)

            for x in six.moves.range(conf.push_threads):
                worker = WorkerThread(conf, push_queue)
                worker.setDaemon(True)
                worker.start()
                workers.append(worker)

            # sleep until queue is empty
            while queue.unfinished_tasks or push_queue.unfinished_tasks:
                time.sleep(3)

            # ensure all threads exited happily
            push_queue.put(WorkerThread.tombstone)
            queue.put(WorkerThread.tombstone)
        except KeyboardInterrupt:
            for w in workers:
                w.should_stop = True
            push_queue.put(WorkerThread.tombstone)
            queue.put(WorkerThread.tombstone)
            raise
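`run_build` shuts its workers down with a shared "tombstone" object: when one worker pulls it off the queue it puts it straight back, so every other worker also sees it and exits. A self-contained sketch of that pattern (task payloads and worker count are made up for illustration):

```python
import queue
import threading

TOMBSTONE = object()
results = []

def worker(q):
    while True:
        task = q.get()
        try:
            if task is TOMBSTONE:
                q.put(task)  # re-enqueue so the other workers see it too
                return
            results.append(task * 2)
        finally:
            q.task_done()

q = queue.Queue()
threads = [threading.Thread(target=worker, args=(q,)) for _ in range(3)]
for t in threads:
    t.start()
for n in (1, 2, 3):
    q.put(n)
q.join()          # wait until all real tasks are done
q.put(TOMBSTONE)  # one tombstone is enough for all workers
for t in threads:
    t.join()
print(sorted(results))  # [2, 4, 6]
```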
  • The KollaWorker object's build_queue method creates a BuildTask object for each matched image that has no parent and adds it to the task queue (kolla/image/build.py):
class KollaWorker(object):

#...

    def build_queue(self, push_queue):
        """Organizes Queue list.

        Return a list of Queues that have been organized into a hierarchy
        based on dependencies
        """
        self.build_image_list()
        self.find_parents()
        self.filter_images()

        queue = six.moves.queue.Queue()

        for image in self.images:
            if image.status == STATUS_UNMATCHED:
                # Don't bother queuing up build tasks for things that
                # were not matched in the first place... (not worth the
                # effort to run them, if they won't be used anyway).
                continue
            if image.parent is None:
                queue.put(BuildTask(self.conf, image, push_queue))
                LOG.info('Added image %s to queue', image.name)

        return queue
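Note that only parentless (base) images are enqueued up front; child images are reached later as followup tasks once their parent has been built, as shown in the WorkerThread code. A sketch of that initial selection over hypothetical (name, parent) pairs:

```python
# Hypothetical (name, parent) pairs; status filtering is omitted for brevity.
images = [
    ("base", None),
    ("openstack-base", "base"),
    ("nova-base", "openstack-base"),
]
initial_queue = [name for name, parent in images if parent is None]
print(initial_queue)  # only the root image is queued up front
```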
  • Each thread is a WorkerThread object that loops, pulling each image's BuildTask off the task queue and calling its run method to build the image, retrying up to the configured number of times (kolla/image/build.py):
class WorkerThread(threading.Thread):
    """Thread that executes tasks until the queue provides a tombstone."""

    #: Object to be put on worker queues to get them to die.
    tombstone = object()

    def __init__(self, conf, queue):
        super(WorkerThread, self).__init__()
        self.queue = queue
        self.conf = conf
        self.should_stop = False

    def run(self):
        while not self.should_stop:
            task = self.queue.get()
            if task is self.tombstone:
                # Ensure any other threads also get the tombstone.
                self.queue.put(task)
                break
            try:
                for attempt in six.moves.range(self.conf.retries + 1):
                    if self.should_stop:
                        break
                    LOG.info("Attempt number: %s to run task: %s ",
                             attempt + 1, task.name)
                    try:
                        task.run()
                        if task.success:
                            break
                    except Exception:
                        LOG.exception('Unhandled error when running %s',
                                      task.name)
                    # try again...
                    task.reset()
                if task.success and not self.should_stop:
                    for next_task in task.followups:
                        LOG.info('Added next task %s to queue',
                                 next_task.name)
                        self.queue.put(next_task)
            finally:
                self.queue.task_done()
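The retry loop above runs a task up to `conf.retries + 1` times, calling `task.reset()` between attempts. A minimal sketch of that pattern, using a hypothetical task that fails twice before succeeding:

```python
class FlakyTask:
    """Hypothetical task: only succeeds on its third attempt."""
    def __init__(self):
        self.success = False
        self.attempts = 0

    def run(self):
        self.attempts += 1
        if self.attempts >= 3:
            self.success = True

    def reset(self):
        self.success = False

retries = 3  # stands in for conf.retries
task = FlakyTask()
for attempt in range(retries + 1):
    try:
        task.run()
        if task.success:
            break
    except Exception:
        pass  # kolla logs the exception and falls through to reset()
    task.reset()
print(task.attempts, task.success)  # succeeded on the third attempt
```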

Executing build tasks

  • BuildTask initialization and its run method, which ultimately calls the builder method to build the image (kolla/image/build.py):
class BuildTask(DockerTask):
    """Task that builds out an image."""

    def __init__(self, conf, image, push_queue):
        super(BuildTask, self).__init__()
        self.conf = conf
        self.image = image
        self.push_queue = push_queue
        self.nocache = not conf.cache
        self.forcerm = not conf.keep
        self.logger = image.logger

    @property
    def name(self):
        return 'BuildTask(%s)' % self.image.name

    def run(self):
        self.builder(self.image)
        if self.image.status in (STATUS_BUILT, STATUS_SKIPPED):
            self.success = True
  • The build first fetches and packages the source archives, then builds the image with Docker (kolla/image/build.py):
class BuildTask(DockerTask):

#...

    def builder(self, image):

        def make_an_archive(items, arcname, item_child_path=None):
            if not item_child_path:
                item_child_path = arcname
            archives = list()
            items_path = os.path.join(image.path, item_child_path)
            for item in items:
                archive_path = self.process_source(image, item)
                if image.status in STATUS_ERRORS:
                    raise ArchivingError
                archives.append(archive_path)
            if archives:
                for archive in archives:
                    with tarfile.open(archive, 'r') as archive_tar:
                        archive_tar.extractall(path=items_path)
            else:
                try:
                    os.mkdir(items_path)
                except OSError as e:
                    if e.errno == errno.EEXIST:
                        self.logger.info(
                            'Directory %s already exists. Skipping.',
                            items_path)
                    else:
                        self.logger.error('Failed to create directory %s: %s',
                                          items_path, e)
                        image.status = STATUS_CONNECTION_ERROR
                        raise ArchivingError
            arc_path = os.path.join(image.path, '%s-archive' % arcname)
            with tarfile.open(arc_path, 'w') as tar:
                tar.add(items_path, arcname=arcname)
            return len(os.listdir(items_path))

        self.logger.debug('Processing')

        if image.status == STATUS_SKIPPED:
            self.logger.info('Skipping %s' % image.name)
            return

        if image.status == STATUS_UNMATCHED:
            return

        if (image.parent is not None and
                image.parent.status in STATUS_ERRORS):
            self.logger.error('Parent image error\'d with message "%s"',
                              image.parent.status)
            image.status = STATUS_PARENT_ERROR
            return

        image.status = STATUS_BUILDING
        self.logger.info('Building')

        if image.source and 'source' in image.source:
            self.process_source(image, image.source)
            if image.status in STATUS_ERRORS:
                return

        if self.conf.install_type == 'source':
            try:
                plugins_am = make_an_archive(image.plugins, 'plugins')
            except ArchivingError:
                self.logger.error(
                    "Failed turning any plugins into a plugins archive")
                return
            else:
                self.logger.debug(
                    "Turned %s plugins into plugins archive",
                    plugins_am)
            try:
                additions_am = make_an_archive(image.additions, 'additions')
            except ArchivingError:
                self.logger.error(
                    "Failed turning any additions into an additions archive")
                return
            else:
                self.logger.debug(
                    "Turned %s additions into additions archive",
                    additions_am)

        # Pull the latest image for the base distro only
        pull = self.conf.pull if image.parent is None else False

        buildargs = self.update_buildargs()
        try:
            for response in self.dc.build(path=image.path,
                                          tag=image.canonical_name,
                                          nocache=not self.conf.cache,
                                          rm=True,
                                          pull=pull,
                                          forcerm=self.forcerm,
                                          buildargs=buildargs):
                stream = json.loads(response.decode('utf-8'))
                if 'stream' in stream:
                    for line in stream['stream'].split('\n'):
                        if line:
                            self.logger.info('%s', line)
                if 'errorDetail' in stream:
                    image.status = STATUS_ERROR
                    self.logger.error('Error\'d with the following message')
                    for line in stream['errorDetail']['message'].split('\n'):
                        if line:
                            self.logger.error('%s', line)
                    return
        except docker.errors.DockerException:
            image.status = STATUS_ERROR
            self.logger.exception('Unknown docker error when building')
        except Exception:
            image.status = STATUS_ERROR
            self.logger.exception('Unknown error when building')
        else:
            image.status = STATUS_BUILT
            self.logger.info('Built')
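The loop at the end of `builder` decodes each chunk streamed back from the Docker build API as a JSON object, routing `stream` lines to the log and `errorDetail` to an error status. A sketch of that parsing over fabricated response chunks (the chunk contents are made up; the shape matches what the code above handles):

```python
import json

# Fabricated response chunks in the shape the Docker build API streams.
responses = [
    b'{"stream": "Step 1/3 : FROM centos:7\\n"}',
    b'{"stream": " ---> abc123\\n"}',
    b'{"errorDetail": {"message": "yum install failed"}}',
]

status = "built"
log_lines = []
for response in responses:
    chunk = json.loads(response.decode("utf-8"))
    if "stream" in chunk:
        log_lines.extend(l for l in chunk["stream"].split("\n") if l)
    if "errorDetail" in chunk:
        status = "error"
        log_lines.extend(
            l for l in chunk["errorDetail"]["message"].split("\n") if l)
print(status, log_lines)
```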
  • Fetching and packaging the source uses a different method depending on the source type; the resulting archive is always named after the image plus -archive (e.g. openstack-base-archive), and it is copied into the image and unpacked there during the build (kolla/image/build.py):
class BuildTask(DockerTask):

#...

    def process_source(self, image, source):
        dest_archive = os.path.join(image.path, source['name'] + '-archive')

        if source.get('type') == 'url':
            self.logger.debug("Getting archive from %s", source['source'])
            try:
                r = requests.get(source['source'], timeout=self.conf.timeout)
            except requests_exc.Timeout:
                self.logger.exception(
                    'Request timed out while getting archive from %s',
                    source['source'])
                image.status = STATUS_ERROR
                return

            if r.status_code == 200:
                with open(dest_archive, 'wb') as f:
                    f.write(r.content)
            else:
                self.logger.error(
                    'Failed to download archive: status_code %s',
                    r.status_code)
                image.status = STATUS_ERROR
                return

        elif source.get('type') == 'git':
            clone_dir = '{}-{}'.format(dest_archive,
                                       source['reference'].replace('/', '-'))
            if os.path.exists(clone_dir):
                self.logger.info("Clone dir %s exists. Removing it.",
                                 clone_dir)
                shutil.rmtree(clone_dir)

            try:
                self.logger.debug("Cloning from %s", source['source'])
                git.Git().clone(source['source'], clone_dir)
                git.Git(clone_dir).checkout(source['reference'])
                reference_sha = git.Git(clone_dir).rev_parse('HEAD')
                self.logger.debug("Git checkout by reference %s (%s)",
                                  source['reference'], reference_sha)
            except Exception as e:
                self.logger.error("Failed to get source from git for %s",
                                  image.name)
                self.logger.error("Error: %s", e)
                # clean-up clone folder to retry
                shutil.rmtree(clone_dir)
                image.status = STATUS_ERROR
                return

            with tarfile.open(dest_archive, 'w') as tar:
                tar.add(clone_dir, arcname=os.path.basename(clone_dir))

        elif source.get('type') == 'local':
            self.logger.debug("Getting local archive from %s",
                              source['source'])
            if os.path.isdir(source['source']):
                with tarfile.open(dest_archive, 'w') as tar:
                    tar.add(source['source'],
                            arcname=os.path.basename(source['source']))
            else:
                shutil.copyfile(source['source'], dest_archive)

        else:
            self.logger.error("Wrong source type '%s'", source.get('type'))
            image.status = STATUS_ERROR
            return

        # Set time on destination archive to epoch 0
        os.utime(dest_archive, (0, 0))

        return dest_archive
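For the 'local' source type the payload is simply tarred (or copied) into `<name>-archive`, and the archive's timestamps are then pinned to epoch 0, presumably so the archive bytes stay stable across rebuilds. A sketch of that branch using a throwaway temporary directory (the `myplugin` source name is hypothetical):

```python
import os
import tarfile
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    # Hypothetical local source directory with one file in it.
    src = os.path.join(tmp, "myplugin")
    os.mkdir(src)
    with open(os.path.join(src, "setup.py"), "w") as f:
        f.write("# placeholder\n")

    # Same naming scheme as kolla: "<source name>-archive".
    dest_archive = os.path.join(tmp, "myplugin-archive")
    with tarfile.open(dest_archive, "w") as tar:
        tar.add(src, arcname=os.path.basename(src))

    # Same trick as kolla: pin atime/mtime to epoch 0.
    os.utime(dest_archive, (0, 0))

    with tarfile.open(dest_archive) as tar:
        names = tar.getnames()
    mtime = os.path.getmtime(dest_archive)
print(names, mtime)
```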