[K8S] Building a Continuous Integration and Delivery Environment with Docker + K8S + GitLab/SVN + Jenkins + Harbor (Environment Setup)

Preface

I recently set up a DevOps environment on a K8S 1.18.2 cluster and ran into all kinds of pitfalls along the way. Every one of them has since been resolved, so I am documenting the process here and sharing it with everyone!

Server Plan

IP Hostname Role Operating System
192.168.175.101 binghe101 K8S Master CentOS 8.0.1905
192.168.175.102 binghe102 K8S Worker CentOS 8.0.1905
192.168.175.103 binghe103 K8S Worker CentOS 8.0.1905

Software Versions

Software Version Description
Docker 19.03.8 Provides the container environment
docker-compose 1.25.5 Defines and runs applications composed of multiple containers
K8S 1.18.2 An open-source system for managing containerized applications across multiple hosts in a cloud platform; Kubernetes aims to make deploying containerized applications simple and efficient, and provides mechanisms for application deployment, planning, updating, and maintenance.
GitLab 12.1.6 Code repository (install either this or SVN)
Harbor 1.10.2 Private image registry
Jenkins 2.89.3 Continuous integration and delivery
SVN 1.10.2 Code repository (install either this or GitLab)
JDK 1.8.0_212 Base Java runtime environment
maven 3.6.3 Base build tool for the project

Passwordless SSH Between Servers

Run the following commands on every server.

ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys 

Copy the id_rsa.pub files from binghe102 and binghe103 to binghe101.

[root@binghe102 ~]# scp .ssh/id_rsa.pub binghe101:/root/.ssh/102
[root@binghe103 ~]# scp .ssh/id_rsa.pub binghe101:/root/.ssh/103

Run the following commands on binghe101.

cat ~/.ssh/102 >> ~/.ssh/authorized_keys
cat ~/.ssh/103 >> ~/.ssh/authorized_keys

Then copy the authorized_keys file back to binghe102 and binghe103.

[root@binghe101 ~]# scp .ssh/authorized_keys binghe102:/root/.ssh/authorized_keys
[root@binghe101 ~]# scp .ssh/authorized_keys binghe103:/root/.ssh/authorized_keys

Finally, delete the 102 and 103 files under ~/.ssh on binghe101.

rm ~/.ssh/102
rm ~/.ssh/103
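
To confirm that passwordless login works, you can test from any of the three servers; each command should print the remote hostname without prompting for a password:

ssh binghe102 hostname
ssh binghe103 hostname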

Installing the JDK

The JDK must be installed on every server. Download the JDK from Oracle's official site; the version I downloaded here is 1.8.0_212. After downloading, extract it and configure the system environment variables.

tar -zxvf jdk1.8.0_212.tar.gz
mv jdk1.8.0_212 /usr/local

Next, configure the system environment variables.

vim /etc/profile

Add the following entries.

JAVA_HOME=/usr/local/jdk1.8.0_212
CLASS_PATH=.:$JAVA_HOME/lib
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASS_PATH PATH

Then run the following command to make the environment variables take effect.

source /etc/profile
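
You can verify that the configuration took effect; both commands should report version 1.8.0_212:

java -version
javac -version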

Installing Maven

Download Maven from the Apache official site; the version I downloaded here is 3.6.3. After downloading, extract it and configure the system environment variables.

tar -zxvf apache-maven-3.6.3-bin.tar.gz
mv apache-maven-3.6.3 /usr/local

Next, configure the system environment variables.

vim /etc/profile

Add the following entries.

JAVA_HOME=/usr/local/jdk1.8.0_212
MAVEN_HOME=/usr/local/apache-maven-3.6.3
CLASS_PATH=.:$JAVA_HOME/lib
PATH=$MAVEN_HOME/bin:$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASS_PATH MAVEN_HOME PATH

Then run the following command to make the environment variables take effect.

source /etc/profile

Next, modify Maven's configuration file, as shown below.

<localRepository>/home/repository</localRepository>

This stores the JARs Maven downloads under the /home/repository directory.
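
The localRepository element lives in $MAVEN_HOME/conf/settings.xml. A minimal sketch of the steps, with a verification at the end (mvn -version should report Maven 3.6.3 and the JDK configured above):

mkdir -p /home/repository
vim /usr/local/apache-maven-3.6.3/conf/settings.xml
mvn -version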

Installing Docker

This document sets up the Docker environment based on Docker 19.03.8.

Create an install_docker.sh script on every server with the following content.

export REGISTRY_MIRROR=https://registry.cn-hangzhou.aliyuncs.com
dnf install -y yum*
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
dnf install -y https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.13-3.1.el7.x86_64.rpm
yum install -y docker-ce-19.03.8 docker-ce-cli-19.03.8
systemctl enable docker.service
systemctl start docker.service
docker version

Grant the install_docker.sh script execute permission on every server and run it.
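
For example, from the directory where the script was created:

chmod a+x ./install_docker.sh
./install_docker.sh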

Installing docker-compose

Note: install docker-compose on every server.

1. Download the docker-compose binary

curl -L https://github.com/docker/compose/releases/download/1.25.5/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose 

2. Make the docker-compose binary executable

chmod a+x /usr/local/bin/docker-compose

3. Check the docker-compose version

[root@binghe ~]# docker-compose version
docker-compose version 1.25.5, build 8a1c60f6
docker-py version: 4.1.0
CPython version: 3.7.5
OpenSSL version: OpenSSL 1.1.0l  10 Sep 2019

Installing the K8S Cluster

This document builds the K8S cluster based on K8S 1.18.2.

Installing the K8S base environment

Create an install_k8s.sh script file on every server with the following content.

# Configure the Aliyun registry mirror
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://zz3sblpi.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker

# Install nfs-utils
yum install -y nfs-utils
yum install -y wget

# Start nfs-server
systemctl start nfs-server
systemctl enable nfs-server

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

# Disable swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab

# Modify /etc/sysctl.conf
# If an entry already exists, modify it in place
sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g"  /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g"  /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g"  /etc/sysctl.conf
# Entries that may not exist yet: append them
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1"  >> /etc/sysctl.conf
# Apply the settings
sysctl -p

# Configure the K8S yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Remove any old K8S versions
yum remove -y kubelet kubeadm kubectl

# Install kubelet, kubeadm and kubectl; I am installing version 1.18.2 here, but 1.17.2 also works
yum install -y kubelet-1.18.2 kubeadm-1.18.2 kubectl-1.18.2

# Change the docker cgroup driver to systemd
# # i.e. in /usr/lib/systemd/system/docker.service, change the line ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
# # to ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
# Without this change, you may hit the following error when adding worker nodes
# [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". 
# Please follow the guide at https://kubernetes.io/docs/setup/cri/
sed -i "s#^ExecStart=/usr/bin/dockerd.*#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd#g" /usr/lib/systemd/system/docker.service

# Configure a docker registry mirror to improve image download speed and stability
# If access to https://hub.docker.com is already fast and stable for you, this step can be skipped
# curl -sSL https://kuboard.cn/install-script/set_mirror.sh | sh -s ${REGISTRY_MIRROR}

# Restart docker and start kubelet
systemctl daemon-reload
systemctl restart docker
systemctl enable kubelet && systemctl start kubelet

docker version

Grant the install_k8s.sh script execute permission on every server and run it.
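
For example, with a quick version check once the script completes:

chmod a+x ./install_k8s.sh
./install_k8s.sh
kubeadm version -o short
kubelet --version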

Initializing the Master Node

Perform the following operations only on binghe101.

1. Initialize the Master node's network environment

Note: the following commands must be executed manually on the command line.

# Run only on the master node
# export only applies to the current shell session; if you open a new shell window and want to continue the installation, re-run these export commands
export MASTER_IP=192.168.175.101
# Replace k8s.master with the dnsName you want
export APISERVER_NAME=k8s.master
# The subnet used by Kubernetes pods; it is created by kubernetes after installation and does not need to exist in the physical network beforehand
export POD_SUBNET=172.18.0.1/16
echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts

2. Initialize the Master node

Create an init_master.sh script file on binghe101 with the following content.

#!/bin/bash
# Abort the script on error
set -e

if [ ${#POD_SUBNET} -eq 0 ] || [ ${#APISERVER_NAME} -eq 0 ]; then
  echo -e "\033[31;1m請確保您已經設置了環境變量 POD_SUBNET 和 APISERVER_NAME \033[0m"
  echo 當前POD_SUBNET=$POD_SUBNET
  echo 當前APISERVER_NAME=$APISERVER_NAME
  exit 1
fi


# See the full list of configuration options at https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2
rm -f ./kubeadm-config.yaml
cat <<EOF > ./kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
controlPlaneEndpoint: "${APISERVER_NAME}:6443"
networking:
  serviceSubnet: "10.96.0.0/16"
  podSubnet: "${POD_SUBNET}"
  dnsDomain: "cluster.local"
EOF

# kubeadm init
# Depending on your network speed, this may take 3 - 10 minutes
kubeadm init --config=kubeadm-config.yaml --upload-certs

# Configure kubectl
rm -rf /root/.kube/
mkdir /root/.kube/
cp -i /etc/kubernetes/admin.conf /root/.kube/config

# Install the calico network plugin
# Reference: https://docs.projectcalico.org/v3.13/getting-started/kubernetes/self-managed-onprem/onpremises
echo "Installing calico-3.13.1"
rm -f calico-3.13.1.yaml
wget https://kuboard.cn/install-script/calico/calico-3.13.1.yaml
kubectl apply -f calico-3.13.1.yaml

Grant the init_master.sh script execute permission and run it.
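
For example:

chmod a+x ./init_master.sh
./init_master.sh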

3. Check the Master node initialization result

(1) Make sure all pods are in the Running state

# Run the following command and wait 3-10 minutes until all pods are in the Running state
watch kubectl get pod -n kube-system -o wide

The output looks like this.

[root@binghe101 ~]# watch kubectl get pod -n kube-system -o wide
Every 2.0s: kubectl get pod -n kube-system -o wide                                                                                                                          binghe101: Sun May 10 11:01:32 2020

NAME                                       READY   STATUS    RESTARTS   AGE    IP                NODE        NOMINATED NODE   READINESS GATES          
calico-kube-controllers-5b8b769fcd-5dtlp   1/1     Running   0          118s   172.18.203.66     binghe101   <none>           <none>          
calico-node-fnv8g                          1/1     Running   0          118s   192.168.175.101   binghe101   <none>           <none>          
coredns-546565776c-27t7h                   1/1     Running   0          2m1s   172.18.203.67     binghe101   <none>           <none>          
coredns-546565776c-hjb8z                   1/1     Running   0          2m1s   172.18.203.65     binghe101   <none>           <none>          
etcd-binghe101                             1/1     Running   0          2m7s   192.168.175.101   binghe101   <none>           <none>          
kube-apiserver-binghe101                   1/1     Running   0          2m7s   192.168.175.101   binghe101   <none>           <none>          
kube-controller-manager-binghe101          1/1     Running   0          2m7s   192.168.175.101   binghe101   <none>           <none>          
kube-proxy-dvgsr                           1/1     Running   0          2m1s   192.168.175.101   binghe101   <none>           <none>          
kube-scheduler-binghe101                   1/1     Running   0          2m7s   192.168.175.101   binghe101   <none>           <none>

(2) Check the Master node initialization result

kubectl get nodes -o wide

The output looks like this.

[root@binghe101 ~]# kubectl get nodes -o wide
NAME        STATUS   ROLES    AGE     VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION         CONTAINER-RUNTIME
binghe101   Ready    master   3m28s   v1.18.2   192.168.175.101   <none>        CentOS Linux 8 (Core)   4.18.0-80.el8.x86_64   docker://19.3.8

Initializing the Worker Nodes

1. Obtain the join command

Run the following command on the Master node (binghe101) to obtain the join command.

kubeadm token create --print-join-command

The output looks like this.

[root@binghe101 ~]# kubeadm token create --print-join-command
W0510 11:04:34.828126   56132 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2     --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d 

The output contains the following line.

kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2     --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d 

This line is the join command.

Note: the token in the join command is valid for 2 hours; within those 2 hours you can use it to initialize any number of worker nodes.
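
If you need a token that does not expire, kubeadm can create one with --ttl 0 (use with care outside of test environments):

kubeadm token create --ttl 0 --print-join-command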

2. Initialize the Worker nodes

Perform this on all worker nodes; here, that means on binghe102 and binghe103.

Execute the following commands manually on each worker's command line.

# Run only on worker nodes
# 192.168.175.101 is the master node's internal IP
export MASTER_IP=192.168.175.101
# Replace k8s.master with the APISERVER_NAME used when initializing the master node
export APISERVER_NAME=k8s.master
echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts

# Replace with the join command printed by kubeadm token create on the master node
kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2     --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d 

The output looks like this.

[root@binghe102 ~]# export MASTER_IP=192.168.175.101
[root@binghe102 ~]# export APISERVER_NAME=k8s.master
[root@binghe102 ~]# echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts
[root@binghe102 ~]# kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2     --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d 
W0510 11:08:27.709263   42795 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

The output shows that the Worker node has joined the K8S cluster.

Note: the kubeadm join… command is the join command printed by kubeadm token create on the master node.

3. Check the result

Run the following command on the Master node (binghe101) to check the result.

kubectl get nodes -o wide

The output looks like this.

[root@binghe101 ~]# kubectl get nodes
NAME        STATUS   ROLES    AGE     VERSION
binghe101   Ready    master   20m     v1.18.2
binghe102   Ready    <none>   2m46s   v1.18.2
binghe103   Ready    <none>   2m46s   v1.18.2

Note: appending the -o wide flag to kubectl get nodes prints more information.

Problems Caused by Restarting the K8S Cluster

1. Worker nodes fail to start

If the Master node's IP address changes, the worker nodes cannot start. In that case the K8S cluster must be reinstalled, and every node must be given a fixed internal IP address.

2. Pods crash or become unreachable

After restarting a server, check the Pods' running status with the following command.

kubectl get pods --all-namespaces

If you find many Pods that are not in the Running state, delete the abnormal Pods with the following command.

kubectl delete pod <pod-name> -n <pod-namespace>

Note: if a Pod was created by a controller such as a Deployment or StatefulSet, K8S will create a replacement Pod, and the restarted Pod usually works normally.
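
To find the abnormal Pods quickly, you can filter on the pod phase (note that completed Jobs in the Succeeded phase will also show up in this listing):

kubectl get pods --all-namespaces --field-selector=status.phase!=Running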

Installing ingress-nginx on K8S

Note: execute this on the Master node (binghe101).

1. Create the ingress-nginx namespace

Create an ingress-nginx-namespace.yaml file with the following content.

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    name: ingress-nginx

Run the following command to create the ingress-nginx namespace.

kubectl apply -f ingress-nginx-namespace.yaml

2. Install the ingress controller

Create an ingress-nginx-mandatory.yaml file with the following content.

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: default-http-backend
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: default-http-backend
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          # Any image is permissible as long as:
          # 1. It serves a 404 page at /
          # 2. It serves 200 on a /healthz endpoint
          image: registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/defaultbackend-amd64:1.5
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi

---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/nginx-ingress-controller:0.20.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1

---

Run the following command to install the ingress controller.

kubectl apply -f ingress-nginx-mandatory.yaml

3. Install the K8S Service: ingress-nginx

This Service is mainly used to expose the nginx-ingress-controller pod.

Create a service-nodeport.yaml file with the following content.

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
      nodePort: 30080
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
      nodePort: 30443
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

Run the following command to install it.

kubectl apply -f service-nodeport.yaml

4. Access the ingress-nginx Service

Check the deployment status in the ingress-nginx namespace, as shown below.

[root@binghe101 k8s]# kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
default-http-backend-796ddcd9b-vfmgn        1/1     Running   1          10h
nginx-ingress-controller-58985cc996-87754   1/1     Running   2          10h

Run the following command on the server to check ingress-nginx's port mappings.

kubectl get svc -n ingress-nginx 

The output looks like this.

[root@binghe101 k8s]# kubectl get svc -n ingress-nginx 
NAME                   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
default-http-backend   ClusterIP   10.96.247.2   <none>        80/TCP                       7m3s
ingress-nginx          NodePort    10.96.40.6    <none>        80:30080/TCP,443:30443/TCP   4m35s

So ingress-nginx can now be reached through the Master node's (binghe101) IP address and port 30080, as shown below.

[root@binghe101 k8s]# curl 192.168.175.101:30080       
default backend - 404

You can also open http://192.168.175.101:30080 in a browser to access ingress-nginx.

Installing the GitLab Code Repository on K8S

Note: execute this on the Master node (binghe101).

1. Create the k8s-ops namespace

Create a k8s-ops-namespace.yaml file with the following content.

apiVersion: v1
kind: Namespace
metadata:
  name: k8s-ops
  labels:
    name: k8s-ops

Run the following command to create the namespace.

kubectl apply -f k8s-ops-namespace.yaml 

2. Install gitlab-redis

Create a gitlab-redis.yaml file with the following content.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: k8s-ops
  labels:
    name: redis
spec:
  selector:
    matchLabels:
      name: redis
  template:
    metadata:
      name: redis
      labels:
        name: redis
    spec:
      containers:
      - name: redis
        image: sameersbn/redis
        imagePullPolicy: IfNotPresent
        ports:
        - name: redis
          containerPort: 6379
        volumeMounts:
        - mountPath: /var/lib/redis
          name: data
        livenessProbe:
          exec:
            command:
            - redis-cli
            - ping
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command:
            - redis-cli
            - ping
          initialDelaySeconds: 10
          timeoutSeconds: 5
      volumes:
      - name: data
        hostPath:
          path: /data1/docker/xinsrv/redis

---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: k8s-ops
  labels:
    name: redis
spec:
  ports:
    - name: redis
      port: 6379
      targetPort: redis
  selector:
    name: redis

First, run the following command to create the /data1/docker/xinsrv/redis directory.

mkdir -p /data1/docker/xinsrv/redis

Run the following command to install gitlab-redis.

kubectl apply -f gitlab-redis.yaml 

3. Install gitlab-postgresql

Create a gitlab-postgresql.yaml file with the following content.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresql
  namespace: k8s-ops
  labels:
    name: postgresql
spec:
  selector:
    matchLabels:
      name: postgresql
  template:
    metadata:
      name: postgresql
      labels:
        name: postgresql
    spec:
      containers:
      - name: postgresql
        image: sameersbn/postgresql
        imagePullPolicy: IfNotPresent
        env:
        - name: DB_USER
          value: gitlab
        - name: DB_PASS
          value: passw0rd
        - name: DB_NAME
          value: gitlab_production
        - name: DB_EXTENSION
          value: pg_trgm
        ports:
        - name: postgres
          containerPort: 5432
        volumeMounts:
        - mountPath: /var/lib/postgresql
          name: data
        livenessProbe:
          exec:
            command:
            - pg_isready
            - -h
            - localhost
            - -U
            - postgres
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command:
            - pg_isready
            - -h
            - localhost
            - -U
            - postgres
          initialDelaySeconds: 5
          timeoutSeconds: 1
      volumes:
      - name: data
        hostPath:
          path: /data1/docker/xinsrv/postgresql
---
apiVersion: v1
kind: Service
metadata:
  name: postgresql
  namespace: k8s-ops
  labels:
    name: postgresql
spec:
  ports:
    - name: postgres
      port: 5432
      targetPort: postgres
  selector:
    name: postgresql

First, run the following command to create the /data1/docker/xinsrv/postgresql directory.

mkdir -p /data1/docker/xinsrv/postgresql

Next, install gitlab-postgresql, as shown below.

kubectl apply -f gitlab-postgresql.yaml

4. Install GitLab

(1) Configure the username and password

First, base64-encode the username and password on the command line. In this example, the username is admin and the password is admin.1231.

The encoding looks like this.

[root@binghe101 k8s]# echo -n 'admin' | base64 
YWRtaW4=
[root@binghe101 k8s]# echo -n 'admin.1231' | base64 
YWRtaW4uMTIzMQ==

The encoded username is YWRtaW4= and the encoded password is YWRtaW4uMTIzMQ==.

You can also decode a base64-encoded string; for example, decoding the password string looks like this.

[root@binghe101 k8s]# echo 'YWRtaW4uMTIzMQ==' | base64 --decode 
admin.1231

Next, create a secret-gitlab.yaml file, which is used to configure GitLab's username and password; the file content is shown below.

apiVersion: v1
kind: Secret
metadata:
  namespace: k8s-ops
  name: git-user-pass
type: Opaque
data:
  username: YWRtaW4=
  password: YWRtaW4uMTIzMQ==

Apply the configuration file, as shown below.

kubectl create -f ./secret-gitlab.yaml

(2) Install GitLab

Create a gitlab.yaml file with the following content.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitlab
  namespace: k8s-ops
  labels:
    name: gitlab
spec:
  selector:
    matchLabels:
      name: gitlab
  template:
    metadata:
      name: gitlab
      labels:
        name: gitlab
    spec:
      containers:
      - name: gitlab
        image: sameersbn/gitlab:12.1.6
        imagePullPolicy: IfNotPresent
        env:
        - name: TZ
          value: Asia/Shanghai
        - name: GITLAB_TIMEZONE
          value: Beijing
        - name: GITLAB_SECRETS_DB_KEY_BASE
          value: long-and-random-alpha-numeric-string
        - name: GITLAB_SECRETS_SECRET_KEY_BASE
          value: long-and-random-alpha-numeric-string
        - name: GITLAB_SECRETS_OTP_KEY_BASE
          value: long-and-random-alpha-numeric-string
        - name: GITLAB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: git-user-pass
              key: password
        - name: GITLAB_ROOT_EMAIL
          value: [email protected]
        - name: GITLAB_HOST
          value: gitlab.binghe.com
        - name: GITLAB_PORT
          value: "80"
        - name: GITLAB_SSH_PORT
          value: "30022"
        - name: GITLAB_NOTIFY_ON_BROKEN_BUILDS
          value: "true"
        - name: GITLAB_NOTIFY_PUSHER
          value: "false"
        - name: GITLAB_BACKUP_SCHEDULE
          value: daily
        - name: GITLAB_BACKUP_TIME
          value: "01:00"
        - name: DB_TYPE
          value: postgres
        - name: DB_HOST
          value: postgresql
        - name: DB_PORT
          value: "5432"
        - name: DB_USER
          value: gitlab
        - name: DB_PASS
          value: passw0rd
        - name: DB_NAME
          value: gitlab_production
        - name: REDIS_HOST
          value: redis
        - name: REDIS_PORT
          value: "6379"
        ports:
        - name: http
          containerPort: 80
        - name: ssh
          containerPort: 22
        volumeMounts:
        - mountPath: /home/git/data
          name: data
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 180
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          timeoutSeconds: 1
      volumes:
      - name: data
        hostPath:
          path: /data1/docker/xinsrv/gitlab
---
apiVersion: v1
kind: Service
metadata:
  name: gitlab
  namespace: k8s-ops
  labels:
    name: gitlab
spec:
  ports:
    - name: http
      port: 80
      nodePort: 30088
    - name: ssh
      port: 22
      targetPort: ssh
      nodePort: 30022
  type: NodePort
  selector:
    name: gitlab

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gitlab
  namespace: k8s-ops
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: gitlab.binghe.com
    http:
      paths:
      - backend:
          serviceName: gitlab
          servicePort: http

Note: when configuring the host GitLab listens on, you cannot use an IP address; you must use a hostname or domain name. In the configuration above, I used the hostname gitlab.binghe.com.

Run the following command to create the /data1/docker/xinsrv/gitlab directory.

mkdir -p /data1/docker/xinsrv/gitlab

Install GitLab, as shown below.

kubectl apply -f gitlab.yaml

5. Finish the installation

Check the deployments in the k8s-ops namespace, as shown below.

[root@binghe101 k8s]# kubectl get pod -n k8s-ops
NAME                          READY   STATUS    RESTARTS   AGE
gitlab-7b459db47c-5vk6t       0/1     Running   0          11s
postgresql-79567459d7-x52vx   1/1     Running   0          30m
redis-67f4cdc96c-h5ckz        1/1     Running   1          10h

You can also use the following command.

[root@binghe101 k8s]# kubectl get pod --namespace=k8s-ops
NAME                          READY   STATUS    RESTARTS   AGE
gitlab-7b459db47c-5vk6t       0/1     Running   0          36s
postgresql-79567459d7-x52vx   1/1     Running   0          30m
redis-67f4cdc96c-h5ckz        1/1     Running   1          10h

Both commands are equivalent.

Next, check GitLab's port mappings, as shown below.

[root@binghe101 k8s]# kubectl get svc -n k8s-ops
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                     AGE
gitlab       NodePort    10.96.153.100   <none>        80:30088/TCP,22:30022/TCP   2m42s
postgresql   ClusterIP   10.96.203.119   <none>        5432/TCP                    32m
redis        ClusterIP   10.96.107.150   <none>        6379/TCP                    10h

As you can see, GitLab can now be accessed through the Master node's (binghe101) hostname gitlab.binghe.com and port 30088. Since I am using virtual machines for this environment, accessing gitlab.binghe.com from my own machine requires configuring the local hosts file; add the following entry to it.

192.168.175.101 gitlab.binghe.com

Note: on Windows, the hosts file is located in the following directory.

C:\Windows\System32\drivers\etc

You can now access GitLab in a browser at http://gitlab.binghe.com:30088.

You can log in to GitLab with the username root and the password admin.1231.

Note: the username here is root rather than admin, because root is GitLab's default superuser.

After logging in, you land on the GitLab home page.

At this point, the GitLab installation on K8S is complete.

Installing the Harbor Private Registry

Note: Harbor is installed here on the Master node (binghe101); in a real production environment it is recommended to install it on a separate server.

1. Download the Harbor offline installer

wget https://github.com/goharbor/harbor/releases/download/v1.10.2/harbor-offline-installer-v1.10.2.tgz

2. Extract the Harbor installer

tar -zxvf harbor-offline-installer-v1.10.2.tgz

After extraction, a harbor directory is created in the current directory on the server.

3. Configure Harbor

Note: I changed Harbor's port to 1180 here; if you do not change it, the default port is 80.

(1) Modify the harbor.yml file

cd harbor
vim harbor.yml

The modified configuration entries are shown below.

hostname: 192.168.175.101
http:
  port: 1180
harbor_admin_password: binghe123
### Also comment out the https section, otherwise installation fails with: ERROR:root:Error: The protocol is https but attribute ssl_cert is not set
#https:
  #port: 443
  #certificate: /your/certificate/path
  #private_key: /your/private/key/path

(2) Modify the daemon.json file

Modify the /etc/docker/daemon.json file (create it if it does not exist) and add the following content.

[root@binghe~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://zz3sblpi.mirror.aliyuncs.com"],
  "insecure-registries":["192.168.175.101:1180"]
}

You can also run the ip addr command on the server to list all local IP ranges and add them to /etc/docker/daemon.json. My configured file content looks like this.

{
    "registry-mirrors": ["https://zz3sblpi.mirror.aliyuncs.com"],
    "insecure-registries":["192.168.175.0/16","172.17.0.0/16", "172.18.0.0/16", "172.16.29.0/16", "192.168.175.101:1180"]
}
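
After changing daemon.json, reload and restart Docker so the insecure-registries setting takes effect:

systemctl daemon-reload
systemctl restart docker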

4. Install and start Harbor

Once configuration is complete, run the following command to install and start Harbor.

[root@binghe harbor]# ./install.sh 

5. Log in to Harbor and add an account

After installation succeeds, open http://192.168.175.101:1180 in a browser.

Log in with the username admin and the password binghe123.

Next, open user management and add an administrator account, which will be used later for building and pushing Docker images.

The password entered here is Binghe123.

Click OK to create the account.

At this point the binghe account is not yet an administrator; select the binghe account and click "Set as Administrator".

The binghe account is now an administrator. This completes the Harbor installation.
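
As an optional sanity check, you can log in to the registry from any server with the new account (pushing images additionally requires a project to exist in Harbor):

docker login 192.168.175.101:1180 -u binghe -p Binghe123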

6. Change the Harbor port

If you need to change Harbor's port after installation, follow the steps below; here I use changing port 80 to 1180 as an example.

(1) Modify the harbor.yml file

cd harbor
vim harbor.yml

The modified configuration entries are shown below.

hostname: 192.168.175.101
http:
  port: 1180
harbor_admin_password: binghe123
### Also comment out the https section, otherwise installation fails with: ERROR:root:Error: The protocol is https but attribute ssl_cert is not set
#https:
  #port: 443
  #certificate: /your/certificate/path
  #private_key: /your/private/key/path

(2) Modify the docker-compose.yml file

vim docker-compose.yml

The modified configuration entry is shown below.

ports:
      - 1180:80

(3) Modify the config.yml file

cd common/config/registry
vim config.yml

The modified configuration entry is shown below.

realm: http://192.168.175.101:1180/service/token

(4) Restart Docker

systemctl daemon-reload
systemctl restart docker.service

(5) Restart Harbor

[root@binghe harbor]# docker-compose down
Stopping harbor-log ... done
Removing nginx             ... done
Removing harbor-portal     ... done
Removing harbor-jobservice ... done
Removing harbor-core       ... done
Removing redis             ... done
Removing registry          ... done
Removing registryctl       ... done
Removing harbor-db         ... done
Removing harbor-log        ... done
Removing network harbor_harbor
 
[root@binghe harbor]# ./prepare
prepare base dir is set to /mnt/harbor
Clearing the configuration file: /config/log/logrotate.conf
Clearing the configuration file: /config/nginx/nginx.conf
Clearing the configuration file: /config/core/env
Clearing the configuration file: /config/core/app.conf
Clearing the configuration file: /config/registry/root.crt
Clearing the configuration file: /config/registry/config.yml
Clearing the configuration file: /config/registryctl/env
Clearing the configuration file: /config/registryctl/config.yml
Clearing the configuration file: /config/db/env
Clearing the configuration file: /config/jobservice/env
Clearing the configuration file: /config/jobservice/config.yml
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
loaded secret from file: /secret/keys/secretkey
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir
 
[root@binghe harbor]# docker-compose up -d
Creating network "harbor_harbor" with the default driver
Creating harbor-log ... done
Creating harbor-db   ... done
Creating redis       ... done
Creating registry    ... done
Creating registryctl ... done
Creating harbor-core ... done
Creating harbor-jobservice ... done
Creating harbor-portal     ... done
Creating nginx             ... done
 
[root@binghe harbor]# docker ps -a
CONTAINER ID        IMAGE                                               COMMAND                  CREATED             STATUS                             PORTS

Installing Jenkins (General Approach)

1. Install nfs (skip this step if it was installed earlier)

The biggest problem with nfs is write permissions. You can use kubernetes' securityContext/runAsUser to specify the uid that the jenkins process runs as inside the jenkins container, and set the nfs directory's permissions for that uid so the jenkins container can write to it; alternatively, you can leave it unrestricted so that all users can write. For simplicity, I let all users write here.

If nfs was already installed earlier, this step can be skipped. Pick a host and install nfs on it; here, I install nfs on the Master node (binghe101) as an example.

Run the following commands to install and start nfs.

yum install nfs-utils -y
systemctl start nfs-server
systemctl enable nfs-server

2. Create the nfs shared directory

Create the /opt/nfs/jenkins-data directory on the Master node (binghe101) as the nfs shared directory, as shown below.

mkdir -p /opt/nfs/jenkins-data

Next, edit the /etc/exports file, as shown below.

vim /etc/exports

Add the following line to the /etc/exports file.

/opt/nfs/jenkins-data 192.168.175.0/24(rw,all_squash)

The IP range here is the kubernetes nodes' IP range. The all_squash option maps every accessing user to the nfsnobody user: no matter which user connects, it is squashed to nfsnobody. So as long as the owner of /opt/nfs/jenkins-data is changed accordingly, any accessing user has write permission.

This option is very effective when uids are irregular across machines, so processes are started by different users, yet they all need write access to the same shared directory.

Next, change the ownership of the /opt/nfs/jenkins-data directory and reload nfs, as shown below.

chown -R 1000 /opt/nfs/jenkins-data/
systemctl reload nfs-server

Verify on any node of the K8S cluster with the following command:

showmount -e NFS_IP

If you can see /opt/nfs/jenkins-data, everything is fine.

The output looks like this.

[root@binghe101 ~]# showmount -e 192.168.175.101
Export list for 192.168.175.101:
/opt/nfs/jenkins-data 192.168.175.0/24

[root@binghe102 ~]# showmount -e 192.168.175.101
Export list for 192.168.175.101:
/opt/nfs/jenkins-data 192.168.175.0/24

3. Create the PV

Jenkins really only needs to mount the corresponding directory to pick up its previous data, but because a Deployment cannot define storage volumes via volumeClaimTemplates, we have to use a StatefulSet.

First create the pv. The pv is for the StatefulSet: each time the StatefulSet starts, it creates a pvc from the volumeClaimTemplates template, so a pv must exist for the pvc to bind to.

Create a jenkins-pv.yaml file with the following content.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins
spec:
  nfs:
    path: /opt/nfs/jenkins-data
    server: 192.168.175.101
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 1Ti

I allocated 1Ti of storage here; adjust it to your actual configuration.

Run the following command to create the pv.

kubectl apply -f jenkins-pv.yaml 
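
A quick check that the pv was created and shows the Available status:

kubectl get pv jenkins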

4. Create the serviceAccount

Create a service account; jenkins needs to be able to dynamically create slaves later, so it must have certain permissions.

Create a jenkins-service-account.yaml file with the following content.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
  - kind: ServiceAccount
    name: jenkins

The configuration above creates a ServiceAccount, a Role, and a RoleBinding that binds the Role's permissions to the ServiceAccount. The jenkins container must therefore run with this ServiceAccount, otherwise it will not have the permissions granted by the RoleBinding.

The permissions in the Role are easy to understand: jenkins needs them to create and delete slaves. As for the secrets permission, that is for https certificates.

Run the following command to create the serviceAccount.

kubectl apply -f jenkins-service-account.yaml 

5. Install Jenkins

Create a jenkins-statefulset.yaml file with the following content.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jenkins
  labels:
    name: jenkins
spec:
  selector:
    matchLabels:
      name: jenkins
  serviceName: jenkins
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      name: jenkins
      labels:
        name: jenkins
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      containers:
        - name: jenkins
          image: docker.io/jenkins/jenkins:lts
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
            - containerPort: 32100
          resources:
            limits:
              cpu: 4
              memory: 4Gi
            requests:
              cpu: 4
              memory: 4Gi
          env:
            - name: LIMITS_MEMORY
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
                  divisor: 1Mi
            - name: JAVA_OPTS
              # value: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1 -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
              value: -Xmx$(LIMITS_MEMORY)m -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
          livenessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12 # ~2 minutes
          readinessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12 # ~2 minutes
  # pvc template, corresponding to the pv created earlier
  volumeClaimTemplates:
    - metadata:
        name: jenkins-home
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Ti

When deploying jenkins, pay attention to the replica count: you need as many pvs as you have replicas, and storage consumption multiplies accordingly. I use only one replica here, so only one pv was created earlier.

Install Jenkins with the following command.

kubectl apply -f jenkins-statefulset.yaml 
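
Wait until the jenkins pod reports Ready before moving on; a quick check:

kubectl get statefulset jenkins
kubectl get pod -l name=jenkins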

6. Create the Service

Create a jenkins-service.yaml file with the following content.

apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  # type: LoadBalancer
  selector:
    name: jenkins
  # ensure the client ip is propagated to avoid the invalid crumb issue when using LoadBalancer (k8s >=1.7)
  #externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      nodePort: 31888
      targetPort: 8080
      protocol: TCP
    - name: jenkins-agent
      port: 32100
      nodePort: 32100
      targetPort: 32100
      protocol: TCP
  type: NodePort

Install the Service with the following command.

kubectl apply -f jenkins-service.yaml 

7. Install the ingress

Jenkins' web interface needs to be reachable from outside the cluster, and here we use an ingress for that. Create a jenkins-ingress.yaml file with the following content.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: jenkins
              servicePort: 80
      host: jekins.binghe.com

Note that host must be set to a domain name or hostname, otherwise the following error is reported.

The Ingress "jenkins" is invalid: spec.rules[0].host: Invalid value: "192.168.175.101": must be a DNS name, not an IP address

Install the ingress with the following command.

kubectl apply -f jenkins-ingress.yaml 

Finally, since I am using virtual machines for this environment, accessing jekins.binghe.com from my own machine requires configuring the local hosts file; add the following entry to it.

192.168.175.101 jekins.binghe.com

Note: on Windows, the hosts file is located in the following directory.

C:\Windows\System32\drivers\etc

You can now access Jenkins in a browser at http://jekins.binghe.com:31888.

Installing SVN on the Physical Machine

Here, I install SVN on the Master node (binghe101) as an example.

1. Install SVN with yum

Run the following command to install SVN.

yum -y install subversion 

2. Create the SVN repository

Run the following commands in sequence.

# Create /data/svn
mkdir -p /data/svn 
# Start the svn daemon, serving /data/svn
svnserve -d -r /data/svn
# Create the code repository
svnadmin create /data/svn/test

3. Configure SVN

mkdir /data/svn/conf
cp /data/svn/test/conf/* /data/svn/conf/
cd /data/svn/conf/
[root@binghe101 conf]# ll
total 20
-rw-r--r-- 1 root root 1080 May 12 02:17 authz
-rw-r--r-- 1 root root  885 May 12 02:17 hooks-env.tmpl
-rw-r--r-- 1 root root  309 May 12 02:17 passwd
-rw-r--r-- 1 root root 4375 May 12 02:17 svnserve.conf

Configure the authz file.

vim authz

The configured content is shown below.

[aliases]
# joe = /C=XZ/ST=Dessert/L=Snake City/O=Snake Oil, Ltd./OU=Research Institute/CN=Joe Average

[groups]
# harry_and_sally = harry,sally
# harry_sally_and_joe = harry,sally,&joe
SuperAdmin = admin
binghe = admin,binghe

# [/foo/bar]
# harry = rw
# &joe = r
# * =

# [repository:/baz/fuz]
# @harry_and_sally = rw
# * = r

[test:/]
@SuperAdmin=rw
@binghe=rw
Configure the passwd file.

vim passwd

The configured content is shown below.

[users]
# harry = harryssecret
# sally = sallyssecret
admin = admin123
binghe = binghe123
Configure svnserve.conf.

vim svnserve.conf

The configured file is shown below.

### This file controls the configuration of the svnserve daemon, if you
### use it to allow access to this repository.  (If you only allow
### access through http: and/or file: URLs, then this file is
### irrelevant.)

### Visit http://subversion.apache.org/ for more information.

[general]
### The anon-access and auth-access options control access to the
### repository for unauthenticated (a.k.a. anonymous) users and
### authenticated users, respectively.
### Valid values are "write", "read", and "none".
### Setting the value to "none" prohibits both reading and writing;
### "read" allows read-only access, and "write" allows complete 
### read/write access to the repository.
### The sample settings below are the defaults and specify that anonymous
### users have read-only access to the repository, while authenticated
### users have read and write access to the repository.
anon-access = none
auth-access = write
### The password-db option controls the location of the password
### database file.  Unless you specify a path starting with a /,
### the file's location is relative to the directory containing
### this configuration file.
### If SASL is enabled (see below), this file will NOT be used.
### Uncomment the line below to use the default password file.
password-db = /data/svn/conf/passwd
### The authz-db option controls the location of the authorization
### rules for path-based access control.  Unless you specify a path
### starting with a /, the file's location is relative to the
### directory containing this file.  The specified path may be a
### repository relative URL (^/) or an absolute file:// URL to a text
### file in a Subversion repository.  If you don't specify an authz-db,
### no path-based access control is done.
### Uncomment the line below to use the default authorization file.
authz-db = /data/svn/conf/authz
### The groups-db option controls the location of the file with the
### group definitions and allows maintaining groups separately from the
### authorization rules.  The groups-db file is of the same format as the
### authz-db file and should contain a single [groups] section with the
### group definitions.  If the option is enabled, the authz-db file cannot
### contain a [groups] section.  Unless you specify a path starting with
### a /, the file's location is relative to the directory containing this
### file.  The specified path may be a repository relative URL (^/) or an
### absolute file:// URL to a text file in a Subversion repository.
### This option is not being used by default.
# groups-db = groups
### This option specifies the authentication realm of the repository.
### If two repositories have the same authentication realm, they should
### have the same password database, and vice versa.  The default realm
### is repository's uuid.
realm = svn
### The force-username-case option causes svnserve to case-normalize
### usernames before comparing them against the authorization rules in the
### authz-db file configured above.  Valid values are "upper" (to upper-
### case the usernames), "lower" (to lowercase the usernames), and
### "none" (to compare usernames as-is without case conversion, which
### is the default behavior).
# force-username-case = none
### The hooks-env options specifies a path to the hook script environment 
### configuration file. This option overrides the per-repository default
### and can be used to configure the hook script environment for multiple 
### repositories in a single file, if an absolute path is specified.
### Unless you specify an absolute path, the file's location is relative
### to the directory containing this file.
# hooks-env = hooks-env

[sasl]
### This option specifies whether you want to use the Cyrus SASL
### library for authentication. Default is false.
### Enabling this option requires svnserve to have been built with Cyrus
### SASL support; to check, run 'svnserve --version' and look for a line
### reading 'Cyrus SASL authentication is available.'
# use-sasl = true
### These options specify the desired strength of the security layer
### that you want SASL to provide. 0 means no encryption, 1 means
### integrity-checking only, values larger than 1 are correlated
### to the effective key length for encryption (e.g. 128 means 128-bit
### encryption). The values below are the defaults.
# min-encryption = 0
# max-encryption = 256

Next, copy the svnserve.conf file from /data/svn/conf to the /data/svn/test/conf/ directory, as shown below.

[root@binghe101 conf]# cp /data/svn/conf/svnserve.conf /data/svn/test/conf/
cp: overwrite '/data/svn/test/conf/svnserve.conf'? y

4. Start the SVN service

(1) Create the svnserve.service unit

Create the svnserve.service file.

vim /usr/lib/systemd/system/svnserve.service

The file content is shown below.

[Unit]
Description=Subversion protocol daemon
After=syslog.target network.target
Documentation=man:svnserve(8)

[Service]
Type=forking
EnvironmentFile=/etc/sysconfig/svnserve
#ExecStart=/usr/bin/svnserve --daemon --pid-file=/run/svnserve/svnserve.pid $OPTIONS
ExecStart=/usr/bin/svnserve --daemon $OPTIONS
PrivateTmp=yes

[Install]
WantedBy=multi-user.target

Then run the following command so the configuration takes effect.

systemctl daemon-reload

After the command succeeds, modify the /etc/sysconfig/svnserve file.

vim /etc/sysconfig/svnserve 

The modified file content is shown below.

# OPTIONS is used to pass command-line arguments to svnserve.
# 
# Specify the repository location in -r parameter:
OPTIONS="-r /data/svn"

(2) Start SVN

First check the SVN service status, as shown below.

[root@itence10 conf]# systemctl status svnserve.service
● svnserve.service - Subversion protocol daemon
   Loaded: loaded (/usr/lib/systemd/system/svnserve.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
     Docs: man:svnserve(8)

As you can see, SVN is not running yet; start it next.

systemctl start svnserve.service

Set the SVN service to start on boot.

systemctl enable svnserve.service

You can now download and install TortoiseSVN, and connect to SVN using the URL svn://192.168.175.101/test with the username binghe and password binghe123.
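
The same check from the command line looks like this (assuming an svn client is installed on the machine you run it from):

svn checkout svn://192.168.175.101/test --username binghe --password binghe123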

Installing Jenkins on the Physical Machine

Note: the JDK and Maven must be installed before installing Jenkins. I again install Jenkins on the Master node (binghe101).

1. Enable the Jenkins repository

Run the following commands to download the repo file and import the GPG key:

wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key

2. Install Jenkins

Run the following command to install Jenkins.

yum install jenkins

Next, change Jenkins' default port, as shown below.

vim /etc/sysconfig/jenkins

The two modified entries are shown below.

JENKINS_JAVA_CMD="/usr/local/jdk1.8.0_212/bin/java"
JENKINS_PORT="18080"

This changes Jenkins' port from 8080 to 18080.

3. Start Jenkins

Enter the following command to start Jenkins.

systemctl start jenkins

Configure Jenkins to start on boot.

systemctl enable jenkins

Check Jenkins' running status.

[root@itence10 ~]# systemctl status jenkins
● jenkins.service - LSB: Jenkins Automation Server
   Loaded: loaded (/etc/rc.d/init.d/jenkins; generated)
   Active: active (running) since Tue 2020-05-12 04:33:40 EDT; 28s ago
     Docs: man:systemd-sysv-generator(8)
    Tasks: 71 (limit: 26213)
   Memory: 550.8M

This indicates that Jenkins started successfully.

Configuring the Jenkins Runtime Environment

1. Log in to Jenkins

After the first installation, you need to configure Jenkins' runtime environment. First, open http://192.168.175.101:18080 in a browser to bring up the Jenkins unlock page.

As prompted, run the following command on the server to find the password value.

[root@binghe101 ~]# cat /var/lib/jenkins/secrets/initialAdminPassword
71af861c2ab948a1b6efc9f7dde90776

Copy the password 71af861c2ab948a1b6efc9f7dde90776 into the text box and click Continue; this takes you to the Customize Jenkins page.

Here, you can simply choose "Install suggested plugins", after which a plugin installation page appears.

Some downloads may fail during this step; they can simply be ignored.

2. Install plugins

Plugins that need to be installed:

  • Kubernetes Cli Plugin: lets you run kubernetes command-line operations directly in Jenkins.

  • Kubernetes plugin: required for working with kubernetes.

  • Kubernetes Continuous Deploy Plugin: a kubernetes deployment plugin; use as needed.

More plugins are available: go to Manage Jenkins -> Manage Plugins to manage and add them, and install the Docker, SSH, and Maven plugins there. Other plugins can be installed as needed.

3. Configure Jenkins

(1) Configure the JDK and Maven

Configure the JDK and Maven on the Global Tool Configuration page; open Global Tool Configuration first.

Now configure the JDK and Maven.

Since I installed Maven under /usr/local/apache-maven-3.6.3 on the server, that path needs to be entered in the "Maven Configuration" section.

Next, configure the JDK.

Note: do not check "Install automatically".

Next, configure Maven.

Note: do not check "Install automatically".

(2) Configure SSH

Go to Jenkins' Configure System page to configure SSH.

Find the SSH remote hosts section and configure it.

After configuring, click the Check connection button; it should display "Successfull connection".

This completes Jenkins' basic configuration.

Final Words

如果覺得文章對你有點幫助,請微信搜索並關注「 冰河技術 」微信公衆號,跟冰河學習各種編程技術。

Finally, here is a link to a comprehensive K8S knowledge map:

https://www.processon.com/view/link/5ac64532e4b00dc8a02f05eb?spm=a2c4e.10696291.0.0.6ec019a4bYSFIw#map

I hope it helps you avoid detours while learning K8S.
