1. Overview of Project Release Strategies
(1) Blue-green deployment
The service is logically split into groups A and B. During an upgrade, group A is first removed from the load balancer and the new version is deployed to it, while group B keeps serving traffic. Once group A is upgraded and back online, group B is removed from the load balancer in turn.
Advantages:
• Simple strategy
• Fast upgrade and rollback
• Transparent to users, smooth transition
Disadvantages:
• Requires at least twice the server resources
• Wastes some resource cost for a short period
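In Kubernetes, the A/B switch described above is often realized by flipping a Service selector between the two groups. A minimal sketch, with illustrative names:

```yaml
# Both groups run side by side; the Service selector decides which one receives traffic.
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: demo
    group: blue    # change to "green" to cut all traffic over to the upgraded group
  ports:
  - port: 80
    targetPort: 8080
```

Because only the selector changes, rollback is a single edit back to the previous group.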
(2) Canary release
Canary (gray) release: only part of the service is upgraded, i.e. some users keep using the old version while others start using the new one. If the new version draws no complaints, the rollout is gradually widened until all users have been migrated to it.
Advantages:
• Preserves overall system stability
• Transparent to users, smooth transition
Disadvantages:
• Demands a high degree of automation
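In Kubernetes terms, a simple canary can be sketched as two Deployments sharing a label behind one Service, with traffic split roughly by replica ratio. All names and images below are illustrative:

```yaml
# Old version keeps most replicas; the canary gets one (~10% of traffic).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-v1
spec:
  replicas: 9
  selector:
    matchLabels:
      app: demo
      version: v1
  template:
    metadata:
      labels:
        app: demo
        version: v1
    spec:
      containers:
      - name: demo
        image: demo:v1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-v2
spec:
  replicas: 1    # scale up as confidence in the new version grows
  selector:
    matchLabels:
      app: demo
      version: v2
  template:
    metadata:
      labels:
        app: demo
        version: v2
    spec:
      containers:
      - name: demo
        image: demo:v2
# A Service selecting only app=demo spreads traffic across both versions.
```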
(3) Rolling release
Rolling release: upgrade one or a few instances at a time, return them to production once upgraded, and repeat until every old instance in the cluster runs the new version.
Advantages:
• Transparent to users, smooth transition
Disadvantages:
• Long deployment cycle
• Relatively complex release strategy
• Hard to roll back
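This process is exactly what a Kubernetes Deployment's RollingUpdate strategy automates. A minimal sketch, with illustrative names:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod during the upgrade
      maxUnavailable: 1    # at most one pod down at any time
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: demo:v2     # bumping this tag triggers the rolling upgrade
```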
2. Release Workflow Design
192.168.74.230 k8s-master
192.168.74.246 k8s-node1
192.168.74.247 k8s-node2
192.168.74.248 harbor git nfs
3. Preparing the Git Code Repository and the Harbor Image Registry
Set up the Harbor registry and the Git repository on 192.168.74.248
# Set up SSH key trust between the hosts
ssh-keygen
ssh-copy-id [email protected]
# Set up the Git repository
mkdir java-demo.git
cd java-demo.git/
git --bare init
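A quick way to sanity-check a bare repository is to clone it and push a first commit. This hypothetical walkthrough uses a throwaway directory instead of /home/git:

```shell
set -e
# Throwaway stand-in for the server-side bare repo (the real one lives under /home/git)
workdir=$(mktemp -d)
git init --bare -q "$workdir/java-demo.git"
# Clone it as a developer would, add a commit, and push it back
git clone -q "$workdir/java-demo.git" "$workdir/checkout"
cd "$workdir/checkout"
git config user.email "[email protected]" && git config user.name "dev"
echo demo > README.md
git add README.md
git commit -qm "initial commit"
git push -q origin HEAD
# The bare repo now advertises the pushed branch
git ls-remote --heads "$workdir/java-demo.git"
```

In the real setup, developers would clone `[email protected]:/home/git/java-demo.git` over the SSH trust configured above.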
# Install docker-compose (Harbor's installer depends on it)
chmod +x docker-compose
mv docker-compose /usr/local/bin/
(1) Install Docker on this node to prepare the environment for Harbor
yum install -y yum-utils device-mapper-persistent-data lvm2
wget http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
mv docker-ce.repo /etc/yum.repos.d/
yum install -y docker-ce
systemctl start docker && systemctl enable docker
vim /etc/docker/daemon.json
{ "registry-mirrors": ["https://h5d0mk6d.mirror.aliyuncs.com"] }
systemctl restart docker
(2) Generate the certificates
mkdir /application/ssl -p
cd /application/ssl
openssl req -newkey rsa:4096 -nodes -sha512 -subj "/C=CN/ST=/L=/O=/OU=/CN=qushuaibo.com" -keyout ca.key -x509 -days 3650 -out ca.crt
openssl req -newkey rsa:4096 -nodes -sha512 -subj "/C=CN/ST=/L=/O=/OU=/CN=qushuaibo.com" -keyout qushuaibo.com.key -out qushuaibo.com.csr
openssl x509 -req -days 3650 -in qushuaibo.com.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out qushuaibo.com.crt
(3) Configure Harbor (accessed over HTTPS via the domain name)
tar xf harbor-offline-installer-v1.7.5.tgz
cd harbor/
sed -i 's#hostname = reg.mydomain.com#hostname = qushuaibo.com#g' /root/harbor/harbor.cfg
sed -i 's#ui_url_protocol = http#ui_url_protocol = https#g' /root/harbor/harbor.cfg
sed -i 's#ssl_cert = /data/cert/server.crt#ssl_cert = /application/ssl/qushuaibo.com.crt#g' /root/harbor/harbor.cfg
sed -i 's#ssl_cert_key = /data/cert/server.key#ssl_cert_key = /application/ssl/qushuaibo.com.key#g' /root/harbor/harbor.cfg
sed -i 's#harbor_admin_password = Harbor12345#harbor_admin_password = 123456#g' /root/harbor/harbor.cfg
(4) Distribute the certificate to every node and the master, then update and trust it
mkdir -p /etc/docker/certs.d/qushuaibo.com/ # create the certificate directory on every host in the k8s cluster
scp qushuaibo.com.crt 192.168.74.230:/etc/docker/certs.d/qushuaibo.com/ #master
scp qushuaibo.com.crt 192.168.74.246:/etc/docker/certs.d/qushuaibo.com/ #node1
scp qushuaibo.com.crt 192.168.74.247:/etc/docker/certs.d/qushuaibo.com/ #node2
cd /etc/docker/certs.d/qushuaibo.com/
update-ca-trust
(5) Start Harbor
./prepare
./install.sh
4. Jenkins Dynamic Agents in Kubernetes
https://github.com/jenkinsci/kubernetes-plugin/tree/fc40c869edfd9e3904a9a56b0f80c5a25e988fa1/src/main/kubernetes
Deploy Jenkins in k8s:
kubectl apply -f .
Ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    kubernetes.io/tls-acme: "true"
    # Plugin uploads beyond the default limit fail with "413 Request Entity Too Large"; raise client_max_body_size
    nginx.ingress.kubernetes.io/proxy-body-size: 50m
    nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
    # Settings for nginx-ingress controller versions older than 0.9.0.beta-18
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/proxy-body-size: 50m
    ingress.kubernetes.io/proxy-request-buffering: "off"
spec:
  rules:
  - host: jenkins.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: jenkins
          servicePort: 80
rbac.yml
# Create a ServiceAccount named jenkins
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
---
# Create a Role named jenkins that grants management of Pod resources in the core API group
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
# Bind the Role named jenkins to the ServiceAccount named jenkins
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
service.yml
apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  selector:
    name: jenkins
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
    nodePort: 30006
  - name: agent
    port: 50000
    protocol: TCP
statefulset.yml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: jenkins
  labels:
    name: jenkins
spec:
  serviceName: jenkins
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      name: jenkins
      labels:
        name: jenkins
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts-alpine
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        - containerPort: 50000
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 0.5
            memory: 500Mi
        env:
        - name: LIMITS_MEMORY
          valueFrom:
            resourceFieldRef:
              resource: limits.memory
              divisor: 1Mi
        - name: JAVA_OPTS
          value: -Xmx$(LIMITS_MEMORY)m -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
        livenessProbe:
          httpGet:
            path: /login
            port: 8080
          initialDelaySeconds: 200
          timeoutSeconds: 5
          failureThreshold: 12
        readinessProbe:
          httpGet:
            path: /login
            port: 8080
          initialDelaySeconds: 60
          timeoutSeconds: 5
          failureThreshold: 12
      securityContext:
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: jenkins-home
    spec:
      storageClassName: "managed-nfs-storage"
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
Adjust the Jenkins agent settings and install the plugins Git, Pipeline, Kubernetes, and Kubernetes Continuous Deploy
The agent settings are at the very bottom of Configure System
5. Building the Jenkins Slave Image
Build the jenkins-slave image and push it to Harbor
docker build -t qushuaibo.com/jenkins/jenkins_slave:v1 .
The Dockerfile:
FROM centos:7
LABEL maintainer lizhenliang
RUN yum install -y java-1.8.0-openjdk maven curl git libtool-ltdl-devel && \
    yum clean all && \
    rm -rf /var/cache/yum/* && \
    mkdir -p /usr/share/jenkins
COPY slave.jar /usr/share/jenkins/slave.jar
# jenkins-slave is a wrapper script whose job is to launch slave.jar
COPY jenkins-slave /usr/bin/jenkins-slave
# settings.xml configures the Maven mirror
COPY settings.xml /etc/maven/settings.xml
RUN chmod +x /usr/bin/jenkins-slave
ENTRYPOINT ["jenkins-slave"]
6. Publishing with a Jenkins Pipeline
• Jenkins Pipeline is a suite of plugins that supports implementing continuous integration and continuous delivery pipelines in Jenkins;
• Pipeline models simple-to-complex delivery pipelines through a dedicated syntax;
• Declarative: follows Groovy-compatible syntax. pipeline { }
• Scripted: supports most Groovy features; a highly expressive and flexible tool. node { }
• A Jenkins Pipeline definition is written to a text file called a Jenkinsfile.
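For contrast with the scripted style used below, here is a minimal declarative Jenkinsfile sketch (a generic illustration, not this project's actual pipeline):

```groovy
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps { echo 'pull code here' }
        }
        stage('Build') {
            steps { echo 'compile here' }
        }
    }
}
```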
First, test whether Jenkins can spin up a slave:
podTemplate(label: 'jenkins-slave', cloud: 'kubernetes', containers: [
  containerTemplate(
    name: 'jnlp',
    image: "qushuaibo.com/jenkins/jenkins_slave:v1"
  ),
],
volumes: [
  hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
  hostPathVolume(mountPath: '/usr/bin/docker', hostPath: '/usr/bin/docker')
],
)
{
  node("jenkins-slave"){
    stage('Checkout'){
    }
  }
}
The label at the bottom must match the label at the top so that the steps below run on that slave
Save the username and password (used to log into Harbor) as a Jenkins credential
7. Continuous Deployment to Kubernetes with Jenkins
❖ Jenkins plugins used
• Kubernetes
• Pipeline
• Kubernetes Continuous Deploy
• Git
• Optionally, the Blue Ocean UI
❖ Characteristics of the CI/CD environment
• Elastic scaling of slaves
• Image-isolated build environments
• Pipeline releases, easy to maintain
❖ Jenkins parameterized builds can help you handle CI/CD for more complex environments
Continuous deployment relies on the Kubernetes Continuous Deploy plugin,
so the cluster credentials must be handed to Jenkins to ensure it can connect to k8s
cat .kube/config
The file printed above is the credential Jenkins uses to connect to k8s
Create the image-pull credential:
kubectl create secret docker-registry registry-pull-secret --docker-username=admin --docker-server=https://qushuaibo.com --docker-password=123456
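For reference, a minimal deploy.yml sketch (assumed here, not taken from the project repo) showing where this secret and the pipeline's placeholders are consumed:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      imagePullSecrets:
      - name: $SECRET_NAME   # replaced by sed in the pipeline
      containers:
      - name: demo
        image: $IMAGE_NAME   # replaced by sed in the pipeline
        ports:
        - containerPort: 8080
```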
The Pipeline script is as follows
// Shared
def registry = "qushuaibo.com" // registry address
// Project
def project = "welcome" // project name in the registry
def app_name = "demo" // application name
def image_name = "${registry}/${project}/${app_name}:${Branch}-${BUILD_NUMBER}"
// Image name used for the k8s deployment; sed substitutes it into the YAML
// Branch is a parameter we must define via a parameterized build
// BUILD_NUMBER is a Jenkins built-in variable
def git_address = "[email protected]:/home/git/java-demo.git"
// location of the git repository
// Credentials
def secret_name = "registry-pull-secret" // secret k8s uses to pull the application image
def docker_registry_auth = "c87990b8-0802-4ab7-9d6c-dfe48e581ff2"
// supplies the Harbor username and password variables, and keeps the private registry account out of the script
def git_auth = "dcfb69a8-8a74-47c6-ad09-1780d91802c9"
// credential for the git repository; defining it up here lets it be reused across repositories
def k8s_auth = "da3ad51c-69aa-4cc3-b938-744b9f622cda"
// credential Jenkins needs to connect to k8s
podTemplate(label: 'jenkins-slave', cloud: 'kubernetes', containers: [
  containerTemplate(
    name: 'jnlp',
    image: "${registry}/jenkins/jenkins_slave:v1"
  ),
],
volumes: [
  hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
  hostPathVolume(mountPath: '/usr/bin/docker', hostPath: '/usr/bin/docker'),
  hostPathVolume(mountPath: '/root/.m2', hostPath: '/tmp/m2')
  // Mounting the host's docker socket and binary means docker need not be installed inside jenkins-slave
  // Mounting .m2 caches Maven dependencies in /tmp/m2 on the host for reuse; shared storage would let several nodes share them
],
imagePullSecrets: ['registry-pull-secret'], // needed for a private registry; a public one does not require it
)
{
  node("jenkins-slave"){
    // "jenkins-slave" here must match the label above so that these steps run on that slave.
    // Build a different slave environment per project type; this is a Java project, so the slave image ships Maven and Java.
    // Step 1
    stage('Checkout'){
      checkout([$class: 'GitSCM', branches: [[name: "${Branch}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${git_auth}", url: "${git_address}"]]])
    }
    // Step 2
    stage('Build'){
      sh "mvn clean package -Dmaven.test.skip=true"
    }
    // Step 3
    stage('Build image'){
      withCredentials([usernamePassword(credentialsId: "${docker_registry_auth}", passwordVariable: 'password', usernameVariable: 'username')]) {
        sh """
        echo '
        FROM lizhenliang/tomcat
        RUN rm -rf /usr/local/tomcat/webapps/*
        ADD target/*.war /usr/local/tomcat/webapps/ROOT.war
        ' > Dockerfile
        docker build -t ${image_name} .
        docker login -u ${username} -p '${password}' https://${registry}
        docker push ${image_name}
        """
      }
    }
    // Step 4
    stage('Deploy to K8S'){
      sh """
      sed -i 's#\$IMAGE_NAME#${image_name}#' deploy.yml
      sed -i 's#\$SECRET_NAME#${secret_name}#' deploy.yml
      """
      kubernetesDeploy configs: 'deploy.yml', kubeconfigId: "${k8s_auth}"
    }
  }
}
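The two sed substitutions in the final stage can be exercised on their own. This standalone sketch fabricates a tiny deploy.yml with the placeholders (the file path and values are illustrative):

```shell
# Stand-in deploy.yml containing the two placeholders the pipeline expects
printf 'image: $IMAGE_NAME\nimagePullSecrets: $SECRET_NAME\n' > /tmp/deploy-demo.yml
# Same substitutions the pipeline runs before kubernetesDeploy
# (inside the Groovy sh block the $ is escaped as \$ only to stop Groovy interpolation)
sed -i 's#$IMAGE_NAME#qushuaibo.com/welcome/demo:master-1#' /tmp/deploy-demo.yml
sed -i 's#$SECRET_NAME#registry-pull-secret#' /tmp/deploy-demo.yml
cat /tmp/deploy-demo.yml
# prints:
# image: qushuaibo.com/welcome/demo:master-1
# imagePullSecrets: registry-pull-secret
```

Using # as the sed delimiter avoids escaping the slashes in the image name, which is why the pipeline does the same.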