Pod characteristics:
The smallest deployable unit in Kubernetes
A collection of one or more containers
Containers in a pod share a network namespace
A pod's lifecycle is short-lived
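Because the containers in a pod share one network namespace, they can reach each other over localhost. A minimal sketch of a two-container pod (names and images here are illustrative, not from the lab above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-net-demo
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
  - name: sidecar                  # shares the pod network: can reach "web" at http://localhost:80
    image: busybox
    command: [ "sh", "-c", "sleep 3600" ]
```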
1: Pod container categories
1.1 Infrastructure (infra/pause) container
Maintains the network namespace of the entire pod.
View the kubelet configuration that controls it:
[root@node01 ~]# cat /k8s/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.247.143 \
--kubeconfig=/k8s/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/cfg/bootstrap.kubeconfig \
--config=/k8s/cfg/kubelet.config \
--cert-dir=/k8s/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
Every time a pod is created, --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0 is used to start the infra container that backs that pod; it is transparent to the user.
[root@node01 ~]# docker ps | grep registry    # this is the infra container
56ad95a6c12c registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0 "/pause" 6 hours ago Up 6 hours k8s_POD_nginx-deployment-78cdb5b557-f2hx2_default_acbc8ca0-9289-11ea-a668-000c29db840b_0
[root@node01 ~]#
Once a node has joined the cluster, an infra container is created per pod; this container is used to manage the pod.
1.2 initContainers: init containers
These run to completion before the app containers start — previously all containers in a pod were started in parallel; init containers changed that.
1.3 containers: app (business) containers
These start in parallel.
Official docs: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
A pod can have multiple containers running applications, but it can also have one or more init containers, which run before the app containers are started.
Init containers are exactly like regular containers, except:
- Init containers always run to completion.
- Each init container must complete successfully before the next one starts.
If a pod's init container fails, Kubernetes repeatedly restarts the pod until the init container succeeds. However, if the pod's restartPolicy is Never, Kubernetes does not restart the pod.
To specify init containers for a pod, add the initContainers field to the pod spec, as an array of objects of type Container, alongside the app containers array. The status of the init containers is returned in the .status.initContainerStatuses field as an array of container statuses (analogous to the .status.containerStatuses field).
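Modeled on the official docs linked above, a sketch of a pod whose init container blocks the app container until a service becomes resolvable (the service name myservice is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  initContainers:                  # must complete before "myapp" is started
  - name: init-wait-service
    image: busybox
    command: [ "sh", "-c", "until nslookup myservice; do echo waiting; sleep 2; done" ]
  containers:
  - name: myapp
    image: nginx
```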
The kubelet component manages (1) the infra container and (2) the init containers.
Operations work mainly concerns the app containers.
The app containers in the YAML are the business containers.
There are two ways to create resources: apply and create.
apply subsumes create: it creates the resource if it is new, and can also update an existing resource in place.
2: Image pull policy (imagePullPolicy)
IfNotPresent: the image is pulled only when it is not already present on the host
Always: the image is pulled every time the pod is created
Never: the kubelet never pulls the image; it must already be present locally
(If imagePullPolicy is omitted, it defaults to Always for the :latest tag and to IfNotPresent for any other tag.)
Official docs: https://kubernetes.io/docs/concepts/containers/images
Example:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: private-image-test-1
spec:
  containers:
    - name: uses-private-image
      image: $PRIVATE_IMAGE_NAME
      imagePullPolicy: Always
      command: [ "echo", "SUCCESS" ]
EOF
2.1 Use kubectl edit to view a container's default image pull policy
Edit a resource from the default editor.

The edit command allows you to directly edit any API resource you can retrieve via the command line tools. It will open the editor defined by your KUBE_EDITOR or EDITOR environment variables, or fall back to 'vi' for Linux or 'notepad' for Windows. You can edit multiple objects, although changes are applied one at a time. The command accepts filenames as well as command line arguments, although the files you point to must be previously saved versions of resources.

Editing is done with the API version used to fetch the resource. To edit using a specific API version, fully-qualify the resource, version, and group.

The default format is YAML. To edit in JSON, specify "-o json".

The flag --windows-line-endings can be used to force Windows line endings, otherwise the default for your operating system will be used.

In the event an error occurs while updating, a temporary file will be created on disk that contains your unapplied changes. The most common error when updating a resource is another editor changing the resource on the server. When this occurs, you will have to apply your changes to the newer version of the resource, or update your temporary saved copy to include the latest resource version.

Examples:
# Edit the service named 'docker-registry':
kubectl edit svc/docker-registry
# Use an alternative editor
KUBE_EDITOR="nano" kubectl edit svc/docker-registry
# Edit the job 'myjob' in JSON using the v1 API format:
kubectl edit job.v1.batch/myjob -o json
# Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation:
kubectl edit deployment/mydeployment -o yaml --save-config

Options:
--allow-missing-template-keys=true: If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
-f, --filename=[]: Filename, directory, or URL to files to use to edit the resource
--include-uninitialized=false: If true, the kubectl command applies to uninitialized objects. If explicitly set to false, this flag overrides other flags that make the kubectl commands apply to uninitialized objects, e.g., "--all". Objects with empty metadata are regarded as initialized.
-o, --output='': Output format. One of: json|yaml|name|template|go-template|go-template-file|templatefile|jsonpath|jsonpath-file.
--output-patch=false: Output the patch if the resource is edited.
--record=false: Record current kubectl command in the resource annotation. If set to false, do not record the command. If set to true, record the command. If not set, default to updating the existing annotation value only if it already exists.
-R, --recursive=false: Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.
--save-config=false: If true, the configuration of the current object will be saved in its annotation. Otherwise, the annotation will be unchanged. This flag is useful when you want to perform kubectl apply on this object in the future.
--template='': Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].
--validate=true: If true, use a schema to validate the input before sending it
--windows-line-endings=false: Defaults to the platform native line endings.

Usage:
kubectl edit (RESOURCE/NAME | -f FILENAME) [options]
[root@master1 ~]# kubectl edit deploy/nginx-deployment    # edit opens the resource in an editor
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2020-05-10T06:44:19Z
  generation: 1
  labels:
    app: nginx
  name: nginx-deployment
  namespace: default
  resourceVersion: "520771"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/nginx-deployment
  uid: acb9ae71-9289-11ea-a668-000c29db840b
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.15.4
        imagePullPolicy: IfNotPresent    # default: pull only when the image is not present on the host
        name: nginx1
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always    # restart policy is Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 3
  conditions:
  - lastTransitionTime: 2020-05-10T06:44:21Z
    lastUpdateTime: 2020-05-10T06:44:21Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2020-05-10T06:44:19Z
    lastUpdateTime: 2020-05-10T06:44:21Z
    message: ReplicaSet "nginx-deployment-78cdb5b557" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 3
  replicas: 3
  updatedReplicas: 3
# :q to quit without changes
Edit cancelled, no changes made.
2.2 Write a YAML file to test
[root@master1 ~]# vim pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: nginx-03
    image: nginx
    imagePullPolicy: Always
    command: [ "echo", "SUCCESS" ]
[root@master1 ~]# kubectl create -f pod1.yaml
pod/pod1 created
[root@master1 ~]# kubectl get pods -w
NAME READY STATUS RESTARTS AGE
nginx-6c94d899fd-xsxct 1/1 Running 0 2d9h
nginx-deployment-78cdb5b557-6z2sf 1/1 Running 0 9h
nginx-deployment-78cdb5b557-9pdf8 1/1 Running 0 9h
nginx-deployment-78cdb5b557-f2hx2 1/1 Running 0 9h
pod1 0/1 ContainerCreating 0 10s
pod1 0/1 Completed 0 17s
pod1 0/1 Completed 1 33s
pod1 0/1 CrashLoopBackOff 1 34s
pod1 0/1 Completed 2 64s
pod1 0/1 CrashLoopBackOff 2 77s
pod1 0/1 Completed 3 104s
pod1 keeps cycling between Completed and CrashLoopBackOff restarts; the creation has effectively failed.
2.3 The failure is caused by a conflicting startup command
command: [ "echo","SUCCESS" ] overrides the image's default startup, so the container prints SUCCESS and exits immediately; with the default restartPolicy: Always the kubelet keeps restarting it, producing the CrashLoopBackOff loop. Remove the command line, and pin the image to a specific version while at it.
[root@master1 ~]# vim pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: nginx-03
    image: nginx:1.14
    imagePullPolicy: Always
#    command: [ "echo", "SUCCESS" ]
[root@master1 ~]# kubectl create -f pod1.yaml    # pod1 already exists; delete it first, then recreate
Error from server (AlreadyExists): error when creating "pod1.yaml": pods "pod1" already exists
[root@master1 ~]# kubectl delete pod/pod1
pod "pod1" deleted
[root@master1 ~]# kubectl create -f pod1.yaml
pod/pod1 created
[root@master1 ~]# kubectl get pods -w
NAME READY STATUS RESTARTS AGE
nginx-6c94d899fd-xsxct 1/1 Running 0 2d9h
nginx-deployment-78cdb5b557-6z2sf 1/1 Running 0 9h
nginx-deployment-78cdb5b557-9pdf8 1/1 Running 0 9h
nginx-deployment-78cdb5b557-f2hx2 1/1 Running 0 9h
pod1 0/1 ContainerCreating 0 3s
pod1 1/1 Running 0 20s
This time the pod runs successfully.
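An alternative fix, if the pod really is meant to run a one-shot command: keep the command but tell Kubernetes not to treat a clean exit as a crash. restartPolicy applies to all containers in the pod (a sketch, not the route taken above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  restartPolicy: Never             # a Completed pod is left alone instead of being restarted
  containers:
  - name: nginx-03
    image: nginx:1.14
    command: [ "echo", "SUCCESS" ]
```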
Note: resources can also be deleted by pointing at the YAML file that created them:
[root@master1 ~]# kubectl delete -f pod1.yaml
pod "pod1" deleted
Creation can likewise be done with apply:
[root@master1 ~]# kubectl delete -f pod1.yaml
pod "pod1" deleted
[root@master1 ~]# kubectl apply -f pod1.yaml
pod/pod1 created
[root@master1 ~]# kubectl get pods -w
NAME READY STATUS RESTARTS AGE
nginx-6c94d899fd-xsxct 1/1 Running 0 2d10h
nginx-deployment-78cdb5b557-6z2sf 1/1 Running 0 9h
nginx-deployment-78cdb5b557-9pdf8 1/1 Running 0 9h
nginx-deployment-78cdb5b557-f2hx2 1/1 Running 0 9h
pod1 0/1 ContainerCreating 0 3s
pod1 1/1 Running 0 11s
2.4 Check which node the pod was scheduled to
[root@master1 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
nginx-6c94d899fd-xsxct 1/1 Running 0 2d9h 172.17.42.6 192.168.247.144 <none>
nginx-deployment-78cdb5b557-6z2sf 1/1 Running 0 9h 172.17.42.3 192.168.247.144 <none>
nginx-deployment-78cdb5b557-9pdf8 1/1 Running 0 9h 172.17.42.4 192.168.247.144 <none>
nginx-deployment-78cdb5b557-f2hx2 1/1 Running 0 9h 172.17.45.3 192.168.247.143 <none>
pod1 1/1 Running 0 59s 172.17.45.4 192.168.247.143 <none>
2.5 Verify with curl from a node
[root@node01 ~]# curl -I 172.17.45.4
HTTP/1.1 200 OK
Server: nginx/1.14.2
Date: Sun, 10 May 2020 16:00:13 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 04 Dec 2018 14:44:49 GMT
Connection: keep-alive
ETag: "5c0692e1-264"
Accept-Ranges: bytes
Infra containers are requested from the apiserver after the kubelet starts, and are then created in a unified way.
3: Deploy a Harbor private registry and add it to the cluster
The detailed steps for building a Harbor registry are in an earlier document:
https://blog.csdn.net/Lfwthotpt/article/details/105729801
They are not repeated in detail here.
3.1 Base environment: the host needs python, docker, and docker-compose installed
[root@localhost ~]# hostnamectl set-hostname harbor
[root@localhost ~]# su
[root@harbor ~]# mkdir /abc
[root@harbor ~]# mount.cifs //192.168.0.88/linuxs /abc
Password for root@//192.168.0.88/linuxs:
[root@harbor ~]# cp /abc/docker-compose .
[root@harbor ~]# cp /abc/harbor-offline-installer-v1.2.2.tgz .
[root@harbor ~]# mv docker-compose /usr/bin/
[root@harbor ~]# docker-compose -v
docker-compose version 1.23.1, build b02f1306
[root@harbor ~]# setenforce 0
[root@harbor ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
[root@harbor ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@harbor ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@harbor ~]# yum install -y docker-ce
[root@harbor ~]# systemctl start docker
[root@harbor ~]# systemctl enable docker
[root@harbor docker]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://fk2yrsh1.mirror.aliyuncs.com"]
}
EOF
[root@harbor docker]# systemctl daemon-reload
[root@harbor docker]# echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
[root@harbor docker]# sysctl -p
net.ipv4.ip_forward = 1
[root@harbor docker]# systemctl restart network
[root@harbor docker]# systemctl restart docker
3.2 Deploy the Harbor service
Harbor is deployed as a set of docker containers, so it can run on any Linux distribution that supports docker.
First extract the package:
[root@harbor ~]# tar xf harbor-offline-installer-v1.2.2.tgz -C /usr/local/
[root@harbor docker]# cd /usr/local/harbor/
[root@harbor harbor]# ls
common docker-compose.yml harbor.v1.2.2.tar.gz NOTICE
docker-compose.clair.yml harbor_1_1_0_template install.sh prepare
docker-compose.notary.yml harbor.cfg LICENSE upgrade
[root@harbor harbor]# vim harbor.cfg
hostname = 192.168.247.147
[root@harbor harbor]# sh install.sh
Note: docker version: 19.03.8
Note: docker-compose version: 1.23.1
[Step 1]: loading Harbor images ...
[Step 2]: preparing environment ...
[Step 3]: checking existing instance of Harbor ...
[Step 4]: starting Harbor ...
✔ ----Harbor has been installed and started successfully.----
Now you should be able to visit the admin portal at http://192.168.247.147.
For more details, please visit https://github.com/vmware/harbor
The Harbor registry is now up and running.
3.3 Log in to the web UI
Default account: admin, password: Harbor12345
3.4 Create a project to hold the images dedicated to it
For example, call it gsydianshang.
The project contains no images yet.
4: Connect the docker daemons in the k8s cluster to Harbor
Configure the private registry on the nodes — remember to add the trailing comma on the preceding line
4.1 One node is shown as an example; do the same on the other nodes
[root@node01 ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://fk2yrsh1.mirror.aliyuncs.com"],
  "insecure-registries": ["192.168.247.147"]
}
[root@node01 ~]# systemctl restart docker
Normally, docker pull nginx pulls from the public Docker Hub registry by default.
docker pull 192.168.247.147/gsydianshang/nginx pulls from the gsydianshang project in the Harbor registry.
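The note about the trailing comma matters because dockerd refuses to start if daemon.json is not valid JSON. A quick sanity check, shown here against a scratch copy (the real file lives at /etc/docker/daemon.json):

```shell
# Write the expected daemon.json contents to a scratch file.
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://fk2yrsh1.mirror.aliyuncs.com"],
  "insecure-registries": ["192.168.247.147"]
}
EOF
# python3 exits non-zero on a syntax error (e.g. a missing comma between keys)
python3 -c 'import json; json.load(open("/tmp/daemon.json")); print("valid JSON")'
```

To check the live file, point the same one-liner at /etc/docker/daemon.json before running systemctl restart docker.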
4.2 While we are at it, look at the containers
[root@node01 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
59b1c2158e1a nginx "nginx -g 'daemon of…" 16 seconds ago Up 16 seconds k8s_nginx-03_pod1_default_15b4611a-92d7-11ea-a668-000c29db840b_1
05d9f33ab362 bc26f1ed35cf "nginx -g 'daemon of…" 19 seconds ago Up 19 seconds k8s_nginx1_nginx-deployment-78cdb5b557-f2hx2_default_acbc8ca0-9289-11ea-a668-000c29db840b_1
3faf494b46a0 registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0 "/pause" 20 seconds ago Up 19 seconds k8s_POD_pod1_default_15b4611a-92d7-11ea-a668-000c29db840b_1
4c89eb5f1dcb registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0 "/pause" 20 seconds ago Up 19 seconds k8s_POD_nginx-deployment-78cdb5b557-f2hx2_default_acbc8ca0-9289-11ea-a668-000c29db840b_1
0dbb8f579312 nginx "nginx -g 'daemon of…" 33 hours ago Exited (0) 32 seconds ago k8s_nginx-03_pod1_default_15b4611a-92d7-11ea-a668-000c29db840b_0
f8261311ef62 registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0 "/pause" 33 hours ago Exited (0) 32 seconds ago k8s_POD_pod1_default_15b4611a-92d7-11ea-a668-000c29db840b_0
f36bb109b1df bc26f1ed35cf "nginx -g 'daemon of…" 42 hours ago Exited (0) 32 seconds ago k8s_nginx1_nginx-deployment-78cdb5b557-f2hx2_default_acbc8ca0-9289-11ea-a668-000c29db840b_0
56ad95a6c12c registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0 "/pause" 42 hours ago Exited (0) 32 seconds ago k8s_POD_nginx-deployment-78cdb5b557-f2hx2_default_acbc8ca0-9289-11ea-a668-000c29db840b_0
39f034a2f24e centos:7 "/bin/bash" 12 days ago Exited (137) 22 seconds ago beautiful_jennings
Four app containers exited cleanly because of the docker restart, but four new "Up" containers appeared: to keep pods running, k8s automatically recreates containers according to the replica set.
So restarting docker does not take the workloads down — k8s restarts them automatically.
5: Push images to Harbor
Note: when creating resources from images in Harbor, the nodes must be logged in to Harbor.
5.1 Log in on both nodes
[root@node01 ~]# docker login 192.168.247.147
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
5.2 First pull a tomcat image from the public registry for testing
[root@node01 ~]# docker pull tomcat
Using default tag: latest
Digest: sha256:cae591b6f798359b0ba2bdd9cc248e695ac6e14d20722c5ff82a9a138719896f
Status: Downloaded newer image for tomcat:latest
docker.io/library/tomcat:latest
[root@node01 ~]# docker images | grep tomcat
tomcat latest 927899a31456 2 weeks ago 647MB
5.3 Tag the image for the registry and push it
[root@node01 ~]# docker tag tomcat 192.168.247.147/gsydianshang/tomcat
[root@node01 ~]# docker push 192.168.247.147/gsydianshang/tomcat
The push refers to repository [192.168.247.147/gsydianshang/tomcat]
5.4 Refresh the web UI to check
The upload succeeded.
5.5 Check the local images
[root@node01 ~]# docker images | grep tomcat
192.168.247.147/gsydianshang/tomcat latest 927899a31456 2 weeks ago 647MB
tomcat latest 927899a31456 2 weeks ago 647MB
5.6 Delete the locally tagged copy first, then test downloading it from Harbor
[root@node01 ~]# docker rmi 192.168.247.147/gsydianshang/tomcat:latest
Untagged: 192.168.247.147/gsydianshang/tomcat:latest
Untagged: 192.168.247.147/gsydianshang/tomcat@sha256:8672b0039fe1f37d3d35c11f65aefad5388fd46e260980b95304605397bb4942
[root@node01 ~]# docker images | grep tomcat
tomcat latest 927899a31456 2 weeks ago 647MB
5.7 The image pull failed with access denied, which looked like a login problem
The registry credentials appeared to be missing
[root@node01 ~]# docker pull 192.168.247.147/gsydiansahng/tomcat
Using default tag: latest
Error response from daemon: pull access denied for 192.168.247.147/gsydiansahng/tomcat, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
# After logging in again, it turned out the project name was misspelled (gsydiansahng instead of gsydianshang)
[root@node01 ~]# docker pull 192.168.247.147/gsydianshang/tomcat:latest
latest: Pulling from gsydianshang/tomcat
Digest: sha256:8672b0039fe1f37d3d35c11f65aefad5388fd46e260980b95304605397bb4942
Status: Downloaded newer image for 192.168.247.147/gsydianshang/tomcat:latest
192.168.247.147/gsydianshang/tomcat:latest
Check:
[root@node01 ~]# docker images | grep tomcat
tomcat latest 927899a31456 2 weeks ago 647MB
192.168.247.147/gsydianshang/tomcat latest 927899a31456 2 weeks ago 647MB
The download count on the Harbor web UI is now 1.
6: That was a plain docker pull; next, test pulling by writing a k8s YAML file
6.1 First test the usual kubectl run
[root@master1 ~]# kubectl run tomcat --image=tomcat
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/tomcat created
[root@master1 ~]# kubectl get pods | grep tomcat
tomcat-7c67d9584b-h5gzj 1/1 Running 0 22s
[root@master1 ~]# kubectl get pods/tomcat-7c67d9584b-h5gzj --export -o yaml    # only the image-related fields are shown below
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: tomcat
spec:
  containers:
  - image: tomcat    # plain "tomcat" resolves to docker.io/tomcat; with policy Always it is re-pulled on every start
    imagePullPolicy: Always
    name: tomcat
    resources: {}
[root@master1 ~]# kubectl describe pod tomcat-7c67d9584b-h5gzj
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m12s default-scheduler Successfully assigned default/tomcat-7c67d9584b-2jlc2 to 192.168.247.143
Normal Pulling 2m11s kubelet, 192.168.247.143 pulling image "tomcat"
Normal Pulled 2m7s kubelet, 192.168.247.143 Successfully pulled image "tomcat"
Normal Created 2m7s kubelet, 192.168.247.143 Created container
Normal Started 2m7s kubelet, 192.168.247.143 Started container
[root@master1 ~]# kubectl delete deploy tomcat
deployment.extensions "tomcat" deleted
docker.io means the image is downloaded from the official Docker registry,
or from the registry mirror configured earlier to point at Aliyun.
6.2 Alternatively, write a YAML file by hand to test
[root@master1 ~]# vim tomcat-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-tomcat
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my-tomcat
    spec:
      containers:
      - name: my-tomcat
        image: docker.io/tomcat:8.0.52
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-tomcat
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 31111
  selector:
    app: my-tomcat
[root@master1 ~]# ls
anaconda-ks.cfg initial-setup-ks.cfg pod1.yaml 下载 图片 桌面 视频
gsy k8s tomcat-deployment.yaml 公共 文档 模板 音乐
[root@master1 ~]# kubectl create -f tomcat-deployment.yaml
deployment.extensions/my-tomcat created
service/my-tomcat created
6.3 Check whether the tomcat pods were created
[root@master1 ~]# kubectl get pods | grep tomcat
my-tomcat-57667b9d9-hcshh 0/1 ContainerCreating 0 51s
my-tomcat-57667b9d9-k8tj2 0/1 ContainerCreating 0 51s
tomcat-7c67d9584b-2jlc2 1/1 Running 0 12m
6.4 View the pod's detailed information
[root@master1 ~]# kubectl describe pod my-tomcat-57667b9d9-hcshh
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m6s default-scheduler Successfully assigned default/my-tomcat-57667b9d9-hcshh to 192.168.247.143
Normal Pulling 2m6s kubelet, 192.168.247.143 pulling image "docker.io/tomcat:8.0.52"
Normal Pulled 82s kubelet, 192.168.247.143 Successfully pulled image "docker.io/tomcat:8.0.52"
Normal Created 71s kubelet, 192.168.247.143 Created container
Normal Started 71s kubelet, 192.168.247.143 Started container
The image is pulled from docker.io.
A pod resource has no restart operation of its own; it can only be deleted and recreated.
7: Add the Harbor credentials to the YAML file
7.1 View the Harbor login credentials
On a node:
base64: Base64-encodes the input
-w 0: prints the output on a single line without wrapping
[root@node01 ~]# ls -a
. .bash_logout .config .esd_auth .local .viminfo 文档 音乐
.. .bash_profile .cshrc .ICEauthority .pki 下载 桌面
anaconda-ks.cfg .bashrc .dbus initial-setup-ks.cfg .ssh 公共 模板
.bash_history .cache .docker k8s .tcshrc 图片 视频
[root@node01 ~]# cat .docker/config.json | base64 -w 0
ewoJImF1dGhzIjogewoJCSIxOTIuMTY4LjI0Ny4xNDciOiB7CgkJCSJhdXRoIjogIllXUnRhVzQ2U0dGeVltOXlNVEl6TkRVPSIKCQl9Cgl9LAoJIkh0dHBIZWFkZXJzIjogewoJCSJVc2VyLUFnZW50IjogIkRvY2tlci1DbGllbnQvMTkuMDMuOCAobGludXgpIgoJfQp9
The encoded string is the same on both nodes, because both logged in as admin.
With this credential string for the Harbor registry, a YAML file that pulls images from Harbor can be written.
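The blob above is just a Base64 encoding of ~/.docker/config.json. A minimal sketch of how the value inside it is built (credentials are the lab defaults from section 3.3):

```shell
# The inner "auth" field is base64("username:password"); the outer blob is the
# base64 of the whole config.json. Reproduce the inner value by hand:
AUTH=$(printf 'admin:Harbor12345' | base64)
echo "$AUTH"    # YWRtaW46SGFyYm9yMTIzNDU=

# Assemble a matching minimal config.json and encode it without line wrapping
# (-w 0 is a GNU coreutils option; not available on all platforms):
printf '{"auths":{"192.168.247.147":{"auth":"%s"}}}' "$AUTH" | base64 -w 0
```

As a side note, newer kubectl versions can also generate the Secret used in 7.2 directly with `kubectl create secret docker-registry`, which avoids handling the Base64 by hand.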
7.2 First create a Secret for authenticated access to Harbor
[root@master1 ~]# vim registry-pull-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: registry-pull-secret
data:
  .dockerconfigjson: ewoJImF1dGhzIjogewoJCSIxOTIuMTY4LjI0Ny4xNDciOiB7CgkJCSJhdXRoIjogIllXUnRhVzQ2U0dGeVltOXlNVEl6TkRVPSIKCQl9Cgl9LAoJIkh0dHBIZWFkZXJzIjogewoJCSJVc2VyLUFnZW50IjogIkRvY2tlci1DbGllbnQvMTkuMDMuOCAobGludXgpIgoJfQp9
type: kubernetes.io/dockerconfigjson
[root@master1 ~]# kubectl create -f registry-pull-secret.yaml
secret/registry-pull-secret created
[root@master1 ~]# kubectl get secret
NAME TYPE DATA AGE
default-token-qm9rm kubernetes.io/service-account-token 3 11d
registry-pull-secret kubernetes.io/dockerconfigjson 1 24s
[root@master1 ~]# kubectl get secret -n kube-system
NAME TYPE DATA AGE
dashboard-admin-token-dmlzw kubernetes.io/service-account-token 3 3d22h
default-token-w9vck kubernetes.io/service-account-token 3 11d
kubernetes-dashboard-certs Opaque 11 3d22h
kubernetes-dashboard-key-holder Opaque 2 3d23h
kubernetes-dashboard-token-7dhnw kubernetes.io/service-account-token 3 3d23h
[root@master1 ~]# kubectl get secret -n kube-public
NAME TYPE DATA AGE
default-token-k8kx8 kubernetes.io/service-account-token 3 11d
7.3 To keep the test environment clean, first delete the local tomcat images
[root@node02 ~]# docker images | grep tomcat
tomcat latest 927899a31456 2 weeks ago 647MB
tomcat 8.0.52 b4b762737ed4 22 months ago 356MB
# One was created via kubectl, the other pulled manually with docker; delete both, and check on both nodes
7.4 Before deleting an image, check whether any resources created from it are still running
[root@node02 ~]# docker rmi tomcat:latest
Error response from daemon: conflict: unable to remove repository reference "tomcat:latest" (must force) - container 07d278ce7a99 is using its referenced image 927899a31456
[root@node02 ~]# docker rmi tomcat:latest -f
Untagged: tomcat:latest
Untagged: tomcat@sha256:cae591b6f798359b0ba2bdd9cc248e695ac6e14d20722c5ff82a9a138719896f
[root@node02 ~]# docker rmi tomcat:8.0.52
Error response from daemon: conflict: unable to remove repository reference "tomcat:8.0.52" (must force) - container 98da0f346725 is using its referenced image b4b762737ed4
[root@node02 ~]# docker rmi tomcat:8.0.52 -f
Untagged: tomcat:8.0.52
Untagged: tomcat@sha256:32d451f50c0f9e46011091adb3a726e24512002df66aaeecc3c3fd4ba6981bd4
[root@node02 ~]# docker images | grep tomcat
[root@node02 ~]#
[root@node01 ~]# docker images | grep tomcat
192.168.247.147/gsydianshang/tomcat latest 927899a31456 2 weeks ago 647MB
[root@node01 ~]# docker rmi 192.168.247.147/gsydianshang/tomcat:latest
[root@node01 ~]# docker images | grep tomcat
[root@node01 ~]#
Resources are still running from these images, so the resources have to be deleted first.
7.5 Forcing the deletion anyway leaves <none> images behind
[root@node02 ~]# docker images | grep none
<none> <none> 927899a31456 2 weeks ago 647MB
<none> <none> b4b762737ed4 22 months ago 356MB
[root@node02 ~]# docker rmi b4b762737ed4 -f
Error response from daemon: conflict: unable to delete b4b762737ed4 (cannot be forced) - image is being used by running container 98da0f346725
[root@node02 ~]# docker ps -a | grep 98da0f346725
98da0f346725 b4b762737ed4 "catalina.sh run" 35 minutes ago Up 35 minutes k8s_my-tomcat_my-tomcat-57667b9d9-k8tj2_default_cf7c493c-93ee-11ea-a3ae-000c29a14bd3_0
7.6 Delete the my-tomcat resources in kube, and the tomcat deployment as well
[root@master1 ~]# kubectl get deploy
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
my-tomcat 2 2 2 2 37m
nginx 1 1 1 1 9d
nginx-deployment 3 3 3 3 43h
tomcat 1 1 1 1 25m
[root@master1 ~]# kubectl delete deploy/tomcat
deployment.extensions "tomcat" deleted
[root@master1 ~]# kubectl delete deploy/my-tomcat
deployment.extensions "my-tomcat" deleted
7.7 Now the <none> images delete successfully
[root@node02 ~]# docker rmi b4b762737ed4
[root@node02 ~]# docker rmi 927899a31456
7.8 With the Secret created, modify the original tomcat YAML
[root@master1 ~]# vim tomcat-deployment.yaml    # only the modified part is shown
    spec:
      imagePullSecrets:
      - name: registry-pull-secret    # must match the Secret name shown by kubectl get secret
      containers:
      - name: my-tomcat
        image: 192.168.247.147/gsydianshang/tomcat
        ports:
        - containerPort: 80
7.9 At this point the Harbor pull count is still 1
The old tomcat resources have been deleted:
[root@master1 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-6c94d899fd-xsxct 1/1 Running 1 3d20h
nginx-deployment-78cdb5b557-6z2sf 1/1 Running 1 43h
nginx-deployment-78cdb5b557-9pdf8 1/1 Running 1 43h
nginx-deployment-78cdb5b557-f2hx2 1/1 Running 1 43h
pod1 1/1 Running 1 34h
7.10 Create the resources
[root@master1 ~]# kubectl create -f tomcat-deployment.yaml
deployment.extensions/my-tomcat created
The Service "my-tomcat" is invalid: spec.ports[0].nodePort: Invalid value: 31111: provided port is already allocated
# The nodePort is reported as already allocated: only the my-tomcat Deployment was deleted in 7.6, while the my-tomcat Service (nodePort 31111) still exists, so the Deployment is created but the new Service is rejected
[root@master1 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 11d
my-tomcat NodePort 10.0.0.61 <none> 8080:31111/TCP 49m
nginx-service NodePort 10.0.0.131 <none> 80:37651/TCP 43h
Check the network:
[root@master1 ~]# kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
pod/my-tomcat-6cbc7c4d65-gpkxr 1/1 Running 0 2m29s 172.17.42.6 192.168.247.144 <none>
pod/my-tomcat-6cbc7c4d65-hdmhc 1/1 Running 0 2m29s 172.17.45.4 192.168.247.143 <none>
pod/nginx-6c94d899fd-xsxct 1/1 Running 1 3d20h 172.17.42.3 192.168.247.144 <none>
pod/nginx-deployment-78cdb5b557-6z2sf 1/1 Running 1 43h 172.17.42.2 192.168.247.144 <none>
pod/nginx-deployment-78cdb5b557-9pdf8 1/1 Running 1 43h 172.17.42.4 192.168.247.144 <none>
pod/nginx-deployment-78cdb5b557-f2hx2 1/1 Running 1 43h 172.17.45.2 192.168.247.143 <none>
pod/pod1 1/1 Running 1 34h 172.17.45.3 192.168.247.143 <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 11d <none>
service/my-tomcat NodePort 10.0.0.61 <none> 8080:31111/TCP 50m app=my-tomcat
service/nginx-service NodePort 10.0.0.131 <none> 80:37651/TCP 43h app=nginx
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/my-tomcat 2 2 2 2 2m29s my-tomcat 192.168.247.147/gsydianshang/tomcat app=my-tomcat
deployment.apps/nginx 1 1 1 1 9d nginx nginx:1.14 run=nginx
deployment.apps/nginx-deployment 3 3 3 3 43h nginx1 nginx:1.15.4 app=nginx
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/my-tomcat-6cbc7c4d65 2 2 2 2m29s my-tomcat 192.168.247.147/gsydianshang/tomcat app=my-tomcat,pod-template-hash=6cbc7c4d65
replicaset.apps/nginx-6c94d899fd 1 1 1 3d20h nginx nginx:1.14 pod-template-hash=6c94d899fd,run=nginx
replicaset.apps/nginx-dbddb74b8 0 0 0 9d nginx nginx pod-template-hash=dbddb74b8,run=nginx
replicaset.apps/nginx-deployment-78cdb5b557 3 3 3 43h nginx1 nginx:1.15.4 app=nginx,pod-template-hash=78cdb5b557
7.11 View the pod description
[root@master1 ~]# kubectl describe pod my-tomcat-6cbc7c4d65-gpkxr
Containers:
my-tomcat:
Container ID: docker://3b25590d6736ebc5322f1bc3e8b750057af4bd9e17566660e7b5ef8d79dd1565
Image: 192.168.247.147/gsydianshang/tomcat
Image ID: docker-pullable://192.168.247.147/gsydianshang/tomcat@sha256:8672b0039fe1f37d3d35c11f65aefad5388fd46e260980b95304605397bb4942
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m27s default-scheduler Successfully assigned default/my-tomcat-6cbc7c4d65-gpkxr to 192.168.247.144
Normal Pulling 4m26s kubelet, 192.168.247.144 pulling image "192.168.247.147/gsydianshang/tomcat"
Normal Pulled 3m38s kubelet, 192.168.247.144 Successfully pulled image "192.168.247.147/gsydianshang/tomcat"
Normal Created 3m38s kubelet, 192.168.247.144 Created container
Normal Started 3m37s kubelet, 192.168.247.144 Started container
The image was pulled from Harbor.
Check the image download count on the Harbor web UI.
After a refresh, the count has increased.