A Hands-On Introduction to Deploying Kubernetes

Lab environment

k8s-master   192.168.2.156

k8s-node   192.168.2.161

Note: the two machines must have their clocks synchronized, firewalld stopped, and SELinux disabled; both run CentOS 7.5.
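The post states these requirements but does not show the commands; on both machines they would typically be something like the following (ntp.aliyun.com is just an example NTP server):

systemctl stop firewalld && systemctl disable firewalld                # stop the firewall now and on boot
setenforce 0                                                           # turn SELinux off for this boot
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config    # keep SELinux off after reboot
yum install -y ntpdate && ntpdate ntp.aliyun.com                       # one-shot clock sync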

[k8s-master deployment]

[root@k8s-master ~]# yum install kubernetes-master etcd flannel -y  

[root@k8s-master ~]# cp /etc/etcd/etcd.conf /etc/etcd/etcd.conf.back
[root@k8s-master ~]# egrep -v "#|^$" /etc/etcd/etcd.conf.back > /etc/etcd/etcd.conf

[root@k8s-master ~]# cat /etc/etcd/etcd.conf

ETCD_DATA_DIR="/data/etcd/"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_NAME="default"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.2.156:2379"
[root@k8s-master ~]# mkdir -p /data/etcd
[root@k8s-master ~]# chmod 757 -R /data/etcd/

[root@k8s-master ~]# systemctl restart etcd
[root@k8s-master ~]# systemctl enable etcd

Configure the apiserver and generate the service-account key

[root@k8s-master ~]# openssl genrsa -out /etc/kubernetes/serviceaccount.key 2048

[root@k8s-master ~]# sed -i  's#KUBE_API_ARGS=""#KUBE_API_ARGS="--service_account_key_file=/etc/kubernetes/serviceaccount.key"#g' /etc/kubernetes/apiserver

[root@k8s-master ~]# sed -i 's#KUBE_CONTROLLER_MANAGER_ARGS=""#KUBE_CONTROLLER_MANAGER_ARGS="--service_account_private_key_file=/etc/kubernetes/serviceaccount.key"#g' /etc/kubernetes/controller-manager

[root@k8s-master ~]# egrep -v "#|^$" /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.2.156:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_API_ARGS="--service_account_key_file=/etc/kubernetes/serviceaccount.key"

[root@k8s-master ~]# vim /etc/kubernetes/config
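The original shows this edit only as a screenshot. With the CentOS kubernetes package the file typically looks like the following; the line that matters here is KUBE_MASTER pointing at the local apiserver (a sketch, not the author's exact file):

KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.2.156:8080"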

 

[root@k8s-master ~]# systemctl restart kube-apiserver
[root@k8s-master ~]# systemctl enable kube-apiserver
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@k8s-master ~]# systemctl restart kube-controller-manager
[root@k8s-master ~]# systemctl enable kube-controller-manager
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@k8s-master ~]# systemctl restart kube-scheduler

[root@k8s-master ~]# systemctl enable kube-scheduler
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.

[root@k8s-master ~]# iptables -P FORWARD ACCEPT

【k8s-node configuration】

[root@k8s-node1 ~]# yum install -y kubernetes-node flannel docker *rhsm*

[root@k8s-node1 ~]# vim /etc/kubernetes/config
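As on the master, this edit appears only as a screenshot; on the node the key change is pointing KUBE_MASTER at the master's insecure port (a sketch of typical contents):

KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.2.156:8080"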

[root@k8s-node1 ~]# vim /etc/kubernetes/kubelet
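The kubelet file is also shown only as a screenshot; with the CentOS kubernetes-node package it would typically be edited roughly as follows (the hostname-override and api-servers values are assumed from the IPs above):

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=192.168.2.161"
KUBELET_API_SERVER="--api-servers=http://192.168.2.156:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""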

 

[root@k8s-node1 ~]# systemctl restart kube-proxy
[root@k8s-node1 ~]# systemctl restart kubelet

[root@k8s-node1 ~]# systemctl restart docker

[root@k8s-node1 ~]# iptables -P FORWARD ACCEPT

【Configuring the flanneld network】

Configure the flanneld file; for the etcd address, simply use the master's IP on both machines.
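The file itself is not shown in the post; with the CentOS flannel package it is /etc/sysconfig/flanneld, and on both machines it would typically be edited to point at etcd on the master (a sketch, values assumed from the IPs above):

FLANNEL_ETCD_ENDPOINTS="http://192.168.2.156:2379"
FLANNEL_ETCD_PREFIX="/atomic.io/network"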

[root@k8s-master ~]# etcdctl member list
8e9e05c52164694d: name=default peerURLs=http://localhost:2380 clientURLs=http://192.168.2.156:2379 isLeader=true
[root@k8s-master ~]# etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'
{"Network":"172.17.0.0/16"}
[root@k8s-master ~]# etcdctl get /atomic.io/network/config
{"Network":"172.17.0.0/16"}

Restart the flanneld service on each machine:

# systemctl restart flanneld

Verify cross-host connectivity with ping.
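The result appears as a screenshot in the original; a typical check looks like this (interface names assume flannel's default udp backend, and the 172.17.x.x address is whatever docker0 was assigned on the other host):

[root@k8s-master ~]# ip addr show flannel0              # flannel overlay interface
[root@k8s-node1 ~]# ip addr show docker0                # note the 172.17.x.x address
[root@k8s-master ~]# ping -c 3 <node's docker0 IP>      # should succeed across hosts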

 

【k8s Dashboard UI】

Kubernetes' core job is the unified management and scheduling of a cluster of Docker containers. The cluster and its nodes are usually driven from the command line, but a UI can also be used for visual operation; for that, two base images are needed on the node.

[root@k8s-node1 ~]# docker load < pod-infrastructure.tgz
[root@k8s-node1 ~]# docker tag $(docker images|grep none|awk '{print $3}') registry.access.redhat.com/rhel7/pod-infrastructure

[root@k8s-node1 ~]# docker load < kubernetes-dashboard-amd64.tgz
[root@k8s-node1 ~]# docker tag $(docker images|grep none|awk '{print $3}') bestwu/kubernetes-dashboard-amd64:v1.6.3

On the master, create two YAML files, dashboard-controller.yaml and dashboard-service.yaml, which drive the containers on the node.

[root@k8s-master ~]# cat >dashboard-controller.yaml<<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
      containers:
      - name: kubernetes-dashboard
        image: bestwu/kubernetes-dashboard-amd64:v1.6.3
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9090
        args:
        - --apiserver-host=http://192.168.2.156:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
EOF

[root@k8s-master ~]# cat dashboard-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090

[root@k8s-master ~]# kubectl apply -f dashboard-controller.yaml
[root@k8s-master ~]# kubectl apply -f dashboard-service.yaml

The most direct check is to go to the node and confirm that the two Docker containers have started.
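For example (a sketch; exact container names depend on the images loaded earlier):

[root@k8s-node1 ~]# docker ps | grep -E "dashboard|pod-infrastructure"
[root@k8s-master ~]# kubectl get pods --namespace=kube-system -o wide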

Then simply open http://192.168.2.156:8080/ui in a browser.

 
