This example uses RHEL 7.2 as the host operating system, with the Kubernetes packages shipped in RHEL. Cross-host container networking is provided by flannel.
The architecture is as follows:
The detailed steps are as follows:
I. Install and configure kube-master
1. Install kubernetes and etcd
# yum -y install kubernetes etcd
2. Edit /etc/etcd/etcd.conf so that etcd listens on all IP addresses. The settings to change are:
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
3. Configure kube-apiserver. Edit /etc/kubernetes/apiserver; the settings to change are:
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
Note: if no service account is configured, remove ServiceAccount from KUBE_ADMISSION_CONTROL.
4. Start etcd, kube-apiserver, kube-controller-manager and kube-scheduler:
# for SERVICE in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICE
systemctl enable $SERVICE
done
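Once the services are up, the control plane can be sanity-checked from the master. A quick sketch, assuming the default insecure API port 8080 on localhost:

```shell
# The kube-apiserver of this generation serves a health endpoint
# on its insecure port; "ok" indicates it is up.
curl http://localhost:8080/healthz

# Confirm all four control-plane services are active.
for SERVICE in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl is-active $SERVICE
done
```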
5. Configure the flannel network in etcd:
# etcdctl mk /flannel/network/config '{"Network":"172.17.0.0/16"}'
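To confirm the flannel configuration actually landed in etcd, it can be read back (a quick check, assuming etcd is reachable on localhost):

```shell
# Read back the flannel network configuration just written;
# it should print {"Network":"172.17.0.0/16"}.
etcdctl get /flannel/network/config
```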
II. Install and configure the minions
1. Configuration common to minion1 and minion2:
1.1) Install flannel and kubernetes
# yum -y install flannel kubernetes
1.2) Point flannel at the etcd service. Edit /etc/sysconfig/flanneld and change the following:
FLANNEL_ETCD="http://kube-master:2379"
FLANNEL_ETCD_KEY="/flannel/network"
1.3) Edit /etc/kubernetes/config and change the following:
KUBE_MASTER="--master=http://kube-master:8080"
2. Configuration on minion1 only
Edit /etc/kubernetes/kubelet and change the following:
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname_override=minion1"
KUBELET_API_SERVER="--api_servers=http://kube-master:8080"
3. Configuration on minion2 only
Edit /etc/kubernetes/kubelet and change the following:
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname_override=minion2"
KUBELET_API_SERVER="--api_servers=http://kube-master:8080"
4. Start flanneld, kube-proxy, kubelet and docker on both minion1 and minion2:
# for SERVICE in flanneld kube-proxy kubelet docker; do
systemctl restart $SERVICE
systemctl enable $SERVICE
systemctl status $SERVICE
done
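After flanneld comes up, each minion leases a subnet out of 172.17.0.0/16, and docker0 should fall inside that subnet (docker must start after flanneld for this to happen). A quick per-node check, assuming the RHEL flannel package's default subnet file location:

```shell
# flannel records its leased subnet here; the docker service
# reads it so that docker0 uses the flannel-assigned range.
cat /run/flannel/subnet.env

# The flannel0 tunnel interface and the docker0 bridge should
# both carry addresses inside 172.17.0.0/16.
ip -4 addr show flannel0
ip -4 addr show docker0
```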
III. Create pods
1. On kube-master you can see that both nodes have been registered and are in Ready status:
[root@kube-master ~]# kubectl get nodes
NAME LABELS STATUS
minion1 kubernetes.io/hostname=minion1 Ready
minion2 kubernetes.io/hostname=minion2 Ready
2. On kube-master, create a pod definition file in YAML format named cirros.yaml, with the following content:
[root@kube-master ~]# cat cirros.yaml
apiVersion: v1
kind: Pod
metadata:
  name: cirros
spec:                    # specification of the pod's contents
  restartPolicy: Never
  containers:
  - name: cirros
    image: "cirros"
    command: ["ping", "www.google.com"]
3. Create the pod with the command below. During creation the pod is scheduled onto one of the nodes, and the required docker image is pulled from a registry, by default the docker hub at docker.io.
[root@kube-master ~]# kubectl create -f cirros.yaml
Check the status of the pod: it is already Running and has been scheduled to the minion2 node.
[root@kube-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE NODE
cirros 1/1 Running 0 4m minion2
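The same information, plus the container output, can also be obtained from the master without logging in to the minion. A sketch, using the pod name from cirros.yaml above:

```shell
# Show scheduling events, node assignment and container status.
kubectl describe pod cirros

# Fetch the ping output directly through the API server.
kubectl logs cirros
```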
==> Note: if the minion nodes cannot reach Google because of a firewall, the creation above will fail, since pod operations rely on the image "gcr.io/google_containers/pause" to create the infrastructure container. The workaround is to pull a pause image from docker.io and then re-tag it as gcr.io/google_containers/pause, as follows:
# docker pull docker.io/kubernetes/pause
# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
docker.io/kubernetes/pause latest 6c4579af347b 17 months ago 239.8 kB
# docker tag 6c4579af347b gcr.io/google_containers/pause:0.8.0
# docker tag 6c4579af347b gcr.io/google_containers/pause
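Before retrying the pod creation, a quick check that the re-tagged image is in place:

```shell
# Both gcr.io tags should show the same image ID (6c4579af347b)
# as the original docker.io/kubernetes/pause image.
docker images | grep pause
```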
On the minion2 node you can see that a new container is running, repeatedly executing the ping command defined in the pod file:
[root@minion2 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
73c70f63faf0 cirros "ping www.google.com" 12 seconds ago Up 11 seconds k8s_cirros.f11fd195_cirros_default_48387572-b114-11e5-89b5-001c42f9db16_dfc276ef
c911b2a51980 gcr.io/google_containers/pause:0.8.0 "/pause" 13 seconds ago Up 12 seconds k8s_POD.e4cc795_cirros_default_48387572-b114-11e5-89b5-001c42f9db16_285cc657
[root@minion2 ~]# docker logs -f 73c70f63faf0
PING www.google.com (74.125.203.106): 56 data bytes
64 bytes from 74.125.203.106: seq=0 ttl=127 time=166.895 ms
64 bytes from 74.125.203.106: seq=1 ttl=127 time=192.749 ms
64 bytes from 74.125.203.106: seq=2 ttl=127 time=175.124 ms
64 bytes from 74.125.203.106: seq=3 ttl=127 time=173.619 ms
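When you are done, the pod can be cleaned up from the master. Because restartPolicy is Never, deleting the pod is the way to stop the ping; a sketch:

```shell
# Remove the pod; its containers on minion2 are torn down with it.
kubectl delete pod cirros

# The pod should no longer appear in the list.
kubectl get pods
```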