Manually Installing Kubernetes on Ubuntu

Background

  Two Ubuntu 16.04 servers, with IPs 192.168.56.160 and 192.168.56.161.
  Kubernetes version: 1.5.5
  Docker version: 1.12.6
  etcd version: 2.2.1
  flannel version: 0.5.6
  The 160 server serves both as the Kubernetes master and as a node; the 161 server is a node only.
  The master node runs: kube-apiserver, kube-controller-manager, kube-scheduler, and the etcd service.
  Each node runs: kubelet, kube-proxy, docker, and flannel.

Downloads

Downloading Kubernetes

  Client binaries: https://dl.k8s.io/v1.5.5/kubernetes-client-linux-amd64.tar.gz
  Server binaries: https://dl.k8s.io/v1.5.5/kubernetes-server-linux-amd64.tar.gz
  My servers are linux/amd64; builds for other platforms are available on the download page.
  From the extracted kubernetes directory, copy the executables under the server and client bin directories (kube-apiserver, kube-controller-manager, kubectl, kubelet, kube-proxy, kube-scheduler, and so on) into /usr/bin/.
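The copy step can be scripted. Below is a minimal sketch; install_kube_bins and the demo paths are illustrative helpers, not part of the Kubernetes distribution (the real tarballs unpack to kubernetes/server/bin and kubernetes/client/bin).

```shell
#!/bin/sh
# Illustrative helper: copy the Kubernetes executables from a source
# directory (e.g. kubernetes/server/bin) into a destination (e.g. /usr/bin).
install_kube_bins() {
    src=$1
    dst=$2
    for b in kube-apiserver kube-controller-manager kube-scheduler \
             kubelet kube-proxy kubectl; do
        if [ -f "$src/$b" ]; then
            cp "$src/$b" "$dst/"
            chmod +x "$dst/$b"
        else
            echo "warning: $b not found in $src" >&2
        fi
    done
}

# Demo against scratch directories; on a real host you would instead run
# something like: sudo sh -c 'install_kube_bins kubernetes/server/bin /usr/bin'
src=$(mktemp -d)
dst=$(mktemp -d)
touch "$src/kubectl" "$src/kubelet"
install_kube_bins "$src" "$dst"
ls "$dst"
```

The warnings make it obvious when a tarball was only partially extracted, which is an easy mistake to make with two separate archives.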

Downloading etcd

  etcd's GitHub release downloads are hosted on AWS S3, which my network could not reach (or reached only very slowly), so I used a domestic download mirror instead.
  Alternatively, you can build etcd from source to obtain the executables.
  Copy the etcd and etcdctl executables into /usr/bin/.

Downloading flannel

  flannel and etcd are both CoreOS projects, so flannel's GitHub release downloads are likewise hosted on AWS S3. Fortunately, flannel is easy to build: clone it from GitHub and compile it directly. The flanneld executable is then generated under flannel's bin or dist directory (the location varies between versions).

$ git clone -b v0.5.6 https://github.com/coreos/flannel.git
$ cd flannel
$ ./build

  The exact build procedure may vary; refer to the README.md in the flannel directory.
  Copy the flanneld executable into /usr/bin/.
  Create the directory /usr/bin/flannel and copy the dist/mk-docker-opts.sh script into /usr/bin/flannel/.

Kubernetes master configuration

etcd configuration

Create the data directory

$ sudo mkdir -p /var/lib/etcd/

Create the configuration directory and file

$ sudo mkdir -p /etc/etcd/
$ sudo vim /etc/etcd/etcd.conf

ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.56.160:2379"

Create the systemd unit file

$ sudo vim /lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
Documentation=https://github.com/coreos/etcd
After=network.target


[Service]
User=root
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd
Restart=on-failure
RestartSec=10s
LimitNOFILE=40000

[Install]
WantedBy=multi-user.target

Start the service

$ sudo systemctl daemon-reload 
$ sudo systemctl enable etcd
$ sudo systemctl start etcd

Verify the service and port

$ sudo systemctl status etcd

● etcd.service - Etcd Server
   Loaded: loaded (/lib/systemd/system/etcd.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2017-03-27 11:19:35 CST; 7s ago
...

  Then check that the port is open:

$ netstat -apn | grep 2379
tcp6       0      0 :::2379                 :::*                    LISTEN      7211/etcd 

Create the network configuration in etcd

$ etcdctl set /coreos.com/network/config '{ "Network": "192.168.4.0/24" }'

  If you are deploying an etcd cluster, repeat the steps above on every etcd server. Here I run only a standalone instance, so etcd is now fully set up. The stored value can be read back with etcdctl get /coreos.com/network/config to confirm it.

Common Kubernetes configuration

Create the Kubernetes configuration directory

$ sudo mkdir /etc/kubernetes

Common Kubernetes configuration file

  The file /etc/kubernetes/config stores configuration common to all Kubernetes components.

$ sudo vim /etc/kubernetes/config

KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.56.160:8080"

Configuring the kube-apiserver service

  Perform these steps on the Kubernetes master host.

Create the kube-apiserver configuration file

  kube-apiserver's dedicated configuration file is /etc/kubernetes/apiserver.

$ sudo vim /etc/kubernetes/apiserver

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
#KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.56.160:2379"

# Address range to use for services
# Note: this range overlaps the flannel pod network configured in etcd above;
# in practice, a service CIDR disjoint from the pod network is usually preferable.
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=192.168.4.0/24"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""

Create the systemd unit file

$ sudo vim /lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service
Wants=etcd.service

[Service]
User=root
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_ETCD_SERVERS \
        $KUBE_API_ADDRESS \
        $KUBE_API_PORT \
        $KUBELET_PORT \
        $KUBE_ALLOW_PRIV \
        $KUBE_SERVICE_ADDRESSES \
        $KUBE_ADMISSION_CONTROL \
        $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configuring the kube-controller-manager service

Create the kube-controller-manager configuration file

  kube-controller-manager's dedicated configuration file is /etc/kubernetes/controller-manager.

$ sudo vim /etc/kubernetes/controller-manager

KUBE_CONTROLLER_MANAGER_ARGS=""

Create the systemd unit file

$ sudo vim /lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=etcd.service
After=kube-apiserver.service
Requires=etcd.service
Requires=kube-apiserver.service

[Service]
User=root
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configuring the kube-scheduler service

Create the kube-scheduler configuration file

  kube-scheduler's dedicated configuration file is /etc/kubernetes/scheduler.

$ sudo vim /etc/kubernetes/scheduler

KUBE_SCHEDULER_ARGS=""

Create the systemd unit file

$ sudo vim /lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=root
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler \
        $KUBE_LOGTOSTDERR \
        $KUBE_MASTER
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start the Kubernetes master services

$ sudo systemctl daemon-reload
$ sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
$ sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
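Once the services are up, a quick reachability probe of the apiserver's insecure port (8080, as configured above) confirms it is listening. This is a minimal sketch; check_apiserver is an illustrative helper, not a Kubernetes tool, and the host/port defaults merely follow the configuration above.

```shell
#!/bin/bash
# Illustrative probe: check whether the apiserver's insecure port answers.
# Uses bash's /dev/tcp redirection so no extra tools are required.
check_apiserver() {
    host=${1:-192.168.56.160}
    port=${2:-8080}
    if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
        echo "apiserver reachable on $host:$port"
    else
        echo "apiserver NOT reachable on $host:$port"
    fi
}

check_apiserver
```

On the master itself, kubectl get componentstatuses additionally reports whether the scheduler, controller-manager, and etcd are Healthy.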

Kubernetes node configuration

  Each Kubernetes node also needs the /etc/kubernetes/config file, with the same content as on the Kubernetes master.

Configuring flannel

Create the configuration file

$ sudo vim /etc/default/flanneld.conf

# Flanneld configuration options  

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.56.160:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/coreos.com/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

  The FLANNEL_ETCD_PREFIX option is the etcd key prefix under which the network configuration was stored earlier.

Create the systemd unit file

$ sudo vim /lib/systemd/system/flanneld.service

[Unit]
Description=Flanneld
Documentation=https://github.com/coreos/flannel
After=network.target
After=etcd.service
Before=docker.service

[Service]
User=root
EnvironmentFile=/etc/default/flanneld.conf
ExecStart=/usr/bin/flanneld \
        -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} \
        -etcd-prefix=${FLANNEL_ETCD_PREFIX} \
        $FLANNEL_OPTIONS
ExecStartPost=/usr/bin/flannel/mk-docker-opts.sh -k DOCKER_OPTS -d /run/flannel/docker
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

Start the service

$ sudo systemctl daemon-reload 
$ sudo systemctl enable flanneld
$ sudo systemctl start flanneld

Check that the service started

$ sudo systemctl status flanneld
● flanneld.service - Flanneld
   Loaded: loaded (/lib/systemd/system/flanneld.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2017-03-27 11:59:00 CST; 6min ago
...
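At this point, the ExecStartPost step above should have generated /run/flannel/docker. Its content looks roughly like the following (the subnet and MTU are host-specific; these particular values match the dockerd example output further below):

```
DOCKER_OPTS=" --bip=192.168.4.129/25 --ip-masq=true --mtu=1472"
```

The docker drop-in configured in the next section reads this file, which is how docker ends up on flannel's per-host subnet.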

Configuring docker

Installing docker

  Install docker via apt.

$ sudo apt -y install docker.io

Apply the flannel network settings to docker

  Add a drop-in file to docker's systemd configuration.

$ sudo mkdir /lib/systemd/system/docker.service.d
$ sudo vim /lib/systemd/system/docker.service.d/flannel.conf

[Service]
EnvironmentFile=-/run/flannel/docker

  Restart the docker service.

$ sudo systemctl daemon-reload
$ sudo systemctl restart docker

  Check that docker has picked up the flannel network parameters.

$ sudo ps -ef | grep docker

root     11285     1  1 15:14 ?        00:00:01 /usr/bin/dockerd -H fd:// --bip=192.168.4.129/25 --ip-masq=true --mtu=1472
...

Configuring the kubelet service

Create the kubelet data directory

$ sudo mkdir /var/lib/kubelet

Create the kubelet configuration file

  kubelet's dedicated configuration file is /etc/kubernetes/kubelet.

$ sudo vim /etc/kubernetes/kubelet

KUBELET_ADDRESS="--address=127.0.0.1"
KUBELET_HOSTNAME="--hostname-override=192.168.56.161"
KUBELET_API_SERVER="--api-servers=http://192.168.56.160:8080"
# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS="--enable-server=true --enable-debugging-handlers=true"

Create the systemd unit file

$ sudo vim /lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBELET_API_SERVER \
        $KUBELET_ADDRESS \
        $KUBELET_PORT \
        $KUBELET_HOSTNAME \
        $KUBE_ALLOW_PRIV \
        $KUBELET_POD_INFRA_CONTAINER \
        $KUBELET_ARGS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

Start the kubelet service

$ sudo systemctl daemon-reload
$ sudo systemctl enable kubelet
$ sudo systemctl start kubelet

Configuring the kube-proxy service

Create the kube-proxy configuration file

  kube-proxy's dedicated configuration file is /etc/kubernetes/proxy.

$ sudo vim /etc/kubernetes/proxy

# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS=""

Create the systemd unit file

$ sudo vim /lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Proxy
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start the kube-proxy service

$ sudo systemctl daemon-reload
$ sudo systemctl enable kube-proxy
$ sudo systemctl start kube-proxy

Check the node status

  Run kubectl get node to check the node status. When all nodes show Ready, they have successfully registered with the master. If a node is not in that state, log in to it and investigate; the kubelet logs can be inspected with journalctl -u kubelet.service.

$ kubectl get node
NAME             STATUS     AGE
192.168.56.160   Ready      2d
192.168.56.161   Ready      2d

Testing Kubernetes

  Verify that Kubernetes has been installed successfully.

Write a yaml file

  On the Kubernetes master, create a file rc_nginx.yaml defining an nginx ReplicationController.

$ vim rc_nginx.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx

Create the pods

  Run kubectl create to create the ReplicationController. It requests two replicas, and our environment has two Kubernetes nodes, so one Pod should end up running on each node.
  Note: this step can take quite a while, because the nginx image, as well as the essential pod-infrastructure image, must be pulled over the network.

$ kubectl create -f rc_nginx.yaml

Check the status

  Run kubectl get pod and kubectl get rc to check the pod and rc status. Pods may initially be in the ContainerCreating state; once the required images have been downloaded, the containers are created and the pods should show Running.

$ kubectl get rc
NAME      DESIRED   CURRENT   READY     AGE
nginx     2         2         2         5m

$ kubectl get pod -o wide
NAME          READY     STATUS    RESTARTS   AGE       IP              NODE
nginx-1j5x4   1/1       Running   0          5m        192.168.4.130   192.168.56.160
nginx-6bd28   1/1       Running   0          5m        192.168.4.130   192.168.56.161
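As a follow-up exercise, the pods can be exposed through a Service. This is a minimal sketch assuming the name=nginx label used above; the Service name and file name (svc_nginx.yaml) are illustrative.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  # Route traffic to pods carrying the label from the ReplicationController.
  selector:
    name: nginx
  ports:
  - port: 80
    targetPort: 80
```

Create it with kubectl create -f svc_nginx.yaml; the nginx welcome page should then be reachable from any node at the Service's cluster IP (shown by kubectl get svc nginx), which is allocated from the service-cluster-ip-range configured earlier.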

  All done!
