Deploying virtlet with kubeadm v1.13.1

Environment

The OS is CentOS 7.5, and the Kubernetes cluster was deployed with kubeadm v1.13.1.

[root@master k8s]# cat /etc/*release
CentOS Linux release 7.5.1804 (Core) 
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

CentOS Linux release 7.5.1804 (Core) 
CentOS Linux release 7.5.1804 (Core) 
[root@master k8s]# 
[root@master k8s]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:36:44Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
[root@master k8s]# 
[root@master k8s]# 

Deployment

Deploy the K8s environment

[root@master k8s]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.13.0 --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.13.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.66.250.200]
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [10.66.250.200 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [10.66.250.200 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.003749 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: l5dcj6.uux508f3oqnb4dv8
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.66.250.200:6443 --token l5dcj6.uux508f3oqnb4dv8 --discovery-token-ca-cert-hash sha256:c9c30cdb146da1357ba7948825f98d98ed17404ea4081043a06efdcb759f386e

[root@master k8s]# mkdir -p $HOME/.kube
[root@master k8s]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sending incremental file list
>f..t...... admin.conf
          5,449 100%    0.00kB/s    0:00:00 (xfr#1, to-chk=0/1)

sent 5,545 bytes  received 35 bytes  11,160.00 bytes/sec
total size is 5,449  speedup is 0.98
[root@master k8s]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@master k8s]# kubectl taint nodes --all node-role.kubernetes.io/master-
node/master untainted
[root@master k8s]# 

Deploy the Weave Net pod network add-on. Note that Weave uses its own IPAM with a default range of 10.32.0.0/12 and ignores the --pod-network-cidr value passed to kubeadm init, which is why the pod addresses later in this walkthrough are 10.32.x.x.
[root@master k8s]# kubectl apply -f ../weave-daemonset-k8s-1.7.yaml  
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.extensions/weave-net created
[root@master k8s]#
[root@master k8s]# kubectl get pod --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-78d4cf999f-nfgll         1/1     Running   0          12m
kube-system   coredns-78d4cf999f-vhgwc         1/1     Running   0          12m
kube-system   etcd-master                      1/1     Running   0          11m
kube-system   kube-apiserver-master            1/1     Running   0          11m
kube-system   kube-controller-manager-master   1/1     Running   0          12m
kube-system   kube-proxy-8w2sv                 1/1     Running   0          12m
kube-system   kube-scheduler-master            1/1     Running   0          12m
kube-system   weave-net-zzwps                  2/2     Running   0          30s
[root@master k8s]# 
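The --discovery-token-ca-cert-hash value in the kubeadm join command above is the SHA-256 digest of the cluster CA's DER-encoded public key (Subject Public Key Info). The sketch below reproduces the computation with a throwaway self-signed CA so it can run anywhere; on the real master the input would be /etc/kubernetes/pki/ca.crt.

```shell
# Generate a throwaway self-signed CA (stand-in for /etc/kubernetes/pki/ca.crt).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
    -out /tmp/demo-ca.crt -subj "/CN=demo-ca" -days 1 2>/dev/null

# Extract the public key, convert it to DER, and hash it -- the same
# recipe the kubeadm documentation gives for recomputing the hash.
hash=$(openssl x509 -pubkey -noout -in /tmp/demo-ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //')
echo "sha256:${hash}"
```

Running the same pipeline against the real ca.crt reproduces the sha256:... value shown in the join command, which is handy when the original kubeadm init output has been lost.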

Download the virtlet-related files

Deployment instructions: https://github.com/Mirantis/virtlet/blob/master/deploy/real-cluster.md

Download the image translations file:
curl https://raw.githubusercontent.com/Mirantis/virtlet/master/deploy/images.yaml >images.yaml

Download the virtletctl tool and make it executable:
curl -SL -o virtletctl https://github.com/Mirantis/virtlet/releases/download/v1.4.4/virtletctl
chmod +x virtletctl

Download the cirros VM pod manifest (saved as cirros.yaml to match the commands below):
wget -O cirros.yaml https://raw.githubusercontent.com/Mirantis/virtlet/v1.4.4/examples/cirros-vm.yaml

[root@master virtlet]# ls
cirros.yaml  images.yaml  virtletctl
[root@master virtlet]# 

Deploy the virtlet environment

No worker nodes have joined the cluster yet, so this is an all-in-one deployment. Label the current node (master) so that the virtlet DaemonSet is allowed to run on it:

[root@master virtlet]# kubectl label node master extraRuntime=virtlet
node/master labeled
[root@master virtlet]# 

Write the cirros image translation configuration into a Kubernetes ConfigMap:

[root@master virtlet]# kubectl create configmap -n kube-system virtlet-image-translations --from-file images.yaml
configmap/virtlet-image-translations created
[root@master virtlet]#
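For reference, the images.yaml loaded into this ConfigMap is a list of image-name translations that map short image names to download URLs. A minimal sketch of the format (field names as in Virtlet's image-name-translation docs; the URL below is a placeholder, not a real image location):

```shell
# Minimal example of the translation format Virtlet reads from the ConfigMap.
# 'name' is the short image name referenced in pod specs (virtlet.cloud/cirros);
# 'url' is where the QCOW2 image is fetched from (placeholder URL here).
cat > /tmp/images-demo.yaml <<'EOF'
translations:
  - name: cirros
    url: https://example.com/images/cirros.qcow2
EOF
grep -c 'name:' /tmp/images-demo.yaml
```

The downloaded images.yaml follows the same shape, just with the real cirros image URL.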

Start virtlet

[root@master virtlet]# ./virtletctl gen | kubectl apply -f -
daemonset.apps/virtlet created
clusterrolebinding.rbac.authorization.k8s.io/virtlet created
clusterrole.rbac.authorization.k8s.io/virtlet created
clusterrole.rbac.authorization.k8s.io/configmap-reader created
clusterrole.rbac.authorization.k8s.io/virtlet-userdata-reader created
clusterrolebinding.rbac.authorization.k8s.io/kubelet-node-binding created
clusterrolebinding.rbac.authorization.k8s.io/vm-userdata-binding created
clusterrole.rbac.authorization.k8s.io/virtlet-crd created
clusterrolebinding.rbac.authorization.k8s.io/virtlet-crd created
serviceaccount/virtlet created
customresourcedefinition.apiextensions.k8s.io/virtletimagemappings.virtlet.k8s created
customresourcedefinition.apiextensions.k8s.io/virtletconfigmappings.virtlet.k8s created
[root@master virtlet]# 
[root@master virtlet]# kubectl get daemonset --all-namespaces
NAMESPACE     NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   kube-proxy   1         1         1       1            1           <none>          13m
kube-system   virtlet      1         1         1       1            1           <none>          2m11s
kube-system   weave-net    1         1         1       1            1           <none>          77s
[root@master virtlet]# 
[root@master virtlet]# kubectl get pod --all-namespaces      
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-78d4cf999f-nfgll         1/1     Running   0          13m
kube-system   coredns-78d4cf999f-vhgwc         1/1     Running   0          13m
kube-system   etcd-master                      1/1     Running   0          13m
kube-system   kube-apiserver-master            1/1     Running   0          12m
kube-system   kube-controller-manager-master   1/1     Running   0          13m
kube-system   kube-proxy-8w2sv                 1/1     Running   0          13m
kube-system   kube-scheduler-master            1/1     Running   0          13m
kube-system   virtlet-r59mf                    3/3     Running   0          90s
kube-system   weave-net-zzwps                  2/2     Running   0          103s
[root@master virtlet]# 
[root@master virtlet]# ps -ef | grep virtlet
root     18984 18963  0 16:58 ?        00:00:00 /usr/local/bin/virtlet --v 1
root     20436 48990  0 17:00 pts/0    00:00:00 grep --color=auto virtlet
[root@master virtlet]# 

Start the cirros VM

[root@master virtlet]# kubectl apply -f cirros.yaml 
pod/cirros-vm created
[root@master virtlet]# kubectl get pod
NAME        READY   STATUS    RESTARTS   AGE
cirros-vm   1/1     Running   0          8s
[root@master virtlet]# 
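What makes this pod start a VM rather than an ordinary container is a small set of Virtlet-specific fields in the manifest. An abridged sketch of the relevant parts (from memory of the Virtlet examples; check the downloaded cirros.yaml for the authoritative version, which expresses the node constraint via an equivalent nodeAffinity rule):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cirros-vm
  annotations:
    # Routes this pod to the virtlet runtime instead of the default one.
    kubernetes.io/target-runtime: virtlet.cloud
spec:
  # Schedule only onto nodes labeled for virtlet (the label set earlier).
  nodeSelector:
    extraRuntime: virtlet
  containers:
    - name: cirros-vm
      # The virtlet.cloud/ prefix, combined with the image-translation
      # ConfigMap, resolves "cirros" to a downloadable VM image.
      image: virtlet.cloud/cirros
```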

Change the root password of the cirros VM

[root@master virtlet]# docker ps -a | grep cirros
27c19b86f18c        17d52dddc815                                        "/sbin/init"             About a minute ago   Up About a minute                               k8s_cirros-vm_cirros-vm_default_85e5d6c3-0a80-11e9-9c43-44a84246cee1_0
d447475f902f        registry.aliyuncs.com/google_containers/pause:3.1   "/pause"                 About a minute ago   Up About a minute                               k8s_POD_cirros-vm_default_85e5d6c3-0a80-11e9-9c43-44a84246cee1_0
[root@master virtlet]# docker exec -it 27c19b86f18c passwd root
Changing password for root
New password: 
Bad password: similar to hostname
Retype password: 
Password for root changed by root
[root@master virtlet]# 

Log in to the cirros VM

Once the cirros VM has finished booting, you can log in over SSH:

[root@master virtlet]# kubectl describe pod cirros-vm | grep IP:
IP:                 10.32.0.7
[root@master virtlet]# 
[root@master virtlet]# 
[root@master virtlet]# 
[root@master virtlet]# ping 10.32.0.7               
PING 10.32.0.7 (10.32.0.7) 56(84) bytes of data.
64 bytes from 10.32.0.7: icmp_seq=1 ttl=64 time=0.086 ms
64 bytes from 10.32.0.7: icmp_seq=2 ttl=64 time=0.066 ms
^C
--- 10.32.0.7 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.066/0.076/0.086/0.010 ms
[root@master virtlet]# 
[root@master virtlet]# 
[root@master virtlet]# ssh 10.32.0.7
The authenticity of host '10.32.0.7 (10.32.0.7)' can't be established.
ECDSA key fingerprint is SHA256:hkufsXABGMZrw68iONlp1djf3jWLeNIqfo4dm3mnMjM.
ECDSA key fingerprint is MD5:b4:b4:50:d6:29:76:fb:26:3a:aa:78:41:64:c7:16:76.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.32.0.7' (ECDSA) to the list of known hosts.
[email protected]'s password: 
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
58: eth0@if59: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1376 qdisc noqueue 
    link/ether ee:08:47:5f:4d:ed brd ff:ff:ff:ff:ff:ff
    inet 10.32.0.7/12 brd 10.47.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::ec08:47ff:fe5f:4ded/64 scope link tentative flags 08 
       valid_lft forever preferred_lft forever
# ls /
bin       dev       home      lib       linuxrc   mnt       opt       root      sbin      tmp       var
boot      etc       init      lib64     media     old-root  proc      run       sys       usr
# 
# exit
Connection to 10.32.0.7 closed.
[root@master virtlet]# 
