The Definition of Cloud Native
Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.
These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.
The Design Philosophy of Cloud Native
Cloud native is not even an architecture in itself; it is first and foremost an infrastructure. Applications that run on it are called cloud native applications, and only an application architecture that follows the cloud native design philosophy can be called a cloud native application architecture.
Cloud Native Design Principles
Cloud native systems are designed with the following principles in mind:
- Designed for distribution (Distribution): containers, microservices, and API-driven development;
- Designed for configuration (Configuration): one image, many environment-specific configurations;
- Designed for resiliency (Resiliency): fault tolerance and self-healing;
- Designed for elasticity (Elasticity): elastic scaling and responding to changes in the environment (load);
- Designed for delivery (Delivery): automated startup and shorter delivery times;
- Designed for performance (Performance): responsiveness, concurrency, and efficient resource utilization;
- Designed for automation (Automation): automated DevOps;
- Designed for diagnosability (Diagnosability): cluster-level logs, metrics, and tracing;
- Designed for security (Security): secure endpoints, API gateways, and end-to-end encryption.
Cloud Native Applications
Cloud native applications are designed to run on a platform and are designed for resiliency, agility, operability, and observability. Resiliency embraces failures instead of trying to prevent them, taking advantage of the dynamic nature of running on a platform. Agility allows for fast deployments and quick iterations. Operability puts control of the application life cycle inside the application itself instead of relying on external processes and monitors. Observability provides the information needed to answer questions about application state.
Common methods for achieving the properties a cloud native application needs:
- Microservices
- Health reporting
- Telemetry data
- Resiliency
- Declarative, not imperative
Microservices
Microservices is a software architecture style in which a complex, large application is composed in a modular fashion from small building blocks, each focused on a single responsibility and function, and the blocks communicate with one another through language-independent (language-agnostic) APIs.
Microservices is a service design concept centered on business capabilities: each service runs a self-contained business function and exposes an API that is not tied to any particular language (most commonly HTTP), and an application is composed of one or more such microservices.
Health Reporting
To improve the operability of a cloud native application, the application should expose health checks. Developers can implement them as a command or process signal that the application responds to after performing a self-check, or, more commonly, as a web endpoint provided by the application that reports health status via an HTTP status code.
A good example is when the platform needs to know when the application is ready to receive traffic. While the application is starting up and cannot yet handle traffic correctly, it should report itself as not ready.
Telemetry Data
Telemetry data is the information needed to make decisions. Telemetry data can overlap with health reporting, but the two serve different purposes: health reporting tells us the application's life-cycle state, while telemetry data tells us about the application's business objectives.
The metrics measured are sometimes called service level indicators (SLIs) or key performance indicators (KPIs). These are application-specific data that make it possible to verify that the application's performance stays within its service level objectives (SLOs).
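For example, an error-rate SLI can be computed from two application counters and compared against an SLO target. This is a sketch only: the simulated counts and the 99.9% target below are illustrative, and a real service would export the counters through a metrics system rather than compute them inline.

```go
package main

import "fmt"

// errorRate is an SLI: the fraction of requests that failed.
func errorRate(total, failed int64) float64 {
	if total == 0 {
		return 0
	}
	return float64(failed) / float64(total)
}

// meetsSLO checks the SLI against a target availability,
// e.g. target 0.999 means "99.9% of requests succeed".
func meetsSLO(rate, target float64) bool {
	return rate <= 1-target
}

func main() {
	// Simulated telemetry: 1000 requests, 2 of them failed.
	total, failed := int64(1000), int64(2)
	rate := errorRate(total, failed)
	// 0.002 > 0.001, so this window is outside the 99.9% SLO.
	fmt.Printf("error rate %.4f, within 99.9%% SLO: %v\n", rate, meetsSLO(rate, 0.999))
}
```

The health endpoint from the previous section would still report "healthy" here; it is the telemetry that reveals the business objective (the SLO) is being missed.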
Resiliency
Once you have telemetry and monitoring data, you need to make sure your application is resilient to failure. Resiliency is the infrastructure's responsibility, but cloud native applications must take on part of the work as well. There are two main aspects of resiliency to consider in a cloud native application: designing for failure and graceful degradation.
Designing for Failure
An application designed with the expectation of failure will be more defensive than one that assumes availability. When failure is inevitable, additional checks, failure modes, and logging are built into the application.
Graceful Degradation
Graceful degradation is one way a cloud native application handles overload: rather than failing outright, it returns a reduced but still useful response.
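A minimal sketch of graceful degradation, assuming a hypothetical backend call and a stale local cache to fall back on (both names are invented for illustration): when the backend is overloaded, the caller gets cached results plus a flag indicating the response is degraded.

```go
package main

import (
	"errors"
	"fmt"
)

// fetchRecommendations stands in for a call to an overloaded backend;
// it is a hypothetical dependency, not a real API.
func fetchRecommendations() ([]string, error) {
	return nil, errors.New("backend overloaded")
}

// cached holds stale fallback data refreshed on some earlier success.
var cached = []string{"bestseller-1", "bestseller-2"}

// recommendations degrades gracefully: on failure it serves cached
// results and tells the caller the data may be stale.
func recommendations() (items []string, degraded bool) {
	if items, err := fetchRecommendations(); err == nil {
		return items, false
	}
	return cached, true
}

func main() {
	items, degraded := recommendations()
	fmt.Println(items, "degraded:", degraded) // cached data, degraded: true
}
```

The user still gets a useful page; only the freshness of the content suffers, which is usually a far better failure mode than an error response.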
Declarative, Not Imperative
Declarative programming is a programming paradigm that stands in contrast to imperative programming. It describes the nature of the goal, letting the computer understand what is wanted rather than the process for achieving it, and so avoids the side effects that come with prescribing a procedure. Imperative programming, by contrast, uses an algorithm to spell out explicitly what to do at each step.
A declarative communication model standardizes communication and moves the implementation of functionality out of the application into a remote API or service endpoint, which drives the current state toward the desired state. This helps simplify applications and makes their behavior toward one another more predictable.
Example: SQL databases
You have in fact been using a declarative programming language for a long time; SQL is a classic example:
SELECT * FROM user WHERE user_name = 'Ben';
This is an ordinary SQL query: I only declare that I want to find a user named Ben (the what), and say nothing about how SQL should carry out the search (the how). Now let's see what this looks like written in an imperative language:
// user = [{user_name: 'ou', user_id: 1}, ...]
var user = [{user_name: 'ou', user_id: 1}, {user_name: 'Ben', user_id: 2}]; // example data
for (var i = 0; i < user.length; i++) {
    if (user[i].user_name == "Ben") {
        console.log("find");
        break;
    }
}
From this comparison you can see the strength of declarative languages: they are short and to the point. You never see the program's control flow; we do not tell the program how to search (the how), only the result we want (the what), and let the program work out the process on its own. Of course, the internals of SQL are still implemented in an imperative style.
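The same what-versus-how contrast can be sketched in Go (assuming Go 1.21+ for the standard-library `slices` package; the `User` type is illustrative): the imperative version spells out the loop, while the declarative-style version states only the predicate and leaves the iteration to the library.

```go
package main

import (
	"fmt"
	"slices"
)

type User struct{ Name string }

// findImperative spells out the control flow step by step (the how).
func findImperative(users []User, name string) int {
	for i := 0; i < len(users); i++ {
		if users[i].Name == name {
			return i
		}
	}
	return -1
}

// findDeclarative states only what to find; slices.IndexFunc owns the
// iteration (the how). Requires Go 1.21+.
func findDeclarative(users []User, name string) int {
	return slices.IndexFunc(users, func(u User) bool { return u.Name == name })
}

func main() {
	users := []User{{"ou"}, {"Ben"}}
	fmt.Println(findImperative(users, "Ben"), findDeclarative(users, "Ben")) // 1 1
}
```

Both return the same index; the difference is only in who owns the control flow, which is exactly the trade Kubernetes makes when you `kubectl apply` a desired state instead of scripting the steps to reach it.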
Play with Kubernetes
Creating a Kubernetes cluster
Log in to Play with Kubernetes and start the first instance as the master node. In the web terminal, run:
- Initialize the master node:
kubeadm init --apiserver-advertise-address $(hostname -i)
The output looks like this:
[node1 ~]$ kubeadm init --apiserver-advertise-address $(hostname -i)
Initializing machine ID from random generator.
[init] using Kubernetes version: v1.11.10
[preflight] running pre-flight checks
[WARNING Service-Docker]: docker service is not active, please run 'systemctl start docker.service'
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
I1117 13:53:18.409493 885 kernel_validator.go:81] Validating kernel version
I1117 13:53:18.409685 885 kernel_validator.go:96] Validating kernel config
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.4.0-148-generic
DOCKER_VERSION: 18.06.1-ce
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module "configs": output - "", err - exit status 1
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.18]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [node1 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node1 localhost] and IPs [192.168.0.18 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 51.503514 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node node1 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node node1 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1" as an annotation
[bootstraptoken] using token: 5f1nyz.351cet8vt4g2ix78
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.0.18:6443 --token 5f1nyz.351cet8vt4g2ix78 --discovery-token-ca-cert-hash sha256:d105d049cf090f7814473e5554b79e09cd13e4acfd8a56b09754ba9181d08fd8
Waiting for api server to startup
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
daemonset.extensions/kube-proxy configured
No resources found
- Initialize the cluster network:
kubectl apply -n kube-system -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
The output looks like this:
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created
- Run the following setup commands:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
- Following the join command printed on the master node, run in a new web terminal:
kubeadm join 192.168.0.18:6443 --token 5f1nyz.351cet8vt4g2ix78 --discovery-token-ca-cert-hash sha256:d105d049cf090f7814473e5554b79e09cd13e4acfd8a56b09754ba9181d08fd8
The output looks like this:
[preflight] running pre-flight checks
[WARNING DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING RequiredIPVSKernelModulesAvailable]: error getting required builtin kernel modules: exit status 1(cut: /lib/modules/4.4.0-166-generic/modules.builtin: No such file or directory
)
[WARNING Service-Docker]: docker service is not active, please run 'systemctl start docker.service'
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
I1117 14:09:02.416363 7243 kernel_validator.go:81] Validating kernel version
I1117 14:09:02.419283 7243 kernel_validator.go:96] Validating kernel config
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.4.0-166-generic
DOCKER_VERSION: 18.06.1-ce
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module "configs": output - "", err - exit status 1
[WARNING Port-10250]: Port 10250 is in use
[discovery] Trying to connect to API Server "192.168.0.28:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.0.28:6443"
[discovery] Requesting info from "https://192.168.0.28:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.0.28:6443"
[discovery] Successfully established connection with API Server "192.168.0.28:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1" as an annotation
This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
Start a few more instances and repeat the join step above on each to add more nodes to the Kubernetes cluster.
Now run kubectl get nodes on the master node to check the status of all the nodes:
[node1 ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1 Ready master 19m v1.11.3
node2 Ready <none> 2m v1.11.3
node3 Ready <none> 1m v1.11.3
Creating an nginx deployment
[node1 ~]$ curl https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/nginx-app.yaml > nginx-app.yaml
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 497 100 497 0 0 1252 0 --:--:-- --:--:-- --:--:-- 1255
[node1 ~]$
[node1 ~]$ kubectl apply -f nginx-app.yaml
service/my-nginx-svc created
deployment.apps/my-nginx created
Now look at the nodes and pods:
[node1 ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1 Ready master 29m v1.11.3
node2 Ready <none> 11m v1.11.3
node3 Ready <none> 11m v1.11.3
[node1 ~]$
[node1 ~]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
my-nginx-67594d6bf6-2cbbz 1/1 Running 0 1m
my-nginx-67594d6bf6-r2p6w 1/1 Running 0 1m
my-nginx-67594d6bf6-vjqn4 1/1 Running 0 1m
References:
- https://github.com/cncf/toc/blob/master/DEFINITION.md
- https://zh.wikipedia.org/wiki/%E5%BE%AE%E6%9C%8D%E5%8B%99
- https://zh.wikipedia.org/zh-cn/%E5%AE%A3%E5%91%8A%E5%BC%8F%E7%B7%A8%E7%A8%8B
- https://zhuanlan.zhihu.com/p/34445114
- https://labs.play-with-k8s.com/