Part 4: Cross-Language Microservice Framework - Official Istio Examples (Timeout Control, Circuit Breaking, Traffic Mirroring)

With the basic Istio environment now in place, it is time to get to know the mechanisms Istio offers as a service mesh, namely the topics in this article's title: timeout control, circuit breaking, and traffic mirroring. The official project helpfully ships ready-made sample applications, so there is no need to write your own demo for testing; let's run through them.

Links:

My blog (喵了個咪): w-blog.cn

Istio official site: https://preliminary.istio.io/zh

Istio documentation (Chinese): https://preliminary.istio.io/zh/docs/

PS: This walkthrough is based on Istio 1.0.3, the latest release at the time of writing.

I. Timeout Control

In real-world requests we usually give each dependent service a timeout to keep the user experience acceptable. Hard-coding those timeouts is obviously not ideal, so Istio provides a corresponding way to control them:

1. First, restore all routing rules to the default:

kubectl apply -n istio-test -f istio-1.0.3/samples/bookinfo/networking/virtual-service-all-v1.yaml

A request timeout for HTTP requests can be set in the timeout field of the route rule. By default the timeout is 15 seconds; in this task we will set the timeout for the reviews service to half a second. To observe its effect, we also need to add a two-second delay to calls to the ratings service.

2. First point all reviews traffic to the v2 version:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
EOF

3. Add a two-second delay to calls to the ratings service:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        percent: 100
        fixedDelay: 2s
    route:
    - destination:
        host: ratings
        subset: v1
EOF

4. Now add a half-second request timeout for calls to the reviews:v2 service:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
    timeout: 0.5s
EOF

The page now comes back after about 1 second (even though the timeout is configured as half a second, the response takes 1 second because productpage contains a hard-coded retry, so it calls the timed-out reviews service twice before returning).
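To check this from the command line rather than the browser, you can measure the total response time of productpage with curl's built-in timing. This is only a quick sketch; it assumes $GATEWAY_URL was exported when the Bookinfo gateway was configured in the earlier part of this series, and the reported time should be close to 1 second:

> curl -s -o /dev/null -w '%{time_total}\n' http://$GATEWAY_URL/productpage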

II. Circuit Breaking

In a microservice system some services matter more than others. Kubernetes can limit CPU consumption, but that kind of coarse control cannot cap the number of concurrent requests. For instance, service A might need to allow 100 concurrent requests while service B should allow only 10; in that case we can limit by concurrency directly instead of relying on CPU, which is an imprecise proxy.
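As a sketch of that idea (shown for illustration only, not part of the official task), per-service concurrency caps could be expressed with two destination rules; service-a and service-b here are hypothetical stand-ins for services A and B above:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: service-a
spec:
  host: service-a
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100     # critical service: allow up to 100 concurrent connections
      http:
        http1MaxPendingRequests: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: service-b
spec:
  host: service-b
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 10      # less important service: cap it at 10
      http:
        http1MaxPendingRequests: 10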

1. Deploy the test application:

> kubectl apply -n istio-test -f istio-1.0.3/samples/httpbin/httpbin.yaml

2. Create a destination rule that applies circuit-breaker settings to the httpbin service:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutiveErrors: 1
      interval: 1s
      baseEjectionTime: 3m
      maxEjectionPercent: 100
EOF

3. Here we use a simple load-testing client called fortio. It can control the number of connections, the concurrency, and the delay of outgoing HTTP requests, which makes it easy to trigger the circuit-breaker policy defined in the destination rule above. Start by making a single request to confirm the service responds:

> kubectl apply -n istio-test -f istio-1.0.3/samples/httpbin/sample-client/fortio-deploy.yaml
> FORTIO_POD=$(kubectl get -n istio-test pod | grep fortio | awk '{ print $1 }')
> kubectl exec -n istio-test -it $FORTIO_POD  -c fortio /usr/local/bin/fortio -- load -curl  http://httpbin:8000/get
HTTP/1.1 200 OK
server: envoy
date: Wed, 07 Nov 2018 06:52:32 GMT
content-type: application/json
access-control-allow-origin: *
access-control-allow-credentials: true
content-length: 365
x-envoy-upstream-service-time: 113

{
  "args": {}, 
  "headers": {
    "Content-Length": "0", 
    "Host": "httpbin:8000", 
    "User-Agent": "istio/fortio-1.0.1", 
    "X-B3-Sampled": "1", 
    "X-B3-Spanid": "a708e175c6a077d1", 
    "X-B3-Traceid": "a708e175c6a077d1", 
    "X-Request-Id": "62d09db5-550a-9b81-80d9-6d8f60956386"
  }, 
  "origin": "127.0.0.1", 
  "url": "http://httpbin:8000/get"
}

4. The circuit-breaker settings above specify maxConnections: 1 and http1MaxPendingRequests: 1. This means that if more than one connection issues requests concurrently, Istio trips the breaker and blocks the excess requests and connections. Let's try to trigger it:

> kubectl exec -n istio-test -it $FORTIO_POD  -c fortio /usr/local/bin/fortio -- load -c 2 -qps 0 -n 20 -loglevel Warning http://httpbin:8000/get
06:54:16 I logger.go:97> Log level is now 3 Warning (was 2 Info)
Fortio 1.0.1 running at 0 queries per second, 4->4 procs, for 20 calls: http://httpbin:8000/get
Starting at max qps with 2 thread(s) [gomax 4] for exactly 20 calls (10 per thread + 0)
Ended after 96.058168ms : 20 calls. qps=208.21
Aggregated Function Time : count 20 avg 0.0084172288 +/- 0.004876 min 0.000583248 max 0.016515793 sum 0.168344576
# range, mid point, percentile, count
>= 0.000583248 <= 0.001 , 0.000791624 , 5.00, 1
> 0.001 <= 0.002 , 0.0015 , 25.00, 4
> 0.006 <= 0.007 , 0.0065 , 30.00, 1
> 0.007 <= 0.008 , 0.0075 , 35.00, 1
> 0.008 <= 0.009 , 0.0085 , 55.00, 4
> 0.009 <= 0.01 , 0.0095 , 65.00, 2
> 0.01 <= 0.011 , 0.0105 , 75.00, 2
> 0.011 <= 0.012 , 0.0115 , 80.00, 1
> 0.012 <= 0.014 , 0.013 , 85.00, 1
> 0.014 <= 0.016 , 0.015 , 95.00, 2
> 0.016 <= 0.0165158 , 0.0162579 , 100.00, 1
# target 50% 0.00875
# target 75% 0.011
# target 90% 0.015
# target 99% 0.0164126
# target 99.9% 0.0165055
Sockets used: 7 (for perfect keepalive, would be 2)
Code 200 : 15 (75.0 %)
Code 503 : 5 (25.0 %)
Response Header Sizes : count 20 avg 172.7 +/- 99.71 min 0 max 231 sum 3454
Response Body/Total Sizes : count 20 avg 500.7 +/- 163.8 min 217 max 596 sum 10014
All done 20 calls (plus 0 warmup) 8.417 ms avg, 208.2 qps

Here we can see that almost all of the requests still made it through; istio-proxy allows some leeway:

Code 200 : 15 (75.0 %)
Code 503 : 5 (25.0 %)

5. Next, raise the number of concurrent connections to 3:

> kubectl exec -n istio-test -it $FORTIO_POD  -c fortio /usr/local/bin/fortio -- load -c 3 -qps 0 -n 30 -loglevel Warning http://httpbin:8000/get
06:55:28 I logger.go:97> Log level is now 3 Warning (was 2 Info)
Fortio 1.0.1 running at 0 queries per second, 4->4 procs, for 30 calls: http://httpbin:8000/get
Starting at max qps with 3 thread(s) [gomax 4] for exactly 30 calls (10 per thread + 0)
Ended after 59.921126ms : 30 calls. qps=500.66
Aggregated Function Time : count 30 avg 0.0052897259 +/- 0.006496 min 0.000633091 max 0.024999538 sum 0.158691777
# range, mid point, percentile, count
>= 0.000633091 <= 0.001 , 0.000816546 , 16.67, 5
> 0.001 <= 0.002 , 0.0015 , 63.33, 14
> 0.002 <= 0.003 , 0.0025 , 66.67, 1
> 0.008 <= 0.009 , 0.0085 , 73.33, 2
> 0.009 <= 0.01 , 0.0095 , 80.00, 2
> 0.01 <= 0.011 , 0.0105 , 83.33, 1
> 0.011 <= 0.012 , 0.0115 , 86.67, 1
> 0.012 <= 0.014 , 0.013 , 90.00, 1
> 0.014 <= 0.016 , 0.015 , 93.33, 1
> 0.02 <= 0.0249995 , 0.0224998 , 100.00, 2
# target 50% 0.00171429
# target 75% 0.00925
# target 90% 0.014
# target 99% 0.0242496
# target 99.9% 0.0249245
Sockets used: 22 (for perfect keepalive, would be 3)
Code 200 : 10 (33.3 %)
Code 503 : 20 (66.7 %)
Response Header Sizes : count 30 avg 76.833333 +/- 108.7 min 0 max 231 sum 2305
Response Body/Total Sizes : count 30 avg 343.16667 +/- 178.4 min 217 max 596 sum 10295
All done 30 calls (plus 0 warmup) 5.290 ms avg, 500.7 qps

This time the circuit breaking behaves as designed: only 33.3% of the requests got through, and the rest were intercepted by the circuit breaker.

We can query the istio-proxy stats for more details:

> kubectl exec -n istio-test -it $FORTIO_POD  -c istio-proxy  -- sh -c 'curl localhost:15000/stats' | grep httpbin | grep pending
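In that output the counter to watch should be upstream_rq_pending_overflow, which (assuming the default Envoy stat names) counts the calls that were flagged for circuit breaking; a narrower query is:

> kubectl exec -n istio-test -it $FORTIO_POD  -c istio-proxy  -- sh -c 'curl localhost:15000/stats' | grep httpbin | grep upstream_rq_pending_overflow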

Finally, clean up the rule and the services:

kubectl delete -n istio-test destinationrule httpbin
kubectl delete -n istio-test deploy httpbin fortio-deploy
kubectl delete -n istio-test svc httpbin

III. Traffic Mirroring

The earlier traffic-control part covered splitting traffic, for example giving v1 and v2 50% of the traffic each. But there is another scenario that calls for Istio's traffic-mirroring feature, a powerful way to bring change to production with as little risk as possible.
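For reference only (there is no need to apply it here), that kind of 50/50 split is written with route weights; a minimal sketch, assuming the reviews subsets defined in the earlier part:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 50
    - destination:
        host: reviews
        subset: v2
      weight: 50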

When we need to release a version we are not fully confident in, we want it to run for a while so we can watch its stability, but we do not want users to actually hit the unstable service. This is where traffic mirroring helps: it can send 100% of the requests to v1 and additionally copy a portion of them (say 10%) to v2, without caring about v2's responses.

1. First create two versions of the httpbin service for the experiment.

httpbin-v1

> kubectl apply -n istio-test -f - <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: httpbin-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        command: ["gunicorn", "--access-logfile", "-", "-b", "0.0.0.0:80", "httpbin:app"]
        ports:
        - containerPort: 80
EOF

httpbin-v2:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: httpbin-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: httpbin
        version: v2
    spec:
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        command: ["gunicorn", "--access-logfile", "-", "-b", "0.0.0.0:80", "httpbin:app"]
        ports:
        - containerPort: 80
EOF

httpbin Kubernetes service:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  labels:
    app: httpbin
spec:
  ports:
  - name: http
    port: 8000
    targetPort: 80
  selector:
    app: httpbin
EOF

Start the sleep service so that we can use curl to send requests:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sleep
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: sleep
    spec:
      containers:
      - name: sleep
        image: tutum/curl
        command: ["/bin/sleep","infinity"]
        imagePullPolicy: IfNotPresent
EOF

By default, Kubernetes load-balances across both versions of the httpbin service. In this step we change that behaviour and route all traffic to v1.

Create a default route rule that sends all traffic to v1 of the service:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
    - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1
      weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
EOF

Send some traffic to the service:

> export SLEEP_POD=$(kubectl get -n istio-test pod -l app=sleep -o jsonpath={.items..metadata.name})
> kubectl exec -n istio-test -it $SLEEP_POD -c sleep -- sh -c 'curl  http://httpbin:8000/headers' | python -m json.tool
{
    "headers": {
        "Accept": "*/*",
        "Content-Length": "0",
        "Host": "httpbin:8000",
        "User-Agent": "curl/7.35.0",
        "X-B3-Sampled": "1",
        "X-B3-Spanid": "8e32159d042d8a75",
        "X-B3-Traceid": "8e32159d042d8a75"
    }
}

Check the logs of the v1 and v2 httpbin pods. You should see access-log entries for v1 and nothing for v2:

> export V1_POD=$(kubectl get -n istio-test pod -l app=httpbin,version=v1 -o jsonpath={.items..metadata.name})
> kubectl logs -n istio-test -f $V1_POD -c httpbin
127.0.0.1 - - [07/Nov/2018:07:22:50 +0000] "GET /headers HTTP/1.1" 200 241 "-" "curl/7.35.0"
> export V2_POD=$(kubectl get -n istio-test pod -l app=httpbin,version=v2 -o jsonpath={.items..metadata.name})
> kubectl logs -n istio-test -f $V2_POD -c httpbin
<none>

2. Mirror traffic to v2 by applying the following rule:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
    - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1
      weight: 100
    mirror:
      host: httpbin
      subset: v2
EOF

This route rule sends 100% of the traffic to v1, while the final mirror stanza specifies that traffic should also be mirrored to the httpbin:v2 service. When traffic is mirrored, the requests are sent to the mirrored service with -shadow appended to their Host/Authority header; for example, cluster-1 becomes cluster-1-shadow.

Also note that these mirrored requests are "fire and forget": the responses they produce are discarded.
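One quick way to convince yourself of that is to time the call on the client side: since v2's responses are thrown away, mirroring should not noticeably change the latency the caller sees. A rough check, reusing the $SLEEP_POD variable defined above and curl's built-in timing:

> kubectl exec -n istio-test -it $SLEEP_POD -c sleep -- sh -c 'curl -s -o /dev/null -w "%{time_total}\n" http://httpbin:8000/headers'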

3. Send traffic again:

> kubectl exec -n istio-test -it $SLEEP_POD -c sleep -- sh -c 'curl  http://httpbin:8000/headers' | python -m json.tool

4. Now access logs appear for both v1 and v2. The entries in v2 were produced by the mirrored traffic; the actual target of those requests is still v1:

> kubectl logs -n istio-test -f $V1_POD -c httpbin
127.0.0.1 - - [07/Nov/2018:07:22:50 +0000] "GET /headers HTTP/1.1" 200 241 "-" "curl/7.35.0"
127.0.0.1 - - [07/Nov/2018:07:26:58 +0000] "GET /headers HTTP/1.1" 200 241 "-" "curl/7.35.0"
> kubectl logs -n istio-test -f $V2_POD -c httpbin
127.0.0.1 - - [07/Nov/2018:07:28:37 +0000] "GET /headers HTTP/1.1" 200 281 "-" "curl/7.35.0"

5. Clean up:

istioctl delete -n istio-test virtualservice httpbin
istioctl delete -n istio-test destinationrule httpbin
kubectl delete -n istio-test deploy httpbin-v1 httpbin-v2 sleep
kubectl delete -n istio-test svc httpbin