Kong Kubernetes-Native in Practice

Preface

Kong is a cloud-native, fast, scalable, and distributed Microservice Abstraction Layer (also known as an API Gateway or API Middleware). Made available as an open-source project in 2015, its core values are high performance and extensibility.
Actively maintained, Kong is widely used in production at companies ranging from startups to Global 5000 as well as government organizations.

Kong is currently the most popular cloud-native API gateway in the community. Its two defining qualities, high performance and extensibility, have led to its wide adoption by major vendors.

Before diving into how to use Kong, it is worth summarizing what Kong does:

If you are building for the web, mobile, or IoT (Internet of Things) you will likely end up needing common functionality to run your actual software. Kong can help by acting as a gateway (or a sidecar) for microservices requests while providing load balancing, logging, authentication, rate-limiting, transformations, and more through plugins.

In other words, when building microservices we need a set of common capabilities such as logging, load balancing, authentication, and rate limiting. Kong, as the API gateway, plays exactly this role: it decouples services from these shared concerns, letting developers focus on building and operating their own services instead of this peripheral plumbing. The comparison is straightforward: under the old model, every service had to implement its own copy of the same functionality (logging, authentication, rate limiting, and so on), which both burdened developers and added redundancy to the system. With Kong (the API gateway) as the unified entry point for these shared functions, all of this peripheral work is handled by Kong, and the overall architecture stays clean and maintainable.

Kong

Here we take the Kong Admin API as the entry point for a deep dive into using Kong.

1. Kong Admin API

By default Kong listens on the following ports:

  • :8000 on which Kong listens for incoming HTTP traffic from your clients, and forwards it to your upstream services.
  • :8443 on which Kong listens for incoming HTTPS traffic. This port has a similar behavior as the :8000 port, except that it expects HTTPS traffic only. This port can be disabled via the configuration file.
  • :8001 on which the Admin API used to configure Kong listens.
  • :8444 on which the Admin API listens for HTTPS traffic.


  • 1. The proxy ports (8000 and 8443) proxy traffic to the backend services
  • 2. The Admin ports (8001 and 8444) manage Kong's configuration, supporting CRUD operations on it (Konga is a GUI built on top of the Admin API)
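To make the split concrete, here is a sketch of how these listeners are expressed in kong.conf. The `proxy_listen`/`admin_listen` key names come from Kong's configuration reference; the scratch file path and bind addresses below are illustrative assumptions:

```shell
# Write the listener configuration to a scratch file
# (bind addresses are examples; adjust for your deployment).
cat > /tmp/kong-listen.conf <<'EOF'
proxy_listen = 0.0.0.0:8000, 0.0.0.0:8443 ssl
admin_listen = 127.0.0.1:8001, 127.0.0.1:8444 ssl
EOF

# List the four ports configured above
grep -o ':[0-9]*' /tmp/kong-listen.conf | sort -u
```

Binding the Admin listeners to 127.0.0.1 (rather than 0.0.0.0) is a common hardening step, since the Admin API has full control over the gateway.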

2. Kong Configuration Modes

Before getting into the details of using Kong, let us first introduce its two configuration modes:

  • DB-less mode: uses declarative configuration. All configuration lives in a single file (YAML or JSON) and no database is required. The configuration can be modified in two ways:
    • 1. Statically, by specifying the path to the declarative_config file when Kong starts:
      $ export KONG_DATABASE=off
      $ export KONG_DECLARATIVE_CONFIG=kong.yml
      $ kong start -c kong.conf
      
    • 2. Dynamically, while Kong is running, by calling the Kong Admin API (the example uses the HTTPie client):
      $ http :8001/config config=@kong.yml
      
    Also, because the design is declarative, the Admin API is effectively read-only in this mode: entity endpoints only support GET, and POST, PATCH, PUT, and DELETE are not supported
  • DB mode: uses imperative configuration backed by a database (PostgreSQL or Cassandra); configuration is managed through CRUD operations on the Kong Admin API
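For DB-less mode, the kong.yml referenced above is an ordinary declarative file. A minimal sketch (the service name, upstream URL, and route path are illustrative; the `_format_version` key and overall layout follow Kong's declarative configuration format):

```shell
# Write a minimal declarative configuration
cat > /tmp/kong.yml <<'EOF'
_format_version: "1.1"
services:
- name: example-service
  url: http://example-upstream:80
  routes:
  - name: example-route
    paths:
    - /test
EOF

# Sanity check: the required _format_version key is present
grep '_format_version' /tmp/kong.yml
```

Such a file is then loaded statically via KONG_DECLARATIVE_CONFIG or posted to /config as shown above.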

Each mode has its own pros and cons:

  • DB-less mode

    • Pros:
      • 1. No database required, which removes the database dependency and lowers deployment & operations cost
    • Cons:
      • 1. Because the configuration is declarative, every update is a full replacement: the whole configuration file must be re-posted (via the Admin API /config endpoint), and partial updates are impossible
      • 2. Konga cannot be used to manage Kong
      • 3. Weaker plugin compatibility: not every Kong plugin works in this mode (see Plugin Compatibility)
  • DB mode

    • Pros:
      • 1. Full CRUD through the Kong Admin API, including partial updates
      • 2. Konga can be used to manage Kong
      • 3. Full plugin compatibility: every Kong plugin is supported
    • Cons:
      • 1. Requires a database, which adds a dependency and increases deployment & operations cost

3. Kong as an HTTP Proxy

Since Kong's DB mode is more convenient for demonstration purposes, we will use it here to show how Kong proxies HTTP requests.

First, a few key concepts in Kong's proxying model:

  • client: Refers to the downstream client making requests to Kong’s proxy port.
  • upstream service: Refers to your own API/service sitting behind Kong, to which client requests/connections are forwarded.
  • Service: Service entities, as the name implies, are abstractions of each of your own upstream services. Examples of Services would be a data transformation microservice, a billing API, etc.
  • Route: This refers to the Kong Routes entity. Routes are entrypoints into Kong; they define the rules for matching a request and routing it to a given Service.
  • Upstream: An Upstream represents a virtual hostname over a set of Targets and is used to load balance incoming requests across them.
  • Target: A target is an ip address/hostname with a port that identifies an instance of a backend service. Every upstream can have many targets, and the targets can be dynamically added. Changes are effectuated on the fly.
  • Plugin: This refers to Kong “plugins”, which are pieces of business logic that run in the proxying lifecycle. Plugins can be configured through the Admin API - either globally (all incoming traffic) or on specific Routes and Services.

An example to illustrate these concepts:

A typical Nginx configuration:

upstream testUpstream {
    server localhost:3000 weight=100;
}

server {
    listen  80;
    location /test {
        proxy_pass http://testUpstream;
    }
}

Translated into Kong Admin API calls, it becomes:

# configure service
curl -X POST http://localhost:8001/services --data "name=test" --data "host=testUpstream"
# configure route
curl -X POST http://localhost:8001/routes --data "paths[]=/test" --data "service.id=92956672-f5ea-4e9a-b096-667bf55bc40c"
# configure upstream
curl -X POST http://localhost:8001/upstreams --data "name=testUpstream"
# configure target
curl -X POST http://localhost:8001/upstreams/testUpstream/targets --data "target=localhost:3000" --data "weight=100"

From this example we can see:

  • Service: Kong's service abstraction; it can map directly to a single physical service or point at an Upstream for load balancing
  • Route: Kong's routing abstraction; it maps incoming requests to the corresponding Service
  • Upstream: an abstraction of the backend service pool, used mainly for load balancing
  • Target: one backend instance within an Upstream, i.e. an ip (or hostname) + port

So a request flows along the chain: Route => Service => Upstream => Target
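The chain can also be read back through the Admin API. The following is an illustrative configuration-inspection fragment rather than a runnable test: it assumes the entities created above and an Admin API reachable on localhost:8001:

```shell
# Walk the chain Route -> Service -> Upstream -> Target
curl -s http://localhost:8001/routes                          # the route matching /test
curl -s http://localhost:8001/services/test                   # the service it points to
curl -s http://localhost:8001/upstreams/testUpstream/targets  # the backend instances
```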

Below is a complete example of Kong used as an HTTP proxy:

# step1: create nginx service
$ cat << EOF > nginx-svc.yml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
EOF

$ kubectl apply -f nginx-svc.yml
deployment.apps/nginx created
service/nginx created

$ kubectl get svc 
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
nginx        ClusterIP   172.28.255.197   <none>        80/TCP    5h18m

# step2: create kong nginx service
$ curl -s -X POST --url http://172.28.255.207:8001/services/ \
> -d 'name=nginx' \
> -d 'protocol=http' \
> -d 'host=nginxUpstream' \
> -d 'port=80' \
> -d 'path=/' \
> | python -m json.tool
{
    "client_certificate": null,
    "connect_timeout": 60000,
    "created_at": 1580560293,
    "host": "nginxUpstream",
    "id": "14100336-f5d2-48ef-a720-d341afceb466",
    "name": "nginx",
    "path": "/",
    "port": 80,
    "protocol": "http",
    "read_timeout": 60000,
    "retries": 5,
    "tags": null,
    "updated_at": 1580560293,
    "write_timeout": 60000
}

# step3: create kong nginx route
$ curl -s -X POST --url http://172.28.255.207:8001/services/nginx/routes \
> -d 'name=nginx' \
> -d 'hosts[]=nginx-test.duyanghao.com' \
> -d 'paths[]=/' \
> -d 'strip_path=true' \
> -d 'preserve_host=true' \
> -d 'protocols[]=http' \
> | python -m json.tool
{
    "created_at": 1580560619,
    "destinations": null,
    "headers": null,
    "hosts": [
        "nginx-test.duyanghao.com"
    ],
    "https_redirect_status_code": 426,
    "id": "bb678485-0b3e-4e8a-9a46-3e5464fedffc",
    "methods": null,
    "name": "nginx",
    "paths": [
        "/"
    ],
    "preserve_host": true,
    "protocols": [
        "http"
    ],
    "regex_priority": 0,
    "service": {
        "id": "14100336-f5d2-48ef-a720-d341afceb466"
    },
    "snis": null,
    "sources": null,
    "strip_path": true,
    "tags": null,
    "updated_at": 1580560619
}

# step4: create kong nginx upstream
$ curl -s -X POST --url http://172.28.255.207:8001/upstreams \
> -d 'name=nginxUpstream' \
> | python -m json.tool
{
    "algorithm": "round-robin",
    "created_at": 1580560763,
    "hash_fallback": "none",
    "hash_fallback_header": null,
    "hash_on": "none",
    "hash_on_cookie": null,
    "hash_on_cookie_path": "/",
    "hash_on_header": null,
    "healthchecks": {
        "active": {
            "concurrency": 10,
            "healthy": {
                "http_statuses": [
                    200,
                    302
                ],
                "interval": 0,
                "successes": 0
            },
            "http_path": "/",
            "https_sni": null,
            "https_verify_certificate": true,
            "timeout": 1,
            "type": "http",
            "unhealthy": {
                "http_failures": 0,
                "http_statuses": [
                    429,
                    404,
                    500,
                    501,
                    502,
                    503,
                    504,
                    505
                ],
                "interval": 0,
                "tcp_failures": 0,
                "timeouts": 0
            }
        },
        "passive": {
            "healthy": {
                "http_statuses": [
                    200,
                    201,
                    202,
                    203,
                    204,
                    205,
                    206,
                    207,
                    208,
                    226,
                    300,
                    301,
                    302,
                    303,
                    304,
                    305,
                    306,
                    307,
                    308
                ],
                "successes": 0
            },
            "type": "http",
            "unhealthy": {
                "http_failures": 0,
                "http_statuses": [
                    429,
                    500,
                    503
                ],
                "tcp_failures": 0,
                "timeouts": 0
            }
        }
    },
    "id": "a4c88440-bd50-48f1-8926-527d02abc4a2",
    "name": "nginxUpstream",
    "slots": 10000,
    "tags": null
}

# step5: create kong nginx target
$ curl -s -X POST --url http://172.28.255.207:8001/upstreams/nginxUpstream/targets -d 'target=172.28.255.197:80' | python -m json.tool
{
    "created_at": 1580570363.097,
    "id": "09871dc5-5ede-4ea3-b232-e52501be071a",
    "target": "172.28.255.197:80",
    "upstream": {
        "id": "a4c88440-bd50-48f1-8926-527d02abc4a2"
    },
    "weight": 100
}

# Test forward through Kong
$ curl -s -X GET \
    --url http://172.28.255.9:8000/ \
    --header 'Host: nginx-test.duyanghao.com'
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

As this shows, after creating the Service, Route, Upstream, and Target through the Kong Admin API, Kong successfully proxies requests to the nginx service.

In addition to HTTP, Kong can also act as a WebSocket, TCP, and gRPC proxy; see Kong proxy for details.
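As one illustration, the same Admin API flow carries over to gRPC by switching the protocol fields. This is an illustrative configuration sketch, not run here: it assumes a live Admin API on localhost:8001 and a hypothetical grpc-upstream backend listening on port 9000:

```shell
# Create a gRPC service and a route that matches one gRPC service path
curl -X POST http://localhost:8001/services \
  --data "name=grpc-svc" \
  --data "protocol=grpc" \
  --data "host=grpc-upstream" \
  --data "port=9000"

curl -X POST http://localhost:8001/services/grpc-svc/routes \
  --data "name=grpc-route" \
  --data "protocols[]=grpc" \
  --data "paths[]=/helloworld.Greeter"
```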

4. Kong Plugins

You’ve probably heard that Kong is built on Nginx, leveraging its stability and efficiency. But how is this possible exactly?
To be more precise, Kong is a Lua application running in Nginx and made possible by the lua-nginx-module. Instead of compiling Nginx with this module, Kong is distributed along with OpenResty, which already includes lua-nginx-module. OpenResty is not a fork of Nginx, but a bundle of modules extending its capabilities.
This sets the foundations for a pluggable architecture, where Lua scripts (referred to as ”plugins”) can be enabled and executed at runtime. Because of this, we like to think of Kong as a paragon of microservice architecture: at its core, it implements database abstraction, routing and plugin management. Plugins can live in separate code bases and be injected anywhere into the request lifecycle, all in a few lines of code.

Since Kong is an OpenResty application, it natively supports Lua plugins. We can attach plugins to a Kong Service, Route, or other entities to satisfy more customized requirements, as follows:

# add plugin for kong nginx service
$ curl -X POST http://172.28.255.207:8001/services/nginx/plugins \
--data "name=rate-limiting" \
--data "config.second=50" \
| python -m json.tool
{
    "config": {
        "day": null,
        "fault_tolerant": true,
        "hide_client_headers": false,
        "hour": null,
        "limit_by": "consumer",
        "minute": null,
        "month": null,
        "policy": "cluster",
        "redis_database": 0,
        "redis_host": null,
        "redis_password": null,
        "redis_port": 6379,
        "redis_timeout": 2000,
        "second": 50,
        "year": null
    },
    "consumer": null,
    "created_at": 1580567002,
    "enabled": true,
    "id": "ce629c6f-046a-45fa-bb0a-2e6aaea70a83",
    "name": "rate-limiting",
    "protocols": [
        "grpc",
        "grpcs",
        "http",
        "https"
    ],
    "route": null,
    "run_on": "first",
    "service": {
        "id": "14100336-f5d2-48ef-a720-d341afceb466"
    },
    "tags": null
}

# Test forward through Kong
$ curl -v -s -X GET \
    --url http://172.28.255.9:8000/ \
    --header 'Host: nginx-test.duyanghao.com'

< HTTP/1.1 200 OK
< Content-Type: text/html; charset=UTF-8
< Content-Length: 612
< Connection: keep-alive
< Server: nginx/1.15.12
< Date: Sat, 01 Feb 2020 14:33:29 GMT
< Vary: Accept-Encoding
< Last-Modified: Tue, 16 Apr 2019 13:08:19 GMT
< ETag: "5cb5d3c3-264"
< Accept-Ranges: bytes
< X-RateLimit-Remaining-second: 49
< X-RateLimit-Limit-second: 50
< X-Kong-Upstream-Latency: 1
< X-Kong-Proxy-Latency: 11
< Via: kong/1.4.2
< 
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

# add plugin for kong nginx route
$ curl -X POST http://172.28.255.207:8001/routes/nginx/plugins \
--data "name=rate-limiting" \
--data "config.second=50" \
| python -m json.tool
{
    "config": {
        "day": null,
        "fault_tolerant": true,
        "hide_client_headers": false,
        "hour": null,
        "limit_by": "consumer",
        "minute": null,
        "month": null,
        "policy": "cluster",
        "redis_database": 0,
        "redis_host": null,
        "redis_password": null,
        "redis_port": 6379,
        "redis_timeout": 2000,
        "second": 50,
        "year": null
    },
    "consumer": null,
    "created_at": 1580567880,
    "enabled": true,
    "id": "f4e18187-b24e-437c-95e3-485589f0e326",
    "name": "rate-limiting",
    "protocols": [
        "grpc",
        "grpcs",
        "http",
        "https"
    ],
    "route": {
        "id": "bb678485-0b3e-4e8a-9a46-3e5464fedffc"
    },
    "run_on": "first",
    "service": null,
    "tags": null
}

# Test forward through Kong
$ curl -v -s -X GET \
    --url http://172.28.255.9:8000/ \
    --header 'Host: nginx-test.duyanghao.com'

< HTTP/1.1 200 OK
< Content-Type: text/html; charset=UTF-8
< Content-Length: 612
< Connection: keep-alive
< Server: nginx/1.15.12
< Date: Sat, 01 Feb 2020 14:54:14 GMT
< Vary: Accept-Encoding
< Last-Modified: Tue, 16 Apr 2019 13:08:19 GMT
< ETag: "5cb5d3c3-264"
< Accept-Ranges: bytes
< X-RateLimit-Remaining-second: 49
< X-RateLimit-Limit-second: 50
< X-Kong-Upstream-Latency: 1
< X-Kong-Proxy-Latency: 6
< Via: kong/1.4.2
< 
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Kong-ingress-controller

The "Kong as an HTTP Proxy" section above used the Admin API to configure Kong to reach a Kubernetes nginx service. That is Kong's native workflow and is not tailored to Kubernetes in any way: if we set up access to Kubernetes services through the Kong Admin API as in that example, we would have to develop the corresponding Kong API CRUD logic ourselves. To solve this common problem (and to broaden Kong's cloud-native use cases), the Kong team created the kubernetes-ingress-controller project.

1. Kong-ingress-controller Principles

Kong-ingress-controller is written in the standard Kubernetes Operator pattern (CRDs + Controller) and consists of two components:

  • Kong, the core proxy that handles all the traffic
  • Controller, a process that syncs the configuration from Kubernetes to Kong

The overall architecture resembles nginx-ingress-controller (Nginx + Controller). The Controller watches Kubernetes resources for changes and updates Kong's configuration accordingly, so that Kong ends up proxying the Kubernetes services. This implies a correspondence between Kubernetes resources and Kong resources:

  • An Ingress resource in Kubernetes defines a set of rules for proxying traffic. These rules correspond to the concept of a Route in Kong.
  • A Service inside Kubernetes is a way to abstract an application that is running on a set of pods. This maps to two objects in Kong: Service and Upstream. The service object in Kong holds the information on the protocol to use to talk to the upstream service and various other protocol specific settings. The Upstream object defines load balancing and healthchecking behavior.
  • Pods associated with a Service in Kubernetes map as a Target belonging to the Upstream (the upstream corresponding to the Kubernetes Service) in Kong. Kong load balances across the Pods of your service. This means that all requests flowing through Kong are not directed via kube-proxy but directly to the pod.

That is, the controller neatly maps a Kubernetes Ingress to a Kong Route, a Kubernetes Service to a Kong Service & Upstream, and Kubernetes Pods to Kong Targets; through this mapping, Kong ends up proxying the Kubernetes services.

2. Kong-ingress-controller Installation & Usage

We install kong-ingress-controller following the official guide:

# install kong-ingress-controller
$ kubectl apply -f https://bit.ly/k4k8s
namespace/kong created
customresourcedefinition.apiextensions.k8s.io/kongconsumers.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongcredentials.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongingresses.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongplugins.configuration.konghq.com created
serviceaccount/kong-serviceaccount created
clusterrole.rbac.authorization.k8s.io/kong-ingress-clusterrole created
clusterrolebinding.rbac.authorization.k8s.io/kong-ingress-clusterrole-nisa-binding created
configmap/kong-server-blocks created
service/kong-admin created
service/kong-proxy created
service/kong-validation-webhook created
deployment.apps/ingress-kong created

$ kubectl get all -nkong
NAME                                READY   STATUS    RESTARTS   AGE
pod/ingress-kong-7875999c56-cbfzs   2/2     Running   1          2m59s

NAME                              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/kong-admin                ClusterIP   172.71.71.166   <none>        8001/TCP                     2m59s
service/kong-proxy                NodePort    172.71.138.94   <none>        80:8000/TCP,443:8443/TCP   2m59s
service/kong-validation-webhook   ClusterIP   172.71.9.238    <none>        443/TCP                      2m59s

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-kong   1/1     1            1           2m59s

NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-kong-7875999c56   1         1         1       2m59s

With kong-ingress-controller installed, we create the nginx Service and Ingress:

# step1: create nginx svc&ingress
$ cat << EOF > nginx-svc.yml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1beta1 
kind: Ingress
metadata:
  name: nginx
spec:
  rules:
  - host: nginx-test.duyanghao.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
EOF

$ kubectl apply -f nginx-svc.yml
deployment.apps/nginx created
service/nginx created
ingress.networking.k8s.io/nginx created

Once the nginx Service and Ingress exist, we can reach nginx through Kong:

# Test forward through Kong
$ curl -s -X GET \
    --url http://172.71.138.94:8000/ \
    --header 'Host: nginx-test.duyanghao.com'
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Notice that, compared with Kong's native workflow, we did not call the Kong Admin API to create any Kong entities here: adding a single Kubernetes Ingress was enough to make the service reachable. Simple and practical.

3. Kong-ingress-controller CRDs

Kong Ingress Controller performs more than just proxying the traffic coming into a Kubernetes cluster. It is possible to configure plugins, load balancing, health checking and leverage all that Kong offers in a standalone installation.

In principle, kong-ingress-controller can cover every feature that Kong itself provides. To that end it defines four CRDs that extend its functionality: KongIngress, KongPlugin, KongConsumer, and KongCredential (the same four CRDs created during the installation above).

Here we focus on the usage of the KongIngress and KongPlugin CRDs:

Kong-ingress-controller CRDs: KongIngress

The Ingress resource in Kubernetes is a fairly narrow and ambiguous API, and doesn't offer resources to describe the specifics of proxying. To overcome this limitation, the KongIngress Custom Resource is used as an "extension" to the existing Ingress API to provide fine-grained control over proxy behavior. In other words, KongIngress works in conjunction with the existing Ingress resource and extends it. It is not meant as a replacement for the Ingress resource in Kubernetes. Using KongIngress, all properties of the Upstream, Service and Route entities in Kong related to an Ingress resource can be modified.

Once a KongIngress resource is created, you can use the configuration.konghq.com
annotation to associate the KongIngress resource with an Ingress or a Service
resource:

  • When the annotation is added to the Ingress resource, the routing
    configurations are updated, meaning all routes associated with the annotated
    Ingress are updated to use the values defined in the KongIngress's route
    section.
  • When the annotation is added to a Service resource in Kubernetes,
    the corresponding Service and Upstream in Kong are updated to use the
    proxy and upstream blocks as defined in the associated
    KongIngress resource.

The below diagram shows how the resources are linked
with one another:
In short: KongIngress extends the Kubernetes Ingress resource, giving finer-grained control over proxy behavior. An example:

# Example1: Use KongIngress with Ingress resource
# Install a dummy service
$ cat << EOF > echo-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: echo
  name: echo
spec:
  ports:
  - port: 8080
    name: high
    protocol: TCP
    targetPort: 8080
  - port: 80
    name: low
    protocol: TCP
    targetPort: 8080
  selector:
    app: echo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: echo
  name: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: echo
    spec:
      containers:
      - image: gcr.io/kubernetes-e2e-test-images/echoserver:2.2
        name: echo
        ports:
        - containerPort: 8080
        env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
        resources: {}
EOF
$ kubectl apply -f echo-service.yaml   
service/echo created
deployment.apps/echo created

# Setup Ingress
$ echo "
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo
spec:
  rules:
  - http:
      paths:
      - path: /foo
        backend:
          serviceName: echo
          servicePort: 80
" | kubectl apply -f -
ingress.extensions/demo created

# Let's test
$ curl -i http://172.71.138.94:8000/foo
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Date: Sun, 02 Feb 2020 07:42:28 GMT
Server: echoserver
X-Kong-Upstream-Latency: 0
X-Kong-Proxy-Latency: 1
Via: kong/1.4.2



Hostname: echo-85fb7989cc-kk7r6

Pod Information:
        node name:      vm-xxx-centos
        pod name:       echo-85fb7989cc-kk7r6
        pod namespace:  default
        pod IP: 172.70.0.21

Server values:
        server_version=nginx: 1.12.2 - lua: 10010

Request Information:
        client_address=172.70.0.17
        method=GET
        real path=/
        query=
        request_version=1.1
        request_scheme=http
        request_uri=http://172.71.138.94:8080/

# Kong will strip the path defined in the Ingress rule before proxying the request to the service. This can be seen in the real path value in the above response.
# We can configure Kong to not strip out this path and to only respond to GET requests for this particular Ingress rule.
# create a KongIngress resource
$ echo "apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: sample-customization
route:
  methods:
  - GET
  strip_path: false" | kubectl apply -f -
kongingress.configuration.konghq.com/sample-customization created

# Now, let's associate this KongIngress resource with our Ingress resource using the configuration.konghq.com annotation.
$ kubectl patch ingress demo -p '{"metadata":{"annotations":{"configuration.konghq.com":"sample-customization"}}}'
ingress.extensions/demo patched

# Now, Kong will proxy only GET requests on /foo path and not strip away /foo:
$ curl -s http://172.71.138.94:8000/foo -X POST
{"message":"no Route matched with those values"}

$ curl -i http://172.71.138.94:8000/foo
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Date: Sun, 02 Feb 2020 07:54:39 GMT
Server: echoserver
X-Kong-Upstream-Latency: 0
X-Kong-Proxy-Latency: 0
Via: kong/1.4.2



Hostname: echo-85fb7989cc-kk7r6

Pod Information:
        node name:      vm-xxx-centos
        pod name:       echo-85fb7989cc-kk7r6
        pod namespace:  default
        pod IP: 172.70.0.21

Server values:
        server_version=nginx: 1.12.2 - lua: 10010

Request Information:
        client_address=172.70.0.17
        method=GET
        real path=/foo
        query=
        request_version=1.1
        request_scheme=http
        request_uri=http://172.71.138.94:8080/foo

# Example2: Use KongIngress with Service resource
# KongIngress can be used to change load-balancing, health-checking and other proxy behaviours in Kong.
# Next, we are going to tweak two settings:
# 1、Configure Kong to hash the requests based on IP address of the client.
# 2、Configure Kong to proxy all the request on /foo to /bar.

# Let's create a KongIngress resource with these settings
$ echo 'apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: demo-customization
upstream:
  hash_on: ip
proxy:
  path: /bar/' | kubectl apply -f -
kongingress.configuration.konghq.com/demo-customization created

# Now, let's associate this KongIngress resource to the echo service.
$ kubectl patch service echo -p '{"metadata":{"annotations":{"configuration.konghq.com":"demo-customization"}}}'
service/echo patched

# Let's test this now
$ curl -i http://172.71.138.94:8000/foo
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Date: Sun, 02 Feb 2020 08:11:42 GMT
Server: echoserver
X-Kong-Upstream-Latency: 1
X-Kong-Proxy-Latency: 0
Via: kong/1.4.2



Hostname: echo-85fb7989cc-kk7r6

Pod Information:
        node name:      vm-xxx-centos
        pod name:       echo-85fb7989cc-kk7r6
        pod namespace:  default
        pod IP: 172.70.0.21

Server values:
        server_version=nginx: 1.12.2 - lua: 10010

Request Information:
        client_address=172.70.0.17
        method=GET
        real path=/bar/foo
        query=
        request_version=1.1
        request_scheme=http
        request_uri=http://172.71.138.94:8080/bar/foo

# Real path received by the upstream service (echo) is now changed to /bar/foo
# Also, now all the requests will be sent to the same upstream pod:
$ curl -s 172.71.138.94:8000/foo | grep "pod IP"
	pod IP:	172.70.0.21
$ curl -s 172.71.138.94:8000/foo | grep "pod IP"
	pod IP:	172.70.0.21
$ curl -s 172.71.138.94:8000/foo | grep "pod IP"
	pod IP:	172.70.0.21

Kong-ingress-controller CRDs: KongPlugin

Kong is designed around an extensible plugin architecture and comes with a wide variety of plugins already bundled inside it.
These plugins can be used to modify the request/response or impose restrictions
on the traffic.

Once this resource is created, the resource needs to be associated with an Ingress, Service, or KongConsumer resource in Kubernetes. For more details, please read the reference documentation on KongPlugin.

The below diagram shows how the KongPlugin resource can be linked to an Ingress, Service, or KongConsumer:
In short: KongPlugin attaches Kong plugin functionality to Kubernetes services. An example:

# start httpbin service.
$ cat << EOF > httpbin.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  labels:
    app: httpbin
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    app: httpbin
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
  template:
    metadata:
      labels:
        app: httpbin
    spec:
      containers:
      - image: docker.io/kennethreitz/httpbin
        name: httpbin
        ports:
        - containerPort: 80
EOF

$ kubectl apply -f httpbin.yaml 
service/httpbin created
deployment.apps/httpbin created

# Setup Ingress rules
$ echo "
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo2
spec:
  rules:
  - http:
      paths:
      - path: /baz
        backend:
          serviceName: httpbin
          servicePort: 80
" | kubectl apply -f -
ingress.extensions/demo2 created

# Let's test
$ curl -i http://172.71.138.94:8000/baz/status/200
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 0
Connection: keep-alive
Server: gunicorn/19.9.0
Date: Sun, 02 Feb 2020 09:19:11 GMT
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
X-Kong-Upstream-Latency: 1
X-Kong-Proxy-Latency: 0
Via: kong/1.4.2

# Configuring plugins on Ingress resource
# First, we will create a KongPlugin resource
$ echo '
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: add-response-header
config:
  add:
    headers:
    - "demo: injected-by-kong"
plugin: response-transformer
' | kubectl apply -f -
kongplugin.configuration.konghq.com/add-response-header created

# Next, we will associate it with our Ingress rules
$ kubectl patch ingress demo2 -p '{"metadata":{"annotations":{"plugins.konghq.com":"add-response-header"}}}'
ingress.extensions/demo2 patched

# Here, we are asking Kong Ingress Controller to execute the response-transformer plugin whenever a request matching the Ingress rule is processed
# Let's test it out
$ curl -i http://172.71.138.94:8000/baz/status/200
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 0
Connection: keep-alive
Server: gunicorn/19.9.0
Date: Sun, 02 Feb 2020 09:22:44 GMT
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
demo:  injected-by-kong
X-Kong-Upstream-Latency: 1
X-Kong-Proxy-Latency: 0
Via: kong/1.4.2

# As can be seen in the output, the demo header is injected by Kong when the request matches the Ingress rules defined in the demo Ingress resource.

Note the difference in usage compared with adding plugins in native Kong; see using-kongplugin-resource for more examples.

4. Kong-ingress-controller High Availability

Kong Ingress Controller is designed to be reasonably easy to operate and be highly available, meaning, when some expected failures do occur, the Controller should be able to continue to function with minimum possible service disruption.

Kong Ingress Controller is composed of two parts: 1. Kong, which handles the requests, 2. Controller, which configures Kong dynamically.

Kong itself can be deployed in a Highly available manner by deploying multiple instances (or pods). Kong nodes are state-less, meaning a Kong pod can be terminated and restarted at any point of time.

The controller itself can be stateful or stateless, depending on if a database is being used or not.

If a database is not used, then the Controller and Kong are deployed as colocated containers in the same pod and each controller configures the Kong container that it is running with.

For cases when a database is necessary, the Controllers can be deployed on multiple zones to provide redundancy. In such a case, a leader election process will elect one instance as a leader, which will manipulate Kong's configuration.

From the description in Kong-ingress-controller High-availability and Scaling, the high-availability schemes are as follows:

Kong DB-less Mode

In DB-less mode, each Pod (Controller + Kong) keeps its own independent configuration (stateless from the cluster's point of view), so deploying kong-ingress-controller with a replica count > 1 is enough for high availability:

$ kubectl get deploy -nkong
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
ingress-kong   1/1     1            1           8h
$ kubectl scale --replicas=2 deploy/ingress-kong -nkong
deployment.apps/ingress-kong scaled
$ kubectl get deploy -nkong
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
ingress-kong   2/2     2            2           8h
$ kubectl get pods -nkong
NAME                            READY   STATUS    RESTARTS   AGE
ingress-kong-7875999c56-cbfzs   2/2     Running   1          8h
ingress-kong-7875999c56-r44bn   2/2     Running   1          34s


Kong DB Mode

In DB mode, Kong instances share a database. Kong itself is stateless and can be scaled out, and the Controller implements leader election to guarantee that only one replica maintains Kong's configuration (via the Kong Admin API) at any given time. High availability therefore requires:

  • 1. Kong & Controller replica count > 1
  • 2. A highly available database


Conclusion

An API gateway is an indispensable component of cloud-native applications, and Kong's high performance and extensibility have made it the most popular API gateway in the community today. This article introduced Kong's basic usage, taking the Admin API as the entry point and using Kong as an HTTP proxy. Although Kong can be driven entirely through Admin API calls, that approach is too cumbersome and heavyweight for Kubernetes applications, so the Kong team created the Kong-ingress-controller project to address this common problem. Built as a standard Kubernetes Operator, Kong-ingress-controller lets Kong proxy Kubernetes services in a Kubernetes-native way, and uses four CRDs to extend its functionality toward everything native Kong covers; it can fairly be called the best practice for a Kubernetes-native API gateway. Finally, this article presented high-availability schemes for Kong-ingress-controller, which are important and necessary in enterprise production environments.

