Kafka Bridge Functionality
Kafka Bridge lets us integrate internal and external HTTP clients with a Kafka cluster.
- Internal clients are container-based HTTP clients that run in the same Kubernetes cluster as the Kafka Bridge itself. Internal clients can reach the Kafka Bridge on the host and port defined in the KafkaBridge custom resource (see the sketch after this list).
- External clients are HTTP clients that run outside the Kubernetes cluster in which the Kafka Bridge is deployed. External clients can reach the Kafka Bridge through an OpenShift Route, a LoadBalancer Service, or a Kubernetes Ingress.
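A minimal sketch of internal access, assuming the Kafka Bridge deployed in the next section (Service my-bridge-bridge-service in the kafka namespace) and the Strimzi default bridge HTTP port 8080; the test Pod name and image are illustrative only:
$ oc -n kafka run bridge-test -ti --rm=true --restart=Never \
  --image=registry.access.redhat.com/ubi8/ubi \
  -- curl -s -o /dev/null -w '%{http_code}\n' \
  http://my-bridge-bridge-service:8080/healthy
A healthy bridge prints 200, matching the /healthy check done over the Route later in this article.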
Configuring the Kafka Bridge
The steps in this article build on the environment set up in 《OpenShift 4 之部署Strimzi Operator運行Kafka應用》.
- Using the Strimzi Operator in the OpenShift Console, create a Kafka Bridge with the default configuration, then check the resulting KafkaBridge object (an equivalent CLI sketch follows the output below).
$ oc get KafkaBridge
NAME DESIRED REPLICAS
my-bridge 1
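The console creates a KafkaBridge resource roughly like the following; this is only a sketch of an equivalent CLI approach, assuming the kafka namespace and the my-cluster Kafka cluster used later in this article (the apiVersion varies by Strimzi release):
$ oc -n kafka apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1alpha1   # newer Strimzi releases use v1beta2
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  http:
    port: 8080
EOF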
- Create a Route from the Kafka Bridge Service. Besides an OpenShift Route, the Kafka Bridge Service can also be exposed externally through a LoadBalancer Service or an Ingress (an Ingress sketch follows the command below).
$ oc expose svc my-bridge-bridge-service
route.route.openshift.io/my-bridge-bridge-service exposed
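For reference, an Ingress alternative might look like the sketch below; the host name is hypothetical, and this assumes the networking.k8s.io/v1 Ingress API and the default bridge Service port 8080. The Route created above is what the rest of this article uses.
$ oc -n kafka apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-bridge-ingress
spec:
  rules:
  - host: bridge.example.com           # hypothetical external host name
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-bridge-bridge-service
            port:
              number: 8080
EOF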
Testing and Verifying the Kafka Bridge
Sending Messages
- Set the KAFKA_BRIDGE and KAFKA_TOPIC environment variables.
$ KAFKA_BRIDGE=$(oc get route my-bridge-bridge-service -o template --template '{{.spec.host}}')
$ KAFKA_TOPIC=my-topic
- Run the following command to confirm that the Kafka Bridge is healthy (it should return "HTTP/1.1 200 OK").
$ curl -v http://$KAFKA_BRIDGE/healthy
* About to connect() to my-bridge-bridge-service-kafka.apps.cluster-anhui-582d.anhui-582d.example.opentlc.com port 80 (#0)
*   Trying 52.204.55.24...
* Connected to my-bridge-bridge-service-kafka.apps.cluster-anhui-582d.anhui-582d.example.opentlc.com (52.204.55.24) port 80 (#0)
> GET /healthy HTTP/1.1
> User-Agent: curl/7.29.0
> Host: my-bridge-bridge-service-kafka.apps.cluster-anhui-582d.anhui-582d.example.opentlc.com
> Accept: */*
>
< HTTP/1.1 200 OK
< content-length: 0
< Set-Cookie: 93b1d08256cbf837e3463c0bba903028=0da6748548970e857b22c45232a307b1; path=/; HttpOnly
< Cache-control: private
<
* Connection #0 to host my-bridge-bridge-service-kafka.apps.cluster-anhui-582d.anhui-582d.example.opentlc.com left intact
- Run the following command to send test messages to the Kafka Topic with curl.
$ curl -X POST \
http://$KAFKA_BRIDGE/topics/$KAFKA_TOPIC \
-H 'content-type: application/vnd.kafka.json.v2+json' \
-d '{
"records": [
{
"key": "key-1",
"value": "value-1"
},
{
"key": "key-2",
"value": "value-2"
}
]
}'
- Run a Kafka consumer to verify that the test messages ("value-1" and "value-2") are received.
$ KAFKA_TOPIC=${1:-'my-topic'}
$ KAFKA_CLUSTER_NS=${2:-'kafka'}
$ KAFKA_CLUSTER_NAME=${3:-'my-cluster'}
$ oc -n $KAFKA_CLUSTER_NS run kafka-consumer -ti \
--image=strimzi/kafka:0.15.0-kafka-2.3.1 \
--rm=true --restart=Never \
-- bin/kafka-console-consumer.sh \
--bootstrap-server $KAFKA_CLUSTER_NAME-$KAFKA_CLUSTER_NS-bootstrap:9092 \
--topic $KAFKA_TOPIC --from-beginning
"value-1"
"value-2"
Note: if this fails with 'Error from server (AlreadyExists): pods "kafka-consumer" already exists', manually delete the existing Pod named kafka-consumer first and then rerun the command.
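For example, assuming the kafka namespace used above:
$ oc -n kafka delete pod kafka-consumer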
Receiving Messages
To receive messages from a Kafka Topic over HTTP through the Kafka Bridge, we first need to subscribe to the topic through a Consumer Group.
- Set the KAFKA_BRIDGE and KAFKA_CONSUMER_GROUP environment variables.
$ KAFKA_BRIDGE=$(oc get route my-bridge-bridge-service -o template --template '{{.spec.host}}')
$ KAFKA_CONSUMER_GROUP=my-group
- Create a Consumer named my-consumer and add it to the Consumer Group named my-group.
$ curl -X POST http://$KAFKA_BRIDGE/consumers/$KAFKA_CONSUMER_GROUP \
-H 'content-type: application/vnd.kafka.v2+json' \
-d '{
"name": "my-consumer",
"format": "json",
"auto.offset.reset": "earliest",
"enable.auto.commit": false
}'
A successful call returns:
{
"instance_id":"my-consumer",
"base_uri":"http://my-bridge-bridge-service-kafka.apps.cluster-anhui-582d.anhui-582d.example.opentlc.com:80/consumers/my-group/instances/my-consumer"
}
- Run the following command to create a subscription for the my-consumer Consumer; the subscription target is the Kafka Topic named my-topic. If the command returns no error, the subscription succeeded.
$ curl -X POST http://$KAFKA_BRIDGE/consumers/$KAFKA_CONSUMER_GROUP/instances/my-consumer/subscription \
-H 'content-type: application/vnd.kafka.v2+json' \
-d '{
"topics": [
"my-topic"
]
}'
- Using either of the methods above, send the test values 1, 2, 3, and 4 to my-topic in order (a sketch using the bridge endpoint follows).
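For example, the following sketch reuses the bridge endpoint from the previous section; the values are sent without keys, which is why key is null in the records fetched below:
$ curl -X POST \
  http://$KAFKA_BRIDGE/topics/my-topic \
  -H 'content-type: application/vnd.kafka.json.v2+json' \
  -d '{"records": [{"value": 1}, {"value": 2}, {"value": 3}, {"value": 4}]}'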
- Run the following command to fetch the test messages from my-topic through the my-consumer Consumer via the Kafka Bridge.
$ curl -X GET http://$KAFKA_BRIDGE/consumers/$KAFKA_CONSUMER_GROUP/instances/my-consumer/records \
-H 'accept: application/vnd.kafka.json.v2+json'
The result (partially reformatted for readability):
[
{"topic":"my-topic","key":null,"value":3,"partition":8,"offset":0},
{"topic":"my-topic","key":null,"value":4,"partition":9,"offset":0},
{"topic":"my-topic","key":null,"value":1,"partition":2,"offset":0},
{"topic":"my-topic","key":null,"value":2,"partition":3,"offset":0}
]