K8s Ingress: High Availability with Real Client IP Pass-Through

Introduction:

  Kubernetes offers the following ways to expose a service:

  1. Proxy + ClusterIP
  2. NodePort
  3. LoadBalancer
  4. Ingress

 

  1. Proxy + ClusterIP

Custom proxy + cluster-internal IP (ClusterIP)

There are a few scenarios where you would access your services through the Kubernetes proxy:

•     Debugging a service, or connecting to it directly from your laptop for some reason.

•     Allowing internal traffic, displaying internal dashboards, and so on.

Because this approach requires running kubectl as an authenticated user, you should not use it to expose services to the internet or rely on it in production.
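As a hedged illustration of the proxy approach (the service name "my-service", namespace "default", and port 8080 are hypothetical), reaching a service through the API-server proxy looks roughly like this:

# Start a local proxy to the API server (listens on 127.0.0.1:8001 here)
kubectl proxy --port=8001

# In another terminal, reach the service through the API-server proxy path
curl http://127.0.0.1:8001/api/v1/namespaces/default/services/my-service:8080/proxy/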

 

  2. NodePort (the approach currently in use):

A NodePort service is the most primitive way to get external traffic to your service. NodePort, as the name implies, opens a specific port on every node (VM), and any traffic sent to that port is forwarded to the corresponding service.

Advantages: simple and fast.

Disadvantages:

1 Exposing services this way effectively punches holes in the k8s environment: every worker node must open an externally reachable port. As applications grow, the cluster's ports are consumed quickly and become hard to manage, and, worse, it becomes impossible to designate edge servers because every node ends up being an edge node.

2 With nodePort exposure, a request may be forwarded through a second hop inside the cluster:

That second hop rewrites the packet's source address via SNAT, so applications inside the cluster cannot obtain the user's real IP, and our gateway cannot do IP-based traffic control.
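As a quick illustration (the Deployment name "my-app" is hypothetical), the default NodePort behaviour that causes this can be inspected like so:

# Expose a Deployment via NodePort
kubectl expose deployment my-app --type=NodePort --port=8080

# externalTrafficPolicy defaults to "Cluster", which allows the second hop + SNAT described above;
# step 5 below switches the ingress Service to "Local" to preserve the client IP
kubectl get svc my-app -o jsonpath='{.spec.externalTrafficPolicy}'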

 

3  LoadBalancer

A LoadBalancer service is the standard way to expose a service to the internet. On GKE, this spins up a Network Load Balancer[2] that gives you a single IP address and forwards all traffic to your service.

The biggest downside is that every service exposed with a LoadBalancer gets its own IP address, and you pay for each LoadBalancer used, which can get very expensive.
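A minimal sketch (the Deployment name "hello" is hypothetical, and this only works on platforms with a cloud load-balancer integration, such as GKE):

# Expose a Deployment through a cloud load balancer; the provider allocates an external IP
kubectl expose deployment hello --type=LoadBalancer --port=80 --target-port=8080

# Wait for the EXTERNAL-IP column to be populated
kubectl get svc hello -w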

 

4  Ingress

Ingress is probably the most powerful way to expose your services, but it can also be the most complicated. There are many types of Ingress controllers, including Google Cloud Load Balancer, Nginx, Contour, Istio, and more. There are also plugins such as cert-manager[5], which can automatically provision SSL certificates for your services.

Advantages: powerful and free;

Disadvantages: complex to set up

 

Our current requirements:

  1. The system serves third-party systems and must control access by the real client IP, so the client IP has to be passed through
  2. High availability must be guaranteed
  3. Blue-green deployments require traffic switching, which relies on Ingress

 

The high-availability Ingress architecture is as follows:

 

Installation steps:
1、    Request a VIP (virtual IP) and map it to a DNS name. If you do not have permission to add records to the internal corporate DNS but need to test locally, add the VIP-to-domain mapping to the hosts file on your local machine (a sample entry is sketched below).
2、    Install keepalived on each of the three worker nodes
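For the local-testing case in step 1, a hosts entry could look like the following sketch; the VIP placeholder and the domain match the ones used later in this document:

# /etc/hosts: map the VIP to the Ingress domain
10.xx.xx.xx    ingress.xxxx.com.cn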

#Install keepalived
yum install -y keepalived

Config file: /etc/keepalived/keepalived.conf
Back it up first:
cd /etc/keepalived
cp keepalived.conf keepalived_BK.conf


Run on the first node:
cat <<EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     xxxxxx
   }
   notification_email_from xxxxxxx
   smtp_server 10.xx.xx.xx
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.xx.xx.xx
    }
}
EOF

Run on the second node (any additional backup node would use the same configuration with a lower priority):
cat <<EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     xxxxxxx
   }
   notification_email_from xxxxxxx
   smtp_server 10.xx.xx.xx
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 52
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.xx.xx.xx
    }
}
EOF


#Start keepalived and enable it at boot
systemctl start keepalived && systemctl enable keepalived

#Verification:
#1、 ping the VIP
#2、 systemctl status keepalived
#3、 ip a   -- check whether the VIP is bound to the physical NIC
#4、 systemctl stop/start keepalived -- stop keepalived on the priority-100 node and check whether the VIP fails over to another machine's NIC; after restarting it, check whether the VIP fails back
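A concrete verification sequence might look like the following sketch (the interface name eth0 and the VIP placeholder follow the configuration above):

# From any machine: the VIP should answer
ping -c 3 10.xx.xx.xx

# On the current MASTER: the VIP should show up as an extra address on eth0
ip addr show eth0 | grep 10.xx

# Simulate a failover: stop keepalived on the MASTER, confirm the VIP moved to a backup node,
# then start it again and confirm the VIP returns
systemctl stop keepalived
systemctl start keepalived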

 
3、    Add a label to the worker nodes
kubectl label node node1 labelName=ingress-controller
kubectl label node node2 labelName=ingress-controller
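To confirm the labels took effect (a quick check, not part of the original steps):

# List the nodes carrying the label used by the DaemonSet nodeSelector below
kubectl get nodes -l labelName=ingress-controller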

4、    Install nginx-ingress-controller on the labeled worker nodes via mandatory.yaml; the modifications are:

(1) Change the Deployment to a DaemonSet

(2) Point the pod nodeSelector at the labeled nodes

(3) Change the image address

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
        labelName: ingress-controller
      containers:
        - name: nginx-ingress-controller
          image: siriuszg/nginx-ingress-controller:0.26.1
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown

---
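After editing mandatory.yaml, apply it and check that one controller pod is Running on each labeled node (a sketch; the file name follows the step above):

kubectl apply -f mandatory.yaml

# One nginx-ingress-controller pod should be Running per labeled node
kubectl get pods -n ingress-nginx -o wide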


5、     Create the Service that passes the real client IP through; note that externalTrafficPolicy is Local

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30030
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30031
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  externalTrafficPolicy: Local
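A quick check that the Service keeps the Local policy (sketch):

kubectl -n ingress-nginx get svc ingress-nginx
kubectl -n ingress-nginx get svc ingress-nginx -o jsonpath='{.spec.externalTrafficPolicy}'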


6、     Create the Service for the application; note that its type changes from NodePort to ClusterIP
 

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: test-kube-dev
  name: test-deploy
  labels:
    app: test-deploy
spec:
  replicas: 1
  template:
    metadata:
      name: test
      labels:
        app: test-dev
    spec:
      containers:
        - name: test
          image: xxxxxxxxxxxx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
      restartPolicy: Always
      imagePullSecrets:
        - name: harbor-secret-name
  selector:
    matchLabels:
      app: test-dev
---
apiVersion: v1
kind: Service
metadata:
  namespace: test-kube-dev
  name: test-service
spec:
  selector:
    app: test-dev
  ports:
    - name: tomcatport
      port: 8080
      targetPort: 8080
  type: ClusterIP
  sessionAffinity: ClientIP


7、    Create the Ingress for that Service

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: test-kube-dev
  name: test-dev-ing
#  annotations:
#    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: ingress.xxxx.com.cn
    http:
      paths:
      # legacy (old-version) service
      - path: /test
        backend:
          serviceName: test-service
          servicePort: 8080

 
8、    After deploying the application, verify that it can be reached and that the real client IP comes through:
The access URL is: <VIP domain>:<ingress Service NodePort>/<path>
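A verification sketch, using the domain, NodePort, and path from the manifests above; the nginx ingress controller forwards the caller's address in the X-Real-IP / X-Forwarded-For headers:

# HTTP NodePort 30030 was defined on the ingress-nginx Service; /test is the Ingress path
curl http://ingress.xxxx.com.cn:30030/test

# On the backend, check the received headers or the access log: with externalTrafficPolicy: Local,
# X-Real-IP / X-Forwarded-For should contain the client address rather than a node IP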
