Deploying a Kubernetes Cluster from Binaries

Reference video: https://ke.qq.com/course/276857

1. Deployment Steps


2. Environment Planning


master: 172.16.38.208
node1:  172.16.38.174
node2:  172.16.38.234

2.1. Set the hostnames

hostnamectl set-hostname master    # on 172.16.38.208
hostnamectl set-hostname node1     # on 172.16.38.174
hostnamectl set-hostname node2     # on 172.16.38.234

2.2. Add hostname resolution

echo "172.16.38.174 node1" >>/etc/hosts
echo "172.16.38.234 node2" >>/etc/hosts
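Appending with echo adds a duplicate entry every time it is re-run. A small guard sketch (add_host is a hypothetical helper, not part of the original steps):

```shell
# Append a hosts entry only if that exact line is not already present (safe to re-run).
add_host() {
  local entry=$1 file=${2:-/etc/hosts}
  grep -qxF "$entry" "$file" || echo "$entry" >> "$file"
}
```

Usage: `add_host "172.16.38.174 node1"`.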

2.3. Disable SELinux and the firewall

systemctl disable firewalld
systemctl stop firewalld
setenforce 0    # disable SELinux immediately
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config    # persist across reboots

2.4. Install Docker on the nodes and configure a domestic registry mirror
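The original gives no commands for this step. Assuming Docker CE is installed from your distribution's repository, a registry mirror goes in /etc/docker/daemon.json; the mirror URL below is an example, not one prescribed by the original:

```json
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
```

After writing the file, run `systemctl daemon-reload && systemctl restart docker` on each node.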

3. Self-Signed TLS Certificates


3.1. Install the certificate tool cfssl

mkdir ssl
cd ssl/
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x ./*
mv cfssl_linux-amd64 /usr/local/bin/cfssl            # generates certificates
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson    # JSON output support
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo    # view certificate info
#cfssl print-defaults config > config.json            # generate a config template
#cfssl print-defaults csr > csr.json                  # generate a CSR template

3.2. Generate the CA certificate

vim ca-config.json

{
    "signing": {
        "default": {
            "expiry": "87600h"
        },
        "profiles": {
            "kubernetes": {
                "expiry": "87600h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}

vim ca-csr.json

{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}

Generate the CA certificate:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
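To sanity-check any generated certificate (ca.pem here, server.pem later), OpenSSL can print its subject and validity window. This helper is a convenience sketch, not part of the original steps:

```shell
# Print a certificate's subject and validity period (expects a PEM file path).
check_cert() {
  openssl x509 -in "$1" -noout -subject -dates
}
```

For example, `check_cert ca.pem` should show a subject containing CN = kubernetes.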

3.3. Generate the server certificate

The hosts list must cover every address the API server will be reached by, including the first IP of the service range (10.10.10.1, from the 10.10.10.0/24 range configured later for the apiserver).

vim server-csr.json

{
    "CN": "kubernetes",
    "hosts": [
      "127.0.0.1",
      "10.10.10.1",
      "172.16.38.208",
      "172.16.38.174",
      "172.16.38.234",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json |cfssljson -bare server

3.4. Generate the admin certificate

vim admin-csr.json

{
    "CN": "admin",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "system:masters",
            "OU": "System"
        }
    ]
}
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json |cfssljson -bare admin

3.5. Generate the kube-proxy certificate

vim kube-proxy-csr.json

{
    "CN": "system:kube-proxy",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json |cfssljson -bare kube-proxy

3.6. Remove unneeded files

ls |grep -v pem |xargs rm -rf
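The filter keeps anything whose name contains "pem" and deletes the rest (CSRs and the JSON request files). Demonstrated below in a throwaway directory rather than the real ssl directory:

```shell
# Demonstration in a scratch directory: only the *.pem files survive the cleanup.
demo=$(mktemp -d)
cd "$demo"
touch ca.pem ca-key.pem ca.csr ca-config.json server.pem
ls | grep -v pem | xargs rm -rf
ls    # remaining: ca-key.pem ca.pem server.pem
```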

4. Deploy the etcd Cluster

mkdir -p /opt/kubernetes/{bin,cfg,ssl}

4.1. Download the etcd v3.2.29 binary package (on the master node)

Download: https://github.com/etcd-io/etcd/releases/

tar xvf etcd-v3.2.29-linux-amd64.tar.gz
mv etcd-v3.2.29-linux-amd64/{etcd,etcdctl} /opt/kubernetes/bin/
cp ssl/ca*.pem ssl/server*.pem /opt/kubernetes/ssl/

4.2. Write the etcd configuration file

The master runs as etcd01; the node files in section 4.7 follow the same pattern.

cat > /opt/kubernetes/cfg/etcd << EOF
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.38.208:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.38.208:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.38.208:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.38.208:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://172.16.38.208:2380,etcd02=https://172.16.38.174:2380,etcd03=https://172.16.38.234:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

4.3. Write etcd.service

The unit wires in the variables from the configuration file above and the certificates generated in section 3:

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
EnvironmentFile=-/opt/kubernetes/cfg/etcd
ExecStart=/opt/kubernetes/bin/etcd \\
--name=\${ETCD_NAME} \\
--data-dir=\${ETCD_DATA_DIR} \\
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \\
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \\
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \\
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \\
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \\
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \\
--initial-cluster-state=new \\
--cert-file=/opt/kubernetes/ssl/server.pem \\
--key-file=/opt/kubernetes/ssl/server-key.pem \\
--peer-cert-file=/opt/kubernetes/ssl/server.pem \\
--peer-key-file=/opt/kubernetes/ssl/server-key.pem \\
--trusted-ca-file=/opt/kubernetes/ssl/ca.pem \\
--peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

4.4. Start etcd

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd    # appears to hang until the other members join; press Ctrl+C
ps -ef |grep etcd


4.5. Configure passwordless SSH login

ssh-keygen -t rsa    # press Enter through all prompts
ssh-copy-id root@172.16.38.174    # copy the public key to node1
ssh-copy-id root@172.16.38.234

4.6. Copy the files to both nodes

rsync -avzP /opt/kubernetes node1:/opt/
rsync -avzP /opt/kubernetes node2:/opt/
scp /usr/lib/systemd/system/etcd.service node1:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service node2:/usr/lib/systemd/system/

4.7. Edit the etcd configuration file on each node

On node1 (/opt/kubernetes/cfg/etcd):

#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.38.174:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.38.174:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.38.174:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.38.174:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://172.16.38.208:2380,etcd02=https://172.16.38.174:2380,etcd03=https://172.16.38.234:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

systemctl start etcd

On node2 (/opt/kubernetes/cfg/etcd):

#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.38.234:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.38.234:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.38.234:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.38.234:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://172.16.38.208:2380,etcd02=https://172.16.38.174:2380,etcd03=https://172.16.38.234:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

systemctl start etcd

4.8. Add environment variables (master)

echo 'PATH=$PATH:/opt/kubernetes/bin' >>/etc/profile
source /etc/profile

4.9. Verify the cluster

cd /opt/kubernetes/ssl

etcdctl --ca-file=ca.pem \
--cert-file=server.pem \
--key-file=server-key.pem \
--endpoints="https://172.16.38.208:2379,https://172.16.38.174:2379,https://172.16.38.234:2379" \
cluster-health


5. Deploy the Flannel Network

5.1. Download the flannel binary package and copy it to the nodes

wget https://github.com/coreos/flannel/releases/download/v0.9.1/flannel-v0.9.1-linux-amd64.tar.gz
tar xvf flannel-v0.9.1-linux-amd64.tar.gz
scp flanneld mk-docker-opts.sh node1:/opt/kubernetes/bin/
scp flanneld mk-docker-opts.sh node2:/opt/kubernetes/bin/

5.2. Write the allocated subnet range into etcd for flanneld

cd /opt/kubernetes/ssl/
etcdctl --ca-file=ca.pem \
--cert-file=server.pem \
--key-file=server-key.pem \
--endpoints="https://172.16.38.208:2379,https://172.16.38.174:2379,https://172.16.38.234:2379" \
set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan" }}'
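The value stored at /coreos.com/network/config is plain JSON that flanneld parses at startup, so it can be sanity-checked locally before writing it into etcd (this check is illustrative and assumes python3 is available):

```shell
# Validate the network config JSON and print the fields flanneld will read.
echo '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan" }}' \
  | python3 -c 'import json,sys; c = json.load(sys.stdin); print(c["Network"], c["Backend"]["Type"])'
# 172.17.0.0/16 vxlan
```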

5.3. Write the flanneld configuration file (on the nodes)

cat > /opt/kubernetes/cfg/flanneld << EOF
FLANNEL_OPTIONS="--etcd-endpoints=https://172.16.38.208:2379,https://172.16.38.174:2379,https://172.16.38.234:2379 -etcd-cafile=/opt/kubernetes/ssl/ca.pem -etcd-certfile=/opt/kubernetes/ssl/server.pem -etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
EOF

5.4. Write the flanneld.service unit file

cat > /usr/lib/systemd/system/flanneld.service <<EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

5.5. Modify docker.service

Change two lines in the [Service] section:

EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
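After flanneld starts, mk-docker-opts.sh renders /run/flannel/subnet.env, which docker.service now sources. Its contents look roughly like this (the per-node subnet 172.17.55.1/24 is an example value; each node gets its own /24 from the 172.17.0.0/16 range):

```
DOCKER_NETWORK_OPTIONS=" --bip=172.17.55.1/24 --ip-masq=false --mtu=1450"
```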


5.6. Start the services

systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker

5.7. Check the network interfaces (docker0 and flannel.1 are on the same network)


5.8. Copy the configuration files to the other node and repeat the same steps

scp cfg/flanneld 172.16.38.234:/opt/kubernetes/cfg/
scp /usr/lib/systemd/system/{docker.service,flanneld.service} 172.16.38.234:/usr/lib/systemd/system/
systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker

6. Deploy the Master Components

6.1. Download the master binary package

wget https://storage.googleapis.com/kubernetes-release/release/v1.14.2/kubernetes-server-linux-amd64.tar.gz
tar xvf kubernetes-server-linux-amd64.tar.gz
mv kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl} /opt/kubernetes/bin/
chmod +x /opt/kubernetes/bin/*

6.2. Create the TLS bootstrapping token

cd /opt/kubernetes/cfg/
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
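The generated token is 16 random bytes rendered as 32 lowercase hex characters. A quick format check (the grep line is a sanity check, not part of the original flow):

```shell
# Generate a bootstrap token and verify it is exactly 32 hex characters.
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "$BOOTSTRAP_TOKEN" | grep -Eq '^[0-9a-f]{32}$' && echo "token format ok"
```

The same BOOTSTRAP_TOKEN variable is reused in section 7.2.2, so run the kubeconfig steps in the same shell (or re-read it from token.csv).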

6.3. The apiserver.sh script

vim /opt/kubernetes/bin/apiserver.sh

#!/bin/bash

MASTER_ADDRESS=${1:-"192.168.1.195"}
ETCD_SERVERS=${2:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--insecure-bind-address=127.0.0.1 \\
--bind-address=${MASTER_ADDRESS} \\
--insecure-port=8080 \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.10.10.0/24 \\
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \\
--etcd-certfile=/opt/kubernetes/ssl/server.pem \\
--etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver

6.4. The controller-manager.sh script

vim /opt/kubernetes/bin/controller-manager.sh

#!/bin/bash

MASTER_ADDRESS=${1:-"127.0.0.1"}

cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.10.10.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

6.5. The scheduler.sh script

vim /opt/kubernetes/bin/scheduler.sh

#!/bin/bash

MASTER_ADDRESS=${1:-"127.0.0.1"}

cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler

6.6. Run the scripts

cd /opt/kubernetes/bin/
chmod +x *.sh
./apiserver.sh 172.16.38.208 https://172.16.38.208:2379,https://172.16.38.174:2379,https://172.16.38.234:2379
./controller-manager.sh
./scheduler.sh

6.7. Check the master component status

[root@master bin]# kubectl get cs

7. Create the kubeconfig Files for the Nodes

7.1. Set the API server endpoint

export KUBE_APISERVER="https://172.16.38.208:6443"

7.2. Create the kubelet kubeconfig

7.2.1. Set the cluster parameters

cd /opt/kubernetes/ssl
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

7.2.2. Set the client authentication parameters

kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

7.2.3. Set the context parameters

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

7.2.4. Use the default context

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
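The four commands above assemble a single file, bootstrap.kubeconfig, whose shape is roughly the following (certificate data and token elided; the actual values depend on your environment):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    certificate-authority-data: <base64-encoded ca.pem>
    server: https://172.16.38.208:6443
users:
- name: kubelet-bootstrap
  user:
    token: <BOOTSTRAP_TOKEN>
contexts:
- name: default
  context:
    cluster: kubernetes
    user: kubelet-bootstrap
current-context: default
```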

7.3. Create the kube-proxy kubeconfig

7.3.1. Create the kube-proxy kubeconfig file

kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

cp /root/ssl/kube-proxy* /opt/kubernetes/ssl/
kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

7.4. Copy bootstrap.kubeconfig and kube-proxy.kubeconfig to all nodes

mv bootstrap.kubeconfig kube-proxy.kubeconfig ../cfg/
scp ../cfg/{bootstrap.kubeconfig,kube-proxy.kubeconfig} node1:/opt/kubernetes/cfg/
scp ../cfg/{bootstrap.kubeconfig,kube-proxy.kubeconfig} node2:/opt/kubernetes/cfg/

8. Deploy the Node Components

8.1. Add the bootstrap role binding

[root@master ~]# kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

8.2. Send kubelet and kube-proxy to the nodes

[root@master ~]# scp kubernetes/server/bin/{kubelet,kube-proxy} node1:/opt/kubernetes/bin/
[root@master ~]# scp kubernetes/server/bin/{kubelet,kube-proxy} node2:/opt/kubernetes/bin/

8.3. On node1, write the kubelet.sh script

vim /opt/kubernetes/bin/kubelet.sh

#!/bin/bash

NODE_ADDRESS=${1:-"192.168.1.196"}
DNS_SERVER_IP=${2:-"10.10.10.2"}

cat <<EOF >/opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--address=${NODE_ADDRESS} \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--cert-dir=/opt/kubernetes/ssl \\
--allow-privileged=true \\
--cluster-dns=${DNS_SERVER_IP} \\
--cluster-domain=cluster.local \\
--fail-swap-on=false \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF

cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet

8.4. Write the proxy.sh script

vim /opt/kubernetes/bin/proxy.sh

#!/bin/bash

NODE_ADDRESS=${1:-"192.168.1.200"}

cat <<EOF >/opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy

8.5. Run the scripts

chmod +x *.sh
./kubelet.sh 172.16.38.174 10.10.10.2
./proxy.sh 172.16.38.174

8.6. View the CSR list

[root@master ~]# kubectl get csr

8.7. After approval the status changes to Approved

[root@master ~]# kubectl  certificate approve node-csr-x4F5fniCL-kj0F_Dl-g2RKUWESv3kKC6nS7J-ZrE81U

8.8. Send the scripts to node2

[root@node1 bin]# scp kubelet.sh proxy.sh 172.16.38.234:/opt/kubernetes/bin/

8.9. Run the scripts on node2

[root@node2 bin]# ./kubelet.sh 172.16.38.234 10.10.10.2
[root@node2 bin]# ./proxy.sh 172.16.38.234

8.10. Approve the request on the master

[root@master ~]# kubectl get csr
[root@master ~]# kubectl  certificate approve node-csr-XjrmhFhj9gGdryQGduOvlA3eJ0THSXWiyRbcTpjyUeo

8.11. View the cluster nodes

[root@master ~]# kubectl get nodes


9. Run a Test Instance to Check the Cluster

kubectl run nginx --image=nginx --replicas=3

The pods are created and the image is pulled; if the download is slow, switch to a domestic registry mirror.

Check which node each pod is running on:

[root@master ~]# kubectl get pod -o wide

9.1. Expose a port for external access

kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
kubectl get svc

Port 88 is the service port reachable inside the cluster; 44353 is the NodePort reachable from outside.

[root@node1 bin]# curl 10.10.10.31:88

9.2. Access from an external browser (any node's IP works)
