Kubernetes Pod Resource Management and Building a Harbor Private Image Registry (with image pull operations and troubleshooting along the way)

A Pod is the smallest unit managed by Kubernetes.

A Pod can contain multiple containers, though in real production environments a Pod usually runs a single business container.


Characteristics:

1. The smallest deployment unit
2. A collection of one or more containers
3. Containers in a Pod share the same network namespace
4. Pods are ephemeral
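
A minimal sketch of point 3 (not from this walkthrough; the Pod name and image tags are illustrative): two containers in one Pod share a network namespace, so the busybox sidecar can reach the nginx container over localhost.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo
spec:
  containers:
  - name: web
    image: nginx:1.14
  - name: sidecar
    image: busybox:1.28
    # 127.0.0.1 here is the Pod's shared loopback, so this reaches nginx
    command: ['sh', '-c', 'while true; do wget -qO- http://127.0.0.1; sleep 5; done']
```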


Pod container categories:

1: infrastructure container — the infra/pause container (a transparent process; users are unaware of it)

Maintains the network namespace of the entire Pod

Operations on a node
`View the container network configuration`
[root@node1 ~]# cat /opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.18.148 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"    #note: the network (pause) image is downloaded from Alibaba Cloud

`Created every time a Pod is created, one per Pod; transparent to the user`
[root@node1 ~]# docker ps
CONTAINER ID        IMAGE                                                                 COMMAND                  CREATED             STATUS              PORTS               NAMES
......(many lines omitted here)
54d9e6ec3c02        registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0   "/pause"
#the network component is loaded automatically and provided to the Pod
`Conclusion: whenever a Pod is created, a network (pause) container is always created for it`

2: initContainers — init containers

When a Pod is created, its initContainers are always executed first. In older versions there was no guaranteed ordering between them (at load time, the lower the PID, the higher the priority and the earlier the start). As the platform evolved, init containers now run strictly in sequence: each init container must complete before the next one starts, and only after all init containers finish can the business containers run.


3: container — business containers (started in parallel)

Official docs: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/

Example:

Init containers in use

This example defines a simple Pod that has two init containers. The first waits for myservice, and the second waits for mydb. Once both init containers complete, the Pod runs the app container from its spec section.

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']
  - name: init-mydb
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup mydb; do echo waiting for mydb; sleep 2; done;']
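
The nslookup loops above only succeed once Services named myservice and mydb exist; the same official docs page pairs the Pod with these two Services (shown here for completeness):

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
---
apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9377
```

Once both Services are applied, DNS for myservice and mydb resolves, the two init containers complete in order, and the app container starts.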
Image pull policy (imagePullPolicy)

IfNotPresent: the default; the image is pulled only if it is not already present on the host (note: when the tag is :latest and no policy is set, the effective default is Always)

Always: the image is re-pulled every time the Pod is created

Never: the kubelet never pulls the image; it must already be present locally

Official docs: https://kubernetes.io/docs/concepts/containers/images

Example:

Verify by creating a pod that uses a private image, e.g.:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: private-image-test-1
spec:
  containers:
    - name: uses-private-image
      image: $PRIVATE_IMAGE_NAME
      imagePullPolicy: Always
      command: [ "echo", "SUCCESS" ]
EOF
On master1:
[root@master1 ~]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
my-nginx-d55b94fd-kc2gl             1/1     Running   0          40h
my-nginx-d55b94fd-tkr42             1/1     Running   0          40h
nginx-6c94d899fd-8pf48              1/1     Running   0          2d15h
nginx-deployment-5477945587-f5dsm   1/1     Running   0          2d14h
nginx-deployment-5477945587-hmgd2   1/1     Running   0          2d14h
nginx-deployment-5477945587-pl2hn   1/1     Running   0          2d14h

[root@master1 ~]# kubectl edit deployment/my-nginx
......(many lines omitted here)
    spec:
      containers:
      - image: nginx:1.15.4
        imagePullPolicy: IfNotPresent
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30

[root@master1 ~]# cd demo/
[root@master1 demo]# vim pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: nginx
      image: nginx
      imagePullPolicy: Always
      command: [ "echo", "SUCCESS" ]
[root@master1 demo]# kubectl create -f pod1.yaml    #create the Pod
pod/mypod created
At this point the Pod goes into CrashLoopBackOff: it is created and then immediately exits.
`The failure is caused by a startup command conflict: the command overrides the image's default entrypoint, so the container runs echo, exits, and Kubernetes keeps restarting it`
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: nginx
      image: nginx:1.14     #also change the image version to nginx:1.14
      imagePullPolicy: Always
#and delete the last line, command: [ "echo", "SUCCESS" ]

`Delete the existing resource`
[root@master1 demo]# kubectl delete -f pod1.yaml
pod "mypod" deleted

`Re-create the resource`
[root@master1 demo]# kubectl apply -f pod1.yaml
pod/mypod created
[root@master1 demo]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
mypod                               1/1     Running   0          3m26s

`Check which node the Pod was scheduled to`
[root@master1 demo]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE     IP            NODE           NOMINATED NODE
mypod         1/1     Running   0          4m45s   172.17.40.5   192.168.18.145   <none>
#the 172.17.40.5 network segment corresponds to node2's address, 192.168.18.145

`Check on node2 whether the application was deployed to the expected node`
[root@node2 ~]# curl -I 172.17.40.5
HTTP/1.1 200 OK
Server: nginx/1.14.2
Date: Sat, 15 Feb 2020 04:11:53 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 04 Dec 2018 14:44:49 GMT
Connection: keep-alive
ETag: "5c0692e1-264"
Accept-Ranges: bytes

Building a Harbor private registry

Now bring up one more virtual machine: CentOS 7-2, 192.168.18.134 (consider setting the NIC to a static IP)

`Deploy the Docker engine`
[root@harbor ~]# yum install yum-utils device-mapper-persistent-data lvm2 -y
[root@harbor ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@harbor ~]# yum install -y docker-ce
[root@harbor ~]# systemctl stop firewalld.service
[root@harbor ~]# setenforce 0
[root@harbor ~]# systemctl start docker.service
[root@harbor ~]# systemctl enable docker.service

`Check that the related processes are running`
[root@harbor ~]# ps aux | grep docker
root       4913  0.8  3.6 565612 68884 ?        Ssl  12:23   0:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root       5095  0.0  0.0 112676   984 pts/1    R+   12:23   0:00 grep --color=auto docker

`Configure a registry mirror for faster pulls`
[root@harbor ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://w1ogxqvl.mirror.aliyuncs.com"]
}
EOF
[root@harbor ~]# systemctl daemon-reload
[root@harbor ~]# systemctl restart docker

`Network tuning`
[root@harbor ~]# echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
[root@harbor ~]# service network restart
Restarting network (via systemctl):                        [  OK  ]
[root@harbor ~]# systemctl restart docker
----------

[root@harbor ~]# mkdir /aaa
[root@harbor ~]# mount.cifs //192.168.0.105/rpm /aaa
Password for root@//192.168.0.105/rpm:
[root@harbor ~]# cd /aaa/docker/
[root@harbor docker]# cp docker-compose /usr/local/bin/
[root@harbor docker]# cd /usr/local/bin/
[root@harbor bin]# ls
docker-compose
[root@harbor bin]# docker-compose -v
docker-compose version 1.21.1, build 5a3f1a3
[root@harbor bin]# cd /aaa/docker/
[root@harbor docker]# tar zxvf harbor-offline-installer-v1.2.2.tgz -C /usr/local/
[root@harbor docker]# cd /usr/local/harbor/
[root@harbor harbor]# ls
common                     docker-compose.yml     harbor.v1.2.2.tar.gz  NOTICE
docker-compose.clair.yml   harbor_1_1_0_template  install.sh            prepare
docker-compose.notary.yml  harbor.cfg             LICENSE               upgrade

`Configure the Harbor parameter file`
[root@harbor harbor]# vim harbor.cfg
5 hostname = 192.168.18.134     #line 5: change to this host's own IP address
59 harbor_admin_password = Harbor12345      #line 59: the default account and password; do not forget them, they are needed to log in
#when done, press Esc to leave insert mode, then type :wq to save and quit
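
If you prefer a non-interactive edit, the same hostname change can be scripted with sed. A sketch against a stand-in file so it can be tried anywhere (point the sed command at /usr/local/harbor/harbor.cfg on the real host; the IP is this walkthrough's):

```shell
# Create a scratch copy mimicking the two relevant harbor.cfg lines.
cat > /tmp/harbor.cfg <<'EOF'
hostname = reg.mydomain.com
harbor_admin_password = Harbor12345
EOF

# Point hostname at this host's IP, as done in vim above.
sed -i 's/^hostname = .*/hostname = 192.168.18.134/' /tmp/harbor.cfg
grep '^hostname' /tmp/harbor.cfg    # hostname = 192.168.18.134
```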
[root@harbor harbor]# ./install.sh
......(many lines omitted here)
Creating harbor-log ... done
Creating harbor-adminserver ... done
Creating harbor-db          ... done
Creating registry           ... done
Creating harbor-ui          ... done
Creating nginx              ... done
Creating harbor-jobservice  ... done
✔ ----Harbor has been installed and started successfully.----
Now you should be able to visit the admin portal at http://192.168.18.134.
For more details, please visit https://github.com/vmware/harbor .

Step 1: log in to the Harbor private registry

In a browser on the host machine, go to 192.168.18.134, enter the default account admin and password Harbor12345, and click Log In.


Step 2: create a new project and make it private

On the Projects page, click "+ Project" to add a new project, enter the project name, and click Create; then click the three dots to the left of the new project and set it to private.



Configure both nodes to connect to the private registry (note: the trailing comma must be added)

`node2`
[root@node2 ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://w1ogxqvl.mirror.aliyuncs.com"],     #note the trailing comma
  "insecure-registries":["192.168.18.134"]                          #add this line
}
[root@node2 ~]# systemctl restart docker

`node1`
[root@node1 ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://w1ogxqvl.mirror.aliyuncs.com"],     #note the trailing comma
  "insecure-registries":["192.168.18.134"]                          #add this line
}
[root@node1 ~]# systemctl restart docker
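
Because a stray or missing comma in daemon.json is an easy mistake, the file can also be written in one step with a heredoc. A sketch against a scratch path (on the real nodes, write to /etc/docker/daemon.json and restart docker afterwards):

```shell
# Write the complete merged configuration in one go.
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://w1ogxqvl.mirror.aliyuncs.com"],
  "insecure-registries": ["192.168.18.134"]
}
EOF

# Quick sanity check that the registry entry made it in.
grep insecure-registries /tmp/daemon.json
```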

Step 3: log in to the Harbor private registry from a node

`node2:`
[root@node2 ~]# docker login 192.168.18.134
Username: admin     #enter the account name: admin
Password:           #enter the password: Harbor12345
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded     #login succeeded

`Pull the tomcat image, tag it, and push it:`
[root@node2 ~]# docker pull tomcat
......(many lines omitted here)
Status: Downloaded newer image for tomcat:latest
docker.io/library/tomcat:latest
[root@node2 ~]# docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
tomcat                                                            latest              aeea3708743f        3 days ago          529MB
[root@node2 ~]# docker tag tomcat 192.168.18.134/project/tomcat     #tag the image
[root@node2 ~]# docker push 192.168.18.134/project/tomcat           #push the image
The pushed tomcat image is now visible in the Harbor web UI.



Problem: if the other node, node1, tries to pull the tomcat image from the private registry, it fails with an error saying access is denied (in other words, a login is required)

[root@node1 ~]# docker pull 192.168.18.134/project/tomcat
Using default tag: latest
Error response from daemon: pull access denied for 192.168.18.134/project/tomcat, repository does not exist or may require 'docker login': denied: requested access to the resource is denied       #error: the registry credentials are missing

`Pull a tomcat image on node1`
[root@node1 ~]# docker pull tomcat:8.0.52
[root@node1 ~]# docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
tomcat                                                            8.0.52              b4b762737ed4        19 months ago       356MB

Step 4: operations on master1

[root@master1 demo]# vim tomcat01.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-tomcat
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my-tomcat
    spec:
      containers:
      - name: my-tomcat
        image: docker.io/tomcat:8.0.52
        ports:
        - containerPort: 80   #note: informational only; Tomcat actually serves on 8080 (the Service targets 8080)
---
apiVersion: v1
kind: Service
metadata:
  name: my-tomcat
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: my-tomcat

`Create the resources`
[root@master1 demo]# kubectl create -f tomcat01.yaml
deployment.extensions/my-tomcat created
service/my-tomcat created
`List the resources`
[root@master1 demo]# kubectl get pods,deploy,svc
NAME                                    READY   STATUS    RESTARTS   AGE
pod/my-nginx-d55b94fd-kc2gl             1/1     Running   1          2d
pod/my-nginx-d55b94fd-tkr42             1/1     Running   1          2d
`pod/my-tomcat-57667b9d9-8bkns`         1/1     Running   0          84s
`pod/my-tomcat-57667b9d9-kcddv`         1/1     Running   0          84s
pod/mypod                               1/1     Running   1          8h
pod/nginx-6c94d899fd-8pf48              1/1     Running   1          3d
pod/nginx-deployment-5477945587-f5dsm   1/1     Running   1          2d23h
pod/nginx-deployment-5477945587-hmgd2   1/1     Running   1          2d23h
pod/nginx-deployment-5477945587-pl2hn   1/1     Running   1          2d23h

NAME                                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/my-nginx           2         2         2            2           2d
`deployment.extensions/my-tomcat`        2         2         2            2           84s
deployment.extensions/nginx              1         1         1            1           8d
deployment.extensions/nginx-deployment   3         3         3            3           2d23h

NAME                       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
service/kubernetes         ClusterIP   10.0.0.1     <none>        443/TCP          10d
service/my-nginx-service   NodePort    10.0.0.210   <none>        80:40377/TCP     2d
`service/my-tomcat          NodePort    10.0.0.86    <none>        8080:41860/TCP   84s`
service/nginx-service      NodePort    10.0.0.242   <none>        80:40422/TCP     3d10h
#container port 8080, exposed externally on 41860

[root@master1 demo]# kubectl get ep
NAME               ENDPOINTS                                 AGE
kubernetes         192.168.18.128:6443,192.168.18.132:6443   10d
my-nginx-service   172.17.32.4:80,172.17.40.3:80             2d
`my-tomcat          172.17.32.6:8080,172.17.40.6:8080         5m29s`
nginx-service      172.17.40.5:80                            3d10h
#my-tomcat has been scheduled onto both nodes
Verify: in a browser on the host machine, open 192.168.18.148:41860 and 192.168.18.145:41860 (node address plus the exposed port) and check that the Tomcat home page loads from both.


`After verifying that access works, delete the resources first; later we will create them again using the image from the private registry`
[root@master1 demo]# kubectl delete -f tomcat01.yaml
deployment.extensions "my-tomcat" deleted
service "my-tomcat" deleted

Troubleshooting:

`If a resource is stuck in the Terminating state and cannot be deleted`
[root@localhost demo]# kubectl get pods
NAME                              READY   STATUS        RESTARTS   AGE
my-tomcat-57667b9d9-8bkns         1/1     `Terminating`   0          84s
my-tomcat-57667b9d9-kcddv         1/1     `Terminating`   0          84s

#in this case, use the force-delete command
`Format: kubectl delete pod [pod name] --force --grace-period=0 -n [namespace]`

[root@localhost demo]# kubectl delete pod my-tomcat-57667b9d9-8bkns --force --grace-period=0 -n default
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "my-tomcat-57667b9d9-8bkns" force deleted

[root@localhost demo]# kubectl delete pod my-tomcat-57667b9d9-kcddv --force --grace-period=0 -n default
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "my-tomcat-57667b9d9-kcddv" force deleted

[root@localhost demo]# kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
mypod                               1/1     Running   1          8h
nginx-6c94d899fd-8pf48              1/1     Running   1          3d
nginx-deployment-5477945587-f5dsm   1/1     Running   1          2d23h
nginx-deployment-5477945587-hmgd2   1/1     Running   1          2d23h
nginx-deployment-5477945587-pl2hn   1/1     Running   1          2d23h

Step 5: operations on node2 (the node that already logged in to the Harbor registry)

First delete the project/tomcat image we previously pushed to the private registry


The image tagged earlier on node2 must be deleted as well:
[root@node2 ~]# docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
192.168.18.134/project/tomcat                                     latest              aeea3708743f        3 days ago          529MB

[root@node2 ~]# docker rmi 192.168.18.134/project/tomcat
Untagged: 192.168.18.134/project/tomcat:latest
Untagged: 192.168.18.134/project/tomcat@sha256:8ffa1b72bf611ac305523ed5bd6329afd051c7211fbe5f0b5c46ea5fb1adba46

`Tag the image`
[root@node2 ~]# docker tag tomcat:8.0.52 192.168.18.134/project/tomcat
`Push the image to Harbor`
[root@node2 ~]# docker push 192.168.18.134/project/tomcat
#the newly pushed image now appears in the private registry

`View the login credentials`
[root@node2 ~]# cat .docker/config.json
{
        "auths": {
                "192.168.18.134": {     #the registry address
                        "auth": "YWRtaW46SGFyYm9yMTIzNDU="      #the credential
                }
        },
        "HttpHeaders": {                #header information
                "User-Agent": "Docker-Client/19.03.5 (linux)"
        }
}
`Generate the credential as a single line of base64 (no line wrapping)`
[root@node2 ~]# cat .docker/config.json | base64 -w 0
ewoJImF1dGhzIjogewoJCSIxOTIuMTY4LjE4LjEzNCI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZTR0Z5WW05eU1USXpORFU9IgoJCX0KCX0sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAiRG9ja2VyLUNsaWVudC8xOS4wMy41IChsaW51eCkiCgl9Cn0=   
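
The `auth` field inside config.json is nothing more than base64 of `user:password`, which is easy to confirm with the values used in this walkthrough:

```shell
# base64("admin:Harbor12345") should reproduce the auth value seen in config.json.
echo -n 'admin:Harbor12345' | base64    # YWRtaW46SGFyYm9yMTIzNDU=
```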

Note in particular: the image's pull count is 0 right now. In a moment we will create resources from the private-registry image; the pull must download the image, so this number should change.


Step 6: create the Secret YAML on master1

[root@master1 demo]# vim registry-pull-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: registry-pull-secret
data:
  .dockerconfigjson: ewoJImF1dGhzIjogewoJCSIxOTIuMTY4LjE4LjEzNCI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZTR0Z5WW05eU1USXpORFU9IgoJCX0KCX0sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAiRG9ja2VyLUNsaWVudC8xOS4wMy41IChsaW51eCkiCgl9Cn0=
type: kubernetes.io/dockerconfigjson
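
Before applying the Secret, it is worth sanity-checking that the .dockerconfigjson payload decodes back to the node's config.json (the value below is the one generated above):

```shell
# Decoding the Secret payload should show the registry address and credential.
b64='ewoJImF1dGhzIjogewoJCSIxOTIuMTY4LjE4LjEzNCI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZTR0Z5WW05eU1USXpORFU9IgoJCX0KCX0sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAiRG9ja2VyLUNsaWVudC8xOS4wMy41IChsaW51eCkiCgl9Cn0='
echo "$b64" | base64 -d | grep '192.168.18.134'
```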

`Create the secret resource`
[root@master1 demo]# kubectl create -f registry-pull-secret.yaml
secret/registry-pull-secret created
`List secret resources`
[root@master1 demo]# kubectl get secret
NAME                   TYPE                                  DATA   AGE
default-token-pbr9p    kubernetes.io/service-account-token   3      10d
`registry-pull-secret   kubernetes.io/dockerconfigjson        1      25s`

[root@master1 demo]# vim tomcat01.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-tomcat
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my-tomcat
    spec:
      imagePullSecrets:             #credentials for pulling the image
      - name: registry-pull-secret  #must match the Secret's metadata.name
      containers:
      - name: my-tomcat
        image: 192.168.18.134/project/tomcat    #change the image location to the private registry
        ports:
        - containerPort: 80   #note: informational only; Tomcat actually serves on 8080
......(remaining lines omitted, unchanged)
#when done, press Esc to leave insert mode, then type :wq to save and quit
`Create the tomcat01 resources`
[root@master1 demo]# kubectl create -f tomcat01.yaml
deployment.extensions/my-tomcat created
service/my-tomcat created

[root@master1 demo]# kubectl get pods,deploy,svc,ep
NAME                                    READY   STATUS    RESTARTS   AGE
pod/my-nginx-d55b94fd-kc2gl             1/1     Running   1          2d1h
pod/my-nginx-d55b94fd-tkr42             1/1     Running   1          2d1h
`pod/my-tomcat-7c5b6db486-bzjlv`        1/1     Running   0          56s
`pod/my-tomcat-7c5b6db486-kw8m4`        1/1     Running   0          56s
pod/mypod                               1/1     Running   1          9h
pod/nginx-6c94d899fd-8pf48              1/1     Running   1          3d1h
pod/nginx-deployment-5477945587-f5dsm   1/1     Running   1          3d
pod/nginx-deployment-5477945587-hmgd2   1/1     Running   1          3d
pod/nginx-deployment-5477945587-pl2hn   1/1     Running   1          3d

NAME                                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/my-nginx           2         2         2            2          2d1h
`deployment.extensions/my-tomcat`        2         2         2            2           56s
deployment.extensions/nginx              1         1         1            1           8d
deployment.extensions/nginx-deployment   3         3         3            3           3d

NAME                       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
service/kubernetes         ClusterIP   10.0.0.1     <none>        443/TCP          10d
service/my-nginx-service   NodePort    10.0.0.210   <none>        80:40377/TCP     2d1h
`service/my-tomcat`        NodePort    10.0.0.235   <none>        8080:43654/TCP   56s
service/nginx-service      NodePort    10.0.0.242   <none>        80:40422/TCP     3d11h
#externally exposed port: 43654
NAME                         ENDPOINTS                                 AGE
endpoints/kubernetes         192.168.18.128:6443,192.168.18.132:6443   10d
endpoints/my-nginx-service   172.17.32.4:80,172.17.40.3:80             2d1h
`endpoints/my-tomcat`        172.17.32.6:8080,172.17.40.6:8080         56s
endpoints/nginx-service      172.17.40.5:80                            3d11h

Next, with the resources running without issues, we need to verify that the image really came from our Harbor private registry.

The thing to watch is the image's pull count in the private registry.


Result: the pull count has changed from 0 to 2, which proves that the two Pods pulled their image from the private registry!

Browsing to 192.168.18.148:43654 and 192.168.18.145:43654 (node address plus the exposed port) from the host machine confirms that the Tomcat home page is still reachable on both nodes.



This exercise demonstrated creating Pod resources backed by a Harbor private registry!
