GaussDB T Distributed Cluster Deployment and Upgrade Guide

This article walks through deploying a four-node GaussDB T 1.0.1 distributed cluster and then upgrading it to version 1.0.2. (A direct installation of 1.0.2 fails with a segment fault during the install; that issue remains unresolved, hence the upgrade path.) For the preliminary operating-system preparation, refer to the earlier articles in this series.

1. Deploy the Distributed Cluster

1.1 Node Information

The four nodes are as follows (the original post shows this table as an image; hostnames and IPs are taken from the cluster configuration in 1.2):

Hostname    IP address
hwd08       192.168.120.29
hwd09       192.168.120.30
hwd10       192.168.120.31
hwd11       192.168.120.49

1.2 Cluster Configuration File

Modify the cluster parameters to match your environment, or generate the file with the Database Manager tool. The content is as follows:

[root@onas db]# vi clusterconfig.xml 
<?xml version="1.0" encoding="UTF-8"?><ROOT>
 <CLUSTER>
  <PARAM name="clusterName" value="GT100"/>
  <PARAM name="nodeNames" value="hwd08,hwd09,hwd10,hwd11"/>
  <PARAM name="gaussdbAppPath" value="/opt/huawei/gaussdb/app"/>
  <PARAM name="gaussdbLogPath" value="/var/log/huawei/gaussdb"/>
  <PARAM name="archiveLogPath" value="/opt/huawei/gaussdb/arch_log"/>
  <PARAM name="redoLogPath" value="/opt/huawei/gaussdb/redo_log"/>
  <PARAM name="tmpMppdbPath" value="/opt/huawei/gaussdb/temp"/>
  <PARAM name="gaussdbToolPath" value="/opt/huawei/gaussdb/gaussTools/wisequery"/>
  <PARAM name="datanodeType" value="DN_ZENITH_HA"/>
  <PARAM name="WhetherDoFailoverAuto" value="OFF"/>
  <PARAM name="clusterType" value="mutil-AZ"/>
  <PARAM name="coordinatorType" value="CN_ZENITH_ZSHARDING"/>
  <PARAM name="SetDoubleIPForETCD" value="false"/>
 </CLUSTER>
 <DEVICELIST>
  <DEVICE sn="1000001">
   <PARAM name="name" value="hwd08"/>
   <PARAM name="azName" value="AZ1"/>
   <PARAM name="azPriority" value="1"/>
   <PARAM name="backIp1" value="192.168.120.29"/>
   <PARAM name="sshIp1" value="192.168.120.29"/>
   <PARAM name="innerManageIp1" value="192.168.120.29"/>
   <PARAM name="cmsNum" value="1"/>
   <PARAM name="cmServerPortBase" value="21000"/>
   <PARAM name="cmServerListenIp1" value="192.168.120.29,192.168.120.30,192.168.120.31"/>
   <PARAM name="cmServerHaIp1" value="192.168.120.29,192.168.120.30,192.168.120.31"/>
   <PARAM name="cmServerlevel" value="1"/>
   <PARAM name="cmServerRelation" value="hwd08,hwd09,hwd10"/>
   <PARAM name="cmDir" value="/opt/huawei/gaussdb/data/data_cm"/>
   <PARAM name="dataNum" value="1"/>
   <PARAM name="dataPortBase" value="40000"/>
   <PARAM name="dataNode1" value="/opt/huawei/gaussdb/data_db/dn1,hwd09,/opt/huawei/gaussdb/data_db/dn1,hwd10,/opt/huawei/gaussdb/data_db/dn1"/>
   <PARAM name="quorumAny1" value="1"/>
   <PARAM name="gtsNum" value="1"/>
   <PARAM name="gtsPortBase" value="7000"/>
   <PARAM name="gtsDir1" value="/opt/huawei/gaussdb/data/gts,hwd09,/opt/huawei/gaussdb/data/gts"/>
   <PARAM name="cooNum" value="1"/>
   <PARAM name="cooPortBase" value="8000"/>
   <PARAM name="cooListenIp1" value="192.168.120.29"/>
   <PARAM name="cooDir1" value="/opt/huawei/gaussdb/data/data_cn"/>
   <PARAM name="etcdNum" value="1"/>
   <PARAM name="etcdListenPort" value="2379"/>
   <PARAM name="etcdHaPort" value="2380"/>
   <PARAM name="etcdListenIp1" value="192.168.120.29"/>
   <PARAM name="etcdHaIp1" value="192.168.120.29"/>
   <PARAM name="etcdDir1" value="/opt/huawei/gaussdb/data_etcd1/data"/>
  </DEVICE>
  <DEVICE sn="1000002">
   <PARAM name="name" value="hwd09"/>
   <PARAM name="azName" value="AZ1"/>
   <PARAM name="azPriority" value="1"/>
   <PARAM name="backIp1" value="192.168.120.30"/>
   <PARAM name="sshIp1" value="192.168.120.30"/>
   <PARAM name="innerManageIp1" value="192.168.120.30"/>
   <PARAM name="dataNum" value="1"/>
   <PARAM name="dataPortBase" value="40000"/>
   <PARAM name="dataNode1" value="/opt/huawei/gaussdb/data_db/dn2,hwd10,/opt/huawei/gaussdb/data_db/dn2,hwd11,/opt/huawei/gaussdb/data_db/dn2"/>
   <PARAM name="quorumAny1" value="1"/>
   <PARAM name="cooNum" value="1"/>
   <PARAM name="cooPortBase" value="8000"/>
   <PARAM name="cooListenIp1" value="192.168.120.30"/>
   <PARAM name="cooDir1" value="/opt/huawei/gaussdb/data/data_cn"/>
   <PARAM name="etcdNum" value="1"/>
   <PARAM name="etcdListenPort" value="2379"/>
   <PARAM name="etcdHaPort" value="2380"/>
   <PARAM name="etcdListenIp1" value="192.168.120.30"/>
   <PARAM name="etcdHaIp1" value="192.168.120.30"/>
   <PARAM name="etcdDir1" value="/opt/huawei/gaussdb/data_etcd1/data"/>
  </DEVICE>
  <DEVICE sn="1000003">
   <PARAM name="name" value="hwd10"/>
   <PARAM name="azName" value="AZ1"/>
   <PARAM name="azPriority" value="1"/>
   <PARAM name="backIp1" value="192.168.120.31"/>
   <PARAM name="sshIp1" value="192.168.120.31"/>
   <PARAM name="innerManageIp1" value="192.168.120.31"/>
   <PARAM name="dataNum" value="1"/>
   <PARAM name="dataPortBase" value="40000"/>
   <PARAM name="dataNode1" value="/opt/huawei/gaussdb/data_db/dn3,hwd11,/opt/huawei/gaussdb/data_db/dn3,hwd08,/opt/huawei/gaussdb/data_db/dn3"/>
   <PARAM name="quorumAny1" value="1"/>
   <PARAM name="cooNum" value="1"/>
   <PARAM name="cooPortBase" value="8000"/>
   <PARAM name="cooListenIp1" value="192.168.120.31"/>
   <PARAM name="cooDir1" value="/opt/huawei/gaussdb/data/data_cn"/>
   <PARAM name="etcdNum" value="1"/>
   <PARAM name="etcdListenPort" value="2379"/>
   <PARAM name="etcdHaPort" value="2380"/>
   <PARAM name="etcdListenIp1" value="192.168.120.31"/>
   <PARAM name="etcdHaIp1" value="192.168.120.31"/>
   <PARAM name="etcdDir1" value="/opt/huawei/gaussdb/data_etcd1/data"/>
  </DEVICE>
  <DEVICE sn="1000004">
   <PARAM name="name" value="hwd11"/>
   <PARAM name="azName" value="AZ1"/>
   <PARAM name="azPriority" value="1"/>
   <PARAM name="backIp1" value="192.168.120.49"/>
   <PARAM name="sshIp1" value="192.168.120.49"/>
   <PARAM name="innerManageIp1" value="192.168.120.49"/>
   <PARAM name="cooNum" value="1"/>
   <PARAM name="cooPortBase" value="8000"/>
   <PARAM name="cooListenIp1" value="192.168.120.49"/>
   <PARAM name="cooDir1" value="/opt/huawei/gaussdb/data/data_cn"/>
  </DEVICE>
 </DEVICELIST>
</ROOT>
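Before handing this file to gs_preinstall, a quick structural check can catch typos (a missing DEVICE entry, a hostname mismatch) early. A minimal sketch with Python's standard xml.etree; this is illustrative only, not the official validator:

```python
# Sanity-check a GaussDB T clusterconfig.xml before running gs_preinstall:
# verify the file is well-formed and that every host listed in nodeNames
# has a matching DEVICE entry. A minimal sketch, not the official validator.
import xml.etree.ElementTree as ET

def check_cluster_config(xml_text):
    """Return a list of problems found in the cluster config (empty if OK)."""
    root = ET.fromstring(xml_text)          # raises ParseError if malformed
    cluster = {p.get("name"): p.get("value")
               for p in root.find("CLUSTER").findall("PARAM")}
    declared = set(cluster.get("nodeNames", "").split(","))
    devices = set()
    for dev in root.find("DEVICELIST").findall("DEVICE"):
        for p in dev.findall("PARAM"):
            if p.get("name") == "name":
                devices.add(p.get("value"))
    problems = []
    if declared != devices:
        problems.append("nodeNames does not match DEVICE entries: "
                        "%s vs %s" % (sorted(declared), sorted(devices)))
    return problems

# Example with a trimmed-down config:
sample = """<ROOT>
 <CLUSTER>
  <PARAM name="clusterName" value="GT100"/>
  <PARAM name="nodeNames" value="hwd08,hwd09"/>
 </CLUSTER>
 <DEVICELIST>
  <DEVICE sn="1"><PARAM name="name" value="hwd08"/></DEVICE>
  <DEVICE sn="2"><PARAM name="name" value="hwd09"/></DEVICE>
 </DEVICELIST>
</ROOT>"""
print(check_cluster_config(sample))   # [] means no problems found
```

Extend the checks as needed, for example to verify that the IP parameters are consistent across the DEVICE entries.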

1.3 Prepare the Installation User and Environment

After extracting the installation package, prepare the installation environment with gs_preinstall:

[root@hwd08 script]# ./gs_preinstall -U omm -G dbgrp -X /mnt/Huawei/db/clusterconfig.xml 
Parsing the configuration file.
Successfully parsed the configuration file.
Installing the tools on the local node.
Successfully installed the tools on the local node.
Are you sure you want to create trust for root (yes/no)? yes
Please enter password for root.
Password: 
Creating SSH trust for the root permission user.
Checking network information.
All nodes in the network are Normal.
Successfully checked network information.
Creating SSH trust.
Creating the local key file.
Successfully created the local key files.
Appending local ID to authorized_keys.
Successfully appended local ID to authorized_keys.
Updating the known_hosts file.
Successfully updated the known_hosts file.
Appending authorized_key on the remote node.
Successfully appended authorized_key on all remote node.
Checking common authentication file content.
Successfully checked common authentication content.
Distributing SSH trust file to all node.
Successfully distributed SSH trust file to all node.
Verifying SSH trust on all hosts.
Successfully verified SSH trust on all hosts.
Successfully created SSH trust.
Successfully created SSH trust for the root permission user.
All host RAM is consistent
Pass over configuring LVM
Distributing package.
Successfully distributed package.
Are you sure you want to create the user[omm] and create trust for it (yes/no)? yes
Please enter password for cluster user.
Password: 
Please enter password for cluster user again.
Password: 
Creating [omm] user on all nodes.
Successfully created [omm] user on all nodes.
Installing the tools in the cluster.
Successfully installed the tools in the cluster.
Checking hostname mapping.
Successfully checked hostname mapping.
Creating SSH trust for [omm] user.
Please enter password for current user[omm].
Password: 
Checking network information.
All nodes in the network are Normal.
Successfully checked network information.
Creating SSH trust.
Creating the local key file.
Successfully created the local key files.
Appending local ID to authorized_keys.
Successfully appended local ID to authorized_keys.
Updating the known_hosts file.
Successfully updated the known_hosts file.
Appending authorized_key on the remote node.
Successfully appended authorized_key on all remote node.
Checking common authentication file content.
Successfully checked common authentication content.
Distributing SSH trust file to all node.
Successfully distributed SSH trust file to all node.
Verifying SSH trust on all hosts.
Successfully verified SSH trust on all hosts.
Successfully created SSH trust.
Successfully created SSH trust for [omm] user.
Checking OS version.
Successfully checked OS version.
Creating cluster's path.
Successfully created cluster's path.
Setting SCTP service.
Successfully set SCTP service.
Set and check OS parameter.
Successfully set NTP service.
Setting OS parameters.
Successfully set OS parameters.
Set and check OS parameter completed.
Preparing CRON service.
Successfully prepared CRON service.
Preparing SSH service.
Successfully prepared SSH service.
Setting user environmental variables.
Successfully set user environmental variables.
Configuring alarms on the cluster nodes.
Successfully configured alarms on the cluster nodes.
Setting the dynamic link library.
Successfully set the dynamic link library.
Fixing server package owner.
Successfully fixed server package owner.
Create logrotate service.
Successfully create logrotate service.
Setting finish flag.
Successfully set finish flag.
check time consistency(maximum execution time 10 minutes).
Time consistent is running(1/20)...
Time consistent has been completed.
Preinstallation succeeded.

1.4 Perform the Installation

First switch to the omm user and run a check against the operating system; if any errors are reported, investigate and fix them according to the messages:

[root@hwd08 script]# su - omm
[omm@hwd08 ~]$ gs_checkos -i A12 -h hwd08,hwd09,hwd10,hwd11 -X /mnt/Huawei/db/clusterconfig.xml
Checking items
    A12.[ Time consistency status ]                             : Normal
Total numbers:1. Abnormal numbers:0. Warning numbers:0.

If there are no errors, run the following command to perform the installation:

[omm@hwd08 ~]$ gs_install -X /mnt/Huawei/db/clusterconfig.xml 
Parsing the configuration file.
Check preinstall on every node.
Successfully checked preinstall on every node.
Creating the backup directory.
Successfully created the backup directory.
Check the time difference between hosts in the cluster.
Installing the cluster.
Installing applications on all nodes.
Successfully installed APP.
Distribute etcd communication keys.
Successfully distrbute etcd communication keys.
Initializing cluster instances
.4861s
Initializing cluster instances is completed.
Configuring standby datanode.
...................1309s
Successfully configure datanode.
Cluster installation is completed.
.Configuring.
Load cluster configuration file.
Configuring the cluster.
Successfully configuring the cluster.
Configuration is completed.
Start cm agent.
Successfully start cm agent and ETCD in cluster.
Starting the cluster.
==============================================
..32s
Successfully starting the cluster.
==============================================

The installation time varies with the environment; here it took about half an hour. After the installation completes, verify the cluster status with the following command:

[omm@hwd08 ~]$ gs_om  -t status
Set output to terminal.
--------------------------------------------------------------------Cluster Status--------------------------------------------------------------------
az_state :      single_az
cluster_state : Normal
balanced :      true
----------------------------------------------------------------------AZ Status-----------------------------------------------------------------------
AZ:AZ1                ROLE:primary            STATUS:ONLINE      
---------------------------------------------------------------------Host Status----------------------------------------------------------------------
HOST:hwd08            AZ:AZ1                  STATUS:ONLINE       IP:192.168.120.29
HOST:hwd09            AZ:AZ1                  STATUS:ONLINE       IP:192.168.120.30
HOST:hwd10            AZ:AZ1                  STATUS:ONLINE       IP:192.168.120.31
HOST:hwd11            AZ:AZ1                  STATUS:ONLINE       IP:192.168.120.49
----------------------------------------------------------------Cluster Manager Status----------------------------------------------------------------
INSTANCE:CM1          ROLE:slave              STATUS:ONLINE       HOST:hwd08            ID:601
INSTANCE:CM2          ROLE:slave              STATUS:ONLINE       HOST:hwd09            ID:602
INSTANCE:CM3          ROLE:primary            STATUS:ONLINE       HOST:hwd10            ID:603
---------------------------------------------------------------------ETCD Status----------------------------------------------------------------------
INSTANCE:ETCD1        ROLE:follower           STATUS:ONLINE       HOST:hwd08            ID:701      PORT:2379         DataDir:/opt/huawei/gaussdb/data_etcd1/data
INSTANCE:ETCD2        ROLE:follower           STATUS:ONLINE       HOST:hwd09            ID:702      PORT:2379         DataDir:/opt/huawei/gaussdb/data_etcd1/data
INSTANCE:ETCD3        ROLE:leader             STATUS:ONLINE       HOST:hwd10            ID:703      PORT:2379         DataDir:/opt/huawei/gaussdb/data_etcd1/data
----------------------------------------------------------------------CN Status-----------------------------------------------------------------------
INSTANCE:cn_401       ROLE:no role            STATUS:ONLINE       HOST:hwd08            ID:401      PORT:8000         DataDir:/opt/huawei/gaussdb/data/data_cn
INSTANCE:cn_402       ROLE:no role            STATUS:ONLINE       HOST:hwd09            ID:402      PORT:8000         DataDir:/opt/huawei/gaussdb/data/data_cn
INSTANCE:cn_403       ROLE:no role            STATUS:ONLINE       HOST:hwd10            ID:403      PORT:8000         DataDir:/opt/huawei/gaussdb/data/data_cn
INSTANCE:cn_404       ROLE:no role            STATUS:ONLINE       HOST:hwd11            ID:404      PORT:8000         DataDir:/opt/huawei/gaussdb/data/data_cn
----------------------------------------------------------------------GTS Status----------------------------------------------------------------------
INSTANCE:GTS1         ROLE:primary            STATUS:ONLINE       HOST:hwd08            ID:441      PORT:7000         DataDir:/opt/huawei/gaussdb/data/gts
INSTANCE:GTS2         ROLE:standby            STATUS:ONLINE       HOST:hwd09            ID:442      PORT:7000         DataDir:/opt/huawei/gaussdb/data/gts
---------------------------------------------------------Instances Status in Group (group_1)----------------------------------------------------------
INSTANCE:DB1_1        ROLE:primary            STATUS:ONLINE       HOST:hwd08            ID:1        PORT:40000        DataDir:/opt/huawei/gaussdb/data_db/dn1
INSTANCE:DB1_2        ROLE:standby            STATUS:ONLINE       HOST:hwd09            ID:2        PORT:40021        DataDir:/opt/huawei/gaussdb/data_db/dn1
INSTANCE:DB1_3        ROLE:standby            STATUS:ONLINE       HOST:hwd10            ID:3        PORT:40021        DataDir:/opt/huawei/gaussdb/data_db/dn1
---------------------------------------------------------Instances Status in Group (group_2)----------------------------------------------------------
INSTANCE:DB2_4        ROLE:primary            STATUS:ONLINE       HOST:hwd09            ID:4        PORT:40000        DataDir:/opt/huawei/gaussdb/data_db/dn2
INSTANCE:DB2_5        ROLE:standby            STATUS:ONLINE       HOST:hwd10            ID:5        PORT:40042        DataDir:/opt/huawei/gaussdb/data_db/dn2
INSTANCE:DB2_6        ROLE:standby            STATUS:ONLINE       HOST:hwd11            ID:6        PORT:40000        DataDir:/opt/huawei/gaussdb/data_db/dn2
---------------------------------------------------------Instances Status in Group (group_3)----------------------------------------------------------
INSTANCE:DB3_9        ROLE:standby            STATUS:ONLINE       HOST:hwd08            ID:9        PORT:40021        DataDir:/opt/huawei/gaussdb/data_db/dn3
INSTANCE:DB3_7        ROLE:primary            STATUS:ONLINE       HOST:hwd10            ID:7        PORT:40000        DataDir:/opt/huawei/gaussdb/data_db/dn3
INSTANCE:DB3_8        ROLE:standby            STATUS:ONLINE       HOST:hwd11            ID:8        PORT:40021        DataDir:/opt/huawei/gaussdb/data_db/dn3
-----------------------------------------------------------------------Manage IP----------------------------------------------------------------------
HOST:hwd08            IP:192.168.120.29
HOST:hwd09            IP:192.168.120.30
HOST:hwd10            IP:192.168.120.31
HOST:hwd11            IP:192.168.120.49
-------------------------------------------------------------------Query Action Info------------------------------------------------------------------
HOSTNAME: hwd08     TIME: 2020-03-19 14:04:51.763762
------------------------------------------------------------------------Float Ip------------------------------------------------------------------
HOST:hwd10    DB3_7:192.168.120.31    IP:
HOST:hwd08    DB1_1:192.168.120.29    IP:
HOST:hwd09    DB2_4:192.168.120.30    IP:
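When automating post-install checks, the textual gs_om output above can be reduced to a pass/fail result. A minimal Python sketch, assuming the field layout shown above (a cluster_state line plus STATUS: fields on instance lines); adjust if your gs_om version formats the output differently:

```python
# Reduce `gs_om -t status` text output to a pass/fail check: the cluster
# is considered healthy when cluster_state is Normal and every line that
# carries a STATUS field reports ONLINE. Field layout assumed from the
# sample output above.
def cluster_healthy(status_text):
    state_ok = any(line.replace(" ", "").startswith("cluster_state:Normal")
                   for line in status_text.splitlines())
    offline = [line for line in status_text.splitlines()
               if "STATUS:" in line and "STATUS:ONLINE" not in line]
    return state_ok and not offline

sample = """cluster_state : Normal
HOST:hwd08  AZ:AZ1  STATUS:ONLINE  IP:192.168.120.29
INSTANCE:CM1  ROLE:slave  STATUS:ONLINE  HOST:hwd08  ID:601"""
print(cluster_healthy(sample))   # True
```

In practice you would feed it the captured output, e.g. `cluster_healthy(subprocess.run(["gs_om", "-t", "status"], capture_output=True, text=True).stdout)`.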

2. Upgrade the Cluster

GaussDB T distributed clusters are upgraded with the gs_upgradectl command. There are two upgrade modes: offline upgrade and online rolling upgrade. GaussDB T 1.0.1 supports offline upgrade only, and offline upgrade in turn supports only two types: offline binary upgrade and offline minor-version upgrade. For an upgrade to 1.0.2, the source version and the target version must both satisfy the upgrade version rules and the upgrade whitelist before the upgrade can be performed.

Before upgrading, read the following notes carefully and confirm each of them:

  • Python 3.7.2 or later is required on all cluster nodes.
  • SSH mutual trust between the cluster users must be working.
  • The cluster must be healthy before the upgrade; all CN, DN, and GTS instances must be in a normal state.
  • Only offline upgrade is supported, and offline upgrade supports only offline minor-version upgrade and offline binary upgrade.
  • Only upgrading from a lower version to a higher version is supported.
  • Stop all workloads before an offline upgrade, and make sure no workload runs during the upgrade.
  • Before an offline upgrade, the primary and standby instances in the GTS group and in each DN group must be fully synchronized, and must have remained stably synchronized for a long period before the upgrade.
  • The upgrade must not run concurrently with other OM operations such as host replacement, scale-out, or node replacement.
  • During the upgrade, do not operate on the cluster directly with cm commands, such as switchover.
  • Before upgrading, run the gs_preinstall pre-check script from the script directory extracted from the target installation package.
  • During an offline upgrade, once the cluster restart step has been reached (offline binary upgrade) or the DNs have been started normally (offline minor-version upgrade), rollback is no longer possible.
  • The XML configuration file passed to the upgrade command must match the configuration of the running cluster exactly.
  • After a binary upgrade, rollback remains possible as long as commit-upgrade has not been run; once the upgrade is verified as successful, run commit-upgrade to delete the temporary upgrade files.
  • At the end of a minor-version upgrade, rollback is possible if both the CNs and the DNs are in READ ONLY mode; if a CN or a primary DN is in READ WRITE mode, rollback is not possible. The mode of a CN or DN can be obtained from the OPEN_STATUS column of DV_DATABASE. Once the upgrade is verified as successful, run commit-upgrade to delete the temporary upgrade files.
  • After the upgrade command succeeds and commit-upgrade has been run, it is no longer possible to roll back to the old version via the auto-rollback, binary-rollback, or systable-rollback interfaces.
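The minor-version rollback rule above can be expressed as a tiny helper: rollback stays possible only while every CN and primary DN is still READ ONLY. A sketch under that rule; the exact OPEN_STATUS strings are assumptions to verify against DV_DATABASE on your version:

```python
# Decide whether a minor-version (systable) upgrade can still be rolled
# back, per the rule above: rollback is allowed only while every CN and
# primary DN is still in READ ONLY mode. The exact OPEN_STATUS strings
# are assumptions; check DV_DATABASE on your version for actual values.
def rollback_allowed(open_statuses):
    """open_statuses: OPEN_STATUS of each CN and each primary DN."""
    return all(s == "READ ONLY" for s in open_statuses)

print(rollback_allowed(["READ ONLY", "READ ONLY"]))    # True
print(rollback_allowed(["READ ONLY", "READ WRITE"]))   # False
```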

2.1 Install Python 3

GaussDB T 1.0.2 requires Python 3.7.2 or later. If the system does not have a Python 3 environment, install and configure one before proceeding. This must be done on every cluster node.

[root@hwd08 ~]# tar -zxvf Python-3.8.1.tgz 
[root@hwd08 Python-3.8.1]# ./configure
[root@hwd08 Python-3.8.1]# make && make install
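After make && make install, it is worth confirming on every node that the interpreter actually meets the 3.7.2 floor, for example:

```python
# Verify that the installed python3 meets the 3.7.2 minimum required
# by the GaussDB T 1.0.2 upgrade tooling. Run with `python3 checkver.py`
# (the script name is arbitrary).
import sys

required = (3, 7, 2)
ok = sys.version_info[:3] >= required
print("python %d.%d.%d ok=%s" % (*sys.version_info[:3], ok))
```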

2.2 Check the Database Version

[omm@hwd08 ~]$ rlwrap zsql omm/[email protected]:8000 -q 
SQL> select *from dv_version;

VERSION                                                         
----------------------------------------------------------------
GaussDB_100_1.0.1T1.B002 Release 3d95f6d                        
ZENGINE                                                         
3d95f6d                                                         

3 rows fetched.
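If you script the pre-checks, the banner returned by dv_version can be parsed into a comparable tuple so that the "lower version to higher version" rule can be enforced automatically. A hedged sketch based on the version string format shown above:

```python
# Parse a dv_version banner (format as shown above, e.g.
# "GaussDB_100_1.0.1T1.B002 Release 3d95f6d") into a comparable tuple,
# so an upgrade script can enforce "lower version -> higher version".
# Banner layout assumed from the query output above.
import re

def parse_version(banner):
    m = re.search(r"_(\d+)\.(\d+)\.(\d+)", banner)
    if not m:
        raise ValueError("unrecognized version banner: %r" % banner)
    return tuple(int(x) for x in m.groups())

old = parse_version("GaussDB_100_1.0.1T1.B002 Release 3d95f6d")
print(old)                                                       # (1, 0, 1)
print(old < parse_version("GaussDB_100_1.0.2 Release deadbee"))  # True
```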

2.3 Prepare the Software Packages

[root@hwd08 ~]# mkdir /opt/software/newversion;cd /opt/software/newversion
[root@hwd08 newversion]# tar -xzf GaussDB_T_1.0.2-REDHAT7.5-X86.tar.gz 
[root@hwd08 newversion]# tar -xzf GaussDB_T_1.0.2-CLUSTER-REDHAT-64bit.tar.gz 

2.4 Upgrade Pre-check

[root@hwd08 newversion]# cd script/
[root@hwd08 script]# ./gs_preinstall -U omm -G dbgrp  -X /mnt/Huawei/db/clusterconfig.xml  --alarm-type=1 --operation=upgrade
Parsing the configuration file.
Successfully parsed the configuration file.
Do preinstall for upgrade.
Check environment for upgrade preinstall.
Successfully check environment for upgrade preinstall.
Installing the tools on the local node.
Successfully installed the tools on the local node.
Distributing package.
Successfully distributed package.
Check old environment on all nodes.
Successfully check old environment on all nodes.
Installing the tools in the cluster.
Successfully installed the tools in the cluster.
Creating conninfo directory.
Successfully created conninfo directory.
Fixing server package owner.
Successfully fixed server package owner.
Add sudo permission for omm.
Successfully add sudo permission for omm
Preinstallation succeeded.

2.5 Check the Upgrade Type

As the omm user, run gs_upgradectl to check the upgrade type: systable-upgrade means an offline minor-version upgrade, and binary-upgrade means an offline binary upgrade. This upgrade is an offline minor-version upgrade.

[omm@hwd08 ~]$ gs_upgradectl -t upgrade-type -X /mnt/Huawei/db/clusterconfig.xml
Checking upgrade type.
Successfully checked upgrade type.
Upgrade type: systable-upgrade.

2.6 Perform the Upgrade

[omm@hwd08 ~]$ gs_upgradectl -t offline-upgrade -X /mnt/Huawei/db/clusterconfig.xml
Performing systable-upgrade.
Checking zengine parameters.
Successfully check zengine parameters.
Checking cluster health.
Successfully checked cluster health.
Checking database status.
Successfully checked database status.
Checking space for backup files.
Check need size of [/opt/huawei/gaussdb/temp/binary_upgrade] in [hwd08] is [61329113088 Byte].
Check need size of [/opt/huawei/gaussdb/temp/binary_upgrade] in [hwd09] is [61329113088 Byte].
Check need size of [/opt/huawei/gaussdb/temp/binary_upgrade] in [hwd10] is [61329113088 Byte].
Check need size of [/opt/huawei/gaussdb/temp/binary_upgrade] in [hwd11] is [46265270272 Byte].
Successfully checked space for backup files.
Change the primary dn and cn to read only status.
Successfully changed the primary dn and cn to read only status.
Checking database read only status.
Successfully checked database read only status.
Checking sync info for dns.
Successfully checking sync info for dns.
Generating upgrade sql file.
Successfully generated upgrade sql file.
Generating combined upgrade sql file.
Successfully generated combined upgrade sql file.
Backing up current application and configurations.
Checking ztools path in each host.
Successfully checked ztools path in each host.
Successfully backed up current application and configurations.
Successfully record protection mode
Saving system tabls path.
Successfully saved system tables path.
Saving redo log file path.
Successfully saved redolog file path.
Saving undo log file path.
Successfully saved redolog file path.
Stopping the cluster.
Successfully stopped the cluster.
Stopping the etcd and agent.
Successfully stopped the etcd and agent.
Update etcd keys.
Successfully update etcd keys.
Starting all dns to open status for backuping system cntl and redolog.
Successfully started all dns to open status for backuping system cntl and redolog.
Backing up current system tables.
Successfully backed up system tables.
Backing up current cntl files.
Successfully backed up cntl files.
Backing up current redolog files.
Successfully backed up redolog files.
Backing up current undolog files.
Successfully backed up undolog files.
Shutdowning all dns for backuping system cntl and redolog.
Successfully shutdown all dns for backuping system cntl and redolog.
Upgrading application.
Successfully upgraded application.
Starting the restrict mode cluster.
Successfully started the restrict mode cluster.
Upgrading the system table
Successfully upgraded the system table
Shutting down the restrict mode cluster
Successfully shut down the restrict mode cluster
Starting the cns, dns to open status.
Successfully started the cns, dns to open status.
Converting the cns, primary dns to read write status.
Successfully converted the cns, primary dns to read write status.
Shutting down the open status cns, dns.
Successfully shutted down the open status cns, dns.
Starting the etcd.
Successfully started the etcd.
Loading the json.
Successfully loaded the json.
Starting cm agent.
Successfully started cm agent.
Starting the cluster.
Successfully started the cluster.
Commit systable-upgrade succeeded.

2.7 Verify the Version and Cluster Status After the Upgrade

[omm@hwd08 gaussdb]$ gs_om -V
gs_om GaussDB_T_1.0.2 build XXXX compiled at 2020-02-22 08:17:40
[omm@hwd08 ~]$ gs_om -t status
Set output to terminal.
--------------------------------------------------------------------Cluster Status--------------------------------------------------------------------
az_state :      single_az
cluster_state : Normal
balanced :      true
----------------------------------------------------------------------AZ Status-----------------------------------------------------------------------
AZ:AZ1                ROLE:primary            STATUS:ONLINE      
---------------------------------------------------------------------Host Status----------------------------------------------------------------------
HOST:hwd08            AZ:AZ1                  STATUS:ONLINE       IP:192.168.120.29
HOST:hwd09            AZ:AZ1                  STATUS:ONLINE       IP:192.168.120.30
HOST:hwd10            AZ:AZ1                  STATUS:ONLINE       IP:192.168.120.31
HOST:hwd11            AZ:AZ1                  STATUS:ONLINE       IP:192.168.120.49
----------------------------------------------------------------Cluster Manager Status----------------------------------------------------------------
INSTANCE:CM1          ROLE:slave              STATUS:ONLINE       HOST:hwd08            ID:601
INSTANCE:CM2          ROLE:slave              STATUS:ONLINE       HOST:hwd09            ID:602
INSTANCE:CM3          ROLE:primary            STATUS:ONLINE       HOST:hwd10            ID:603
---------------------------------------------------------------------ETCD Status----------------------------------------------------------------------
INSTANCE:ETCD1        ROLE:follower           STATUS:ONLINE       HOST:hwd08            ID:701      PORT:2379         DataDir:/opt/huawei/gaussdb/data_etcd1/data
INSTANCE:ETCD2        ROLE:leader             STATUS:ONLINE       HOST:hwd09            ID:702      PORT:2379         DataDir:/opt/huawei/gaussdb/data_etcd1/data
INSTANCE:ETCD3        ROLE:follower           STATUS:ONLINE       HOST:hwd10            ID:703      PORT:2379         DataDir:/opt/huawei/gaussdb/data_etcd1/data
----------------------------------------------------------------------CN Status-----------------------------------------------------------------------
INSTANCE:cn_401       ROLE:no role            STATUS:ONLINE       HOST:hwd08            ID:401      PORT:8000         DataDir:/opt/huawei/gaussdb/data/data_cn
INSTANCE:cn_402       ROLE:no role            STATUS:ONLINE       HOST:hwd09            ID:402      PORT:8000         DataDir:/opt/huawei/gaussdb/data/data_cn
INSTANCE:cn_403       ROLE:no role            STATUS:ONLINE       HOST:hwd10            ID:403      PORT:8000         DataDir:/opt/huawei/gaussdb/data/data_cn
INSTANCE:cn_404       ROLE:no role            STATUS:ONLINE       HOST:hwd11            ID:404      PORT:8000         DataDir:/opt/huawei/gaussdb/data/data_cn
----------------------------------------------------------------------GTS Status----------------------------------------------------------------------
INSTANCE:GTS1         ROLE:primary            STATUS:ONLINE       HOST:hwd08            ID:441      PORT:7000         DataDir:/opt/huawei/gaussdb/data/gts
INSTANCE:GTS2         ROLE:standby            STATUS:ONLINE       HOST:hwd09            ID:442      PORT:7000         DataDir:/opt/huawei/gaussdb/data/gts
---------------------------------------------------------Instances Status in Group (group_1)----------------------------------------------------------
INSTANCE:DB1_1        ROLE:primary            STATUS:ONLINE       HOST:hwd08            ID:1        PORT:40000        DataDir:/opt/huawei/gaussdb/data_db/dn1
INSTANCE:DB1_2        ROLE:standby            STATUS:ONLINE       HOST:hwd09            ID:2        PORT:40021        DataDir:/opt/huawei/gaussdb/data_db/dn1
INSTANCE:DB1_3        ROLE:standby            STATUS:ONLINE       HOST:hwd10            ID:3        PORT:40021        DataDir:/opt/huawei/gaussdb/data_db/dn1
---------------------------------------------------------Instances Status in Group (group_2)----------------------------------------------------------
INSTANCE:DB2_4        ROLE:primary            STATUS:ONLINE       HOST:hwd09            ID:4        PORT:40000        DataDir:/opt/huawei/gaussdb/data_db/dn2
INSTANCE:DB2_5        ROLE:standby            STATUS:ONLINE       HOST:hwd10            ID:5        PORT:40042        DataDir:/opt/huawei/gaussdb/data_db/dn2
INSTANCE:DB2_6        ROLE:standby            STATUS:ONLINE       HOST:hwd11            ID:6        PORT:40000        DataDir:/opt/huawei/gaussdb/data_db/dn2
---------------------------------------------------------Instances Status in Group (group_3)----------------------------------------------------------
INSTANCE:DB3_9        ROLE:standby            STATUS:ONLINE       HOST:hwd08            ID:9        PORT:40021        DataDir:/opt/huawei/gaussdb/data_db/dn3
INSTANCE:DB3_7        ROLE:primary            STATUS:ONLINE       HOST:hwd10            ID:7        PORT:40000        DataDir:/opt/huawei/gaussdb/data_db/dn3
INSTANCE:DB3_8        ROLE:standby            STATUS:ONLINE       HOST:hwd11            ID:8        PORT:40021        DataDir:/opt/huawei/gaussdb/data_db/dn3
-----------------------------------------------------------------------Manage IP----------------------------------------------------------------------
HOST:hwd08            IP:192.168.120.29
HOST:hwd09            IP:192.168.120.30
HOST:hwd10            IP:192.168.120.31
HOST:hwd11            IP:192.168.120.49
-------------------------------------------------------------------Query Action Info------------------------------------------------------------------
HOSTNAME: hwd08     TIME: 2020-03-20 08:24:57.429450
------------------------------------------------------------------------Float Ip------------------------------------------------------------------
HOST:hwd08    DB1_1:192.168.120.29    IP:
HOST:hwd09    DB2_4:192.168.120.30    IP:
HOST:hwd10    DB3_7:192.168.120.31    IP:

At this point, the upgrade of the GaussDB T 1.0.1 distributed cluster is complete.

2.8 Post-upgrade Health Check

Run the following command to perform a health check on the cluster:

[omm@hwd08 gaussdb]$ gs_upgradectl -t postcheck -X /mnt/Huawei/db/clusterconfig.xml --upgrade-type=offline-upgrade
Starting check.
Checking cluster health.
Successfully checked cluster health.
Warning: REPL_AUTH is not TRUE for all instances, please use gs_gucZenith tool to set it.
Finished check.
Check result: OK. All check items is normal.

2.9 Rollback After a Failed Upgrade

If the upgrade fails partway and a rollback is needed, run gs_upgradectl as the omm user to roll back:

[omm@hwd08 gaussdb]$ gs_upgradectl -t offline-rollback -X /mnt/Huawei/db/clusterconfig.xml