Redis Cluster operation steps (master/slave failover, adding and removing master and slave nodes)

1. Connect to the cluster with the client

Pick any Redis node and change into the directory where Redis is installed:

cd /<redis install dir>/src/

./redis-cli -h <node ip> -p <redis port> -a <password>

[root@mysql-db01 ~]# redis-cli -h 10.0.0.51 -p 6379
10.0.0.51:6379>

 

2. Check the status of each node in the cluster

Cluster
cluster info      Print information about the cluster
cluster nodes     List all nodes currently known to the cluster, along with their details

Nodes
cluster meet <ip> <port>           Add the node at ip:port to the cluster, making it part of the cluster
cluster forget <node_id>           Remove the node identified by node_id from the cluster
cluster replicate <node_id>        Make the current node a slave of the node identified by node_id
cluster saveconfig                 Save the node's cluster configuration file to disk
cluster slaves <node_id>           List the slave nodes of the master identified by node_id
cluster set-config-epoch <epoch>   Force-set the node's configEpoch


Slots
cluster addslots <slot> [slot ...]   Assign one or more slots to the current node
cluster delslots <slot> [slot ...]   Remove the assignment of one or more slots from the current node
cluster flushslots                   Remove all slots assigned to the current node, leaving it with no slots at all
cluster setslot <slot> node <node_id>        Assign the slot to the node identified by node_id; if the slot is already assigned to another node, that node drops it first and then the assignment is made
cluster setslot <slot> migrating <node_id>   Mark the slot on this node as migrating to the node identified by node_id
cluster setslot <slot> importing <node_id>   Mark the slot on this node as being imported from the node identified by node_id
cluster setslot <slot> stable                Cancel the import or migration of the slot
Keys
cluster keyslot <key>                  Compute which slot the key should be placed in
cluster countkeysinslot <slot>         Return the number of keys currently held in the slot
cluster getkeysinslot <slot> <count>   Return up to count keys from the slot
Other
cluster myid      Return the node's ID
cluster slots     Return the mapping of slot ranges to nodes
cluster reset     Reset the node's cluster state; use with caution
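A quick illustration of the key/slot commands (somekey is just an example key; countkeysinslot and getkeysinslot only count keys stored locally, so they are meaningful on the master that owns the slot):

10.0.0.51:6379> cluster keyslot somekey
(integer) 11058
10.0.0.51:6379> cluster countkeysinslot 11058
(integer) 0
10.0.0.51:6379> cluster getkeysinslot 11058 10
(empty list or set)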

 

After connecting with the Redis client, run the following commands to view the status of the nodes in the cluster:

10.0.0.51:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_sent:23695
cluster_stats_messages_received:23690
10.0.0.51:6379> cluster nodes
e2cfd53b8083539d1a4546777d0a81b036ddd82a 10.0.0.70:6384 slave f1f6e93e625e8e0cef0da1b3dfe0a1ea8191a1ad (its master is 10.0.0.51:6380) 0 1510021756842 6 connected
857a5132c844d695c002f94297f294f8e173e393 10.0.0.51:6379 myself,master - 0 0 1 connected 0-5460
e4394d43cf18aa00c0f6833f6f498ba286b55ca1 10.0.0.70:6382 master - 0 1510021759865 4 connected 5461-10922
16eca138ce2767fd8f9d0c8892a38de0a042a355 10.0.0.70:6383 slave 857a5132c844d695c002f94297f294f8e173e393 0 1510021757849 5 connected
f1f6e93e625e8e0cef0da1b3dfe0a1ea8191a1ad 10.0.0.51:6380 master - 0 1510021754824 2 connected 10923-16383 ## the slot ranges show that only master nodes are assigned hash slots
d14e2f0538dc6925f04d1197b57f44ccdb7c683a 10.0.0.51:6381 slave e4394d43cf18aa00c0f6833f6f498ba286b55ca1 0 1510021758855 4 connected
10.0.0.51:6379>

From this output you can see the master/slave relationships and the health of each node.

 

3. Write a record

set key value                               ## only nodes that own hash slots can accept writes, which means only master nodes can store data

[root@mysql-db01 src]# redis-cli -h 10.0.0.51 -p 6380
10.0.0.51:6380> get mao
(nil)
10.0.0.51:6380> set mao 123
OK
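Note that if the key's hash slot belongs to a different master, the node replies with a MOVED redirection instead of storing the value. Starting redis-cli with the -c option makes it follow such redirections automatically. A rough illustration (the key, slot number and target address here are only examples and depend on your cluster layout):

10.0.0.51:6380> set otherkey 456
(error) MOVED 12182 10.0.0.70:6382
10.0.0.51:6380> exit
[root@mysql-db01 src]# redis-cli -c -h 10.0.0.51 -p 6380
10.0.0.51:6380> set otherkey 456
-> Redirected to slot [12182] located at 10.0.0.70:6382
OK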

 

4. Master/slave failover

Run the command CLUSTER FAILOVER on the slave node that should take over.

[root@mysql-db01 ~]# /data/redis-3.2.8/src/redis-cli -h 10.0.0.51 -p 6380 ### to swap roles you must connect to the slave node and promote it to master from there; this node is currently a master, so the command below fails
10.0.0.51:6380> cluster failover
(error) ERR You should send CLUSTER FAILOVER to a slave
10.0.0.51:6380> exit
[root@mysql-db01 ~]# /data/redis-3.2.8/src/redis-cli -h 10.0.0.70 -p 6383
10.0.0.70:6383> cluster failover ## promote this slave to master
OK
10.0.0.70:6383> cluster nodes
777c9eab94812d13d8b9dc768460dcf1316283f1 10.0.0.70:6384 slave c93b6d1edd6bc4c69d48f9f49e75c2c7f0d1a70c 0 1511223574993 6 connected
92dfe8ab12c47980dcc42508672de62bae4921b1 10.0.0.70:6383 myself,master - 0 0 8 connected 500-5460
2f003cfd139ae4f2bbdac40b0055b46bdff96e0a 10.0.0.51:6379 slave 92dfe8ab12c47980dcc42508672de62bae4921b1 0 1511223577007 8 connected
c0e1784f0359f986972c1f9a0d9788f3d69e6c99 10.0.0.51:6381 slave 2da5edfcbb1abc2ed799789cb529309c70cb769e 0 1511223578014 7 connected
c93b6d1edd6bc4c69d48f9f49e75c2c7f0d1a70c 10.0.0.51:6380 master - 0 1511223572983 2 connected 15464-16383
2da5edfcbb1abc2ed799789cb529309c70cb769e 10.0.0.70:6382 master - 0 1511223573988 7 connected 0-499 5461-15463
10.0.0.70:6383>
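As a side note, CLUSTER FAILOVER also accepts the FORCE and TAKEOVER options; they skip parts of the normal handshake with the master, so only use them when the master is unreachable and a plain manual failover cannot complete. A minimal sketch:

10.0.0.70:6383> cluster failover force
OK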

 

5. Read a record

get key

10.0.0.51:6380> get mao
"123"
10.0.0.51:6380>
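By default a slave in cluster mode redirects reads to its master. If you want to read from a slave directly, first put the connection into read-only mode with READONLY. A minimal sketch, assuming that slave's master owns the key's slot (otherwise you still get a MOVED reply):

10.0.0.70:6384> readonly
OK
10.0.0.70:6384> get mao
"123"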

 

6. Add a new master node

Before adding a Redis instance to the cluster, make sure the instance has never stored any data and has no persisted data files; otherwise the add operation will fail.

Node maintenance is done with the redis-trib.rb tool rather than the redis-cli client. Exit the client and use the following command:

/<redis install dir>/src/redis-trib.rb add-node <new node ip:port> <ip:port of any node already in the cluster>

To add a new slave node:

/<redis install dir>/src/redis-trib.rb add-node --slave --master-id <master node id (find it with cluster nodes in redis-cli)> <new node ip:port> <ip:port of any node already in the cluster>

[root@mysql-db01 ~]# /data/redis-3.2.8/src/redis-trib.rb add-node 10.0.0.70:6383 10.0.0.51:6380
>>> Adding node 10.0.0.70:6383 to cluster 10.0.0.51:6380
>>> Performing Cluster Check (using node 10.0.0.51:6380)
M: c93b6d1edd6bc4c69d48f9f49e75c2c7f0d1a70c 10.0.0.51:6380
slots:500-5460,15464-16383 (5881 slots) master
2 additional replica(s)
S: 2f003cfd139ae4f2bbdac40b0055b46bdff96e0a 10.0.0.51:6379
slots: (0 slots) slave
replicates c93b6d1edd6bc4c69d48f9f49e75c2c7f0d1a70c
M: 2da5edfcbb1abc2ed799789cb529309c70cb769e 10.0.0.70:6382
slots:0-499,5461-15463 (10503 slots) master
1 additional replica(s)
S: c0e1784f0359f986972c1f9a0d9788f3d69e6c99 10.0.0.51:6381
slots: (0 slots) slave
replicates 2da5edfcbb1abc2ed799789cb529309c70cb769e
S: 777c9eab94812d13d8b9dc768460dcf1316283f1 10.0.0.70:6384
slots: (0 slots) slave
replicates c93b6d1edd6bc4c69d48f9f49e75c2c7f0d1a70c
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 10.0.0.70:6383 to make it join the cluster.
[OK] New node added correctly.
[root@mysql-db01 ~]#

 

[root@mysql-db01 ~]# /data/redis-3.2.8/src/redis-cli -h 10.0.0.70 -p 6383
10.0.0.70:6383> cluster nodes
777c9eab94812d13d8b9dc768460dcf1316283f1 10.0.0.70:6384 slave c93b6d1edd6bc4c69d48f9f49e75c2c7f0d1a70c 0 1511226605993 9 connected
c93b6d1edd6bc4c69d48f9f49e75c2c7f0d1a70c 10.0.0.51:6380 master - 0 1511226604987 9 connected 500-5460 15464-16383
c0e1784f0359f986972c1f9a0d9788f3d69e6c99 10.0.0.51:6381 slave 2da5edfcbb1abc2ed799789cb529309c70cb769e 0 1511226603979 7 connected
2da5edfcbb1abc2ed799789cb529309c70cb769e 10.0.0.70:6382 master - 0 1511226600944 7 connected 0-499 5461-15463
8c6534cbfbd2b5453ab4c90c7724a75d55011c27 10.0.0.70:6383 myself,master - 0 0 0 connected  ## here you can see that 10.0.0.70:6383 has joined the cluster
2f003cfd139ae4f2bbdac40b0055b46bdff96e0a 10.0.0.51:6379 slave c93b6d1edd6bc4c69d48f9f49e75c2c7f0d1a70c 0 1511226602958 9 connected
10.0.0.70:6383>
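A side note: from Redis 5 onward the functionality of redis-trib.rb was merged into redis-cli itself, so on newer versions the same add-node operations look roughly like this:

redis-cli --cluster add-node 10.0.0.70:6383 10.0.0.51:6380
redis-cli --cluster add-node <new node ip:port> <existing node ip:port> --cluster-slave --cluster-master-id <master node id>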

 

 

7. Reassign a master to a slave node

Log in to the slave node with redis-cli.

Run the following command:

cluster replicate 5d8ef5a7fbd72ac586bef04fa6de8a88c0671052

The trailing ID is the node ID of the new master.

[root@mysql-db01 ~]# /data/redis-3.2.8/src/redis-cli -h 10.0.0.70 -p 6383
10.0.0.70:6383> cluster nodes
777c9eab94812d13d8b9dc768460dcf1316283f1 10.0.0.70:6384 slave c93b6d1edd6bc4c69d48f9f49e75c2c7f0d1a70c 0 1511226605993 9 connected
c93b6d1edd6bc4c69d48f9f49e75c2c7f0d1a70c 10.0.0.51:6380 master - 0 1511226604987 9 connected 500-5460 15464-16383
c0e1784f0359f986972c1f9a0d9788f3d69e6c99 10.0.0.51:6381 slave 2da5edfcbb1abc2ed799789cb529309c70cb769e 0 1511226603979 7 connected
2da5edfcbb1abc2ed799789cb529309c70cb769e 10.0.0.70:6382 master - 0 1511226600944 7 connected 0-499 5461-15463
8c6534cbfbd2b5453ab4c90c7724a75d55011c27 10.0.0.70:6383 myself,master - 0 0 0 connected
2f003cfd139ae4f2bbdac40b0055b46bdff96e0a 10.0.0.51:6379 slave c93b6d1edd6bc4c69d48f9f49e75c2c7f0d1a70c 0 1511226602958 9 connected
## The output above shows that 10.0.0.70:6384 and 10.0.0.51:6379 are both slaves of 10.0.0.51:6380; next we will make 10.0.0.51:6379 a slave of 10.0.0.70:6383.

10.0.0.70:6383> exit
[root@mysql-db01 ~]# /data/redis-3.2.8/src/redis-cli -h 10.0.0.51 -p 6379
10.0.0.51:6379> cluster replicate 8c6534cbfbd2b5453ab4c90c7724a75d55011c27
OK

 

10.0.0.51:6379> cluster nodes
2f003cfd139ae4f2bbdac40b0055b46bdff96e0a 10.0.0.51:6379 myself,slave 8c6534cbfbd2b5453ab4c90c7724a75d55011c27 0 0 1 connected
8c6534cbfbd2b5453ab4c90c7724a75d55011c27 10.0.0.70:6383 master - 0 1510054872161 0 connected
2da5edfcbb1abc2ed799789cb529309c70cb769e 10.0.0.70:6382 master - 0 1510054875185 7 connected 0-499 5461-15463
c0e1784f0359f986972c1f9a0d9788f3d69e6c99 10.0.0.51:6381 slave 2da5edfcbb1abc2ed799789cb529309c70cb769e 0 1510054876192 7 connected
c93b6d1edd6bc4c69d48f9f49e75c2c7f0d1a70c 10.0.0.51:6380 master - 0 1510054874177 9 connected 500-5460 15464-16383
777c9eab94812d13d8b9dc768460dcf1316283f1 10.0.0.70:6384 slave c93b6d1edd6bc4c69d48f9f49e75c2c7f0d1a70c 0 1510054875688 9 connected
10.0.0.51:6379>

 

 

8. Assign hash slots

/<redis install dir>/src/redis-trib.rb reshard <new node ip:port>

The node has been added to the cluster, but it has no hash slots assigned, and a node without hash slots cannot store any data. We therefore need to move some hash slots from the other master nodes to this one (much like turning up at a market to sell vegetables and finding all the stalls taken, so someone else has to give up part of their spot).

Reassign slots to the new master:

/data/redis-3.2.8/src/redis-trib.rb reshard 10.0.0.70:6383

 

The tool then asks how many hash slots to move (any number works; in this article we use 1000). The receiving node ID it asks for next is the ID of the node we just created (10.0.0.70:6383). It then asks for the source nodes; if we enter all, it takes the requested number of slots (e.g. 1000) from across all the existing masters and gives them to the new node. After entering all, the tool prints output showing the hash slots being moved.
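Depending on the redis-trib.rb version, the same reshard can also be run non-interactively by passing the answers on the command line; a sketch using the node ID of 10.0.0.70:6383 shown above:

/data/redis-3.2.8/src/redis-trib.rb reshard --from all --to 8c6534cbfbd2b5453ab4c90c7724a75d55011c27 --slots 1000 --yes 10.0.0.70:6383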

 

Once the move finishes, connect with the client and run cluster nodes to check the state of the cluster: the node that previously had no hash slots now has some, and the new node is fully set up.

reshard is another core feature of Redis Cluster; it achieves load balancing and scalability by migrating hash slots.

10.0.0.51:6379> cluster nodes
2f003cfd139ae4f2bbdac40b0055b46bdff96e0a 10.0.0.51:6379 myself,slave 8c6534cbfbd2b5453ab4c90c7724a75d55011c27 0 0 1 connected
8c6534cbfbd2b5453ab4c90c7724a75d55011c27 10.0.0.70:6383 master - 0 1510055392655 10 connected 0-857 5461-5601 ## here you can see that the roughly 1000 hash slots have been assigned to the new node
2da5edfcbb1abc2ed799789cb529309c70cb769e 10.0.0.70:6382 master - 0 1510055391648 7 connected 5602-15463
c0e1784f0359f986972c1f9a0d9788f3d69e6c99 10.0.0.51:6381 slave 2da5edfcbb1abc2ed799789cb529309c70cb769e 0 1510055393661 7 connected
c93b6d1edd6bc4c69d48f9f49e75c2c7f0d1a70c 10.0.0.51:6380 master - 0 1510055394667 9 connected 858-5460 15464-16383
777c9eab94812d13d8b9dc768460dcf1316283f1 10.0.0.70:6384 slave c93b6d1edd6bc4c69d48f9f49e75c2c7f0d1a70c 0 1510055395675 9 connected
10.0.0.51:6379>

 

9. Remove a slave node

Removing a node also comes in two flavours: master nodes and slave nodes. A slave node has no hash slots assigned, so removing it is simple; just run the following command:

/<redis install dir>/src/redis-trib.rb del-node <slave ip:slave port> <slave node id>

[root@mysql-db01 src]# /data/redis-3.2.8/src/redis-trib.rb del-node 10.0.0.51:6381 d14e2f0538dc6925f04d1197b57f44ccdb7c683a
>>> Removing node d14e2f0538dc6925f04d1197b57f44ccdb7c683a from cluster 10.0.0.51:6381
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
[root@mysql-db01 src]# /data/redis-3.2.8/src/redis-trib.rb del-node 10.0.0.70:6384 e2cfd53b8083539d1a4546777d0a81b036ddd82a
>>> Removing node e2cfd53b8083539d1a4546777d0a81b036ddd82a from cluster 10.0.0.70:6384
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
[root@mysql-db01 src]#

 

10. Remove a master node

A master node holds data, so before deleting it we must migrate that data away and reassign its hash slots to the other master nodes.

If the master has slave nodes, reassign those slaves to another master or delete them first.

[root@mysql-db01 ~]# /data/redis-3.2.8/src/redis-cli -h 10.0.0.70 -p 6383
10.0.0.70:6383> cluster replicate f1f6e93e625e8e0cef0da1b3dfe0a1ea8191a1ad ### make this node a slave of 10.0.0.51:6380
OK
10.0.0.70:6383> cluster nodes
16eca138ce2767fd8f9d0c8892a38de0a042a355 10.0.0.70:6383 myself,slave f1f6e93e625e8e0cef0da1b3dfe0a1ea8191a1ad 0 0 5 connected
857a5132c844d695c002f94297f294f8e173e393 10.0.0.51:6379 master - 0 1511203901122 1 connected 4386-5460
f1f6e93e625e8e0cef0da1b3dfe0a1ea8191a1ad 10.0.0.51:6380 master - 0 1511203906160 7 connected 0-4385 5461-16383
10.0.0.70:6383> exit

a. Migrate the master node's slots

/<redis install dir>/src/redis-trib.rb reshard <ip:port of the master node to be removed>

The node to be deleted must be empty, i.e. it must not hold any cached data; otherwise the deletion fails as shown below. For a non-empty node, reshard first so that its data is moved to other nodes.

[root@mysql-db01 ~]# /data/redis-3.2.8/src/redis-trib.rb del-node 10.0.0.51:6379 857a5132c844d695c002f94297f294f8e173e393
>>> Removing node 857a5132c844d695c002f94297f294f8e173e393 from cluster 10.0.0.51:6379
[ERR] Node 10.0.0.51:6379 is not empty! Reshard data away and try again.
[root@mysql-db01 ~]#

Now start migrating the data:

[root@mysql-db01 conf]# /data/redis-3.2.8/src/redis-trib.rb reshard 10.0.0.51:6380    (the Redis master whose hash slots need to be migrated away)
>>> Performing Cluster Check (using node 10.0.0.51:6380)
M: c93b6d1edd6bc4c69d48f9f49e75c2c7f0d1a70c 10.0.0.51:6380
slots:10923-16383 (5461 slots) master
1 additional replica(s)
M: 2f003cfd139ae4f2bbdac40b0055b46bdff96e0a 10.0.0.51:6379
slots:0-5460 (5463 slots) master
1 additional replica(s)
M: 2da5edfcbb1abc2ed799789cb529309c70cb769e 10.0.0.70:6382
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: 92dfe8ab12c47980dcc42508672de62bae4921b1 10.0.0.70:6383
slots: (0 slots) slave
replicates 2f003cfd139ae4f2bbdac40b0055b46bdff96e0a
S: c0e1784f0359f986972c1f9a0d9788f3d69e6c99 10.0.0.51:6381
slots: (0 slots) slave
replicates 2da5edfcbb1abc2ed799789cb529309c70cb769e
S: 777c9eab94812d13d8b9dc768460dcf1316283f1 10.0.0.70:6384
slots: (0 slots) slave
replicates c93b6d1edd6bc4c69d48f9f49e75c2c7f0d1a70c
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 5461
## enter the number of slots held by the master being removed (here we are migrating data off 10.0.0.51:6380; its slot count is shown in the check output above)
What is the receiving node ID?
## enter the node ID that will receive the hash slots (the node ID of 10.0.0.51:6379)
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1: ## which node to take the slots from (enter the node ID of 10.0.0.51:6380 here)
Source node #2: ## enter done here
...........
Do you want to proceed with the proposed reshard plan (yes/no)? yes

[root@mysql-db01 conf]# /data/redis-3.2.8/src/redis-cli -h 10.0.0.51 -p 6380
10.0.0.51:6380> cluster nodes
92dfe8ab12c47980dcc42508672de62bae4921b1 10.0.0.70:6383 slave 2f003cfd139ae4f2bbdac40b0055b46bdff96e0a 0 1510037366081 5 connected
2f003cfd139ae4f2bbdac40b0055b46bdff96e0a 10.0.0.51:6379 myself,master - 0 0 1 connected 500-5460
2da5edfcbb1abc2ed799789cb529309c70cb769e 10.0.0.70:6382 master - 0 1510037366583 7 connected 0-499 5461-11422
c0e1784f0359f986972c1f9a0d9788f3d69e6c99 10.0.0.51:6381 slave 2da5edfcbb1abc2ed799789cb529309c70cb769e 0 1510037364065 7 connected
c93b6d1edd6bc4c69d48f9f49e75c2c7f0d1a70c 10.0.0.51:6380 master - 0 1510037363056 2 connected ## no hash slots left
777c9eab94812d13d8b9dc768460dcf1316283f1 10.0.0.70:6384 slave c93b6d1edd6bc4c69d48f9f49e75c2c7f0d1a70c 0 1510037365075 6 connected


 

b. Delete the master node

/<redis install dir>/src/redis-trib.rb del-node <master ip:master port> <node id of the master to be deleted>

Once the data migration above is done, we can delete the node.

[root@mysql-db01 ~]# /data/redis-3.2.8/src/redis-cli -h 10.0.0.70 -p 6383
Could not connect to Redis at 10.0.0.70:6383: Connection refused
Could not connect to Redis at 10.0.0.70:6383: Connection refused
not connected> exit
[root@mysql-db01 ~]# /data/redis-3.2.8/src/redis-cli -h 10.0.0.70 -p 6382
10.0.0.70:6382> cluster nodes
2f003cfd139ae4f2bbdac40b0055b46bdff96e0a 10.0.0.51:6379 slave c93b6d1edd6bc4c69d48f9f49e75c2c7f0d1a70c 0 1511225889772 9 connected
2da5edfcbb1abc2ed799789cb529309c70cb769e 10.0.0.70:6382 myself,master - 0 0 7 connected 0-499 5461-15463
c0e1784f0359f986972c1f9a0d9788f3d69e6c99 10.0.0.51:6381 slave 2da5edfcbb1abc2ed799789cb529309c70cb769e 0 1511225890780 7 connected
c93b6d1edd6bc4c69d48f9f49e75c2c7f0d1a70c 10.0.0.51:6380 master - 0 1511225891285 9 connected 500-5460 15464-16383
777c9eab94812d13d8b9dc768460dcf1316283f1 10.0.0.70:6384 slave c93b6d1edd6bc4c69d48f9f49e75c2c7f0d1a70c 0 1511225891788 9 connected
10.0.0.70:6382>

 

11. Check that all cluster nodes are healthy

/<redis install dir>/src/redis-trib.rb check <ip:port of any node in the cluster>

[root@mysql-db01 ~]# /data/redis-3.2.8/src/redis-trib.rb check 10.0.0.70:6382
>>> Performing Cluster Check (using node 10.0.0.70:6382)
M: e4394d43cf18aa00c0f6833f6f498ba286b55ca1 10.0.0.70:6382
slots:5461-10922 (5462 slots) master
1 additional replica(s)
M: 857a5132c844d695c002f94297f294f8e173e393 10.0.0.51:6379
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: 16eca138ce2767fd8f9d0c8892a38de0a042a355 10.0.0.70:6383
slots: (0 slots) slave
replicates 857a5132c844d695c002f94297f294f8e173e393
S: d14e2f0538dc6925f04d1197b57f44ccdb7c683a 10.0.0.51:6381
slots: (0 slots) slave
replicates e4394d43cf18aa00c0f6833f6f498ba286b55ca1
S: e2cfd53b8083539d1a4546777d0a81b036ddd82a 10.0.0.70:6384
slots: (0 slots) slave
replicates f1f6e93e625e8e0cef0da1b3dfe0a1ea8191a1ad
M: f1f6e93e625e8e0cef0da1b3dfe0a1ea8191a1ad 10.0.0.51:6380
slots:10923-16383 (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@mysql-db01 ~]#
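If check reports open slots or slots that are not covered, redis-trib.rb also provides a fix subcommand that attempts to repair the slot configuration (only a pointer here, it is not exercised in this article):

/data/redis-3.2.8/src/redis-trib.rb fix 10.0.0.70:6382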

All of the above was tested by the author and can be used with confidence; leave a comment if anything is unclear.

 

Reposted from:
"Viewing Redis node information / viewing Redis cluster nodes"
https://blog.51cto.com/u_16099213/6622290

 

Explanation of the cluster info fields

127.0.0.1:7000> cluster info 
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:8
cluster_size:7
cluster_current_epoch:9

The overall cluster state is ok; note that cluster_current_epoch is 9 here, i.e. it has increased by 1.

cluster_slots_assigned:16384   # number of slots assigned
cluster_slots_ok:16384         # number of slots whose state is ok
cluster_slots_pfail:0          # number of slots that are possibly failing (PFAIL)
cluster_slots_fail:0           # number of slots that have failed
cluster_known_nodes:6          # number of nodes known to the cluster
cluster_size:3                 # number of shards (slot-serving masters) configured in the cluster
cluster_current_epoch:15       # the currentEpoch is consistent across the cluster; a higher value means newer configuration or operations; it is the largest node epoch in the cluster
cluster_my_epoch:12            # the config epoch of this node; each master has its own value, it only ever increases, and it marks the logical time at which the node last became a master or took ownership of new slots
cluster_stats_messages_sent:270782059
cluster_stats_messages_received:270732696
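For scripted monitoring it is often convenient to query these fields non-interactively; a minimal sketch that checks the overall cluster state from the shell:

redis-cli -h 10.0.0.51 -p 6379 cluster info | grep cluster_state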

 
