1. Redis Cluster
1.1. Cluster Principles
1.1.1. redis-cluster Architecture Diagram
Architecture details:
(1) All Redis nodes are interconnected with one another (PING-PONG mechanism) and use a binary protocol internally to optimize transmission speed and bandwidth.
(2) A node is marked as failed only when more than half of the nodes in the cluster detect that it is unreachable.
(3) Clients connect directly to Redis nodes; there is no intermediate proxy layer. A client does not need to connect to every node in the cluster; connecting to any one reachable node is enough.
(4) redis-cluster maps all physical nodes onto the slots [0-16383]; the cluster is responsible for maintaining the node <-> slot <-> value mapping.
Redis Cluster has 16384 built-in hash slots. To place a key-value pair in the cluster, Redis first runs the key through the CRC16 algorithm and takes the result modulo 16384, so every key maps to a hash slot numbered 0-16383. Redis then distributes the hash slots roughly evenly across the nodes according to how many nodes there are.
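The key-to-slot mapping described above can be sketched in Python. This is a minimal illustration, not Redis's actual C implementation; Redis uses the CRC-16/XMODEM variant (polynomial 0x1021, initial value 0):

```python
def crc16(data: bytes) -> int:
    """CRC-16/XMODEM (poly 0x1021, init 0x0000), the CRC variant Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of the 16384 hash slots: crc16(key) mod 16384."""
    return crc16(key.encode()) % 16384

print(key_slot("foo"))  # always the same slot, somewhere in 0..16383
```

Because the mapping is deterministic, every client computes the same slot for the same key, which is what lets a client be redirected to the right node without a proxy.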
1.1.2. redis-cluster投票:容錯
(1)領着投票過程是集羣中所有master參與,如果半數以上master節點與master節點通信超過(cluster-node-timeout),認爲當前master節點掛掉.
(2):什麼時候整個集羣不可用(cluster_state:fail)?
a:如果集羣任意master掛掉,且當前master沒有slave.集羣進入fail狀態,也可以理解成集羣的slot映射[0-16383]不完成時進入fail狀態. ps : redis-3.0.0.rc1加入cluster-require-full-coverage參數,默認關閉,打開集羣兼容部分失敗.
b:如果集羣超過半數以上master掛掉,無論是否有slave集羣進入fail狀態.
ps:當集羣不可用時,所有對集羣的操作做都不可用,收到((error) CLUSTERDOWN The cluster is down)錯誤
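The two fail conditions above can be summarized as a small predicate. This is only a schematic illustration; the parameter names are made up for this sketch and are not real Redis internals:

```python
def cluster_is_down(total_masters: int, failed_masters: int,
                    uncovered_slots: int,
                    require_full_coverage: bool = True) -> bool:
    """Schematic version of the fail rules above, not Redis's real logic.
    Rule b: more than half of the masters down -> cluster enters fail state.
    Rule a: any slot uncovered -> fail, unless cluster-require-full-coverage
            has been set to no."""
    if failed_masters * 2 > total_masters:            # rule b: majority of masters down
        return True
    if uncovered_slots > 0 and require_full_coverage:  # rule a: incomplete slot map
        return True
    return False

# 3 masters, 2 of them down: majority gone, the cluster fails
print(cluster_is_down(3, 2, 0))   # True
# 1 master down but its slave took over, all slots covered: cluster stays up
print(cluster_is_down(3, 1, 0))   # False
```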
1.2 Cluster Topology
The cluster consists of three nodes, each with one master and one standby, so a real deployment would need 6 virtual machines. Here we build a pseudo-distributed cluster instead, simulating it with 6 Redis instances on a single host.
1.3 Prerequisite: a Standalone Redis
The standalone version of Redis must be installed first; if you are unfamiliar with it, see http://blog.csdn.net/xzk821648509/article/details/78077053
1.4 Ruby Environment
The cluster management tool redis-trib.rb depends on Ruby, so set up a Ruby environment first.
Install Ruby:
yum install ruby
yum install rubygems
Locate the cluster management tool redis-trib.rb:
[root@bogon ~]# cd redis-3.0.0
[root@bogon redis-3.0.0]# cd src
[root@bogon src]# ll *.rb
-rwxrwxr-x. 1 root root 48141 Apr 1 07:01 redis-trib.rb
The Ruby gem that the redis-trib.rb script depends on: http://download.csdn.net/download/xzk821648509/9992904
Upload it to the Linux server, then install the gem:
[root@bogon ~]# gem install redis-3.0.0.gem
1.5 Building the Cluster
Create a redis-cluster directory under /usr/local/, go into the standalone Redis directory, and copy the bin directory into redis-cluster six times under different names to create the 6 instances (note that cp needs -r to copy a directory):
[root@localhost redis]# cp -r bin/ ../redis-cluster/redis01
[root@localhost redis]# cp -r bin/ ../redis-cluster/redis02
[root@localhost redis]# cp -r bin/ ../redis-cluster/redis03
[root@localhost redis]# cp -r bin/ ../redis-cluster/redis04
[root@localhost redis]# cp -r bin/ ../redis-cluster/redis05
[root@localhost redis]# cp -r bin/ ../redis-cluster/redis06
After copying, edit each instance's configuration file:
1. Change the port (any free port works as long as it does not collide with another instance or service; here 7001-7006 are used).
2. Uncomment the cluster-enabled line.
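For reference, the relevant lines in each instance's redis.conf typically end up looking like this (shown for redis01/port 7001; the cluster-config-file and daemonize lines are extra assumptions beyond the two steps above — daemonize yes keeps the startup script from blocking on the first instance):

```conf
port 7001                            # step 1: a unique port per instance (7001-7006)
cluster-enabled yes                  # step 2: uncommented to run in cluster mode
cluster-config-file nodes-7001.conf  # per-instance cluster state file (assumption)
daemonize yes                        # run in the background (assumption)
```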
3. Copy the cluster-creation Ruby script into the redis-cluster directory:
[root@localhost src]# cp *.rb /usr/local/redis-cluster/
4. Start the 6 Redis instances (a helper script saves typing):
[root@localhost redis-cluster]# vi startall.sh
for dir in redis01 redis02 redis03 redis04 redis05 redis06
do
cd $dir
./redis-server redis.conf
cd ..
done
Save with :wq, make the script executable, and run it:
[root@localhost redis-cluster]# chmod +x startall.sh
[root@localhost redis-cluster]# ./startall.sh
[root@localhost redis-cluster]# ps aux | grep redis
The last command checks that all 6 Redis instances are running.
5. Create the cluster:
[root@localhost redis-cluster]# ./redis-trib.rb create --replicas 1 192.168.25.153:7001 192.168.25.153:7002 192.168.25.153:7003 192.168.25.153:7004 192.168.25.153:7005 192.168.25.153:7006
Use your own machine's IP address; each address is followed by a Redis port. --replicas 1 means one replica per master.
>>> Creating cluster
Connecting to node 192.168.25.153:7001: OK
Connecting to node 192.168.25.153:7002: OK
Connecting to node 192.168.25.153:7003: OK
Connecting to node 192.168.25.153:7004: OK
Connecting to node 192.168.25.153:7005: OK
Connecting to node 192.168.25.153:7006: OK
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
192.168.25.153:7001
192.168.25.153:7002
192.168.25.153:7003
Adding replica 192.168.25.153:7004 to 192.168.25.153:7001
Adding replica 192.168.25.153:7005 to 192.168.25.153:7002
Adding replica 192.168.25.153:7006 to 192.168.25.153:7003
M: 5a8523db7e12ca600dc82901ced06741b3010076 192.168.25.153:7001
   slots:0-5460 (5461 slots) master
M: bf6f0929044db485dea9b565bb51e0c917d20a53 192.168.25.153:7002
   slots:5461-10922 (5462 slots) master
M: c5e334dc4a53f655cb98fa3c3bdef8a808a693ca 192.168.25.153:7003
   slots:10923-16383 (5461 slots) master
S: 2a61b87b49e5b1c84092918fa2467dd70fec115f 192.168.25.153:7004
   replicates 5a8523db7e12ca600dc82901ced06741b3010076
S: 14848b8c813766387cfd77229bd2d1ffd6ac8d65 192.168.25.153:7005
   replicates bf6f0929044db485dea9b565bb51e0c917d20a53
S: 3192cbe437fe67bbde9062f59d5a77dabcd0d632 192.168.25.153:7006
   replicates c5e334dc4a53f655cb98fa3c3bdef8a808a693ca
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join.....
>>> Performing Cluster Check (using node 192.168.25.153:7001)
M: 5a8523db7e12ca600dc82901ced06741b3010076 192.168.25.153:7001
   slots:0-5460 (5461 slots) master
M: bf6f0929044db485dea9b565bb51e0c917d20a53 192.168.25.153:7002
   slots:5461-10922 (5462 slots) master
M: c5e334dc4a53f655cb98fa3c3bdef8a808a693ca 192.168.25.153:7003
   slots:10923-16383 (5461 slots) master
M: 2a61b87b49e5b1c84092918fa2467dd70fec115f 192.168.25.153:7004
   slots: (0 slots) master
   replicates 5a8523db7e12ca600dc82901ced06741b3010076
M: 14848b8c813766387cfd77229bd2d1ffd6ac8d65 192.168.25.153:7005
   slots: (0 slots) master
   replicates bf6f0929044db485dea9b565bb51e0c917d20a53
M: 3192cbe437fe67bbde9062f59d5a77dabcd0d632 192.168.25.153:7006
   slots: (0 slots) master
   replicates c5e334dc4a53f655cb98fa3c3bdef8a808a693ca
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
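The 5461/5462/5461 split in the output above comes from dividing the 16384 slots as evenly as possible among 3 masters. A simplified sketch of that allocation (redis-trib may hand the remainder slot to a different node than this version does):

```python
def split_slots(n_masters: int, total_slots: int = 16384):
    """Divide the slot space into n contiguous, near-equal inclusive ranges."""
    base, remainder = divmod(total_slots, n_masters)
    ranges, start = [], 0
    for i in range(n_masters):
        size = base + (1 if i < remainder else 0)  # first `remainder` nodes get one extra slot
        ranges.append((start, start + size - 1))
        start += size
    return ranges

print(split_slots(3))  # -> [(0, 5461), (5462, 10922), (10923, 16383)]
```

Whatever the exact boundaries, the ranges are contiguous and together cover all 16384 slots, which is exactly what the final "slots coverage" check verifies.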