Redis Cache Architecture Explained (Part 6) - Redis Master-Slave Architecture - Read-Write Separation with redis replication

Following on from the previous article, this section explains how to deploy a redis replication read-write separation architecture in a project.

4.5. Deploying the redis replication read-write separation architecture in a project

The previous installments laid the groundwork by covering redis replication principles and concepts, including master-slave replication and read-write separation. But how do we actually use them? How do we set up a master-slave, read-write separated environment?

With one master and one slave, write to the master node and read from the slave node; if the written data can be read back, the master-slave architecture has been set up successfully.

4.5.1. Enabling replication and deploying the slave node

To configure master-slave replication: a redis instance is a master by default, so only the slave needs to be configured. There are two ways to do this:

1. Edit the configuration file (permanent, but requires a service restart to take effect)

In redis.conf (per the redis installation and deployment described earlier, the configuration file is redis_6379), configure the master's ip:port to set up master-slave replication.

slaveof 192.168.92.120 6379
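
The config-file change is only picked up when the process restarts. A minimal restart sequence, assuming the service script from the installation article (add -a <password> to the shutdown command once requirepass is configured, as in section 4.5.3):

redis-cli shutdown
service redis start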

2. Command-line configuration (takes effect immediately without a restart, but is lost once the service restarts): use the slaveof command.

Set up the slave from the command line (run on the node that will become the slave):

127.0.0.1:6379> slaveof 192.168.92.120 6379
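
The runtime change can also be undone without a restart: the slaveof no one command detaches the node from its master and turns it back into a standalone master, keeping the data it has already replicated.

127.0.0.1:6379> slaveof no one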

4.5.2. Enforcing read-write separation

Read-write separation is implemented on top of the master-slave replication architecture.

A redis slave node is read-only by default, controlled by the slave-read-only option.

A redis slave node with read-only enabled rejects all write operations, which enforces a read-write separated architecture, as in the sketch below.
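
A minimal sketch of what this looks like: with the default setting below in the slave's redis.conf, a write attempted against the slave is rejected (the exact error text may vary slightly between redis versions):

slave-read-only yes

127.0.0.1:6379> set k5 v5
(error) READONLY You can't write against a read only slave.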

4.5.3. Cluster security authentication

Enable authentication on the master with requirepass. The password set here must match the masterauth configured on the slave below (this article uses futurecloud throughout):

requirepass futurecloud

In the slave's configuration file, configure the password used when connecting to the master with masterauth:

masterauth futurecloud

The slave's redis.conf then contains the slaveof and masterauth settings:

slaveof 192.168.92.120 6379

# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
masterauth futurecloud
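
A quick way to verify that authentication and replication are working, assuming the IPs and password above: ping the master with the password, then check on the slave that the replication link is up.

redis-cli -h 192.168.92.120 -a futurecloud ping
redis-cli -a futurecloud info replication | grep master_link_status
# expected on the slave: master_link_status:up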

4.5.4. Testing the read-write separation architecture

First start the master node, the redis instance on cache01.
Then start the slave node, the redis instance on cache02.

While debugging this setup, the redis slave node kept reporting that it could not connect to port 6379 on the master node.

When building a production cluster, don't forget to change one configuration item: bind.

bind 127.0.0.1 -> local development/debug mode; only 127.0.0.1, i.e. the local machine itself, can reach port 6379.

In each node's redis.conf, change the bind setting:

bind 127.0.0.1 -> bind the node's own IP address

Open port 6379 on every node:

iptables -A INPUT -p tcp --dport 6379 -j ACCEPT

redis-cli -h ipaddr
info replication
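
Typical fields to look for in the info replication output for this two-node setup (values shown are illustrative):

# on cache01 (master)
role:master
connected_slaves:1

# on cache02 (slave)
role:slave
master_host:192.168.92.120
master_port:6379
master_link_status:up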

Write on the master, read on the slave. As shown below, after cache02 is configured as the redis slave and its redis instance is started, the redis instance on cache01 acts as the master and automatically replicates data to cache02:

[root@cache02 redis-4.0.1]# redis-cli -a futurecloud shutdown
[root@cache02 redis-4.0.1]# service redis start
Starting Redis server...
[root@cache02 redis-4.0.1]# 7720:C 24 Apr 23:02:11.953 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
7720:C 24 Apr 23:02:11.953 # Redis version=4.0.1, bits=64, commit=00000000, modified=0, pid=7720, just started
7720:C 24 Apr 23:02:11.953 # Configuration loaded

[root@cache02 redis-4.0.1]# 
[root@cache02 redis-4.0.1]# ps -ef|grep redis
root       7721      1  0 23:02 ?        00:00:00 /usr/local/redis-4.0.1/bin/redis-server 0.0.0.0:6379
root       7728   2576  0 23:02 pts/0    00:00:00 grep --color=auto redis
[root@cache02 redis-4.0.1]# 
[root@cache02 redis-4.0.1]# 
[root@cache02 redis-4.0.1]# redis-cli -a futurecloud 
127.0.0.1:6379> get k1
"value1"
127.0.0.1:6379> get k2
"value2"
127.0.0.1:6379> get k3
"value3"
127.0.0.1:6379> get k4
"v4"
127.0.0.1:6379> 
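
For completeness, the keys read above would have been written on the master (cache01) beforehand; a sketch, assuming the same password:

[root@cache01 redis-4.0.1]# redis-cli -a futurecloud
127.0.0.1:6379> set k1 value1
OK
127.0.0.1:6379> set k2 value2
OK
127.0.0.1:6379> set k3 value3
OK
127.0.0.1:6379> set k4 v4
OK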

4.5.5. QPS benchmarking and horizontal scaling of the redis replication architecture

Benchmark the redis setup above to measure the performance and QPS of the redis replication deployment.

We use redis-benchmark, the benchmarking tool that ships with redis.

1. Benchmark the redis read-write separation architecture: single-instance write QPS plus single-instance read QPS.

redis-benchmark lives in the src directory of the redis source tree (redis-4.0.1/src for the version installed in this series):

./redis-benchmark -h 192.168.92.120

-c <clients>       Number of parallel connections (default 50)
-n <requests>      Total number of requests (default 100000)
-d <size>          Data size of SET/GET value in bytes (default 2)

Choose the parameters according to your own traffic at peak times; during a peak, the instantaneous maximum number of users typically reaches 100,000+.

-c 100000, -n 10000000, -d 50
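
Putting the options together, the full invocation against the master would look like this (a sketch; note that the sample output below was actually produced with the defaults of 50 clients and 100000 requests):

./redis-benchmark -h 192.168.92.120 -p 6379 -c 100000 -n 10000000 -d 50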

====== PING_INLINE ======
  100000 requests completed in 1.28 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.78% <= 1 milliseconds
99.93% <= 2 milliseconds
99.97% <= 3 milliseconds
100.00% <= 3 milliseconds
78308.54 requests per second

====== PING_BULK ======
  100000 requests completed in 1.30 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.87% <= 1 milliseconds
100.00% <= 1 milliseconds
76804.91 requests per second

====== SET ======
  100000 requests completed in 2.50 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

5.95% <= 1 milliseconds
99.63% <= 2 milliseconds
99.93% <= 3 milliseconds
99.99% <= 4 milliseconds
100.00% <= 4 milliseconds
40032.03 requests per second

====== GET ======
  100000 requests completed in 1.30 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.73% <= 1 milliseconds
100.00% <= 2 milliseconds
100.00% <= 2 milliseconds
76628.35 requests per second

====== INCR ======
  100000 requests completed in 1.90 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

80.92% <= 1 milliseconds
99.81% <= 2 milliseconds
99.95% <= 3 milliseconds
99.96% <= 4 milliseconds
99.97% <= 5 milliseconds
100.00% <= 6 milliseconds
52548.61 requests per second

====== LPUSH ======
  100000 requests completed in 2.58 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

3.76% <= 1 milliseconds
99.61% <= 2 milliseconds
99.93% <= 3 milliseconds
100.00% <= 3 milliseconds
38684.72 requests per second

====== RPUSH ======
  100000 requests completed in 2.47 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

6.87% <= 1 milliseconds
99.69% <= 2 milliseconds
99.87% <= 3 milliseconds
99.99% <= 4 milliseconds
100.00% <= 4 milliseconds
40469.45 requests per second

====== LPOP ======
  100000 requests completed in 2.26 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

28.39% <= 1 milliseconds
99.83% <= 2 milliseconds
100.00% <= 2 milliseconds
44306.60 requests per second

====== RPOP ======
  100000 requests completed in 2.18 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

36.08% <= 1 milliseconds
99.75% <= 2 milliseconds
100.00% <= 2 milliseconds
45871.56 requests per second

====== SADD ======
  100000 requests completed in 1.23 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.94% <= 1 milliseconds
100.00% <= 2 milliseconds
100.00% <= 2 milliseconds
81168.83 requests per second

====== SPOP ======
  100000 requests completed in 1.28 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.80% <= 1 milliseconds
99.96% <= 2 milliseconds
99.96% <= 3 milliseconds
99.97% <= 5 milliseconds
100.00% <= 5 milliseconds
78369.91 requests per second

====== LPUSH (needed to benchmark LRANGE) ======
  100000 requests completed in 2.47 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

15.29% <= 1 milliseconds
99.64% <= 2 milliseconds
99.94% <= 3 milliseconds
100.00% <= 3 milliseconds
40420.37 requests per second

====== LRANGE_100 (first 100 elements) ======
  100000 requests completed in 3.69 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

30.86% <= 1 milliseconds
96.99% <= 2 milliseconds
99.94% <= 3 milliseconds
99.99% <= 4 milliseconds
100.00% <= 4 milliseconds
27085.59 requests per second

====== LRANGE_300 (first 300 elements) ======
  100000 requests completed in 10.22 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

0.03% <= 1 milliseconds
5.90% <= 2 milliseconds
90.68% <= 3 milliseconds
95.46% <= 4 milliseconds
97.67% <= 5 milliseconds
99.12% <= 6 milliseconds
99.98% <= 7 milliseconds
100.00% <= 7 milliseconds
9784.74 requests per second

====== LRANGE_500 (first 450 elements) ======
  100000 requests completed in 14.71 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

0.00% <= 1 milliseconds
0.07% <= 2 milliseconds
1.59% <= 3 milliseconds
89.26% <= 4 milliseconds
97.90% <= 5 milliseconds
99.24% <= 6 milliseconds
99.73% <= 7 milliseconds
99.89% <= 8 milliseconds
99.96% <= 9 milliseconds
99.99% <= 10 milliseconds
100.00% <= 10 milliseconds
6799.48 requests per second

====== LRANGE_600 (first 600 elements) ======
  100000 requests completed in 18.56 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

0.00% <= 2 milliseconds
0.23% <= 3 milliseconds
1.75% <= 4 milliseconds
91.17% <= 5 milliseconds
98.16% <= 6 milliseconds
99.04% <= 7 milliseconds
99.83% <= 8 milliseconds
99.95% <= 9 milliseconds
99.98% <= 10 milliseconds
100.00% <= 10 milliseconds
5387.35 requests per second

====== MSET (10 keys) ======
  100000 requests completed in 4.02 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

0.01% <= 1 milliseconds
53.22% <= 2 milliseconds
99.12% <= 3 milliseconds
99.55% <= 4 milliseconds
99.70% <= 5 milliseconds
99.90% <= 6 milliseconds
99.95% <= 7 milliseconds
100.00% <= 8 milliseconds
24869.44 requests per second
On a dedicated cluster built from 4-core / 4 GB machines, a single redis node handles roughly 50,000 read QPS. With two redis slave nodes and all read requests spread across those two machines, the cluster as a whole sustains 100,000+ read QPS.
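
If read QPS needs to grow further, horizontal scaling simply means adding more slave nodes pointing at the same master. A sketch for a hypothetical third node cache03, reusing the settings from this article:

# redis.conf on cache03
slaveof 192.168.92.120 6379
masterauth futurecloud
slave-read-only yes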

4.6. How redis replication achieves high availability

What is 99.99% high availability?

The system architecture must guarantee 99.99% availability.
The availability formula is: time the system is available / total time (available time plus downtime).
If, over the 365 days of a year, the system can provide service 99.99% of the time, it counts as highly available.
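
For example, 99.99% availability over a full year allows roughly 365 × 24 × 60 × 0.0001 ≈ 52.6 minutes of total downtime.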

In the redis replication architecture, once the master node goes down, the system can no longer serve writes; with no new data coming in, nothing is replicated to the slave nodes, and the read side effectively stops serving useful data as well. A flood of requests then falls through to the database, causing a cache avalanche, and in severe cases the whole system is paralyzed.

Redis replication is made highly available with a sentinel cluster; the next article covers this in detail.
