Introduction
This is a tightly focused, production-oriented guide to a MySQL 5.7 one-master/multi-slave setup combined with Spring Boot read/write splitting. At the end, the author also provides a complete project, with full code and a demo, that uses Spring Boot's AOP features to implement read/write splitting across the whole application.
The technologies covered in this article:
- MySQL 5.7.30+
- Spring Boot
- AOP
- HAProxy
- Keepalived
1. Goals
MySQL layer: one master with multiple slaves
Install three MySQL servers, distributed as follows:
MySQL master: 192.168.2.101
MySQL slave 1: 192.168.2.102
MySQL slave 2: 192.168.2.103
HAProxy layer: active/standby pair
Reverse-proxies the two MySQL slaves on port 22306 for outside callers
ha master: 192.168.2.102
ha backup: 192.168.2.103
Keepalived layer: active/standby pair, exposing a virtual IP, 192.168.2.201, for the application layer to use
The application layer has no idea how many MySQL instances hang behind the virtual IP
keepalived master: 192.168.2.102
keepalived backup: 192.168.2.103
VIP exposed by keepalived: 192.168.2.201
Design a Spring Boot application framework with a built-in AOP aspect that routes reads and writes based on service-layer method names
Design a Spring Boot framework that uses AOP + auto-configuration so that, based on the service method name, all get/select methods in the application automatically go to a slave, while add/update/del/insert/delete methods go to the master.
The framework provides at least two Druid-based JDBC data sources, one named master and one named slaver. Read/write splitting at the database layer must be completely transparent to the programmer.
2. Installing the three MySQL servers
The process for installing MySQL 5.7 on CentOS 7.x (I used CentOS 7.4):
Step 1: install the MySQL 5.7 yum repository:
wget http://dev.mysql.com/get/mysql57-community-release-el7-8.noarch.rpm
yum localinstall mysql57-community-release-el7-8.noarch.rpm
Step 2: verify the yum repo is correct:
yum repolist enabled | grep "mysql.*-community.*"
Step 3: install MySQL:
yum install mysql-community-server
Step 4: enable the service on boot and start it
systemctl enable mysqld
systemctl start mysqld
Install all three MySQL servers this way and make sure each starts normally.
Step 5: enable remote root login
grep 'temporary password' /var/log/mysqld.log
You will see a random password after "is generated for root@localhost:". Starting with MySQL 5.7, a random initial root password is generated by default at install time; copy it.
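As a quick illustration, the password token can be cut out of that log line with sed (the sample line and the password below are made up; the real line comes from /var/log/mysqld.log):

```shell
# A sample mysqld.log line (the password Abc123!xyz is hypothetical)
line='2020-04-11T12:00:00.000000Z 1 [Note] A temporary password is generated for root@localhost: Abc123!xyz'
# Everything after "root@localhost: " is the initial password
pass=$(echo "$line" | sed 's/.*root@localhost: //')
echo "$pass"
```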
Then use it to log in to the freshly installed MySQL:
mysql -uroot -p
ALTER USER 'root'@'localhost' IDENTIFIED BY '111111';
Note: MySQL 5.7 installs the password-validation plugin (validate_password) by default. Its default policy requires passwords to contain upper- and lower-case letters, digits, and special characters, and to be at least 8 characters long; otherwise you get ERROR 1819 (HY000): Your password does not satisfy the current policy requirements.
Since this is a practice environment, we can temporarily allow a simple root password for MySQL 5.7 by dialing the default password policy all the way down:
set global validate_password_policy=0;
set global validate_password_length=0;
Next, continue by adding a remote user:
ALTER USER 'root'@'localhost' IDENTIFIED BY '111111';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '111111' WITH GRANT OPTION;
FLUSH PRIVILEGES;
Set up all three MySQL servers as above, then verify with a MySQL client that you can log in to each one.
Step 6: set up time synchronization
We make the master, 192.168.2.101, the time server, and make 192.168.2.102 and 192.168.2.103 child nodes that sync their time from 192.168.2.101.
On 192.168.2.101, do the following (for this exercise we sync the time server itself against Alibaba's public NTP server, ntp1.aliyun.com):
yum -y install ntp
vim /etc/ntp.conf
# Comment out every existing "server *.*.*" line and add one reachable NTP server; I use Alibaba Cloud's NTP server
server ntp1.aliyun.com
# On the other nodes, point ntp at the master's address instead
server 192.168.2.101
# After installation, enable ntpd on boot and start it
systemctl enable ntpd
systemctl start ntpd
# check the status
systemctl status ntpd
Note: after configuring each time-sync child node, remember to trigger an initial synchronization against the time server.
Step 7: configure my.cnf on the three MySQL servers
The file is /etc/my.cnf. Make its contents identical on every MySQL server:
# For advice on how to change settings please see
# http://dev.mysql.com/doc/refman/5.7/en/server-configuration-defaults.html
[mysqld]
#
# Remove leading # and set to the amount of RAM for the most important data
# cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
# innodb_buffer_pool_size = 128M
#
# Remove leading # to turn on a very important data integrity option: logging
# changes to the binary log between backups.
# log_bin
#
# Remove leading # to set options mainly useful for reporting servers.
# The server defaults are faster for transactions and fast SELECTs.
# Adjust sizes as needed, experiment to find the optimal values.
# join_buffer_size = 128M
# sort_buffer_size = 2M
# read_rnd_buffer_size = 2M
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
port=3306
character-set-server=utf8
innodb_buffer_pool_size=2G
max_connections = 800
max_allowed_packet = 128M
max_heap_table_size = 256M
tmp_table_size = 256M
innodb_buffer_pool_chunk_size = 256M
innodb_buffer_pool_instances = 8
innodb_thread_concurrency = 4
# (use 1 for core transactional systems; 1 is the default; otherwise 2 or 0)
innodb_flush_log_at_trx_commit=2
Key parameters explained:
- character-set-server=utf8: sets the server character set to UTF-8
- innodb_buffer_pool_size=2G: the buffer pool, the core InnoDB engine parameter. On a dedicated server it is commonly around 70% of RAM; it plays the same role as Oracle's shared_pool_size, and its size directly affects your query performance. Since this is a demo environment it doesn't need to be large; enough is enough.
- max_connections = 800: maximum number of database connections
- max_allowed_packet = 128M: maximum allowed size of a single returned result-set packet
- max_heap_table_size = 256M: first a word about tmp_table_size: it caps internal in-memory temporary tables and is allocated per thread. (The effective limit is the smaller of tmp_table_size and max_heap_table_size.) If an in-memory temporary table exceeds the limit, MySQL automatically converts it to an on-disk MyISAM table stored under tmpdir. When tuning queries, avoid temporary tables where you can; if you can't, try to keep them in memory. If you run many GROUP BY queries and have plenty of RAM, increase tmp_table_size (and max_heap_table_size). These variables do not apply to user-created MEMORY tables.
- tmp_table_size = 256M
- innodb_buffer_pool_chunk_size = 256M: the buffer pool resize chunk (the compiled-in default is 128M)
- innodb_buffer_pool_instances = 8: number of buffer pool instances, typically the CPU count
- innodb_thread_concurrency = 4: should not exceed the number of CPU cores
- innodb_flush_log_at_trx_commit=2 # (use 1 for core transactional systems; 1 is the default; otherwise 2 or 0)
0: the log buffer is written to the log file once per second, and the flush to disk happens at the same time; a transaction commit does not itself trigger a write to disk.
1: at every transaction commit MySQL writes the log buffer to the log file and flushes it to disk. This is the system default (so every piece of redo log is preserved).
2: at every transaction commit MySQL writes the log buffer to the log file, but the flush to disk does not happen at the same time; instead MySQL performs one flush per second. This mode is fast, and safer than 0: only an OS crash or a power failure can lose at most the last second's transactions.
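One sizing constraint worth knowing: MySQL rounds innodb_buffer_pool_size up to a multiple of innodb_buffer_pool_chunk_size × innodb_buffer_pool_instances. A quick sanity check with the values from the my.cnf above:

```shell
# Values from the my.cnf above, in MB
pool=2048       # innodb_buffer_pool_size = 2G
chunk=256       # innodb_buffer_pool_chunk_size = 256M
instances=8     # innodb_buffer_pool_instances = 8
unit=$((chunk * instances))
# remainder 0 means 2G is an exact multiple of 256M * 8, so no rounding occurs
echo $((pool % unit))
```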
With all three MySQL servers configured, remember to reboot each Linux host. Now on to building the one-master/multi-slave MySQL setup.
MySQL master-slave setup
Master configuration
Edit my.cnf and append the following lines at the end.
For this demo environment we replicate the ecom schema.
# replication settings
server-id=1
log-bin=/var/lib/mysql/mysql-bin
binlog-do-db=ecom
validate_password=off
Notes:
- server-id must be a number; a lesson learned in blood and tears;
- also note that, for convenience in this exercise, I have dumbed the password policy all the way down. Be careful in a real production environment.
Additional configuration on slave 1, 192.168.2.102
Here we set read_only=1 so that no writes are allowed on the slave (note that read_only does not restrain users with the SUPER privilege).
server-id=2
log-bin=/var/lib/mysql/mysql-bin
relay-log-index=slave-relay-bin.index
relay-log=slave-relay-bin
replicate-do-db=ecom
log-slave-updates
slave-skip-errors=all
read_only=1
validate_password=off
Additional configuration on slave 2, 192.168.2.103
server-id=3
log-bin=/var/lib/mysql/mysql-bin
relay-log-index=slave-relay-bin.index
relay-log=slave-relay-bin
replicate-do-db=ecom
log-slave-updates
slave-skip-errors=all
read_only=1
validate_password=off
Create the MySQL user used for replication
After configuring both slaves, remember to restart MySQL on them. Next, on the master, create a MySQL user that will be used for master-slave synchronization:
set global validate_password_policy=0;
set global validate_password_length=0;
create user 'repl' identified by '111111';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.2.%' identified by '111111';
flush privileges;
Seed the slaves from the master's data so they can "catch up" via the binlog
On the master's CentOS 7 host, run the following to export the entire ecom schema:
mysqldump -u ecom -p111111 -B -F -R -x ecom|gzip > /opt/mysql_backup/ecom_$(date +%Y%m%d_%H%M%S).sql.gz
Then copy it to both slaves, 192.168.2.102 and 192.168.2.103.
The steps on each slave are the same (the file name is whatever your dump produced; here it was ymkmysql_20200411_215500.sql.gz):
gzip -d ymkmysql_20200411_215500.sql.gz
mysql -u root -p < ymkmysql_20200411_215500.sql
With both slaves caught up with the master, we can start real database-level replication.
First, on the master, run this through a SQL client:
show master status;
Note down the File and Position values, then do the same thing on both 192.168.2.102 and 192.168.2.103:
stop slave;
reset slave;
change master to master_host='192.168.2.101',master_port=3306,master_user='repl',master_password='111111',master_log_file='mysql-bin.000013',master_log_pos=154;
This points the slave's binlog replication position at the same coordinates as the master (substitute the file and position you just recorded). Once that's done:
start slave;
show slave status;
With a small amount of data, within a few minutes you should see both slave_io_running and slave_sql_running report Yes, which means replication is in sync.
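A quick way to check those two flags without reading the whole output is to count them, sketched here against a canned sample; on a real slave you would pipe the output of mysql -e 'show slave status\G' into the grep:

```shell
# Canned sample of the two relevant lines from `show slave status` (hypothetical)
status='Slave_IO_Running: Yes
Slave_SQL_Running: Yes'
# Replication is healthy only when both threads say Yes, i.e. the count is 2
healthy=$(echo "$status" | grep -c ': Yes')
echo "$healthy"
```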
With replication in place, let's run a test.
Test 1: a write on the master can be read from both slaves
Insert a row on the master (order_id is auto-increment).
Now connect to slave 1, 192.168.2.102.
Then connect to slave 2, 192.168.2.103.
That completes one-master/two-slave replication. Next, the HAProxy setup.
Two-node hot-standby HAProxy setup
Installing and configuring HAProxy
For this exercise we install HAProxy on the same hosts as the two MySQL slaves. In real production, remember to give HAProxy two additional dedicated machines.
yum -y install haproxy
On the MySQL slaves, create a privilege-less MySQL user that HAProxy will use for heartbeat checks against MySQL:
create user 'haproxy' identified by '';
Earlier, for convenience, we disabled each MySQL's password policy. In production that won't be possible; in other words, identified by must carry an actual password there or the SQL above will not pass. No matter: create the user with a password, then use set password (or a MySQL GUI client) to change it to empty. Since the user has no privileges at all, there is nothing to worry about.
By default HAProxy's log is not written to disk. So that we can monitor and inspect HAProxy's logs later, we hook HAProxy's logging into CentOS's rsyslog.
cd /var/log
mkdir haproxy
cd haproxy
touch haproxy.log
chmod a+w haproxy.log
vim /etc/rsyslog.conf
Uncomment these two lines in the file:
$ModLoad imudp
$UDPServerRun 514
and add the following lines:
local7.* /var/log/boot.log
# Save haproxy log
local2.* /var/log/haproxy/haproxy.log
That's not all; next, edit /etc/sysconfig/rsyslog:
vim /etc/sysconfig/rsyslog
Change SYSLOGD_OPTIONS="" to SYSLOGD_OPTIONS="-r -m 2 -c 2"
After the change, restart rsyslog:
systemctl restart rsyslog
Now edit the /etc/haproxy/haproxy.cfg file.
Configuration on haproxy1, 192.168.2.102
Replace the whole file with the following:
global
# to have these messages end up in /var/log/haproxy.log you will
# need to:
#
# 1) configure syslog to accept network log events. This is done
# by adding the '-r' option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
#
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
#
# local2.* /var/log/haproxy.log
#
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 5m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 2000
#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
## stats page listening on port 1080, with authentication enabled
listen stats
mode http
bind 0.0.0.0:1080
stats enable
stats hide-version
stats uri /dbs
stats realm Haproxy\ Statistics
stats auth admin:admin
stats admin if TRUE
listen proxy-mysql 0.0.0.0:23306
mode tcp
balance roundrobin
option tcplog
option mysql-check user haproxy # the privilege-less, password-less user haproxy created in MySQL
server ymkMySqlSlaver1 192.168.2.102:3306 check weight 1 maxconn 300
server ymkMySqlSlaver2 192.168.2.103:3306 check weight 1 maxconn 300
option tcpka
Here we did the following:
- enabled HAProxy's built-in stats page on port 1080, at http://192.168.2.102:1080/dbs
- exposed port 23306 as the proxied MySQL port for applications to use, e.g. a Spring Boot JDBC pool
- used HAProxy's built-in mysql-check, with the monitoring user haproxy created in MySQL, to probe MySQL availability
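The balance roundrobin line means successive connections alternate between the two slave backends. A toy sketch of that dispatch, using the server list from the config above:

```shell
# The two backends from the listen proxy-mysql section
s0="192.168.2.102:3306"
s1="192.168.2.103:3306"
# Round-robin: connection i goes to backend i mod 2
for i in 0 1 2 3; do
  if [ $((i % 2)) -eq 0 ]; then pick="$s0"; else pick="$s1"; fi
  echo "connection $i -> $pick"
done
```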
Apply the same configuration on both HAProxy nodes, then restart rsyslog and haproxy, in this order, on both 192.168.2.102 and 192.168.2.103:
systemctl restart rsyslog
systemctl restart haproxy
Now we can monitor HAProxy at http://192.168.2.102:1080/dbs or http://192.168.2.103:1080/dbs; user name and password are both admin.
And we can reach both MySQL slaves through HAProxy at 192.168.2.102:23306 or 192.168.2.103:23306. With the application connected to the two slaves through HAProxy, if one of the MySQL instances dies the application notices nothing: within milliseconds it is moved over to the surviving MySQL slave. Let's try it.
We connect to the two MySQL slaves through 192.168.2.102:23306.
The query succeeds. Now let's kill one of the MySQL instances... hmm... 192.168.2.102, you'll do. Yes, you!
Done; it's dead.
Back at the application connection, run the query once more.
Look at that: the query still executes, which means HAProxy has automatically moved the application connection over to 192.168.2.103.
One more test: let's kill an HAProxy; kill the one on 192.168.2.103.
The application connection still works, which means the HAProxy on 192.168.2.102 has taken over.
Next comes the keepalived deployment. The exciting part is almost here!
Deploying the Keepalived cluster
Installing Keepalived
Install keepalived on each slave host. This being a test environment, keepalived shares the VMs with HAProxy. In production, remember to deploy keepalived on dedicated servers, at least two of them.
yum -y install keepalived
It could not be simpler.
Configuring keepalived
Careful now! The two keepalived configurations are not identical: there is a primary/backup relationship here. Unlike HAProxy's round-robin, keepalived has a notion of priority.
Edit the /etc/keepalived/keepalived.conf file.
The keepalived on 192.168.2.102 (lb01)
global_defs {
router_id LB01
}
vrrp_script chk_haproxy
{
script "/etc/keepalived/scripts/haproxy_check.sh"
interval 2
timeout 2
fall 3
}
vrrp_instance haproxy {
state MASTER
#interface eth0
interface enp0s3
virtual_router_id 1
priority 100
authentication {
auth_type PASS
auth_pass password
}
unicast_peer {
192.168.2.102
192.168.2.103
}
virtual_ipaddress {
192.168.2.201
}
track_script {
chk_haproxy
}
notify_master "/etc/keepalived/scripts/haproxy_master.sh"
}
Here we did the following:
- set 192.168.2.102 to the MASTER state
- named it LB01
- gave it priority 100; the other keepalived must of course have a lower priority. Always remember: the backup's priority must be lower than the master's
- defined a virtual IP, 192.168.2.201; from now on, every application reaches the MySQL slave cluster through this IP
- bound the virtual IP to "interface enp0s3", the system name of the gigabit network interface on 192.168.2.102
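The election rule behind this is simply "the highest priority holds the VIP"; a toy illustration using the two priorities from these configs:

```shell
# Priorities from the two keepalived configs: LB01=100 (MASTER), LB02=50 (BACKUP)
lb01=100
lb02=50
# VRRP gives the VIP to the node advertising the higher priority
if [ "$lb01" -gt "$lb02" ]; then holder=LB01; else holder=LB02; fi
echo "VIP 192.168.2.201 is held by $holder"
# if LB01's check script fails, LB01 stops advertising and LB02 takes the VIP over
```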
The keepalived on 192.168.2.103 (lb02)
Two things here differ from lb01's configuration:
- router_id
- priority
global_defs {
router_id LB02
}
vrrp_script chk_haproxy
{
script "/etc/keepalived/scripts/haproxy_check.sh"
interval 2
timeout 2
fall 3
}
vrrp_instance haproxy {
state BACKUP
#interface eth0
interface enp0s3
virtual_router_id 1
priority 50
authentication {
auth_type PASS
auth_pass password
}
unicast_peer {
192.168.2.102
192.168.2.103
}
virtual_ipaddress {
192.168.2.201
}
track_script {
chk_haproxy
}
notify_master "/etc/keepalived/scripts/haproxy_master.sh"
}
Here are the contents of haproxy_check.sh and haproxy_master.sh.
haproxy_check.sh; place it on every keepalived machine
#!/bin/bash
LOGFILE="/var/log/keepalived-haproxy-state.log"
date >>$LOGFILE
if [ `ps -C haproxy --no-header |wc -l` -eq 0 ];then
echo "fail: check_haproxy status" >>$LOGFILE
exit 1
else
echo "success: check_haproxy status" >>$LOGFILE
exit 0
fi
haproxy_master.sh; place it on every keepalived machine
#!/bin/bash
LOGFILE="/var/log/keepalived-haproxy-state.log"
echo "Being Master ..." >> $LOGFILE
Testing keepalived
First start keepalived on both machines.
Test 1: connect to MySQL through the VIP, 192.168.2.201:23306
Success!
Test 2: kill the MySQL process on 192.168.2.102, then query through the VIP
Failover succeeded!
Test 3: kill the HAProxy process on 192.168.2.102, then query through the VIP
Success, no sweat at all! Come on, again, I want more!
Test 4: kill the HAProxy on 192.168.2.103, write to the MySQL master, and read on both slaves,
to see whether keepalived really does give HAProxy hot standby.
Conclusion
The keepalived + HAProxy + MySQL one-master/multi-slave hot-standby setup is a success!
Now let's tackle the other problem: in a Spring Boot application, instead of expecting every developer to remember "write to master, read from slave" and implement it by hand, we intercept at the overall framework level so that "write to master, read from slave" happens automatically based on service-layer method names.
A code-level solution that automatically intercepts and enforces write-master/read-slave
Framework overview
Additions needed in pom.xml
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-jdbc</artifactId>
<exclusions>
<exclusion>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-logging</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
</dependency>
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>druid</artifactId>
</dependency>
application.properties
Here we define two data sources, one named master and one named slaver. The slaver points neither at 192.168.2.102 nor at 192.168.2.103, but at the virtual IP exposed by keepalived: 192.168.2.201:23306.
logging.config=classpath:log4j2.xml
#master db
mysql.datasource.master.type=com.alibaba.druid.pool.DruidDataSource
mysql.datasource.master.driverClassName=com.mysql.jdbc.Driver
mysql.datasource.master.url=jdbc:mysql://192.168.2.101:3306/ecom?useUnicode=true&characterEncoding=utf-8&useSSL=false
mysql.datasource.master.username=ecom
mysql.datasource.master.password=111111
mysql.datasource.master.initialSize=50
mysql.datasource.master.minIdle=50
mysql.datasource.master.maxActive=100
mysql.datasource.master.maxWait=60000
mysql.datasource.master.timeBetweenEvictionRunsMillis=60000
mysql.datasource.master.minEvictableIdleTimeMillis=120000
mysql.datasource.master.validationQuery=SELECT 'x'
mysql.datasource.master.testWhileIdle=true
mysql.datasource.master.testOnBorrow=false
mysql.datasource.master.testOnReturn=false
mysql.datasource.master.poolPreparedStatements=true
mysql.datasource.master.maxPoolPreparedStatementPerConnectionSize=20
#slaver db
mysql.datasource.slaver1.type=com.alibaba.druid.pool.DruidDataSource
mysql.datasource.slaver1.driverClassName=com.mysql.jdbc.Driver
mysql.datasource.slaver1.url=jdbc:mysql://192.168.2.201:23306/ecom?useUnicode=true&characterEncoding=utf-8&useSSL=false
mysql.datasource.slaver1.username=ecom
mysql.datasource.slaver1.password=111111
mysql.datasource.slaver1.initialSize=50
mysql.datasource.slaver1.minIdle=50
mysql.datasource.slaver1.maxActive=100
mysql.datasource.slaver1.maxWait=60000
mysql.datasource.slaver1.timeBetweenEvictionRunsMillis=60000
mysql.datasource.slaver1.minEvictableIdleTimeMillis=120000
mysql.datasource.slaver1.validationQuery=SELECT 'x'
mysql.datasource.slaver1.testWhileIdle=true
mysql.datasource.slaver1.testOnBorrow=false
mysql.datasource.slaver1.testOnReturn=false
mysql.datasource.slaver1.poolPreparedStatements=true
mysql.datasource.slaver1.maxPoolPreparedStatementPerConnectionSize=20
The startup class, MultiDSDemo.java
package org.sky.retail.demo;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration;
import org.springframework.boot.web.servlet.ServletComponentScan;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.transaction.annotation.EnableTransactionManagement;
@ServletComponentScan
@EnableAutoConfiguration(exclude = { DataSourceAutoConfiguration.class })
@ComponentScan(basePackages = { "org.sky.retail.demo" })
@EnableTransactionManagement
public class MultiDSDemo {
public static void main(String[] args) {
SpringApplication.run(MultiDSDemo.class, args);
}
}
MultiDSConfig.java, for auto-wiring the data sources
package org.sky.retail.demo.config;
import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;
import org.sky.retail.demo.util.db.DBTypeEnum;
import org.sky.retail.demo.util.db.MyRoutingDataSource;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import com.alibaba.druid.pool.DruidDataSource;
@Configuration
public class MultiDSConfig {
@Bean
@ConfigurationProperties(prefix = "mysql.datasource.master")
public DataSource masterDataSource() {
return new DruidDataSource();
}
@Bean
@ConfigurationProperties(prefix = "mysql.datasource.slaver1")
public DataSource slave1DataSource() {
return new DruidDataSource();
}
@Bean
public DataSource myRoutingDataSource(@Qualifier("masterDataSource") DataSource masterDataSource,
@Qualifier("slave1DataSource") DataSource slave1DataSource) {
Map<Object, Object> targetDataSources = new HashMap<>();
targetDataSources.put(DBTypeEnum.MASTER, masterDataSource);
targetDataSources.put(DBTypeEnum.SLAVE1, slave1DataSource);
// targetDataSources.put(DBTypeEnum.SLAVE2, slave2DataSource);
MyRoutingDataSource myRoutingDataSource = new MyRoutingDataSource();
myRoutingDataSource.setDefaultTargetDataSource(masterDataSource);
myRoutingDataSource.setTargetDataSources(targetDataSources);
return myRoutingDataSource;
}
@Bean
public JdbcTemplate jdbcTemplate(DataSource myRoutingDataSource) {
return new JdbcTemplate(myRoutingDataSource);
}
@Bean
public DataSourceTransactionManager txManager(DataSource myRoutingDataSource) {
return new DataSourceTransactionManager(myRoutingDataSource);
}
}
DataSourceAop.java, the aspect that automatically routes service-layer methods to implement write-master/read-slave
package org.sky.retail.demo.aop;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;
import org.sky.retail.demo.util.db.DBContextHolder;
import org.springframework.stereotype.Component;
import org.springframework.web.context.request.RequestContextHolder;
import org.springframework.web.context.request.ServletRequestAttributes;
import javax.servlet.http.HttpServletRequest;
@Aspect
@Component
public class DataSourceAop {
protected Logger logger = LogManager.getLogger(this.getClass());
/**
* Read pointcut: select* and get* methods under org.sky.retail.demo.service,
* unless the method is annotated with @Master
*/
@Pointcut("!@annotation(org.sky.retail.demo.util.db.Master) "
+ "&& (execution(* org.sky.retail.demo.service..*.select*(..)) "
+ "|| execution(* org.sky.retail.demo.service..*.get*(..)))")
public void readPointcut() {
}
@Pointcut("@annotation(org.sky.retail.demo.util.db.Master) "
+ "|| execution(* org.sky.retail.demo.service..*.insert*(..)) "
+ "|| execution(* org.sky.retail.demo.service..*.add*(..)) "
+ "|| execution(* org.sky.retail.demo.service..*.update*(..)) "
+ "|| execution(* org.sky.retail.demo.service..*.edit*(..)) "
+ "|| execution(* org.sky.retail.demo.service..*.delete*(..)) "
+ "|| execution(* org.sky.retail.demo.service..*.remove*(..))")
public void writePointcut() {
}
@Before("readPointcut()")
public void read(JoinPoint joinPoint) {
// route this thread's reads to the slave data source
DBContextHolder.slave();
}
@Before("writePointcut()")
public void write() {
DBContextHolder.master();
}
}
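The naming convention those two pointcuts encode can be sketched as a tiny stand-alone router (the method names here are just examples):

```shell
# Route a service method name to master or slave by prefix, mirroring
# readPointcut (select*/get* -> slave) and writePointcut (the rest -> master)
route() {
  case "$1" in
    select*|get*) echo slave ;;
    insert*|add*|update*|edit*|delete*|remove*) echo master ;;
    *) echo master ;; # anything else falls back to the default data source, master
  esac
}
route getOrderByPK
route insertOrder
```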
DBTypeEnum.java
package org.sky.retail.demo.util.db;
public enum DBTypeEnum {
MASTER, SLAVE1;
}
Master.java
package org.sky.retail.demo.util.db;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
// Runtime retention is required so the @annotation() pointcuts in DataSourceAop can see it
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Master {
}
MyRoutingDataSource.java, which implements routing across the multiple data sources
package org.sky.retail.demo.util.db;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;
/**
* Created by mk on 2018/4/3 - to replace ODY's jdbc package.
*/
public class MyRoutingDataSource extends AbstractRoutingDataSource {
@Override
protected Object determineCurrentLookupKey() {
return DBContextHolder.get();
}
}
DBContextHolder.java
package org.sky.retail.demo.util.db;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import java.util.concurrent.atomic.AtomicInteger;
public class DBContextHolder {
protected static Logger logger = LogManager.getLogger(DBContextHolder.class);
private static final ThreadLocal<DBTypeEnum> contextHolder = new ThreadLocal<>();
private static final AtomicInteger counter = new AtomicInteger(0);
public static void set(DBTypeEnum dbType) {
contextHolder.set(dbType);
}
public static DBTypeEnum get() {
return contextHolder.get();
}
public static void master() {
set(DBTypeEnum.MASTER);
logger.info(">>>>>>switched to master");
}
public static void slave() {
set(DBTypeEnum.SLAVE1);
logger.info(">>>>>>switched to slave");
// Round-robin to spread reads across slaves: if haproxy cannot be used for
// failover (say, on Azure PaaS), use an atomic counter modulo the slave
// count to pick a slave instead
// int index = counter.getAndIncrement() % 2;
// logger.info("counter.getAndIncrement() % 2======" + index);
// if (counter.get() > 9999) {
// counter.set(-1);
// }
// if (index == 0) {
// set(DBTypeEnum.SLAVE1);
// logger.info(">>>>>>switched to slave1");
// } else {
// set(DBTypeEnum.SLAVE1);// todo SLAVE2
// logger.info(">>>>>>switched to slave2");
// }
}
}
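The commented-out counter logic boils down to "counter mod slave-count picks the slave". A stand-alone sketch of that rotation:

```shell
# Mirrors the commented-out code: an incrementing counter, taken mod 2,
# alternates reads between slave1 and slave2
counter=0
for _ in 1 2 3 4; do
  echo "slave$(( (counter % 2) + 1 ))"
  counter=$((counter + 1))
done
```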
The service layer: OrderService.java
package org.sky.retail.demo.service;
import org.sky.retail.demo.vo.Order;
public interface OrderService {
public void insertOrder(Order order) throws Exception;
public Order getOrderByPK(int orderId) throws Exception;
}
For testing: OrderController.java
package org.sky.retail.demo.controller;
import java.util.HashMap;
import java.util.Map;
import javax.annotation.Resource;
import org.sky.platform.retail.controller.BaseController;
import org.sky.retail.demo.service.OrderService;
import org.sky.retail.demo.vo.Order;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;
import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.JSONObject;
import io.swagger.annotations.ApiImplicitParam;
import io.swagger.annotations.ApiOperation;
import io.swagger.annotations.ApiResponses;
import io.swagger.annotations.ApiResponse;
@RestController
@RequestMapping("/demo/order")
public class OrderController extends BaseController {
@Resource
private OrderService orderService;
@ApiOperation(value = "Create an order", notes = "Pass in an order object to add one order")
@ApiResponses(value = { @ApiResponse(code = 200, message = "order created successfully"),
@ApiResponse(code = 403, message = "failed to create order: cannot create an empty order object"),
@ApiResponse(code = 417, message = "failed to create order because of a system error") })
@RequestMapping(value = "/addOrder", method = RequestMethod.POST)
public ResponseEntity<String> addOrder(@RequestBody String params) throws Exception {
ResponseEntity<String> response = null;
String returnResultStr;
HttpHeaders headers = new HttpHeaders();
headers.setContentType(MediaType.APPLICATION_JSON_UTF8);
Map<String, Object> result = new HashMap<>();
try {
JSONObject requestJsonObj = JSON.parseObject(params);
if (requestJsonObj != null) {
Order order = getOrderFromJson(requestJsonObj);
logger.info(">>>>>>addOrder");
orderService.insertOrder(order);
result.put("code", HttpStatus.OK.value());
result.put("message", "insert a new order successfully");
result.put("orderId", order.getOrderId());
result.put("goodsId", order.getGoodsId());
result.put("amount", order.getAmount());
returnResultStr = JSON.toJSONString(result);
response = new ResponseEntity<>(returnResultStr, headers, HttpStatus.OK);
} else {
result.put("code", HttpStatus.FORBIDDEN.value());
result.put("message", "cannot add an empty order");
}
} catch (Exception e) {
logger.error("addOrder error: " + e.getMessage(), e);
result.put("message", "add order error: " + e.getMessage());
returnResultStr = JSON.toJSONString(result);
response = new ResponseEntity<>(returnResultStr, headers, HttpStatus.EXPECTATION_FAILED);
}
return response;
}
@ApiOperation(value = "Get an order by its ID", notes = "Pass in an order ID to fetch the order")
@ApiImplicitParam(name = "orderId", value = "order ID", paramType = "query", defaultValue = "", required = true)
@RequestMapping(value = "/getOrderById", method = RequestMethod.GET)
public Order getOrderById(int orderId) throws Exception {
Order order = new Order();
try {
order = orderService.getOrderByPK(orderId);
} catch (Exception e) {
logger.error("getOrderById orderId->" + orderId + " error: " + e.getMessage(), e);
}
return order;
}
private Order getOrderFromJson(JSONObject requestObj) {
// int orderId = requestObj.getInteger("orderId");
int goodsId = requestObj.getInteger("goodsId");
int amount = requestObj.getInteger("amount");
Order order = new Order();
// order.setOrderId(orderId);
order.setGoodsId(goodsId);
order.setAmount(amount);
return order;
}
}
Testing
Test 1: writes go to master, reads go to slave
Read test
To make the test visible, I deliberately put two log statements in DBContextHolder.
Test 2: kill the MySQL on 192.168.2.102 and read
No pressure at all: millisecond-level failover!
Test 3: kill the keepalived on 192.168.2.103 and read; this one's a bit brutal
Brutal or not, kill it we must. Nothing fazes a programmer: "Java from beginner to deleting the database and fleeing, Python from zero to the asylum!"
No pressure at all. See that?
Test 4: that's enough testing; go use JMeter for the rest. That's a wrap!
A wrap? As if. Here's a production-grade HAProxy + keepalived deployment diagram.
Always remember:
- in real production, deploy HAProxy separately from MySQL;
- HAProxy and keepalived can share a VM.
In front of all this, on Tencent or Alibaba Cloud use LVS for the VIP address translation; on Azure, use an ILB to direct the VIP to the application layer.