I have already written an article on implementing Dubbo distributed transaction management with Spring Boot + Nacos + Seata, so why write this one? Because many companies still run Zookeeper at scale as the registry and configuration center for Dubbo and have not yet fully migrated to Nacos. Seata also supports Zookeeper as its registry and configuration center, but there is no complete official tutorial for this setup, so this article is mainly meant to help Zookeeper users adopt Seata easily as well.
1. Introduction
This article shows how to integrate Spring Boot 2.1.5 + Dubbo 2.7.3 + MyBatis 3.4.2 + Zookeeper 3.4.14 + Seata 0.9.0 to implement Dubbo distributed transaction management, using Zookeeper as the registry and configuration center for both Dubbo and Seata, and using a MySQL database with MyBatis for data access.
If you are not yet familiar with Spring Boot, Dubbo, Zookeeper, Seata, or MyBatis, here are their official websites:

- SpringBoot: https://spring.io/projects/spring-boot
- Dubbo: https://dubbo.apache.org/
- Zookeeper: https://zookeeper.apache.org/
- Seata: https://seata.io/
- MyBatis: https://mybatis.org/mybatis-3/

We will not go through how each of them works one by one; please see the official documentation for the details. Instead, we will integrate them in a simple example so you can see the basic flow of managing Dubbo distributed transactions with Seata.
2. Environment Setup
2.1 Download, install, and start Zookeeper
Zookeeper download: https://archive.apache.org/dist/zookeeper/zookeeper-3.4.14/
Zookeeper getting started: http://zookeeper.apache.org/doc/r3.4.14/zookeeperStarted.html
Starting Zookeeper in standalone mode is very simple. Extract the downloaded archive to a directory of your choice, e.g. E:\tools\zookeeper-3.4.14.
To start Zookeeper we need a configuration file. Here is an example, in conf/zoo.cfg:
```properties
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=E:\\tools\\zookeeper-3.4.14\\data
dataLogDir=E:\\tools\\zookeeper-3.4.14\\logs
```
The settings above in detail:

- tickTime: the basic time unit in ZooKeeper, in milliseconds; many runtime intervals are expressed as multiples of tickTime. For example, the minimum session timeout defaults to 2*tickTime.
- initLimit: defaults to 10, i.e. 10 times tickTime; must be a positive integer and cannot be set via a system property. It limits how long a Follower may take at startup to connect to the Leader and complete data synchronization, which determines when it can start serving requests. Usually operators can leave the default alone, but as the amount of data managed by the cluster grows, synchronizing from the Leader takes longer and may no longer finish in time, so in that case it makes sense to raise this value.
- syncLimit: defaults to 5, i.e. 5 times tickTime; must be a positive integer and cannot be set via a system property. It bounds the maximum heartbeat delay between the Leader and a Follower. While the cluster is running, the Leader heartbeats all Followers to check that they are alive; if a Follower does not respond within syncLimit, the Leader considers it out of sync. The default is usually fine, but raise it if the network between cluster nodes is poor (high latency or heavy packet loss).
- dataDir: the directory where ZooKeeper stores its snapshot data (and, unless configured otherwise, its transaction logs as well).
- dataLogDir: the directory for transaction log files. By default ZooKeeper stores transaction logs and snapshot data in the same directory; you should keep the two apart, and if possible put the transaction logs on a dedicated disk. Transaction logging is very demanding on disk performance: to guarantee consistency, ZooKeeper must write the transaction log for a request to disk before responding to the client, so log-write performance directly determines transaction throughput. Other concurrent I/O on the same disk (runtime log output, OS activity, and especially snapshotting) badly degrades log writes, so a dedicated disk or mount point for the transaction logs greatly improves overall performance.
- clientPort: the port the server listens on for client connections, usually set to 2181. Each ZooKeeper server may use any available port, and the servers in a cluster need not agree on clientPort. There is no default; this must be configured.
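The timing relationships above are all simple multiples of tickTime; a few lines of Java make them concrete (values taken from the sample zoo.cfg above; this is an illustrative sketch, not part of the project):

```java
public class ZkTimeouts {
    // Values from the sample zoo.cfg above.
    static final int TICK_TIME_MS = 2000;
    static final int INIT_LIMIT_TICKS = 10; // follower startup/sync window
    static final int SYNC_LIMIT_TICKS = 5;  // leader-follower heartbeat window

    // The minimum session timeout defaults to 2 * tickTime.
    static int minSessionTimeoutMs() {
        return 2 * TICK_TIME_MS;
    }

    // Time a follower gets to connect to and sync with the leader at startup.
    static int initLimitMs() {
        return INIT_LIMIT_TICKS * TICK_TIME_MS;
    }

    // Maximum heartbeat delay the leader tolerates from a follower.
    static int syncLimitMs() {
        return SYNC_LIMIT_TICKS * TICK_TIME_MS;
    }
}
```

So with this zoo.cfg a follower has 20 seconds to sync at startup, and a heartbeat gap over 10 seconds marks it out of sync.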
Start Zookeeper:

```shell
bin\zkServer.cmd
```

Then connect with the CLI client:

```shell
bin\zkCli.cmd
```

```
ZooKeeper -server host:port cmd args
    stat path [watch]
    set path data [version]
    ls path [watch]
    delquota [-n|-b] path
    ls2 path [watch]
    setAcl path acl
    setquota -n|-b val path
    history
    redo cmdno
    printwatches on|off
    delete path [version]
    sync path
    listquota path
    rmr path
    get path [watch]
    create [-s] [-e] path data acl
    addauth scheme auth
    quit
    getAcl path
    close
    connect host:port
```

Use `ls` to inspect the nodes:

```
ls /
```
2.2 Download, install, and start Seata 0.9.0
2.2.1 Download the latest Seata Server from the Seata Releases page and extract it to get the following layout:

```
.
├── bin
├── conf
└── lib
```
2.2.2 Edit conf/registry.conf and conf/file.conf.
Seata currently supports file, nacos, apollo, zk, and consul as registry and configuration centers. Here we use zk: set type to "zk".

```
registry {
  # file zk
  type = "zk"

  zk {
    cluster = "default"
    serverAddr = "127.0.0.1:2181"
    session.timeout = 6000
    connect.timeout = 2000
  }
  file {
    name = "file.conf"
  }
}

config {
  # file、nacos 、apollo、zk、consul
  type = "zk"

  zk {
    serverAddr = "127.0.0.1:2181"
    session.timeout = 6000
    connect.timeout = 2000
  }
  file {
    name = "file.conf"
  }
}
```

- serverAddr = "127.0.0.1:2181": the zk address
- cluster = "default": the cluster name, left at default
- session.timeout = 6000: session timeout in milliseconds
- connect.timeout = 2000: connection timeout in milliseconds
The file.conf configuration:

```
transport {
  # tcp udt unix-domain-socket
  type = "TCP"
  #NIO NATIVE
  server = "NIO"
  #enable heartbeat
  heartbeat = true
  #thread factory for netty
  thread-factory {
    boss-thread-prefix = "NettyBoss"
    worker-thread-prefix = "NettyServerNIOWorker"
    server-executor-thread-prefix = "NettyServerBizHandler"
    share-boss-worker = false
    client-selector-thread-prefix = "NettyClientSelector"
    client-selector-thread-size = 1
    client-worker-thread-prefix = "NettyClientWorkerThread"
    # netty boss thread size,will not be used for UDT
    boss-thread-size = 1
    #auto default pin or 8
    worker-thread-size = 8
  }
  shutdown {
    # when destroy server, wait seconds
    wait = 3
  }
  serialization = "seata"
  compressor = "none"
}

service {
  #vgroup->rgroup
  vgroup_mapping.my_test_tx_group = "default"
  #only support single node
  default.grouplist = "127.0.0.1:8091"
  #degrade current not support
  enableDegrade = false
  #disable
  disable = false
  #unit ms,s,m,h,d represents milliseconds, seconds, minutes, hours, days, default permanent
  max.commit.retry.timeout = "-1"
  max.rollback.retry.timeout = "-1"
}

client {
  async.commit.buffer.limit = 10000
  lock {
    retry.internal = 10
    retry.times = 30
  }
  report.retry.count = 5
  tm.commit.retry.count = 1
  tm.rollback.retry.count = 1
}

## transaction log store
store {
  ## store mode: file、db
  mode = "db"

  ## file store
  file {
    dir = "sessionStore"
    # branch session size , if exceeded first try compress lockkey, still exceeded throws exceptions
    max-branch-session-size = 16384
    # globe session size , if exceeded throws exceptions
    max-global-session-size = 512
    # file buffer size , if exceeded allocate new buffer
    file-write-buffer-cache-size = 16384
    # when recover batch read size
    session.reload.read_size = 100
    # async, sync
    flush-disk-mode = async
  }

  ## database store
  db {
    ## the implement of javax.sql.DataSource, such as DruidDataSource(druid)/BasicDataSource(dbcp) etc.
    datasource = "dbcp"
    ## mysql/oracle/h2/oceanbase etc.
    db-type = "mysql"
    driver-class-name = "com.mysql.jdbc.Driver"
    url = "jdbc:mysql://127.0.0.1:3306/seata"
    user = "root"
    password = "123456"
    min-conn = 1
    max-conn = 3
    global.table = "global_table"
    branch.table = "branch_table"
    lock-table = "lock_table"
    query-limit = 100
  }
}

lock {
  ## the lock store mode: local、remote
  mode = "remote"
  local {
    ## store locks in user's database
  }
  remote {
    ## store locks in the seata's server
  }
}

recovery {
  #schedule committing retry period in milliseconds
  committing-retry-period = 1000
  #schedule asyn committing retry period in milliseconds
  asyn-committing-retry-period = 1000
  #schedule rollbacking retry period in milliseconds
  rollbacking-retry-period = 1000
  #schedule timeout retry period in milliseconds
  timeout-retry-period = 1000
}

transaction {
  undo.data.validation = true
  undo.log.serialization = "jackson"
  undo.log.save.days = 7
  #schedule delete expired undo_log in milliseconds
  undo.log.delete.period = 86400000
  undo.log.table = "undo_log"
}

## metrics settings
metrics {
  enabled = false
  registry-type = "compact"
  # multi exporters use comma divided
  exporter-list = "prometheus"
  exporter-prometheus-port = 9898
}

support {
  ## spring
  spring {
    # auto proxy the DataSource bean
    datasource.autoproxy = false
  }
}
```

The main changes here are setting store.mode to db, plus the database-related settings.
2.2.3 Turn conf/nacos-config.txt into zk-config.properties

```properties
transport.type=TCP
transport.server=NIO
transport.heartbeat=true
transport.thread-factory.boss-thread-prefix=NettyBoss
transport.thread-factory.worker-thread-prefix=NettyServerNIOWorker
transport.thread-factory.server-executor-thread-prefix=NettyServerBizHandler
transport.thread-factory.share-boss-worker=false
transport.thread-factory.client-selector-thread-prefix=NettyClientSelector
transport.thread-factory.client-selector-thread-size=1
transport.thread-factory.client-worker-thread-prefix=NettyClientWorkerThread
transport.thread-factory.boss-thread-size=1
transport.thread-factory.worker-thread-size=8
transport.shutdown.wait=3
service.vgroup_mapping.order-service-seata-service-group=default
service.vgroup_mapping.account-service-seata-service-group=default
service.vgroup_mapping.storage-service-seata-service-group=default
service.vgroup_mapping.business-service-seata-service-group=default
service.enableDegrade=false
service.disable=false
service.max.commit.retry.timeout=-1
service.max.rollback.retry.timeout=-1
client.async.commit.buffer.limit=10000
client.lock.retry.internal=10
client.lock.retry.times=30
store.mode=db
store.file.dir=file_store/data
store.file.max-branch-session-size=16384
store.file.max-global-session-size=512
store.file.file-write-buffer-cache-size=16384
store.file.flush-disk-mode=async
store.file.session.reload.read_size=100
store.db.driver-class-name=com.mysql.jdbc.Driver
store.db.datasource=dbcp
store.db.db-type=mysql
store.db.url=jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true
store.db.user=root
store.db.password=123456
store.db.min-conn=1
store.db.max-conn=3
store.db.global.table=global_table
store.db.branch.table=branch_table
store.db.query-limit=100
store.db.lock-table=lock_table
recovery.committing-retry-period=1000
recovery.asyn-committing-retry-period=1000
recovery.rollbacking-retry-period=1000
recovery.timeout-retry-period=1000
transaction.undo.data.validation=true
transaction.undo.log.serialization=jackson
transaction.undo.log.save.days=7
transaction.undo.log.delete.period=86400000
transaction.undo.log.table=undo_log
transport.serialization=seata
transport.compressor=none
metrics.enabled=false
metrics.registry-type=compact
metrics.exporter-list=prometheus
metrics.exporter-prometheus-port=9898
client.report.retry.count=5
service.disableGlobalTransaction=false
```
The main changes here are:

- store.mode: the storage mode, file by default; changed to db, which requires three tables: global_table, branch_table, and lock_table
- store.db.driver-class-name: absent by default, which causes an error; added com.mysql.jdbc.Driver
- store.db.datasource=dbcp: the data source, dbcp
- store.db.db-type=mysql: the backing database type, mysql
- store.db.url=jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true: change the url, port, and database name to match your own database
- store.db.user=root: the database user
- store.db.password=123456: the database password
- service.vgroup_mapping.order-service-seata-service-group=default
- service.vgroup_mapping.account-service-seata-service-group=default
- service.vgroup_mapping.storage-service-seata-service-group=default
- service.vgroup_mapping.business-service-seata-service-group=default
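The four service.vgroup_mapping entries are how each client finds its transaction coordinator: the client's transaction service group is turned into a configuration key whose value names the TC cluster (here always "default"). A minimal sketch of that key construction (the helper class and method names are mine, not Seata's):

```java
public class VgroupMappingKeys {
    // Seata looks up "service.vgroup_mapping.<txServiceGroup>" in the
    // configuration center to resolve the TC cluster name (e.g. "default").
    public static String mappingKey(String txServiceGroup) {
        return "service.vgroup_mapping." + txServiceGroup;
    }
}
```

With the entries above, a client whose transaction service group is `order-service-seata-service-group` resolves to the `default` cluster.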
The SQL script for the three tables required in db mode lives at seata\conf\db_store.sql.

Structure of global_table:

```sql
CREATE TABLE `global_table` (
  `xid` varchar(128) NOT NULL,
  `transaction_id` bigint(20) DEFAULT NULL,
  `status` tinyint(4) NOT NULL,
  `application_id` varchar(64) DEFAULT NULL,
  `transaction_service_group` varchar(64) DEFAULT NULL,
  `transaction_name` varchar(64) DEFAULT NULL,
  `timeout` int(11) DEFAULT NULL,
  `begin_time` bigint(20) DEFAULT NULL,
  `application_data` varchar(2000) DEFAULT NULL,
  `gmt_create` datetime DEFAULT NULL,
  `gmt_modified` datetime DEFAULT NULL,
  PRIMARY KEY (`xid`),
  KEY `idx_gmt_modified_status` (`gmt_modified`,`status`),
  KEY `idx_transaction_id` (`transaction_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
```
Structure of branch_table:

```sql
CREATE TABLE `branch_table` (
  `branch_id` bigint(20) NOT NULL,
  `xid` varchar(128) NOT NULL,
  `transaction_id` bigint(20) DEFAULT NULL,
  `resource_group_id` varchar(32) DEFAULT NULL,
  `resource_id` varchar(256) DEFAULT NULL,
  `lock_key` varchar(128) DEFAULT NULL,
  `branch_type` varchar(8) DEFAULT NULL,
  `status` tinyint(4) DEFAULT NULL,
  `client_id` varchar(64) DEFAULT NULL,
  `application_data` varchar(2000) DEFAULT NULL,
  `gmt_create` datetime DEFAULT NULL,
  `gmt_modified` datetime DEFAULT NULL,
  PRIMARY KEY (`branch_id`),
  KEY `idx_xid` (`xid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
```
Structure of lock_table:

```sql
create table `lock_table` (
  `row_key` varchar(128) not null,
  `xid` varchar(96),
  -- the shipped script declares these two columns as `long`, which MySQL
  -- treats as a synonym for MEDIUMTEXT; bigint is the intended type
  `transaction_id` bigint,
  `branch_id` bigint,
  `resource_id` varchar(256),
  `table_name` varchar(32),
  `pk` varchar(32),
  `gmt_create` datetime,
  `gmt_modified` datetime,
  primary key(`row_key`)
);
```
2.2.4 Import the Seata configuration into Zookeeper
Since the official distribution only ships a configuration import script for Nacos, I wrote a small Java program to import the Seata configuration into Zookeeper, modeled on the source of ZookeeperConfiguration. From that source we can see the storage layout Seata uses in zk mode: each entry is stored as a persistent zk node.
2.2.4.1 Create the root node, /config:

```java
public ZookeeperConfiguration() {
    if (zkClient == null) {
        Class var1 = ZookeeperConfiguration.class;
        synchronized(ZookeeperConfiguration.class) {
            if (null == zkClient) {
                zkClient = new ZkClient(FILE_CONFIG.getConfig("config.zk.serverAddr"),
                        FILE_CONFIG.getInt("config.zk.session.timeout", 6000),
                        FILE_CONFIG.getInt("config.zk.connect.timeout", 2000));
            }
        }
        if (!zkClient.exists("/config")) {
            zkClient.createPersistent("/config", true);
        }
    }
}
```
2.2.4.2 The method that writes a configuration entry:

```java
public boolean putConfig(final String dataId, final String content, long timeoutMills) {
    FutureTask<Boolean> future = new FutureTask(new Callable<Boolean>() {
        public Boolean call() throws Exception {
            String path = "/config/" + dataId;
            if (!ZookeeperConfiguration.zkClient.exists(path)) {
                ZookeeperConfiguration.zkClient.create(path, content, CreateMode.PERSISTENT);
            } else {
                ZookeeperConfiguration.zkClient.writeData(path, content);
            }
            return true;
        }
    });
    CONFIG_EXECUTOR.execute(future);
    try {
        return (Boolean) future.get(timeoutMills, TimeUnit.MILLISECONDS);
    } catch (Exception var7) {
        LOGGER.warn("putConfig {} : {} is error or timeout", dataId, content);
        return false;
    }
}
```
2.2.4.3 Read zk-config.properties with Java and import it:

```java
package io.seata.samples.integration.call;

import lombok.extern.slf4j.Slf4j;
import org.I0Itec.zkclient.ZkClient;
import org.apache.zookeeper.CreateMode;
import org.springframework.util.ResourceUtils;

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;
import java.util.Set;

@Slf4j
public class ZkDataInit {

    private static volatile ZkClient zkClient;

    public static void main(String[] args) {
        if (zkClient == null) {
            zkClient = new ZkClient("127.0.0.1:2181", 6000, 2000);
        }
        if (!zkClient.exists("/config")) {
            zkClient.createPersistent("/config", true);
        }
        // load zk-config.properties from the classpath and write each entry to zk
        Properties properties = new Properties();
        try {
            File file = ResourceUtils.getFile("classpath:zk-config.properties");
            InputStream in = new FileInputStream(file);
            properties.load(in);
            Set<Object> keys = properties.keySet(); // the set of property keys
            for (Object key : keys) {
                boolean b = putConfig(key.toString(), properties.get(key).toString());
                log.info(key.toString() + "=" + properties.get(key) + " result=" + b);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    /**
     * Create or update a persistent node under /config.
     *
     * @param dataId  the configuration key
     * @param content the configuration value
     * @return whether the write succeeded
     */
    public static boolean putConfig(final String dataId, final String content) {
        boolean flag;
        String path = "/config/" + dataId;
        if (!zkClient.exists(path)) {
            zkClient.create(path, content, CreateMode.PERSISTENT);
            flag = true;
        } else {
            zkClient.writeData(path, content);
            flag = true;
        }
        return flag;
    }
}
```
2.2.4.4 Inspect the zk node tree:

```
[zk: localhost:2181(CONNECTED) 3] ls /config
[metrics.exporter-prometheus-port, store.file.session.reload.read_size,
recovery.committing-retry-period, store.db.lock-table, store.db.datasource,
transport.thread-factory.client-selector-thread-prefix, transaction.undo.log.save.days,
metrics.exporter-list, transport.server, client.async.commit.buffer.limit,
store.file.max-branch-session-size, transport.thread-factory.client-selector-thread-size,
transaction.undo.log.delete.period, transaction.undo.data.validation,
service.disableGlobalTransaction, transport.thread-factory.boss-thread-size,
client.lock.retry.times, service.max.commit.retry.timeout, store.db.driver-class-name,
store.file.flush-disk-mode, transport.thread-factory.worker-thread-size, store.mode,
transport.serialization, transport.thread-factory.client-worker-thread-prefix,
store.file.dir, recovery.rollbacking-retry-period, store.db.query-limit,
transport.compressor, store.db.url, store.db.user, recovery.timeout-retry-period,
service.disable, store.db.db-type, client.report.retry.count,
store.file.file-write-buffer-cache-size, transaction.undo.log.table,
client.lock.retry.internal, transaction.undo.log.serialization,
recovery.asyn-committing-retry-period, metrics.enabled, store.db.password,
transport.thread-factory.worker-thread-prefix, transport.thread-factory.boss-thread-prefix,
service.vgroup_mapping.storage-service-seata-service-group,
service.vgroup_mapping.order-service-seata-service-group, store.db.global.table,
store.db.branch.table, service.vgroup_mapping.account-service-seata-service-group,
service.vgroup_mapping.business-service-seata-service-group,
service.max.rollback.retry.timeout, service.enableDegrade,
store.file.max-global-session-size, transport.type, store.db.max-conn,
transport.thread-factory.share-boss-worker,
transport.thread-factory.server-executor-thread-prefix, metrics.registry-type,
transport.heartbeat, transport.shutdown.wait, store.db.min-conn]
```

All configuration entries have been imported successfully.
2.2.5 Start Seata Server
Start it in db mode:

```shell
cd ..
bin/seata-server.bat -m db
```

Now we can query the nodes from the zookeeper CLI client:

```
[zk: localhost:2181(CONNECTED) 4] ls /
[registry, zookeeper, config]
[zk: localhost:2181(CONNECTED) 4] ls /registry/zk/default
[192.168.10.108:8091]
```

At this point the Seata server has been set up successfully.
3. Case Study
Following the sample on the official site, we model the business of a user purchasing a commodity. The whole flow is backed by 4 microservices:

- Storage service: deducts the stock count of the given commodity.
- Order service: creates an order for the purchase request.
- Account service: debits the balance of the user's account.
- Business service: orchestrates the business logic.
3.1 GitHub repository
springboot-dubbo-seata: https://github.com/lidong1665/springboot-dubbo-seata-zk

- samples-common: common module
- samples-account: user account module
- samples-order: order module
- samples-storage: storage module
- samples-business: business module
3.2 Account service: AccountDubboService

```java
/**
 * @Author: lidong
 * @Description Account service interface
 * @Date Created in 2019/9/5 16:37
 */
public interface AccountDubboService {

    /**
     * Debit the user's account
     */
    ObjectResponse decreaseAccount(AccountDTO accountDTO);
}
```
3.3 Order service: OrderDubboService

```java
/**
 * @Author: lidong
 * @Description Order service interface
 * @Date Created in 2019/9/5 16:28
 */
public interface OrderDubboService {

    /**
     * Create an order
     */
    ObjectResponse<OrderDTO> createOrder(OrderDTO orderDTO);
}
```
3.4 Storage service: StorageDubboService

```java
/**
 * @Author: lidong
 * @Description Storage service
 * @Date Created in 2019/9/5 16:22
 */
public interface StorageDubboService {

    /**
     * Deduct stock
     */
    ObjectResponse decreaseStorage(CommodityDTO commodityDTO);
}
```
3.5 Business service: BusinessService

```java
/**
 * @Author: lidong
 * @Description Business service
 * @Date Created in 2019/9/5 17:17
 */
public interface BusinessService {

    /**
     * Handle the business logic
     * @param businessDTO
     * @return
     */
    ObjectResponse handleBusiness(BusinessDTO businessDTO);
}
```
The core business logic lives in the implementations of the order service and the business service.

The order service implementation:

```java
@Service
public class TOrderServiceImpl extends ServiceImpl<TOrderMapper, TOrder> implements ITOrderService {

    @Reference(version = "1.0.0")
    private AccountDubboService accountDubboService;

    /**
     * Create an order
     * @Param: OrderDTO  the order object
     * @Return: OrderDTO the order object
     */
    @Override
    public ObjectResponse<OrderDTO> createOrder(OrderDTO orderDTO) {
        ObjectResponse<OrderDTO> response = new ObjectResponse<>();
        // debit the user's account
        AccountDTO accountDTO = new AccountDTO();
        accountDTO.setUserId(orderDTO.getUserId());
        accountDTO.setAmount(orderDTO.getOrderAmount());
        ObjectResponse objectResponse = accountDubboService.decreaseAccount(accountDTO);
        // generate the order number
        orderDTO.setOrderNo(UUID.randomUUID().toString().replace("-", ""));
        // build and persist the order
        TOrder tOrder = new TOrder();
        BeanUtils.copyProperties(orderDTO, tOrder);
        tOrder.setCount(orderDTO.getOrderCount());
        tOrder.setAmount(orderDTO.getOrderAmount().doubleValue());
        try {
            baseMapper.createOrder(tOrder);
        } catch (Exception e) {
            response.setStatus(RspStatusEnum.FAIL.getCode());
            response.setMessage(RspStatusEnum.FAIL.getMessage());
            return response;
        }
        if (objectResponse.getStatus() != 200) {
            response.setStatus(RspStatusEnum.FAIL.getCode());
            response.setMessage(RspStatusEnum.FAIL.getMessage());
            return response;
        }
        response.setStatus(RspStatusEnum.SUCCESS.getCode());
        response.setMessage(RspStatusEnum.SUCCESS.getMessage());
        return response;
    }
}
```
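The order number in createOrder above is simply a random UUID with the dashes stripped, which always yields a 32-character hex string. Isolated as a sketch (the class name is mine, not part of the sample project):

```java
import java.util.UUID;

public class OrderNoGenerator {
    // Same scheme as TOrderServiceImpl: random UUID, dashes removed.
    public static String newOrderNo() {
        return UUID.randomUUID().toString().replace("-", "");
    }
}
```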
The overall business orchestration:

```java
@Service
@Slf4j
public class BusinessServiceImpl implements BusinessService {

    @Reference(version = "1.0.0")
    private StorageDubboService storageDubboService;

    @Reference(version = "1.0.0")
    private OrderDubboService orderDubboService;

    private boolean flag;

    /**
     * Handle the business logic
     */
    @GlobalTransactional(timeoutMills = 300000, name = "dubbo-gts-seata-example")
    @Override
    public ObjectResponse handleBusiness(BusinessDTO businessDTO) {
        log.info("Starting global transaction, XID = " + RootContext.getXID());
        ObjectResponse<Object> objectResponse = new ObjectResponse<>();
        // 1. deduct stock
        CommodityDTO commodityDTO = new CommodityDTO();
        commodityDTO.setCommodityCode(businessDTO.getCommodityCode());
        commodityDTO.setCount(businessDTO.getCount());
        ObjectResponse storageResponse = storageDubboService.decreaseStorage(commodityDTO);
        // 2. create the order
        OrderDTO orderDTO = new OrderDTO();
        orderDTO.setUserId(businessDTO.getUserId());
        orderDTO.setCommodityCode(businessDTO.getCommodityCode());
        orderDTO.setOrderCount(businessDTO.getCount());
        orderDTO.setOrderAmount(businessDTO.getAmount());
        ObjectResponse<OrderDTO> response = orderDubboService.createOrder(orderDTO);
        // uncomment to test global rollback after an exception
        // if (!flag) {
        //     throw new RuntimeException("Test exception: distributed transaction rollback!");
        // }
        if (storageResponse.getStatus() != 200 || response.getStatus() != 200) {
            throw new DefaultException(RspStatusEnum.FAIL);
        }
        objectResponse.setStatus(RspStatusEnum.SUCCESS.getCode());
        objectResponse.setMessage(RspStatusEnum.SUCCESS.getMessage());
        objectResponse.setData(response.getData());
        return objectResponse;
    }
}
```
3.6 Managing the Dubbo distributed transaction with Seata
All we need to do is add a single @GlobalTransactional annotation to the business entry method handleBusiness:

```java
@GlobalTransactional(timeoutMills = 300000, name = "dubbo-gts-seata-example")
@Override
public ObjectResponse handleBusiness(BusinessDTO businessDTO) {
}
```

- timeoutMills: the transaction timeout in milliseconds
- name: the transaction name
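To build intuition for what @GlobalTransactional promises, here is a toy coordinator. This is emphatically not Seata's implementation (Seata works with undo logs, a transaction coordinator, and branch registration over the network); it only illustrates the contract: each branch records how to compensate itself, and if the annotated method throws, all recorded compensations run in reverse order.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Consumer;

// Toy illustration of the global commit/rollback contract; NOT Seata internals.
public class ToyGlobalTx {
    private final Deque<Runnable> undoLog = new ArrayDeque<>();

    // Run a branch action and remember how to undo it.
    public void branch(Runnable action, Runnable undo) {
        action.run();
        undoLog.push(undo);
    }

    // Run the business logic; normal completion clears the undo log (commit),
    // an exception replays it in reverse order (rollback) and is rethrown.
    public void run(Consumer<ToyGlobalTx> business) {
        try {
            business.accept(this);
            undoLog.clear();
        } catch (RuntimeException e) {
            while (!undoLog.isEmpty()) {
                undoLog.pop().run();
            }
            throw e;
        }
    }
}
```

In the real setup, the "branches" are the remote Dubbo calls (decreaseStorage, createOrder, decreaseAccount), and Seata restores their effects from undo logs when handleBusiness throws.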
3.7 Prepare the database
Note: MySQL must use the InnoDB engine.

Create the database and run the following script:

```sql
DROP DATABASE IF EXISTS seata;
CREATE DATABASE seata;
USE seata;

DROP TABLE IF EXISTS `t_account`;
CREATE TABLE `t_account` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `user_id` varchar(255) DEFAULT NULL,
  `amount` double(14,2) DEFAULT '0.00',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;
-- ----------------------------
-- Records of t_account
-- ----------------------------
INSERT INTO `t_account` VALUES ('1', '1', '4000.00');

-- ----------------------------
-- Table structure for t_order
-- ----------------------------
DROP TABLE IF EXISTS `t_order`;
CREATE TABLE `t_order` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `order_no` varchar(255) DEFAULT NULL,
  `user_id` varchar(255) DEFAULT NULL,
  `commodity_code` varchar(255) DEFAULT NULL,
  `count` int(11) DEFAULT '0',
  `amount` double(14,2) DEFAULT '0.00',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=64 DEFAULT CHARSET=utf8;
-- ----------------------------
-- Records of t_order
-- ----------------------------

-- ----------------------------
-- Table structure for t_storage
-- ----------------------------
DROP TABLE IF EXISTS `t_storage`;
CREATE TABLE `t_storage` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `commodity_code` varchar(255) DEFAULT NULL,
  `name` varchar(255) DEFAULT NULL,
  `count` int(11) DEFAULT '0',
  PRIMARY KEY (`id`),
  UNIQUE KEY `commodity_code` (`commodity_code`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;
-- ----------------------------
-- Records of t_storage
-- ----------------------------
INSERT INTO `t_storage` VALUES ('1', 'C201901140001', '水杯', '1000');

-- ----------------------------
-- Table structure for undo_log
-- Note: 0.3.0+ adds the unique index ux_undo_log
-- ----------------------------
DROP TABLE IF EXISTS `undo_log`;
CREATE TABLE `undo_log` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `branch_id` bigint(20) NOT NULL,
  `xid` varchar(100) NOT NULL,
  `context` varchar(128) NOT NULL,
  `rollback_info` longblob NOT NULL,
  `log_status` int(11) NOT NULL,
  `log_created` datetime NOT NULL,
  `log_modified` datetime NOT NULL,
  `ext` varchar(100) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `ux_undo_log` (`xid`,`branch_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
-- ----------------------------
-- Records of undo_log
-- ----------------------------
SET FOREIGN_KEY_CHECKS=1;
```

You will end up with the following 4 tables:

```
+-------------------------+
| Tables_in_seata         |
+-------------------------+
| t_account               |
| t_order                 |
| t_storage               |
| undo_log                |
+-------------------------+
```
To keep things simple, I created all three business tables in a single database, but access them through three separate data sources.

3.8 Taking the account service samples-account as an example, let's walk through the configuration items that need attention
3.8.1 Dependencies
```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.1.5.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <artifactId>springboot-dubbo-seata</artifactId>
    <packaging>pom</packaging>
    <name>springboot-dubbo-seata</name>
    <groupId>io.seata</groupId>
    <version>1.0.0</version>
    <description>Demo project for Spring Cloud Alibaba Dubbo</description>
    <modules>
        <module>samples-common</module>
        <module>samples-account</module>
        <module>samples-order</module>
        <module>samples-storage</module>
        <module>samples-business</module>
    </modules>
    <properties>
        <springboot.verison>2.1.5.RELEASE</springboot.verison>
        <java.version>1.8</java.version>
        <druid.version>1.1.10</druid.version>
        <mybatis.version>1.3.2</mybatis.version>
        <mybatis-plus.version>2.3</mybatis-plus.version>
        <lombok.version>1.16.22</lombok.version>
        <dubbo.version>2.7.3</dubbo.version>
        <seata.version>0.9.0</seata.version>
        <netty.version>4.1.32.Final</netty.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
            <version>${springboot.verison}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter</artifactId>
            <version>${springboot.verison}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <version>${springboot.verison}</version>
        </dependency>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>druid</artifactId>
            <version>${druid.version}</version>
        </dependency>
        <dependency>
            <groupId>org.mybatis.spring.boot</groupId>
            <artifactId>mybatis-spring-boot-starter</artifactId>
            <version>${mybatis.version}</version>
        </dependency>
        <dependency>
            <groupId>com.baomidou</groupId>
            <artifactId>mybatis-plus</artifactId>
            <version>${mybatis-plus.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.dubbo</groupId>
            <artifactId>dubbo</artifactId>
            <version>${dubbo.version}</version>
            <exclusions>
                <exclusion>
                    <artifactId>spring</artifactId>
                    <groupId>org.springframework</groupId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.dubbo</groupId>
            <artifactId>dubbo-spring-boot-starter</artifactId>
            <version>${dubbo.version}</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.apache.dubbo/dubbo-config-spring -->
        <dependency>
            <groupId>org.apache.dubbo</groupId>
            <artifactId>dubbo-config-spring</artifactId>
            <version>${dubbo.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.dubbo</groupId>
            <artifactId>dubbo-registry-zookeeper</artifactId>
            <version>${dubbo.version}</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/io.seata/seata-all -->
        <dependency>
            <groupId>io.seata</groupId>
            <artifactId>seata-all</artifactId>
            <version>${seata.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.dubbo</groupId>
            <artifactId>dubbo-dependencies-zookeeper</artifactId>
            <version>${dubbo.version}</version>
            <type>pom</type>
        </dependency>
        <dependency>
            <groupId>org.apache.zookeeper</groupId>
            <artifactId>zookeeper</artifactId>
            <version>3.4.14</version>
        </dependency>
        <dependency>
            <groupId>org.apache.curator</groupId>
            <artifactId>curator-framework</artifactId>
            <version>4.0.1</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/com.101tec/zkclient -->
        <dependency>
            <groupId>com.101tec</groupId>
            <artifactId>zkclient</artifactId>
            <version>0.11</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <version>${springboot.verison}</version>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>${lombok.version}</version>
        </dependency>
        <dependency>
            <groupId>io.netty</groupId>
            <artifactId>netty-all</artifactId>
            <version>${netty.version}</version>
        </dependency>
        <dependency>
            <groupId>com.alibaba.spring</groupId>
            <artifactId>spring-context-support</artifactId>
            <version>1.0.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.httpcomponents</groupId>
            <artifactId>httpclient</artifactId>
            <version>4.5</version>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.47</version>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-deploy-plugin</artifactId>
                <configuration>
                    <skip>true</skip>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>${java.version}</source>
                    <target>${java.version}</target>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
```
Note:

- seata-all: the main dependency required by Seata.
- dubbo-spring-boot-starter: the Spring Boot starter for Dubbo.

The rest are self-explanatory, so I won't go through them one by one.
3.8.2 application.properties

```properties
server.port=8102
spring.application.name=dubbo-account-example
#====================================Dubbo config===============================================
dubbo.application.id=dubbo-account-example
dubbo.application.name=dubbo-account-example
dubbo.protocol.id=dubbo
dubbo.protocol.name=dubbo
dubbo.registry.id=dubbo-account-example-registry
dubbo.registry.address=zookeeper://127.0.0.1:2181
dubbo.protocol.port=20880
dubbo.application.qos-enable=false
dubbo.config-center.address=zookeeper://127.0.0.1:2181
dubbo.metadata-report.address=zookeeper://127.0.0.1:2181
#====================================mysql config============================================
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.url=jdbc:mysql://127.0.0.1:3306/seata?useSSL=false&useUnicode=true&characterEncoding=utf-8&allowMultiQueries=true
spring.datasource.username=root
spring.datasource.password=123456
#=====================================mybatis config======================================
mybatis.mapper-locations=classpath*:/mapper/*.xml
```
3.8.3 registry.conf (the zk configuration)

The registry.conf contents:

```
registry {
  # file 、nacos 、eureka、redis、zk
  type = "zk"

  nacos {
    serverAddr = "localhost"
    namespace = "public"
    cluster = "default"
  }
  eureka {
    serviceUrl = "http://localhost:1001/eureka"
    application = "default"
    weight = "1"
  }
  redis {
    serverAddr = "127.0.0.1:6379"
    db = "0"
  }
  zk {
    cluster = "default"
    serverAddr = "localhost:2181"
    session.timeout = 6000
    connect.timeout = 2000
  }
  file {
    name = "file.conf"
  }
}

config {
  # file、nacos 、apollo、zk
  type = "zk"

  nacos {
    serverAddr = "localhost"
    namespace = "public"
    cluster = "default"
  }
  apollo {
    app.id = "fescar-server"
    apollo.meta = "http://192.168.1.204:8801"
  }
  zk {
    serverAddr = "127.0.0.1:2181"
    session.timeout = 6000
    connect.timeout = 2000
  }
  file {
    name = "file.conf"
  }
}
```
3.8.5 The SeataAutoConfig configuration class

```java
package io.seata.samples.integration.account.config;

import com.alibaba.druid.pool.DruidDataSource;
import io.seata.rm.datasource.DataSourceProxy;
import io.seata.spring.annotation.GlobalTransactionScanner;
import org.apache.ibatis.session.SqlSessionFactory;
import org.apache.ibatis.transaction.jdbc.JdbcTransactionFactory;
import org.mybatis.spring.SqlSessionFactoryBean;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.jdbc.DataSourceProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;

/**
 * @Author: llidong
 * @Description seata global configuration
 * @Date Created in 2019/9/05 10:28
 */
@Configuration
public class SeataAutoConfig {

    /**
     * autowired datasource config
     */
    @Autowired
    private DataSourceProperties dataSourceProperties;

    /**
     * init druid datasource
     *
     * @Return: druidDataSource datasource instance
     */
    @Bean
    @Primary
    public DruidDataSource druidDataSource(){
        DruidDataSource druidDataSource = new DruidDataSource();
        druidDataSource.setUrl(dataSourceProperties.getUrl());
        druidDataSource.setUsername(dataSourceProperties.getUsername());
        druidDataSource.setPassword(dataSourceProperties.getPassword());
        druidDataSource.setDriverClassName(dataSourceProperties.getDriverClassName());
        druidDataSource.setInitialSize(0);
        druidDataSource.setMaxActive(180);
        druidDataSource.setMaxWait(60000);
        druidDataSource.setMinIdle(0);
        druidDataSource.setValidationQuery("Select 1 from DUAL");
        druidDataSource.setTestOnBorrow(false);
        druidDataSource.setTestOnReturn(false);
        druidDataSource.setTestWhileIdle(true);
        druidDataSource.setTimeBetweenEvictionRunsMillis(60000);
        druidDataSource.setMinEvictableIdleTimeMillis(25200000);
        druidDataSource.setRemoveAbandoned(true);
        druidDataSource.setRemoveAbandonedTimeout(1800);
        druidDataSource.setLogAbandoned(true);
        return druidDataSource;
    }

    /**
     * init datasource proxy
     * @Param: druidDataSource datasource bean instance
     * @Return: DataSourceProxy datasource proxy
     */
    @Bean
    public DataSourceProxy dataSourceProxy(DruidDataSource druidDataSource){
        return new DataSourceProxy(druidDataSource);
    }

    /**
     * init mybatis sqlSessionFactory
     * @Param: dataSourceProxy datasource proxy
     * @Return: DataSourceProxy datasource proxy
     */
    @Bean
    public SqlSessionFactory sqlSessionFactory(DataSourceProxy dataSourceProxy) throws Exception {
        SqlSessionFactoryBean factoryBean = new SqlSessionFactoryBean();
        factoryBean.setDataSource(dataSourceProxy);
        factoryBean.setMapperLocations(new PathMatchingResourcePatternResolver()
                .getResources("classpath*:/mapper/*.xml"));
        return factoryBean.getObject();
    }

    /**
     * init global transaction scanner
     *
     * @Return: GlobalTransactionScanner
     */
    @Bean
    public GlobalTransactionScanner globalTransactionScanner(){
        return new GlobalTransactionScanner("account-gts-seata-example", "account-service-seata-service-group");
    }
}
```
In particular:

```java
@Bean
public GlobalTransactionScanner globalTransactionScanner() {
    return new GlobalTransactionScanner("account-gts-seata-example", "account-service-seata-service-group");
}
```
`GlobalTransactionScanner` initializes the global transaction scanner. Its two-argument constructor looks like this:

```java
/**
 * Instantiates a new Global transaction scanner.
 *
 * @param applicationId  the application id
 * @param txServiceGroup the default server group
 */
public GlobalTransactionScanner(String applicationId, String txServiceGroup) {
    this(applicationId, txServiceGroup, DEFAULT_MODE);
}
```
- `applicationId`: the application id; here I pass in `account-gts-seata-example`.
- `txServiceGroup`: the default transaction service group; here I pass in `account-service-seata-service-group`, which must stay consistent with the service group we configured in Zookeeper earlier.
- `DEFAULT_MODE`: the default transaction mode, `AT_MODE + MT_MODE`.
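As a reference, here is a minimal sketch of the client-side Seata configuration that this service group name has to line up with. The block and key names follow the Seata 0.9.0 `registry.conf` conventions as I understand them; treat the addresses and the `default` cluster name as assumptions for a local setup:

```
# registry.conf — point both the registry and the config center at Zookeeper
registry {
  type = "zk"
  zk {
    cluster = "default"
    serverAddr = "127.0.0.1:2181"
    session.timeout = 6000
    connect.timeout = 2000
  }
}

config {
  type = "zk"
  zk {
    serverAddr = "127.0.0.1:2181"
    session.timeout = 6000
    connect.timeout = 2000
  }
}
```

In addition, the transaction service group has to be mapped to a server cluster in the config center: an entry named `service.vgroup_mapping.account-service-seata-service-group` whose value is the cluster name (`default` here).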
3.8.6 The AccountExampleApplication startup class
```java
package io.seata.samples.integration.account;

import org.apache.dubbo.config.spring.context.annotation.EnableDubbo;
import org.mybatis.spring.annotation.MapperScan;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.context.config.ConfigFileApplicationListener;

@SpringBootApplication(scanBasePackages = "io.seata.samples.integration.account")
@MapperScan({"io.seata.samples.integration.account.mapper"})
@EnableDubbo(scanBasePackages = "io.seata.samples.integration.account")
public class AccountExampleApplication {

    public static void main(String[] args) {
        SpringApplication.run(AccountExampleApplication.class, args);
    }
}
```
- `@EnableDubbo` is equivalent to the combination of `@DubboComponentScan` and `@EnableDubboConfig`.
- `@DubboComponentScan` scans the classpath for classes annotated with `@Service` and `@Reference` and automatically registers them as Spring beans.
- `@EnableDubboConfig` enables Dubbo's externalized configuration.
4 Start all the sample modules

Start `samples-account`, `samples-order`, `samples-storage` and `samples-business`, then check the registrations in the Zookeeper client.

We can see that all of the services above have registered successfully.
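If you prefer the command line for the registration check, an illustrative `zkCli` session might look like the following. Dubbo registers providers under the `/dubbo` node by default; `<service-interface>` is a placeholder for whichever sample interface you want to inspect:

```
$ bin/zkCli.sh -server 127.0.0.1:2181

# list the Dubbo service interfaces that have been registered
[zk: 127.0.0.1:2181(CONNECTED) 0] ls /dubbo

# drill into one interface to see its live providers
[zk: 127.0.0.1:2181(CONNECTED) 1] ls /dubbo/<service-interface>/providers
```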
5 Testing

5.1 Send an order request

Send a POST request with Postman to: http://localhost:8104/business/dubbo/buy

Parameters:

```json
{
    "userId": "1",
    "commodityCode": "C201901140001",
    "name": "fan",
    "count": 50,
    "amount": "100"
}
```

Response:

```json
{
    "status": 200,
    "message": "成功",
    "data": null
}
```
At this point the console shows:

```
2019-10-21 12:04:54.816 INFO 12780 --- [nio-8104-exec-1] i.s.s.i.c.controller.BusinessController : 請求參數:BusinessDTO(userId=1, commodityCode=C201901140001, name=fan, count=50, amount=100)
2019-10-21 12:04:54.823 INFO 12780 --- [nio-8104-exec-1] i.s.common.loader.EnhancedServiceLoader : load ContextCore[null] extension by class[io.seata.core.context.ThreadLocalContextCore]
2019-10-21 12:04:54.829 ERROR 12780 --- [nio-8104-exec-1] i.s.config.zk.ZookeeperConfiguration : getConfig client.tm.commit.retry.count is error or timeout,return defaultValue 1
2019-10-21 12:04:54.830 ERROR 12780 --- [nio-8104-exec-1] i.s.config.zk.ZookeeperConfiguration : getConfig client.tm.rollback.retry.count is error or timeout,return defaultValue 1
2019-10-21 12:04:54.833 INFO 12780 --- [nio-8104-exec-1] i.s.common.loader.EnhancedServiceLoader : load TransactionManager[null] extension by class[io.seata.tm.DefaultTransactionManager]
2019-10-21 12:04:54.833 INFO 12780 --- [nio-8104-exec-1] io.seata.tm.TransactionManagerHolder : TransactionManager Singleton io.seata.tm.DefaultTransactionManager@394cfa5a
2019-10-21 12:04:54.840 INFO 12780 --- [nio-8104-exec-1] i.s.common.loader.EnhancedServiceLoader : load LoadBalance[null] extension by class[io.seata.discovery.loadbalance.RandomLoadBalance]
2019-10-21 12:04:55.037 INFO 12780 --- [nio-8104-exec-1] i.seata.tm.api.DefaultGlobalTransaction : Begin new global transaction [192.168.10.108:8091:2025358030]
2019-10-21 12:04:57.473 INFO 12780 --- [nio-8104-exec-1] i.s.s.i.c.service.BusinessServiceImpl : 開始全局事務,XID = 192.168.10.108:8091:2025358030
2019-10-21 12:05:04.280 INFO 12780 --- [nio-8104-exec-1] i.seata.tm.api.DefaultGlobalTransaction : [192.168.10.108:8091:2025358030] commit status:Committed
```
The transaction committed successfully. Now let's look at the data changes in the database.
5.2 Testing rollback

In `samples-business`, uncomment the following code in the `handleBusiness2` method of `BusinessServiceImpl`:

```java
if (!flag) {
    throw new RuntimeException("測試拋異常後,分佈式事務回滾!");
}
```
Send a request with Postman to: http://localhost:8104/business/dubbo/buy2

Response:

```json
{
    "timestamp": "2019-09-05T04:29:34.178+0000",
    "status": 500,
    "error": "Internal Server Error",
    "message": "測試拋異常後,分佈式事務回滾!",
    "path": "/business/dubbo/buy"
}
```
Console:

```
2019-10-21 12:05:46.752 INFO 12780 --- [nio-8104-exec-3] i.s.s.i.c.controller.BusinessController : 請求參數:BusinessDTO(userId=1, commodityCode=C201901140001, name=fan, count=50, amount=100)
2019-10-21 12:05:46.785 INFO 12780 --- [nio-8104-exec-3] i.seata.tm.api.DefaultGlobalTransaction : Begin new global transaction [192.168.10.108:8091:2025358056]
2019-10-21 12:05:46.786 INFO 12780 --- [nio-8104-exec-3] i.s.s.i.c.service.BusinessServiceImpl : 開始全局事務,XID = 192.168.10.108:8091:2025358056
2019-10-21 12:05:50.285 INFO 12780 --- [nio-8104-exec-3] i.seata.tm.api.DefaultGlobalTransaction : [192.168.10.108:8091:2025358056] rollback status:Rollbacked
2019-10-21 12:05:50.477 ERROR 12780 --- [nio-8104-exec-3] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is java.lang.RuntimeException: 測試拋異常後,分佈式事務回滾!] with root cause

java.lang.RuntimeException: 測試拋異常後,分佈式事務回滾!
	at io.seata.samples.integration.call.service.BusinessServiceImpl.handleBusiness2(BusinessServiceImpl.java:93) ~[classes/:na]
	at io.seata.samples.integration.call.service.BusinessServiceImpl$$FastClassBySpringCGLIB$$2ab3d645.invoke(<generated>) ~[classes/:na]
	at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218) ~[spring-core-5.1.7.RELEASE.jar:5.1.7.RELEASE]
	at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:749) ~[spring-aop-5.1.7.RELEASE.jar:5.1.7.RELEASE]
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) ~[spring-aop-5.1.7.RELEASE.jar:5.1.7.RELEASE]
	at io.seata.spring.annotation.GlobalTransactionalInterceptor$1.execute(GlobalTransactionalInterceptor.java:104) ~[seata-all-0.9.0.jar:0.9.0]
	at io.seata.tm.api.TransactionalTemplate.execute(TransactionalTemplate.java:64) ~[seata-all-0.9.0.jar:0.9.0]
	at io.seata.spring.annotation.GlobalTransactionalInterceptor.handleGlobalTransaction(GlobalTransactionalInterceptor.java:101) ~[seata-all-0.9.0.jar:0.9.0]
	at io.seata.spring.annotation.GlobalTransactionalInterceptor.invoke(GlobalTransactionalInterceptor.java:76) ~[seata-all-0.9.0.jar:0.9.0]
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) ~[spring-aop-5.1.7.RELEASE.jar:5.1.7.RELEASE]
	at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:688) ~[spring-aop-5.1.7.RELEASE.jar:5.1.7.RELEASE]
	at io.seata.samples.integration.call.service.BusinessServiceImpl$$EnhancerBySpringCGLIB$$a8aa15d5.handleBusiness2(<generated>) ~[classes/:na]
	at io.seata.samples.integration.call.controller.BusinessController.handleBusiness2(BusinessController.java:48) ~[classes/:na]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_144]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_144]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_144]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_144]
	at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:190) ~[spring-web-5.1.7.RELEASE.jar:5.1.7.RELEASE]
	at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:138) ~[spring-web-5.1.7.RELEASE.jar:5.1.7.RELEASE]
	at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:104) ~[spring-webmvc-5.1.7.RELEASE.jar:5.1.7.RELEASE]
	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:892) ~[spring-webmvc-5.1.7.RELEASE.jar:5.1.7.RELEASE]
	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:797) ~[spring-webmvc-5.1.7.RELEASE.jar:5.1.7.RELEASE]
	at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) ~[spring-webmvc-5.1.7.RELEASE.jar:5.1.7.RELEASE]
	at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1039) ~[spring-webmvc-5.1.7.RELEASE.jar:5.1.7.RELEASE]
	at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:942) ~[spring-webmvc-5.1.7.RELEASE.jar:5.1.7.RELEASE]
	at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1005) ~[spring-webmvc-5.1.7.RELEASE.jar:5.1.7.RELEASE]
	at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:908) ~[spring-webmvc-5.1.7.RELEASE.jar:5.1.7.RELEASE]
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:660) ~[tomcat-embed-core-9.0.19.jar:9.0.19]
	at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:882) ~[spring-webmvc-5.1.7.RELEASE.jar:5.1.7.RELEASE]
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:741) ~[tomcat-embed-core-9.0.19.jar:9.0.19]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) ~[tomcat-embed-core-9.0.19.jar:9.0.19]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.19.jar:9.0.19]
	at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53) ~[tomcat-embed-websocket-9.0.19.jar:9.0.19]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-9.0.19.jar:9.0.19]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.19.jar:9.0.19]
	at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99) ~[spring-web-5.1.7.RELEASE.jar:5.1.7.RELEASE]
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-5.1.7.RELEASE.jar:5.1.7.RELEASE]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-9.0.19.jar:9.0.19]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.19.jar:9.0.19]
	at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:92) ~[spring-web-5.1.7.RELEASE.jar:5.1.7.RELEASE]
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-5.1.7.RELEASE.jar:5.1.7.RELEASE]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-9.0.19.jar:9.0.19]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.19.jar:9.0.19]
	at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:93) ~[spring-web-5.1.7.RELEASE.jar:5.1.7.RELEASE]
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-5.1.7.RELEASE.jar:5.1.7.RELEASE]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-9.0.19.jar:9.0.19]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.19.jar:9.0.19]
	at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:200) ~[spring-web-5.1.7.RELEASE.jar:5.1.7.RELEASE]
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-5.1.7.RELEASE.jar:5.1.7.RELEASE]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-9.0.19.jar:9.0.19]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.19.jar:9.0.19]
	at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:200) ~[tomcat-embed-core-9.0.19.jar:9.0.19]
	at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) [tomcat-embed-core-9.0.19.jar:9.0.19]
	at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:490) [tomcat-embed-core-9.0.19.jar:9.0.19]
	at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139) [tomcat-embed-core-9.0.19.jar:9.0.19]
	at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) [tomcat-embed-core-9.0.19.jar:9.0.19]
	at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) [tomcat-embed-core-9.0.19.jar:9.0.19]
	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343) [tomcat-embed-core-9.0.19.jar:9.0.19]
	at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:408) [tomcat-embed-core-9.0.19.jar:9.0.19]
	at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66) [tomcat-embed-core-9.0.19.jar:9.0.19]
	at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:836) [tomcat-embed-core-9.0.19.jar:9.0.19]
	at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1747) [tomcat-embed-core-9.0.19.jar:9.0.19]
	at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) [tomcat-embed-core-9.0.19.jar:9.0.19]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_144]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_144]
	at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat-embed-core-9.0.19.jar:9.0.19]
	at java.lang.Thread.run(Thread.java:748) [na:1.8.0_144]
```
Checking the database, the data has been rolled back and matches the earlier state.

That concludes this simple SpringBoot + Zookeeper + Seata example of Dubbo distributed transaction management. Thanks for reading.