springcloud-nacos-seata: Implementing Distributed Transactions

A usage demo of the distributed transaction component Seata in AT mode, integrating Nacos, Spring Boot, Spring Cloud, and MyBatis-Plus, with MySQL as the database.

PS: GitHub code: transaction_example

1. Server-side configuration

1.1 Nacos-server

Startup command (standalone means single-node mode, as opposed to cluster mode):

cd bin

sh startup.sh -m standalone

PS: the shutdown command is sh shutdown.sh

The startup output:

nacos is starting with cluster
nacos is starting,you can check the /usr/local/nacos/logs/start.out

You can then check the Nacos startup log:

cat /usr/local/nacos/logs/start.out

If there are no errors, the startup succeeded.

Visit http://192.168.87.133:8848/nacos/index.html to log in to the Nacos console. The default username and password are both nacos.

Initially, Configuration Management → Configurations is empty; nothing is there yet.

1.2 Seata-server

1.2.1 Edit conf/registry.conf

PS: each application needs a registry.conf in its resources directory; in this demo it is identical to the one in seata-server.

registry {
  # file 、nacos 、eureka、redis、zk、consul、etcd3、sofa
  type = "nacos"

  nacos {
    serverAddr = "192.168.87.133"
    namespace = ""
    cluster = "default"
  }

}

config {
  # file、nacos 、apollo、zk、consul、etcd3
  type = "nacos"

  nacos {
    serverAddr = "192.168.87.133"
    namespace = ""
    cluster = "default"
  }
}

1.2.2 Edit conf/nacos-config.txt

(PS: among the application.properties entries, note that spring.cloud.alibaba.seata.tx-service-group is the service group name; it must correspond to service.vgroup_mapping.${your-service-group} configured in nacos-config.txt.)

The demo has two services, storage-service and order-service; the full configuration is as follows:

transport.type=TCP
transport.server=NIO
transport.heartbeat=true
transport.thread-factory.boss-thread-prefix=NettyBoss
transport.thread-factory.worker-thread-prefix=NettyServerNIOWorker
transport.thread-factory.server-executor-thread-prefix=NettyServerBizHandler
transport.thread-factory.share-boss-worker=false
transport.thread-factory.client-selector-thread-prefix=NettyClientSelector
transport.thread-factory.client-selector-thread-size=1
transport.thread-factory.client-worker-thread-prefix=NettyClientWorkerThread
transport.thread-factory.boss-thread-size=1
transport.thread-factory.worker-thread-size=8
transport.shutdown.wait=3
service.vgroup_mapping.storage-service-group=default
service.vgroup_mapping.order-service-group=default
service.enableDegrade=false
service.disable=false
service.max.commit.retry.timeout=-1
service.max.rollback.retry.timeout=-1
client.async.commit.buffer.limit=10000
client.lock.retry.internal=10
client.lock.retry.times=30
client.lock.retry.policy.branch-rollback-on-conflict=true
client.table.meta.check.enable=true
client.report.retry.count=5
client.tm.commit.retry.count=1
client.tm.rollback.retry.count=1
store.mode=db
store.file.dir=file_store/data
store.file.max-branch-session-size=16384
store.file.max-global-session-size=512
store.file.file-write-buffer-cache-size=16384
store.file.flush-disk-mode=async
store.file.session.reload.read_size=100
store.db.datasource=dbcp
store.db.db-type=mysql
store.db.driver-class-name=com.mysql.jdbc.Driver
store.db.url=jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true
store.db.user=root
store.db.password=123456
store.db.min-conn=1
store.db.max-conn=3
store.db.global.table=global_table
store.db.branch.table=branch_table
store.db.query-limit=100
store.db.lock-table=lock_table
recovery.committing-retry-period=1000
recovery.asyn-committing-retry-period=1000
recovery.rollbacking-retry-period=1000
recovery.timeout-retry-period=1000
transaction.undo.data.validation=true
transaction.undo.log.serialization=jackson
transaction.undo.log.save.days=7
transaction.undo.log.delete.period=86400000
transaction.undo.log.table=undo_log
transport.serialization=seata
transport.compressor=none
metrics.enabled=false
metrics.registry-type=compact
metrics.exporter-list=prometheus
metrics.exporter-prometheus-port=9898
support.spring.datasource.autoproxy=false
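For this demo the critical lines in the listing above are the two vgroup_mapping entries; each client's spring.cloud.alibaba.seata.tx-service-group value must use the same suffix as a mapping key. Side by side (values taken from the configs in this article):

```properties
# nacos-config.txt (seata-server side)
service.vgroup_mapping.order-service-group=default

# application.properties (order-service side) -- suffix must match the key above
spring.cloud.alibaba.seata.tx-service-group=order-service-group
```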

1.3 Initialize Seata's Nacos configuration:

cd conf
sh nacos-config.sh 192.168.87.133

The last line of output indicates the configuration was pushed to Nacos successfully:

\r\n\033[42;37m init nacos config finished, please start seata-server. \033[0m

PS: do not add comments to conf/nacos-config.txt, or you may get init nacos config fail.

For example, I once added a comment (a real trap; for a long while I assumed I had misconfigured something else), and the last line of output was exactly that initialization failure.

Now visit http://192.168.87.133:8848/nacos/index.html again; the configuration list is populated.

 

1.4 Start seata-server

cd bin
sh seata-server.sh -p 8091 -m db
# or
sh seata-server.sh
 

2. Application configuration

PS: this uses the officially provided demo, so the code itself is not reproduced here.
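Only the shape of the demo code is sketched here: the heart of it is a @GlobalTransactional entry point in order-service that writes a local order row and then calls storage-service through Feign. This is an outline, not compilable on its own; names other than StorageFeignClient#deduct (which appears in the error logs later in this article) are assumptions.

```java
// Outline only -- based on the official springcloud-nacos-seata demo.
@Service
public class OrderService {                       // assumed class name

    @Resource
    private OrderMapper orderMapper;              // assumed MyBatis-Plus mapper

    @Resource
    private StorageFeignClient storageFeignClient;

    // @GlobalTransactional (io.seata.spring.annotation) opens the Seata
    // global transaction: the local insert and the remote deduction become
    // two branches that commit or roll back together.
    @GlobalTransactional
    public void placeOrder(String userId, String commodityCode, Integer count) {
        orderMapper.insert(new Order(userId, commodityCode, count)); // branch 1: seata_order DB
        storageFeignClient.deduct(commodityCode, count);             // branch 2: seata_storage DB
    }
}
```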

2.1 order-service

application.properties:

spring.application.name=order-service
server.port=9091

# Nacos registry address
spring.cloud.nacos.discovery.server-addr = 192.168.87.133:8848

# Seata service group; must match the suffix of a service.vgroup_mapping key in the server-side nacos-config.txt
spring.cloud.alibaba.seata.tx-service-group=order-service-group

logging.level.io.seata = debug

# Datasource configuration
spring.datasource.druid.url=jdbc:mysql://192.168.87.133:3306/seata_order?allowMultiQueries=true
spring.datasource.druid.driverClassName=com.mysql.jdbc.Driver
spring.datasource.druid.username=root
spring.datasource.druid.password=123456

registry.conf: 

(PS: the registry.conf under resources in order-service and storage-service is identical to the registry.conf under seata-server's conf directory.)

registry {
  # file 、nacos 、eureka、redis、zk、consul、etcd3、sofa
  type = "nacos"

  nacos {
    serverAddr = "192.168.87.133"
    namespace = ""
    cluster = "default"
  }
}
config {
  # file、nacos 、apollo、zk、consul、etcd3
  type = "nacos"
  nacos {
    serverAddr = "192.168.87.133"
    namespace = ""
    cluster = "default"
  }
}

2.2 storage-service

application.properties:

spring.application.name=storage-service
server.port=9092

# Nacos registry address
spring.cloud.nacos.discovery.server-addr = 192.168.87.133:8848

# Seata service group; must match the suffix of a service.vgroup_mapping key in the server-side nacos-config.txt
spring.cloud.alibaba.seata.tx-service-group=storage-service-group
logging.level.io.seata = debug

# Datasource configuration
spring.datasource.druid.url=jdbc:mysql://192.168.87.133:3306/seata_storage?allowMultiQueries=true
spring.datasource.druid.driverClassName=com.mysql.jdbc.Driver
spring.datasource.druid.username=root
spring.datasource.druid.password=123456

registry.conf:

(PS: the registry.conf under resources in order-service and storage-service is identical to the registry.conf under seata-server's conf directory.)

3. Testing

3.1 Successful startup:

nacos registry, storage-service 192.168.xxxx.xxx:9092 register finished

nacos registry, order-service 192.168.xxxx.xxx:9091 register finished

3.2 Distributed transaction success: simulate a normal order with stock deduction

PS: I ran this test three times; the initial count of product-1 in storage_tbl is 9999999, so the tests completed without issue.

3.3 Distributed transaction failure: simulate a successful order but a failed stock deduction, with both rolled back together

order-service:

2020-01-03 15:10:56.195  INFO 5020 --- [nio-9091-exec-3] i.seata.tm.api.DefaultGlobalTransaction  : [192.168.87.133:8091:2031140107] rollback status: Rollbacked
2020-01-03 15:10:56.196 ERROR 5020 --- [nio-9091-exec-3] o.a.c.c.C.[.[.[/].[dispatcherServlet]    : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is feign.FeignException: status 500 reading StorageFeignClient#deduct(String,Integer); content:
{"timestamp":"2020-01-03T07:10:56.166+0000","status":500,"error":"Internal Server Error","message":"异常:模拟业务异常:Storage branch exception","path":"/storage/deduct"}] with root cause

feign.FeignException: status 500 reading StorageFeignClient#deduct(String,Integer); content:
{"timestamp":"2020-01-03T07:10:56.166+0000","status":500,"error":"Internal Server Error","message":"异常:模拟业务异常:Storage branch exception","path":"/storage/deduct"}

 

storage-service: 

java.lang.RuntimeException: 异常:模拟业务异常:Storage branch exception
	at com.lucifer.storage.service.StorageService.deduct(StorageService.java:37) ~[classes/:na]
	at com.lucifer.storage.service.StorageService$$FastClassBySpringCGLIB$$89a96fbd.invoke(<generated>) ~[classes/:na]
	at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204) ~[spring-core-5.0.10.RELEASE.jar:5.0.10.RELEASE]

PS: neither database table shows any data change, so the rollback test succeeded.
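The rollback behaviour above can be illustrated with a toy sketch of the AT-mode idea in plain Java. This is NOT Seata's API: in reality Seata records before-images in the undo_log table (see transaction.undo.log.table above) and the transaction coordinator drives the branches over the network, but the shape of the mechanism is the same: each branch commits locally while keeping an undo record, and a failure anywhere restores every branch.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Toy illustration of AT-mode rollback -- not Seata's real API.
public class AtModeSketch {

    static class Branch {
        final Map<String, Integer> table;                  // the branch's local "database"
        final Map<String, Integer> undoLog = new HashMap<>();

        Branch(Map<String, Integer> table) { this.table = table; }

        void update(String key, int newValue) {
            undoLog.put(key, table.get(key));              // record before-image (undo log)
            table.put(key, newValue);                      // phase one: local commit
        }

        void rollback() {                                  // phase two: compensate from undo log
            table.putAll(undoLog);
        }
    }

    /** Runs "create order + deduct stock"; returns {orderCount, stock}. */
    static int[] run(boolean failStorage) {
        Map<String, Integer> orders = new HashMap<>();
        orders.put("order-count", 0);
        Map<String, Integer> stock = new HashMap<>();
        stock.put("product-1", 100);

        Branch orderBranch = new Branch(orders);
        Branch storageBranch = new Branch(stock);
        try {
            orderBranch.update("order-count", 1);          // order branch commits locally
            storageBranch.update("product-1", 99);         // storage branch commits locally
            if (failStorage) {                             // simulated business exception
                throw new RuntimeException("Storage branch exception");
            }
        } catch (RuntimeException e) {
            orderBranch.rollback();                        // global rollback hits all branches
            storageBranch.rollback();
        }
        return new int[]{orders.get("order-count"), stock.get("product-1")};
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(run(true)));    // failure: both branches restored
        System.out.println(Arrays.toString(run(false)));   // success: both changes kept
    }
}
```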

================================================================================================

PS: a few problems I ran into while integrating springcloud-nacos-seata:

i.s.c.r.netty.NettyClientChannelManager : no available server to connect.

Points to note:

1. Among the application.properties entries, spring.cloud.alibaba.seata.tx-service-group is the service group name; it must correspond to service.vgroup_mapping.${your-service-group} in nacos-config.txt.

2. Each application needs a registry.conf in its resources directory; in the demo it is the same as the one in seata-server. (That is, both the registry.conf in seata-server's conf directory and the registry.conf under your application's resources must be configured, and they must be identical.)

PS: my problem was that I had configured registry.conf in each application's resources (point 2), but not the registry.conf under seata-server's conf directory; there I had only configured conf/nacos-config.txt.

3. Remember to initialize Seata's configuration in Nacos (section 1.3).

Reference: the official springcloud-nacos-seata demo

 