OGG For Bigdata: Syncing Oracle Data to Kafka (Including Historical Data Initialization)

Syncing Oracle data to Kafka with OGG: initializing historical data with OGG initial-load processes

I have previously written a few posts on syncing data from Oracle (and other databases) to Kafka with OGG:
OGG real-time sync of Oracle data to Kafka, an implementation document (for Flink streaming computation)
OGG For Bigdata 12: syncing Oracle data to different Kafka topics by operation type
Those, however, were test setups; they did not cover how, in real work, to initialize the existing historical data of Oracle tables into Kafka. I have used two approaches for this. The first, rather crude one, is a shell script that exports the data from Oracle as JSON and then writes it to Kafka; the other, introduced here, uses OGG's own initial-load processes to initialize the historical data. The environment in this post is exactly the one built in the earlier posts.
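For reference, the crude first approach can be as small as spooling JSON out of sqlplus and piping the file into the Kafka console producer. The sketch below is only illustrative: the scott/tiger login, the broker address 192.168.1.66:9092, and the column handling are assumptions (and it ignores JSON escaping); it is not the method used later in this post.

#!/bin/bash
# Rough sketch of the "dumb" initialization: spool SCOTT.SCEMP as JSON lines,
# then feed the file to the Kafka console producer.
sqlplus -s scott/tiger <<'EOF' > scemp.json
set pagesize 0 linesize 4000 feedback off heading off trimspool on
select '{"table":"SCOTT.SCEMP","op_type":"I","after":{"EMPNO":'||empno
       ||',"ENAME":"'||ename||'","JOB":"'||job
       ||'","SAL":'||nvl(to_char(sal),'null')
       ||',"DEPTNO":'||deptno||'}}'
  from scemp;
exit
EOF
bin/kafka-console-producer.sh --broker-list 192.168.1.66:9092 --topic SCEMP < scemp.json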
First, a quick look at the current environment's configuration:
[Figure: overview of the current environment configuration]
Because this series of OGG for Bigdata posts delivers JSON messages to Kafka for Flink to consume in real-time computation, and to keep the cost of parsing JSON in the Flink code as low as possible and consumption fast, my posts map insert, delete, and update/pkupdate roughly as follows:
1. For an insert, OGG for Bigdata generates a JSON message like this:

 {"table":"SCOTT.SCEMP","op_type":"I","op_ts":"2019-09-16 10:17:08.006650","current_ts":"2019-09-16T10:25:01.379000","pos":"00000000000000002199","tokens":{"TKN-OP-TYPE":"INSERT"},"after":{"EMPNO":1232,"ENAME":"    FORD","JOB":"ANALYST","MGR":7566,"HIREDATE":"1981-12-03 00:00:00","SAL":3000.00,"COMM":null,"DEPTNO":20}}

That is, the effective data sits in the after section; these messages are left unchanged.
2. For a delete, OGG for Bigdata generates a JSON message like this:

 {"table":"SCOTT.SCEMP","op_type":"D","op_ts":"2019-09-16 10:17:08.006650","current_ts":"2019-09-16T10:25:01.379000","pos":"00000000000000002199","tokens":{"TKN-OP-TYPE":"DELETE"},"BEFORE":{"EMPNO":1232,"ENAME":"    FORD","JOB":"ANALYST","MGR":7566,"HIREDATE":"1981-12-03 00:00:00","SAL":3000.00,"COMM":null,"DEPTNO":20}}

That is, the effective data sits in the before section. Unlike in the earlier posts, insert, delete and update are no longer mapped to different topics; they all go to a single topic. That creates a problem for Flink, because the JSON structures differ: an insert's effective data is in after while a delete's is in before. To keep the JSON parsing uniform, the effective data of a delete is also placed in after. How? By converting the delete into an insert. The transposed JSON looks like this:

{"table":"SCOTT.SCEMP","op_type":"I","op_ts":"2019-09-16 10:17:08.006650","current_ts":"2019-09-16T10:25:01.379000","pos":"00000000000000002199","tokens":{"TKN-OP-TYPE":"DELETE"},"after":{"EMPNO":1232,"ENAME":"    FORD","JOB":"ANALYST","MGR":7566,"HIREDATE":"1981-12-03 00:00:00","SAL":3000.00,"COMM":null,"DEPTNO":20}}

After the transposition, though, op_type also becomes I, so how will Flink later know the record was actually a delete? This is exactly why the previous post added the TKN-OP-TYPE token in the source-side extract to record the original operation: even when the replicat transposes the record and op_type changes, TKN-OP-TYPE is carried over from the source and never changes. (A consumer-side check of this appears after item 4 below.)
3. For an ordinary update, OGG for Bigdata generates a JSON message like this:

{"table":"SCOTT.SCEMP","op_type":"U","op_ts":"2019-09-16 16:23:50.607615","current_ts":"2019-09-16T16:24:01.925000","pos":"00000000230000015887","tokens":{"TKN-OP-TYPE":"SQL COMPUPDATE"},"before":{"EMPNO":8888},"aft
er":{"EMPNO":6666,"ENAME":"zyand"}}

This JSON carries only the supplementally logged primary key and the modified column values. The first thing to do is take the update's after data and make it a standalone JSON:

{"table":"SCOTT.SCEMP","op_type":"I","op_ts":"2019-09-16 16:36:58.607186","current_ts":"2019-09-16T16:37:06.891000","pos":"00000000230000016276","tokens":{"TKN-OP-TYPE":"SQL COMPUPDATE"},"after":{"EMPNO":6666,"ENAME":"zyand"}}

Why not take the before data? Because the before data is of no use to us. Also, since the columns Flink computes over are empno, ename, job, sal and deptno, even if only ename changed we still want the unchanged columns and their current values written to Kafka, to keep the JSON message complete and make Flink's processing easier.
4. For a pkupdate, whether it changes the primary key together with other columns or the primary key alone, the original message looks like this:

{"table":"SCOTT.SCEMP","op_type":"U","op_ts":"2019-09-16 15:18:29.607061","current_ts":"2019-09-16T15:46:06.534000","pos":"00000000230000013943","tokens":{"TKN-OP-TYPE":"PK UPDATE"},"before":{"EMPNO":6666},"after":{"EMPNO":8888,"ENAME":"zyand","JOB":"kfc","SAL":100.00,"DEPTNO":30}}

Here the pkupdate's before data is split out into a separate JSON, in which the other computed metrics (ename, job, sal, deptno) are present alongside the primary key but are all set to null:

{"table":"SCOTT.SCEMP","op_type":"I","op_ts":"2019-09-17 09:36:39.480539","current_ts":"2019-09-17T09:36:52.022000","pos":"00000000230000021370","tokens":{"TKN-OP-TYPE":"PK UPDATE"},"after":{"EMPNO":6666,"ENAME":null,"JOB":null,"SAL":null,"DEPTNO":null}}

The after data is also split out on its own, keeping the primary key and all other columns at their latest values:

{"table":"SCOTT.SCEMP","op_type":"I","op_ts":"2019-09-17 09:36:39.480539","current_ts":"2019-09-17T09:37:12.096000","pos":"00000000230000021370","tokens":{"TKN-OP-TYPE":"PK UPDATE"},"after":{"EMPNO":8888,"ENAME":"zyd","JOB":"kfc","SAL":100.00,"DEPTNO":30}}

This is done, first, to keep the JSON messages complete, as explained above. Second, after a primary-key change the JSON written under the old key is still in Kafka, and the new key's record carries the same values as the old one for every column the change did not touch, so Flink would count the row twice. To avoid that, after producing the new key's JSON (new key plus the other supplementally logged columns), we also write one more JSON for the old key in which the latest values of all other columns are null. When Flink then takes the latest state per key, no double counting occurs (see the jq sketch below).
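Before the implementation, here is the consumer-side sanity check referenced in items 2 and 4. It is a throwaway jq pipeline, not part of the setup: it assumes jq is installed and is run from the Kafka install directory, using the same console consumer that appears later in this post.

# Real deletes: op_type is always "I" after the transposition, so filter on the
# TKN-OP-TYPE token instead:
bin/kafka-console-consumer.sh --zookeeper 192.168.1.66:2181 --topic SCEMP --from-beginning \
  | jq -c 'select(.tokens."TKN-OP-TYPE" == "DELETE") | .after'

# Latest-state-per-key sum of SAL: the null-out record written for an old
# primary key contributes 0, so a pkupdate is not counted twice:
bin/kafka-console-consumer.sh --zookeeper 192.168.1.66:2181 --topic SCEMP --from-beginning --timeout-ms 5000 \
  | jq -s 'group_by(.after.EMPNO) | map(.[-1].after.SAL // 0) | add'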
Below is a rough flow diagram of the logic above:
[Figure: flow diagram of the insert/delete/update/pkupdate mapping]
Now for the actual experiment:

-- All source-side tables below are operated on under the scott user.

I. Create the test tables on the source

create table scemp as select * from emp;
create table scdept as select * from dept;
ALTER TABLE scemp  ADD CONSTRAINT PK_scemp PRIMARY KEY (EMPNO);
ALTER TABLE scdept  ADD CONSTRAINT PK_scdept PRIMARY KEY (DEPTNO);


II. Source-side OGG operations

1. Add supplemental logging

[oracle@source ogg12]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.2.2 OGGCORE_12.2.0.2.0_PLATFORMS_170630.0419_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Jun 30 2017 14:42:26
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2017, Oracle and/or its affiliates. All rights reserved.



GGSCI (source) 16> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
EXTRACT     ABENDED     D_KA        00:00:00      5714:15:13  
EXTRACT     ABENDED     D_KF        00:00:00      6507:59:26  
EXTRACT     ABENDED     D_SC        00:00:00      140:41:02   
EXTRACT     RUNNING     D_ZT        00:00:00      00:00:04    
EXTRACT     STOPPED     E_KA        00:00:00      2692:41:17  
EXTRACT     ABENDED     E_SC        00:00:00      00:29:58    
EXTRACT     STOPPED     E_ZT        00:00:00      00:15:43   


GGSCI (source) 2> dblogin userid ogg password ogg
Successfully logged into database.

GGSCI (source as ogg@orcl) 18> add trandata SCOTT.SCEMP

Logging of supplemental redo data enabled for table SCOTT.SCEMP.
TRANDATA for scheduling columns has been added on table 'SCOTT.SCEMP'.
TRANDATA for instantiation CSN has been added on table 'SCOTT.SCEMP'.
GGSCI (source as ogg@orcl) 19> add trandata SCOTT.SCDEPT

Logging of supplemental redo data enabled for table SCOTT.SCDEPT.
TRANDATA for scheduling columns has been added on table 'SCOTT.SCDEPT'.
TRANDATA for instantiation CSN has been added on table 'SCOTT.SCDEPT'.
GGSCI (source as ogg@orcl) 20> info trandata SCOTT.SC*

Logging of supplemental redo log data is enabled for table SCOTT.SCDEPT.

Columns supplementally logged for table SCOTT.SCDEPT: DEPTNO.

Prepared CSN for table SCOTT.SCDEPT: 2151646
Logging of supplemental redo log data is enabled for table SCOTT.SCEMP.

Columns supplementally logged for table SCOTT.SCEMP: EMPNO.

Prepared CSN for table SCOTT.SCEMP: 2151611

So far only the primary keys have supplemental logging. For future DML this is fine for insert and delete: we have arranged for all their data to land in after, which keeps JSON parsing simple. But when an update is delivered to Kafka as JSON and consumed by Flink, only the primary key and the modified columns carry values, while in the future SCEMP's empno, ename, job, sal and deptno columns may all be used, and every column of the scdept table will be used; whatever columns an update touches, every JSON message delivered to Kafka must contain all of those columns and their values. We therefore add a supplemental log group on the scemp column combination empno, ename, job, sal, deptno, and all-column supplemental logging on scdept, to support the later Flink computation:

[oracle@source ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Fri Sep 6 15:46:02 2019

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> alter table  scott.scemp add SUPPLEMENTAL LOG GROUP groupaa(empno,ename,job,sal,deptno) always;

Table altered.

SQL> ALTER TABLE scott.scdept add SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

Table altered.

2. Configure the initial-load extract on the source

Data initialization means syncing the required data that already exists in the source Oracle database to the target. Configure the initial-load extract:

GGSCI (source) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
EXTRACT     ABENDED     D_KA        00:00:00      5714:15:13  
EXTRACT     ABENDED     D_KF        00:00:00      6507:59:26  
EXTRACT     ABENDED     D_SC        00:00:00      140:41:02   
EXTRACT     RUNNING     D_ZT        00:00:00      00:00:04    
EXTRACT     STOPPED     E_KA        00:00:00      2692:41:17  
EXTRACT     ABENDED     E_SC        00:00:00      00:29:58    
EXTRACT     STOPPED     E_ZT        00:00:00      00:15:43   


GGSCI (source) 2> dblogin userid ogg password ogg
Successfully logged into database.

GGSCI (source as ogg@orcl) 3> add extract init01,sourceistable
EXTRACT added.
GGSCI (source as ogg@orcl) 4> edit params init01
Add the following configuration:
EXTRACT init01
SETENV (NLS_LANG=AMERICAN_AMERICA.AL32UTF8)
USERID ogg,PASSWORD ogg
RMTHOST 192.168.1.66, MGRPORT 7809
RMTFILE ./dirdat/ed,maxfiles 999, megabytes 500
----------SCOTT.SCEMP
table SCOTT.SCEMP,tokens(
TKN-OP-TYPE = @GETENV ('GGHEADER', 'OPTYPE')
);
----------SCOTT.SCDEPT
table SCOTT.SCDEPT,tokens(
TKN-OP-TYPE = @GETENV ('GGHEADER', 'OPTYPE')
);

3. Generate the table definitions on the source

GoldenGate provides the DEFGEN utility for generating data definitions, which GoldenGate processes reference when the source and target table definitions differ. Before running DEFGEN, create a parameter file for it:

GGSCI (source) 1> edit params init_scott
Add the following configuration:
defsfile /u01/app/oracle/ogg12/dirdef/init_scott.def
userid ogg,password ogg
table scott.SCEMP;
table scott.SCDEPT;


GGSCI (source) 4> exit
Generating the definition file is done from the shell. If the file named in the configuration already exists, the command below will fail, so delete it first:
[oracle@source ogg12]$ cd dirdef/
[oracle@source dirdef]$ ls
emp.def  hdfs.def  init_emp.def  init_scott.def  kafka.def  scott.def  ztvoucher.def
[oracle@source dirdef]$ rm -rf init_scott.def 
[oracle@source ogg12]$ ./defgen paramfile dirprm/init_scott.prm 

***********************************************************************
        Oracle GoldenGate Table Definition Generator for Oracle
      Version 12.2.0.2.2 OGGCORE_12.2.0.2.0_PLATFORMS_170630.0419
   Linux, x64, 64bit (optimized), Oracle 11g on Jun 30 2017 11:35:56
 
Copyright (C) 1995, 2017, Oracle and/or its affiliates. All rights reserved.


                    Starting at 2019-09-10 15:41:15
***********************************************************************

Operating System Version:
Linux
Version #2 SMP Tue May 17 07:23:38 PDT 2016, Release 4.1.12-37.4.1.el6uek.x86_64
Node: source
Machine: x86_64
                         soft limit   hard limit
Address Space Size   :    unlimited    unlimited
Heap Size            :    unlimited    unlimited
File Size            :    unlimited    unlimited
CPU Time             :    unlimited    unlimited

Process id: 84810

***********************************************************************
**            Running with the following parameters                  **
***********************************************************************
defsfile /u01/app/oracle/ogg12/dirdef/init_scott.def
userid ogg,password ***
table scott.SCEMP;
Retrieving definition for SCOTT.SCEMP.
table scott.SCDEPT;
Retrieving definition for SCOTT.SCDEPT.


Definitions generated for 2 tables in /u01/app/oracle/ogg12/dirdef/init_scott.def.


Send the generated definition file to the target; the target replicat process will use it.

[oracle@source ogg12]$ scp  /u01/app/oracle/ogg12/dirdef/init_scott.def [email protected]:/hadoop/ogg12/dirdef/
[email protected]'s password: 
init_scott.def                                                                                                                                                                  100% 2354     2.3KB/s   00:00    

4. Configure the extract process

The environment already contains an extract and a pump that feed 192.168.1.66, namely e_zt and d_zt:

GGSCI (source) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
EXTRACT     ABENDED     D_KA        00:00:00      5714:15:13  
EXTRACT     ABENDED     D_KF        00:00:00      6507:59:26  
EXTRACT     ABENDED     D_SC        00:00:00      140:41:02   
EXTRACT     RUNNING     D_ZT        00:00:00      00:00:04    -- this one
EXTRACT     STOPPED     E_KA        00:00:00      2692:41:17  
EXTRACT     ABENDED     E_SC        00:00:00      00:29:58    
EXTRACT     STOPPED     E_ZT        00:00:00      00:15:43     -- this one

The Kafka replicat on 192.168.1.66 also already exists and is stopped:

[root@hadoop ogg12]# ./ggsci

Oracle GoldenGate for Big Data
Version 12.3.2.1.1 (Build 005)

Oracle GoldenGate Command Interpreter
Version 12.3.0.1.2 OGGCORE_OGGADP.12.3.0.1.2_PLATFORMS_180712.2305
Linux, x64, 64bit (optimized), Generic on Jul 13 2018 00:46:09
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2018, Oracle and/or its affiliates. All rights reserved.


GGSCI (hadoop) 2> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
REPLICAT    STOPPED     RHDFS       00:00:00      454:26:03   
REPLICAT    STOPPED     RKAFKA      00:00:00      2637:46:02   -- this one

Now we only need to add the two tables above to e_zt; the extract configuration becomes:

GGSCI (source) 2> edit params e_zt

Write in the following content:
extract e_zt
userid ogg,password ogg
setenv(NLS_LANG=AMERICAN_AMERICA.AL32UTF8)
setenv(ORACLE_SID="orcl")
reportcount every 30 minutes,rate
numfiles 5000
discardfile ./dirrpt/e_zt.dsc,append,megabytes 1000
warnlongtrans 2h,checkinterval 30m
exttrail ./dirdat/zt
dboptions allowunusedcolumn
tranlogoptions archivedlogonly
tranlogoptions altarchivelogdest primary /u01/arch
dynamicresolution
fetchoptions nousesnapshot
ddl include mapped
ddloptions addtrandata,report
notcpsourcetimer
NOCOMPRESSDELETES
NOCOMPRESSUPDATES
GETUPDATEBEFORES
----------SCOTT.ZTVOUCHER
table SCOTT.ZTVOUCHER,keycols(MANDT,GJAHR,BUKRS,BELNR,BUZEI,MONAT,BUDAT),tokens(
TKN-OP-TYPE = @GETENV ('GGHEADER', 'OPTYPE')
);
----------SCOTT.ORA_HDFS
table SCOTT.ORA_HDFS,tokens(
TKN-OP-TYPE = @GETENV ('GGHEADER', 'OPTYPE')
);
----------SCOTT.SCEMP
table SCOTT.SCEMP,tokens(
TKN-OP-TYPE = @GETENV ('GGHEADER', 'OPTYPE')
);
----------SCOTT.SCDEPT
table SCOTT.SCDEPT,tokens(
TKN-OP-TYPE = @GETENV ('GGHEADER', 'OPTYPE')
);

5. Configure the pump process

Add the two tables above:

extract d_zt
rmthost 192.168.1.66,mgrport 7809,compress
userid ogg,password ogg
PASSTHRU
numfiles 5000
rmttrail ./dirdat/zt
dynamicresolution
table scott.ztvoucher;
table scott.ora_hdfs;
table scott.scemp;
table scott.scdept;

III. Operations on the ODS (target) side

1. Configure the initial-load replicat

GGSCI (hadoop) 3> ADD replicat init01, specialrun
REPLICAT added.


GGSCI (hadoop) 4> edit params init01
Add the following configuration:
SPECIALRUN
end runtime
setenv(NLS_LANG="AMERICAN_AMERICA.AL32UTF8")
targetdb libfile libggjava.so set property=./dirprm/kafka.props
SOURCEDEFS ./dirdef/init_scott.def
EXTFILE ./dirdat/ed
reportcount every 1 minutes, rate
grouptransops 10000
map scott.SCEMP,target SCOTT.SCEMP;
map scott.SCDEPT,target SCOTT.SCDEPT;


2. Configure the replicat (apply) process

The rkafka process already exists from before, so we only need to add the two tables to it.
There is one problem left. Updated data lets Flink compute correctly, but after a pkupdate the record under the old primary key would still be included in the computation, so the same row (only its key changed, nothing else, which means two messages in Kafka) would be counted twice. And we agreed earlier that, for Flink's convenience, all data is read from the JSON's after section. So for pkupdate, after a post-update record is written to Kafka, I write one more record for the pre-update state: it keeps the old primary key but sets every other computed metric to null. The old key's latest values are then all null, so when Flink takes the latest state per key it treats them as empty, in effect subtracting the pre-update values and counting only the post-update ones; no double counting occurs. The earlier configuration was also rather redundant, so here is the latest replicat configuration:

GGSCI (hadoop) 9> view params rkafka

REPLICAT rkafka
-- Trail file for this example is located in "AdapterExamples/trail" directory
-- Command to add REPLICAT
-- add replicat rkafka, exttrail AdapterExamples/trail/tr
TARGETDB LIBFILE libggjava.so SET property=dirprm/kafka.props
REPORTCOUNT EVERY 1 MINUTES, RATE
allowduptargetmap
-- SCOTT.SCEMP, group 1: ordinary updates; INSERTUPDATES rewrites each update
-- as an insert so its data lands in the "after" section
NOINSERTDELETES
IGNOREDELETES
IGNOREINSERTS
GETUPDATES
INSERTUPDATES
MAP SCOTT.SCEMP, TARGET SCOTT.SCEMP;
-- SCOTT.SCEMP, group 2: for PK updates only, emit an extra record that keeps
-- the old key (before.EMPNO) and nulls out every other computed column
IGNOREDELETES
IGNOREINSERTS
MAP SCOTT.SCEMP, TARGET SCOTT.SCEMP,colmap(
EMPNO=before.EMPNO,
ENAME=@COLSTAT (NULL), 
JOB=@COLSTAT (NULL), 
SAL=@COLSTAT (NULL), 
DEPTNO=@COLSTAT (NULL)
),filter(@strfind(@token('TKN-OP-TYPE'),'PK UPDATE') >0);
-- SCOTT.SCEMP, group 3: inserts pass through unchanged
NOINSERTUPDATES
GETINSERTS
IGNOREDELETES
IGNOREUPDATES
MAP SCOTT.SCEMP, TARGET SCOTT.SCEMP;
-- SCOTT.SCEMP, group 4: deletes; INSERTDELETES converts each delete to an
-- insert, so the before image lands in "after" (TKN-OP-TYPE still says DELETE)
IGNOREUPDATES
IGNOREINSERTS
GETDELETES
INSERTDELETES
MAP SCOTT.SCEMP, TARGET SCOTT.SCEMP;
-- SCOTT.SCDEPT: the same four groups
NOINSERTDELETES
IGNOREDELETES
IGNOREINSERTS
GETUPDATES
INSERTUPDATES
MAP SCOTT.SCDEPT, TARGET SCOTT.SCDEPT;
IGNOREDELETES
IGNOREINSERTS
MAP SCOTT.SCDEPT, TARGET SCOTT.SCDEPT,colmap(
DEPTNO=before.DEPTNO,
DNAME=@COLSTAT (NULL), 
LOC=@COLSTAT (NULL), 
TESS=@COLSTAT (NULL)
),filter(@strfind(@token('TKN-OP-TYPE'),'PK UPDATE') >0);
NOINSERTUPDATES
GETINSERTS
IGNOREDELETES
IGNOREUPDATES
MAP SCOTT.SCDEPT, TARGET SCOTT.SCDEPT;
IGNOREUPDATES
IGNOREINSERTS
GETDELETES
INSERTDELETES
MAP SCOTT.SCDEPT, TARGET SCOTT.SCDEPT;

IV. Syncing the data

1. Source-side operations

Start the processes:

GGSCI (source) 9> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
EXTRACT     ABENDED     D_KA        00:00:00      5853:30:25  
EXTRACT     ABENDED     D_KF        00:00:00      6647:14:38  
EXTRACT     ABENDED     D_SC        00:00:00      279:56:15   
EXTRACT     RUNNING     D_ZT        00:00:00      00:00:07    
EXTRACT     STOPPED     E_KA        00:00:00      2831:56:29  
EXTRACT     ABENDED     E_SC        00:00:00      00:08:54    
EXTRACT     STOPPED     E_ZT        00:00:00      137:51:42   


GGSCI (source) 10> start e_zt

Sending START request to MANAGER ...
EXTRACT E_ZT starting
GGSCI (source) 12> start d_zt
EXTRACT D_ZT is already running.

2. Run the initial-load extract on the source

GGSCI (source) 3> start init01

Sending START request to MANAGER ...
EXTRACT INIT01 starting


GGSCI (source) 4> info init01

EXTRACT    INIT01    Last Started 2019-09-16 10:14   Status STARTING
Checkpoint Lag       Not Available
Process ID           87517
Log Read Checkpoint  Table SCOTT.SCDEPT
                     2019-09-16 10:14:47  Record 4
Task                 SOURCEISTABLE


GGSCI (source) 5> info init01

EXTRACT    INIT01    Last Started 2019-09-16 10:17   Status STOPPED
Checkpoint Lag       Not Available
Log Read Checkpoint  Table SCOTT.SCDEPT
                     2019-09-16 10:17:01  Record 4
Task                 SOURCEISTABLE


Alternatively, run the initial load directly from the command line:

[oracle@source ogg12]$ ./extract paramfile dirprm/init01.prm  reportfile dirrpt/init01.rpt
[oracle@source ogg12]$ tail -30f dirrpt/init01.rpt 

2019-09-16 09:45:32  INFO    OGG-02911  Processing table SCOTT.SCEMP.

2019-09-16 09:45:32  INFO    OGG-02911  Processing table SCOTT.SCDEPT.

***********************************************************************
*                   ** Run Time Statistics **                         *
***********************************************************************


Report at 2019-09-16 09:45:32 (activity since 2019-09-16 09:45:26)

Output to ./dirdat/ed:

From Table SCOTT.SCEMP:
       #                   inserts:        16
       #                   updates:         0
       #                   deletes:         0
       #                  discards:         0
From Table SCOTT.SCDEPT:
       #                   inserts:         4
       #                   updates:         0
       #                   deletes:         0
       #                  discards:         0


REDO Log Statistics
  Bytes parsed                    0
  Bytes output                 4417

Go to the target and check the generated trail file:

[root@hadoop dirdat]# ls -ltr ed*
-rw-r----- 1 root root 6265 Sep 16 10:17 ed000000

[root@hadoop ogg12]# cat loginit_zt 
cd ./dirdat
open ed000000 
ghdr on
detail on
detail data
usertoken on
usertoken detail
ggstoken on
ggstoken detail
headertoken on
headertoken detail
reclen 0
pos last
pos rev
logtrail
pos
[root@hadoop dirdat]# cd ..
[root@hadoop ogg12]# ./logdump 

Oracle GoldenGate Log File Dump Utility
Version 12.3.0.1.2 OGGCORE_OGGADP.12.3.0.1.2_PLATFORMS_180712.2305

Copyright (C) 1995, 2018, Oracle and/or its affiliates. All rights reserved.


 
Logdump 91 >obey loginit_zt
cd ./dirdat
open ed000000
Current LogTrail is /hadoop/ogg12/dirdat/ed000000 
ghdr on
detail on
detail data
usertoken on
usertoken detail
ggstoken on
ggstoken detail
headertoken on
headertoken detail
reclen 0
Reclen set to 0 
pos last
Reading forward from RBA 6265 
pos rev
Reading in reverse from RBA 6265 
logtrail
Current LogTrail is /hadoop/ogg12/dirdat/ed000000 
pos
Current position is RBA 6265   Reverse 
Logdump 92 >pos last 
Reading in reverse from RBA 6265 
Logdump 93 >pos rev 
Reading in reverse from RBA 6265 
Logdump 94 >n 
TokenID x47 'G' Record Header    Info x01  Length  129 
TokenID x48 'H' GHDR             Info x00  Length   36 
 450c 0041 3600 05ff e26e 1fa8 d5b8 f202 0000 0000 | E..A6....n..........  
 0000 0000 0000 0000 0352 0000 0001 0000           | .........R......  
TokenID x44 'D' Data             Info x00  Length   54 
TokenID x55 'U' User Tokens      Info x00  Length   19 
TokenID x5a 'Z' Record Trailer   Info x01  Length  129 
___________________________________________________________________ 
Hdr-Ind    :     E  (x45)     Partition  :     .  (x0c)  
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)  
RecLength  :    54  (x0036)   IO Time    : 2019/09/16 10:17:08.011.746   
IOType     :     5  (x05)     OrigNode   :   255  (xff) 
TransInd   :     .  (x03)     FormatType :     R  (x52) 
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00) 
AuditRBA   :          0       AuditPos   : 0 
Continued  :     N  (x00)     RecCount   :     1  (x01) 

2019/09/16 10:17:08.011.746 Insert               Len    54 RBA 6136 
Name: SCOTT.SCDEPT  (TDR Index: 2) 
After  Image:                                             Partition 12    U s   
 0000 000a 0000 0000 0000 0000 0028 0001 000e 0000 | .............(......  
 000a 4f50 4552 4154 494f 4e53 0002 000a 0000 0006 | ..OPERATIONS........  
 424f 5354 4f4e 0003 0004 ffff 0000                | BOSTON........  
Column     0 (x0000), Len    10 (x000a)  
 0000 0000 0000 0000 0028                          | .........(  
Column     1 (x0001), Len    14 (x000e)  
 0000 000a 4f50 4552 4154 494f 4e53                | ....OPERATIONS  
Column     2 (x0002), Len    10 (x000a)  
 0000 0006 424f 5354 4f4e                          | ....BOSTON  
Column     3 (x0003), Len     4 (x0004)  
 ffff 0000                                         | ....  
  
User tokens:   19 bytes 
TKN-OP-TYPE         : INSERT 


The data has arrived.

3. Run the initial-load replicat on the target

First, look at the current topics in Kafka:

[root@hadoop kafka]# cat ./list.sh 
#!/bin/bash
bin/kafka-topics.sh -describe -zookeeper 192.168.1.66:2181
[root@hadoop kafka]# ./list.sh 
Topic:DEPT	PartitionCount:1	ReplicationFactor:1	Configs:
	Topic: DEPT	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
Topic:EMP	PartitionCount:1	ReplicationFactor:1	Configs:
	Topic: EMP	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
Topic:ZTVOUCHER_DEL	PartitionCount:1	ReplicationFactor:1	Configs:
	Topic: ZTVOUCHER_DEL	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
Topic:ZTVOUCHER_INS	PartitionCount:1	ReplicationFactor:1	Configs:
	Topic: ZTVOUCHER_INS	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
Topic:kylin_streaming_topic	PartitionCount:1	ReplicationFactor:1	Configs:
	Topic: kylin_streaming_topic	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
Topic:scott	PartitionCount:1	ReplicationFactor:1	Configs:
	Topic: scott	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
Topic:test	PartitionCount:1	ReplicationFactor:1	Configs:
	Topic: test	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
Topic:zttest	PartitionCount:1	ReplicationFactor:1	Configs:
	Topic: zttest	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
Topic:ztvoucher	PartitionCount:1	ReplicationFactor:1	Configs:
	Topic: ztvoucher	Partition: 0	Leader: 0	Replicas: 0	Isr: 0

Kafka is already configured to auto-create missing topics; that behaviour comes from the broker setting shown below. For tidiness, though, we create the topics by hand.
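(The auto-creation comes from the broker setting auto.create.topics.enable, which defaults to true; as a reminder, the relevant line, assuming the default config/server.properties path of a standard install, would be:)

# config/server.properties -- brokers create missing topics on first use
auto.create.topics.enable=true

Now the manual creation: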

 [root@hadoop kafka]# cat create.sh 
read -p "input topic:" name
bin/kafka-topics.sh --create --zookeeper 192.168.1.66:2181 --replication-factor 1 --partitions 1 --topic $name
[root@hadoop kafka]# ./create.sh 
input topic:SCEMP
Created topic "SCEMP".
[root@hadoop kafka]# ./create.sh 
input topic:SCDEPT
Created topic "SCDEPT".

Open two separate sessions to consume the two topics:

[root@hadoop kafka]# cat console.sh 
#!/bin/bash
read -p "input topic:" name

bin/kafka-console-consumer.sh --zookeeper 192.168.1.66:2181 --topic $name --from-beginning
[root@hadoop kafka]# ./console.sh 
input topic:SCEMP
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
[root@hadoop kafka]# ./console.sh 
input topic:SCDEPT
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].

Start the data initialization:

[root@hadoop ogg12]# ./replicat paramfile ./dirprm/init01.prm reportfile ./dirrpt/init01.rpt -p INITIALDATALOAD

Check the log:

[root@hadoop ogg12]# tail -f dirrpt/init01.rpt 
Sep 16, 2019 10:24:59 AM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions
INFO: Loading XML bean definitions from class path resource [oracle/goldengate/datasource/DataSource-context.xml]
Sep 16, 2019 10:24:59 AM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions
INFO: Loading XML bean definitions from class path resource [config/ggue-context.xml]
Sep 16, 2019 10:24:59 AM org.springframework.beans.factory.support.DefaultListableBeanFactory registerBeanDefinition
INFO: Overriding bean definition for bean 'dataSourceConfig' with a different definition: replacing [Generic bean: class [oracle.goldengate.datasource.DataSourceConfig]; scope=singleton; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null; defined in class path resource [oracle/goldengate/datasource/DataSource-context.xml]] with [Generic bean: class [oracle.goldengate.datasource.DataSourceConfig]; scope=singleton; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null; defined in class path resource [config/ggue-context.xml]]
Sep 16, 2019 10:24:59 AM org.springframework.beans.factory.support.DefaultListableBeanFactory registerBeanDefinition
INFO: Overriding bean definition for bean 'versionInfo' with a different definition: replacing [Generic bean: class [oracle.goldengate.util.VersionInfo]; scope=singleton; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null; defined in class path resource [oracle/goldengate/datasource/DataSource-context.xml]] with [Generic bean: class [oracle.goldengate.util.VersionInfo]; scope=singleton; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null; defined in class path resource [config/ggue-context.xml]]
Sep 16, 2019 10:24:59 AM org.springframework.context.support.AbstractApplicationContext prepareRefresh
INFO: Refreshing org.springframework.context.support.GenericApplicationContext@501edcf1: startup date [Mon Sep 16 10:24:59 CST 2019]; root of context hierarchy

Oracle GoldenGate for Big Data, 12.3.2.1.1.005
Copyright (c) 2007, 2018. Oracle and/or its affiliates. All rights reserved
Built with Java 1.8.0_161  (class version: 52.0)
SOURCEDEFS ./dirdef/init_scott.def
EXTFILE ./dirdat/ed
reportcount every 1 minutes, rate
grouptransops 10000
map scott.SCEMP,target SCOTT.SCEMP;
map scott.SCDEPT,target SCOTT.SCDEPT;

2019-09-16 10:25:01  INFO    OGG-01815  Virtual Memory Facilities for: COM
    anon alloc: mmap(MAP_ANON)  anon free: munmap
    file alloc: mmap(MAP_SHARED)  file free: munmap
    target directories:
    /hadoop/ogg12/dirtmp.

Database Version:

Database Language and Character Set:

2019-09-16 10:25:01  INFO    OGG-02243  Opened trail file /hadoop/ogg12/dirdat/ed000000 at 2019-09-16 10:25:01.285030.

2019-09-16 10:25:01  INFO    OGG-03506  The source database character set, as determined from the trail file, is UTF-8.

***********************************************************************
**                     Run Time Messages                             **
***********************************************************************


2019-09-16 10:25:01  INFO    OGG-02243  Opened trail file /hadoop/ogg12/dirdat/ed000000 at 2019-09-16 10:25:01.303836.

2019-09-16 10:25:01  WARNING OGG-02761  Source definitions file, ./dirdef/init_scott.def, is ignored because trail file /hadoop/ogg12/dirdat/ed000000 contains table definitions.

2019-09-16 10:25:01  INFO    OGG-06505  MAP resolved (entry scott.SCEMP): map "SCOTT"."SCEMP",target SCOTT.SCEMP.

2019-09-16 10:25:01  INFO    OGG-02756  The definition for table SCOTT.SCEMP is obtained from the trail file.

2019-09-16 10:25:01  INFO    OGG-06511  Using following columns in default map by name: EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO.

2019-09-16 10:25:01  INFO    OGG-06510  Using the following key columns for target table SCOTT.SCEMP: EMPNO.

Check consumption of the two topics:

[root@hadoop kafka]# ./console.sh 
input topic:SCEMP
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
{"table":"SCOTT.SCEMP","op_type":"I","op_ts":"2019-09-16 10:17:08.006650","current_ts":"2019-09-16T10:25:01.379000","pos":"00000000000000002199","tokens":{"TKN-OP-TYPE":"INSERT"},"after":{"EMPNO":1232,"ENAME":"
FORD","JOB":"ANALYST","MGR":7566,"HIREDATE":"1981-12-03 00:00:00","SAL":3000.00,"COMM":null,"DEPTNO":20}}{"table":"SCOTT.SCEMP","op_type":"I","op_ts":"2019-09-16 10:17:08.006650","current_ts":"2019-09-16T10:25:21.767000","pos":"00000000000000002396","tokens":{"TKN-OP-TYPE":"INSERT"},"after":{"EMPNO":1222,"ENAME":"
FORD","JOB":"ANALYST","MGR":7566,"HIREDATE":"1981-12-03 00:00:00","SAL":3000.00,"COMM":null,"DEPTNO":20}}{"table":"SCOTT.SCEMP","op_type":"I","op_ts":"2019-09-16 10:17:08.006650","current_ts":"2019-09-16T10:25:31.787000","pos":"00000000000000002593","tokens":{"TKN-OP-TYPE":"INSERT"},"after":{"EMPNO":3211,"ENAME":"
FORD","JOB":"ANALYST","MGR":7566,"HIREDATE":"1981-12-03 00:00:00","SAL":3000.00,"COMM":null,"DEPTNO":20}}{"table":"SCOTT.SCEMP","op_type":"I","op_ts":"2019-09-16 10:17:08.006650","current_ts":"2019-09-16T10:25:41.803000","pos":"00000000000000002790","tokens":{"TKN-OP-TYPE":"INSERT"},"after":{"EMPNO":7369,"ENAME":"
er","JOB":"CLERK","MGR":7902,"HIREDATE":"1980-12-17 00:00:00","SAL":800.00,"COMM":null,"DEPTNO":20}}{"table":"SCOTT.SCEMP","op_type":"I","op_ts":"2019-09-16 10:17:08.006650","current_ts":"2019-09-16T10:25:51.814000","pos":"00000000000000002983","tokens":{"TKN-OP-TYPE":"INSERT"},"after":{"EMPNO":7499,"ENAME":"
ALLEN","JOB":"SALESMAN","MGR":7698,"HIREDATE":"1981-02-20 00:00:00","SAL":1600.00,"COMM":300.00,"DEPTNO":30}}{"table":"SCOTT.SCEMP","op_type":"I","op_ts":"2019-09-16 10:17:08.006650","current_ts":"2019-09-16T10:26:01.831000","pos":"00000000000000003182","tokens":{"TKN-OP-TYPE":"INSERT"},"after":{"EMPNO":7521,"ENAME":"
WARD","JOB":"SALESMAN","MGR":7698,"HIREDATE":"1981-02-22 00:00:00","SAL":1250.00,"COMM":500.00,"DEPTNO":30}}{"table":"SCOTT.SCEMP","op_type":"I","op_ts":"2019-09-16 10:17:08.006650","current_ts":"2019-09-16T10:26:11.847000","pos":"00000000000000003380","tokens":{"TKN-OP-TYPE":"INSERT"},"after":{"EMPNO":7566,"ENAME":"
JONES","JOB":"MANAGER","MGR":7839,"HIREDATE":"1981-04-02 00:00:00","SAL":2975.00,"COMM":null,"DEPTNO":20}}{"table":"SCOTT.SCEMP","op_type":"I","op_ts":"2019-09-16 10:17:08.006650","current_ts":"2019-09-16T10:26:21.864000","pos":"00000000000000003578","tokens":{"TKN-OP-TYPE":"INSERT"},"after":{"EMPNO":7654,"ENAME":"
MARTIN","JOB":"SALESMAN","MGR":7698,"HIREDATE":"1981-09-28 00:00:00","SAL":1250.00,"COMM":1400.00,"DEPTNO":30}}{"table":"SCOTT.SCEMP","op_type":"I","op_ts":"2019-09-16 10:17:08.006650","current_ts":"2019-09-16T10:26:31.881000","pos":"00000000000000003778","tokens":{"TKN-OP-TYPE":"INSERT"},"after":{"EMPNO":7698,"ENAME":"
BLAKE","JOB":"MANAGER","MGR":7839,"HIREDATE":"1981-05-01 00:00:00","SAL":2850.00,"COMM":null,"DEPTNO":30}}{"table":"SCOTT.SCEMP","op_type":"I","op_ts":"2019-09-16 10:17:08.006650","current_ts":"2019-09-16T10:26:41.898000","pos":"00000000000000003976","tokens":{"TKN-OP-TYPE":"INSERT"},"after":{"EMPNO":7782,"ENAME":"
CLARK","JOB":"MANAGER","MGR":7839,"HIREDATE":"1981-06-09 00:00:00","SAL":2450.00,"COMM":null,"DEPTNO":10}}{"table":"SCOTT.SCEMP","op_type":"I","op_ts":"2019-09-16 10:17:08.006650","current_ts":"2019-09-16T10:26:51.907000","pos":"00000000000000004174","tokens":{"TKN-OP-TYPE":"INSERT"},"after":{"EMPNO":7839,"ENAME":"
KING","JOB":"PRESIDENT","MGR":null,"HIREDATE":"1981-11-17 00:00:00","SAL":5000.00,"COMM":null,"DEPTNO":10}}{"table":"SCOTT.SCEMP","op_type":"I","op_ts":"2019-09-16 10:17:08.006650","current_ts":"2019-09-16T10:27:01.920000","pos":"00000000000000004373","tokens":{"TKN-OP-TYPE":"INSERT"},"after":{"EMPNO":7844,"ENAME":"
TURNER","JOB":"SALESMAN","MGR":7698,"HIREDATE":"1981-09-08 00:00:00","SAL":1500.00,"COMM":0,"DEPTNO":30}}{"table":"SCOTT.SCEMP","op_type":"I","op_ts":"2019-09-16 10:17:08.006650","current_ts":"2019-09-16T10:27:11.932000","pos":"00000000000000004573","tokens":{"TKN-OP-TYPE":"INSERT"},"after":{"EMPNO":7876,"ENAME":"
ADAMS","JOB":"CLERK","MGR":7788,"HIREDATE":"1987-05-23 00:00:00","SAL":1100.00,"COMM":null,"DEPTNO":20}}{"table":"SCOTT.SCEMP","op_type":"I","op_ts":"2019-09-16 10:17:08.006650","current_ts":"2019-09-16T10:27:21.938000","pos":"00000000000000004769","tokens":{"TKN-OP-TYPE":"INSERT"},"after":{"EMPNO":7900,"ENAME":"
JAMES","JOB":"CLERK","MGR":7698,"HIREDATE":"1981-12-03 00:00:00","SAL":950.00,"COMM":null,"DEPTNO":30}}{"table":"SCOTT.SCEMP","op_type":"I","op_ts":"2019-09-16 10:17:08.006650","current_ts":"2019-09-16T10:27:31.948000","pos":"00000000000000004965","tokens":{"TKN-OP-TYPE":"INSERT"},"after":{"EMPNO":7902,"ENAME":"
FORD","JOB":"ANALYST","MGR":7566,"HIREDATE":"1981-12-03 00:00:00","SAL":3000.00,"COMM":null,"DEPTNO":20}}

[root@hadoop kafka]# ./console.sh 
input topic:SCDEPT
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
{"table":"SCOTT.SCDEPT","op_type":"I","op_ts":"2019-09-16 10:17:08.011746","current_ts":"2019-09-16T10:27:51.977000","pos":"00000000000000005753","tokens":{"TKN-OP-TYPE":"INSERT"},"after":{"DEPTNO":10,"DNAME":"
ACCOUNTING","LOC":"NEW YORK","TESS":null}}{"table":"SCOTT.SCDEPT","op_type":"I","op_ts":"2019-09-16 10:17:08.011746","current_ts":"2019-09-16T10:28:12.000000","pos":"00000000000000005884","tokens":{"TKN-OP-TYPE":"INSERT"},"after":{"DEPTNO":20,"DNAME":"
RESEARCH","LOC":"DALLAS","TESS":null}}{"table":"SCOTT.SCDEPT","op_type":"I","op_ts":"2019-09-16 10:17:08.011746","current_ts":"2019-09-16T10:28:22.011000","pos":"00000000000000006011","tokens":{"TKN-OP-TYPE":"INSERT"},"after":{"DEPTNO":30,"DNAME":"
SALES","LOC":"CHICAGO","TESS":null}}{"table":"SCOTT.SCDEPT","op_type":"I","op_ts":"2019-09-16 10:17:08.011746","current_ts":"2019-09-16T10:28:32.027000","pos":"00000000000000006136","tokens":{"TKN-OP-TYPE":"INSERT"},"after":{"DEPTNO":40,"DNAME":"
OPERATIONS","LOC":"BOSTON","TESS":null}}

The SCDEPT and SCEMP initialization data has come through.
Next, start the replicat to sync incremental data:

GGSCI (hadoop) 2> start rkafka

Sending START request to MANAGER ...
REPLICAT RKAFKA starting


GGSCI (hadoop) 3> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
REPLICAT    STOPPED     RHDFS       00:00:00      617:28:44   
REPLICAT    STARTING    RKAFKA      00:00:00      00:32:14    


GGSCI (hadoop) 8> !
info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
REPLICAT    STOPPED     RHDFS       00:00:00      617:28:58   
REPLICAT    RUNNING     RKAFKA      00:00:00      00:00:02 

4. Verify incremental sync

a. Run inserts on the source:

insert into scemp (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO)
values (9999, 'zhaoyd', 'kfc', 6666, sysdate, 100.00, 300.00, 30);
insert into scdept(deptno,dname,loc)values(99,'kfc','bj');

Check the result in Kafka:

----SCEMP message:
{"table":"SCOTT.SCEMP","op_type":"I","op_ts":"2019-09-18 09:29:42.000388","current_ts":"2019-09-18T09:30:57.750000","pos":"00000000230000057892","tokens":{"TKN-OP-TYPE":"INSERT"},"after":{"EMPNO":9999,"ENAME":"zhaoyd","JOB":"kfc","MGR":6666,"HIREDATE":"2019-09-18 09:29:40","SAL":100.00,"COMM":300.00,"DEPTNO":30}}
----SCDEPT message:
{"table":"SCOTT.SCDEPT","op_type":"I","op_ts":"2019-09-18 09:29:42.000388","current_ts":"2019-09-18T09:31:07.764000","pos":"00000000230000058138","tokens":{"TKN-OP-TYPE":"INSERT"},"after":{"DEPTNO":99,"DNAME":"kfc","LOC":"bj","TESS":null}}

Both inserts were synced over normally.

b. Run ordinary updates on the source:

update scemp set  ename='zyd' where empno=9999; -- modify a supplementally logged column
update scemp set  mgr=654 where empno=9999; -- modify a column with no supplemental logging
update scdept set dname='beij' where deptno=99; -- this table has all-column supplemental logging

Check the result in Kafka:

----SCEMP
{"table":"SCOTT.SCEMP","op_type":"I","op_ts":"2019-09-18 09:32:59.000686","current_ts":"2019-09-18T09:33:53.994000","pos":"00000000230000058486","tokens":{"TKN-OP-TYPE":"SQL COMPUPDATE"},"after":{"EMPNO":9999,"ENAME":"zyd","JOB":"kfc","SAL":100.00,"DEPTNO":30}}
{"table":"SCOTT.SCEMP","op_type":"I","op_ts":"2019-09-18 09:32:59.000686","current_ts":"2019-09-18T09:34:04.009000","pos":"00000000230000058850","tokens":{"TKN-OP-TYPE":"SQL COMPUPDATE"},"after":{"EMPNO":9999,"ENAME":"zyd","JOB":"kfc","MGR":654,"SAL":100.00,"DEPTNO":30}}

----SCDEPT
{"table":"SCOTT.SCDEPT","op_type":"I","op_ts":"2019-09-18 09:32:59.000686","current_ts":"2019-09-18T09:34:14.021000","pos":"00000000230000059193","tokens":{"TKN-OP-TYPE":"SQL COMPUPDATE"},"after":{"DEPTNO":99,"
DNAME":"beij","LOC":"bj","TESS":null}}

The first update's result shows that every supplementally logged column came through with its latest value. The second shows that when SCEMP's mgr column was updated, all the supplementally logged columns still came along, together with mgr's new value. The JSON content is now: primary key + supplementally logged columns + modified columns, which lets Flink fetch the latest value of every metric it needs with very little effort.

c. Run pkupdates on the source:


update scemp set  empno=3333, ename='zzd'  where empno=9999;

update scemp set  empno=9999, mgr=6543  where empno=3333;

update scemp set  empno=3333   where empno=9999;

update scdept set deptno=33,dname='zd' where deptno=99;

Check the Kafka messages:

----SCEMP
{"table":"SCOTT.SCEMP","op_type":"I","op_ts":"2019-09-18 09:40:33.000792","current_ts":"2019-09-18T09:40:58.666000","pos":"00000000230000059547","tokens":{"TKN-OP-TYPE":"PK UPDATE"},"after":{"EMPNO":3333,"ENAME":"zzd","JOB":"kfc","SAL":100.00,"DEPTNO":30}}
{"table":"SCOTT.SCEMP","op_type":"I","op_ts":"2019-09-18 09:40:33.000792","current_ts":"2019-09-18T09:41:08.681000","pos":"00000000230000059547","tokens":{"TKN-OP-TYPE":"PK UPDATE"},"after":{"EMPNO":9999,"ENAME":null,"JOB":null,"SAL":null,"DEPTNO":null}}

{"table":"SCOTT.SCEMP","op_type":"I","op_ts":"2019-09-18 09:53:59.000496","current_ts":"2019-09-18T09:54:07.801000","pos":"00000000230000060265","tokens":{"TKN-OP-TYPE":"PK UPDATE"},"after":{"EMPNO":9999,"ENAME":"zzd","JOB":"kfc","MGR":6543,"SAL":100.00,"DEPTNO":30}}
{"table":"SCOTT.SCEMP","op_type":"I","op_ts":"2019-09-18 09:53:59.000496","current_ts":"2019-09-18T09:54:17.820000","pos":"00000000230000060265","tokens":{"TKN-OP-TYPE":"PK UPDATE"},"after":{"EMPNO":3333,"ENAME":null,"JOB":null,"SAL":null,"DEPTNO":null}}

{"table":"SCOTT.SCEMP","op_type":"I","op_ts":"2019-09-18 09:54:44.000419","current_ts":"2019-09-18T09:54:54.876000","pos":"00000000230000060664","tokens":{"TKN-OP-TYPE":"PK UPDATE"},"after":{"EMPNO":3333,"ENAME":"zzd","JOB":"kfc","SAL":100.00,"DEPTNO":30}}
{"table":"SCOTT.SCEMP","op_type":"I","op_ts":"2019-09-18 09:54:44.000419","current_ts":"2019-09-18T09:55:04.882000","pos":"00000000230000060664","tokens":{"TKN-OP-TYPE":"PK UPDATE"},"after":{"EMPNO":9999,"ENAME":null,"JOB":null,"SAL":null,"DEPTNO":null}}
----SCDEPT
{"table":"SCOTT.SCDEPT","op_type":"I","op_ts":"2019-09-18 09:40:33.000792","current_ts":"2019-09-18T09:41:18.697000","pos":"00000000230000059888","tokens":{"TKN-OP-TYPE":"PK UPDATE"},"after":{"DEPTNO":33,"DNAME":"zd","LOC":"bj","TESS":null}}
{"table":"SCOTT.SCDEPT","op_type":"I","op_ts":"2019-09-18 09:40:33.000792","current_ts":"2019-09-18T09:41:28.704000","pos":"00000000230000059888","tokens":{"TKN-OP-TYPE":"PK UPDATE"},"after":{"DEPTNO":99,"DNAME":null,"LOC":null,"TESS":null}}

As the results show, each pkupdate now becomes two JSON messages: in the old key's JSON every computed metric is null, while the new key's JSON carries the latest value of every metric, so Flink computes correctly when a pkupdate happens.
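Taking the last SCEMP pkupdate above as a quick worked check: the latest after image per key is now EMPNO 3333 with SAL 100.00 and EMPNO 9999 with SAL null, so a latest-state-per-key SUM(SAL) yields 100.00. Without the null-out record, EMPNO 9999 would still carry SAL 100.00 from its previous message and the sum would come out as 200.00, counting the same salary twice.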
