MySQL section:
1. Install Logstash (omitted here).
2. logstash -f sync_game.conf --path.settings=/etc/logstash
3. tail -f /var/log/logstash/* ==> check the logs carefully to see whether the sync succeeded.
Terminology:
:sql_last_value is the largest tracked value (here, the id) from the previous run; it is what makes incremental syncing work.
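For reference, the tracked value is persisted between runs in the file named by last_run_metadata_path (configured below as /etc/logstash/run_metadata.d/my_info). It is a one-line YAML document; assuming the last synced row had id 13027 (a made-up value), the file would look roughly like:

```yaml
--- 13027
```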
A friendly reminder:
mysql-connector-java-8.0.14.jar can be downloaded from the Maven repository.
Latest ==> https://mvnrepository.com/artifact/mysql/mysql-connector-java/8.0.15
Configuration files:
### sync_game.conf
input {
stdin {}
jdbc {
# Database to connect to
jdbc_connection_string => "jdbc:mysql://127.0.0.1:3306/engine"
jdbc_user => "root"
jdbc_password => "root"
# Path to the JDBC driver jar
jdbc_driver_library => "/root/install/mysql-connector-java-8.0.14.jar"
# For Connector/J 8.x the driver class is com.mysql.cj.jdbc.Driver
# (the legacy name com.mysql.jdbc.Driver still loads, but logs a deprecation warning)
jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
jdbc_paging_enabled => "true"
jdbc_page_size => "130000"
# SQL file to execute
statement_filepath => "game.sql"
# Schedule (cron-style fields: minute, hour, day of month, month, day of week); all *'s means run every minute
schedule => "* * * * *"
# Type name under the Elasticsearch index
type => "main"
record_last_run => true
use_column_value => true
tracking_column => "id"
last_run_metadata_path => "/etc/logstash/run_metadata.d/my_info"
}
}
output {
elasticsearch {
# Elasticsearch index name
index => "game"
# Use the type from the input section as the type name under the Elasticsearch index
#document_type => "%{type}"
document_type => "main"
# Elasticsearch host IP and port
hosts => "172.26.192.107:9200"
# Use the uuid column from MySQL as the Elasticsearch document id
document_id => "%{uuid}"
}
stdout {
codec => json_lines
}
}
### game.sql
SELECT * FROM game_merge where id > :sql_last_value
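Before each run, Logstash substitutes the stored value into this statement. On the very first run, when no metadata file exists yet, :sql_last_value defaults to 0 for a numeric tracking column, so the query effectively executed is:

```sql
-- first run: :sql_last_value starts at 0, so the whole table is pulled
SELECT * FROM game_merge where id > 0
```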
MongoDB section:
(1) Install the MongoDB input and output plugins:
./logstash-plugin install logstash-output-mongodb
./logstash-plugin install logstash-input-mongodb
(2) Create a root user in the mongo shell. Be sure to switch to the database with "use" first, because accounts are per-database; each database can have its own root user.
-> use engine
-> db.createUser({user:"root",pwd:"root",roles:[{role:"userAdminAnyDatabase", db:"admin"}]})
(If you see Mongo::Auth::Unauthorized in the logs, it means you did not "use" the database before creating the root user.)
(3) logstash -f mongo_map.conf --path.settings=/etc/logstash
### 1. Make sure your Logstash settings path is correct; you can also try dropping --path.settings, which may work without errors.
### 2. Make sure the path to mongo_map.conf is correct; a relative path is used here.
(4) Watch your logs.
Any errors will show up there. If you see "Attempting to install template", Logstash is installing the template into Elasticsearch, and things should be fine.
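Besides the logs, you can also ask Elasticsearch directly how many documents have landed (host and index name as configured below; adjust for your cluster):

```shell
curl 'http://172.26.192.107:9200/main/_count?pretty'
```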
Configuration files:
### mongo_map.conf
# To re-import from scratch, delete the game_map_process.db file and the Elasticsearch index first.
input {
stdin {}
mongodb {
uri => "mongodb://root:[email protected]:27017/engine"
placeholder_db_dir => "/root/logstash"
placeholder_db_name => "game_map_process.db"
collection => "game_map"
batch_size => 5000
}
}
filter {
mutate {
remove_field => [ "_id", "log_entry" ]
}
}
output {
elasticsearch {
hosts => ["172.26.192.107:9200"]
document_type => "gmap"
index => "main"
document_id => "%{uuid}"
}
stdout {
codec => json_lines
}
}
### demo2 (2019-05-10)
input {
stdin {}
mongodb {
uri => "xxx"
placeholder_db_dir => "/root/script/logstash"
placeholder_db_name => "game_map_process.db"
collection => "game_map"
batch_size => 5000
}
}
filter {
mutate {
remove_field => [ "_id", "log_entry" ]
}
}
output {
elasticsearch {
hosts => ["10.0.10.155:9200"]
document_type => "gmap"
index => "main"
document_id => "%{uuid}"
}
stdout {
codec => rubydebug
}
}
Caveats:
The tricky part is that the sync can fail when component versions differ; I can't enumerate every combination, so here is the setup I am running:
logstash 5.6.14
elasticsearch 6.5
### END
I have run through this workflow myself, so it should work as written. If you hit an unusual error, check the English docs (mind your Elasticsearch version): https://www.elastic.co/guide/en/logstash/6.5/plugins-outputs-mongodb.html#plugins-outputs-mongodb-database
Or leave me a comment.
###
### Special note:
When syncing into Elasticsearch, if we have not created a template beforehand, one is created automatically during the sync (with the default standard analyzer). That analyzer is often not what we want, so we can create the index with our own settings and mappings in advance. The code below creates the index with a specific analyzer; if it is out of date, consult the official docs. ik_smart is a Chinese analyzer that you need to install first. Feel free to message me with questions.
PUT main
{
"settings":{
"index":{
"number_of_shards" : 5,
"number_of_replicas" : 0,
"analysis":{
"analyzer":{
"analyzer_keyword":{
"tokenizer":"ik_smart",
"filter":"lowercase"
}
}
}
}
}
}
PUT main/_mapping/gmap
{
"properties" : {
"name": {
"type": "text",
"analyzer": "ik_smart"
},
"uuid" : {
"type" : "text"
},
"feature" : {
"type": "text",
"analyzer": "ik_smart"
},
"mass" : {
"type" : "text",
"analyzer": "ik_smart"
},
"near" : {
"type": "text"
}
}
}
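After creating the index, you can sanity-check that ik_smart is wired up with the _analyze API (shown in the same console style as the PUT requests above; the sample text is arbitrary):

```
GET main/_analyze
{
  "analyzer": "ik_smart",
  "text": "中华人民共和国"
}
```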
----------
Update (2019-05-10):
I found that on CentOS, Logstash installed per the official instructions has no /etc/logstash directory, so you cannot point --path.settings at it and therefore cannot see the logs. Here are the relevant settings files:
/etc/logstash/logstash.yml:
path.logs: /var/log/logstash
/etc/logstash/log4j2.properties:
status = error
name = LogstashPropertiesConfig
appender.rolling.type = RollingFile
appender.rolling.name = plain_rolling
appender.rolling.fileName = ${sys:ls.logs}/logstash-${sys:ls.log.format}.log
appender.rolling.filePattern = ${sys:ls.logs}/logstash-${sys:ls.log.format}-%d{yyyy-MM-dd}.log
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %-.10000m%n
appender.json_rolling.type = RollingFile
appender.json_rolling.name = json_rolling
appender.json_rolling.fileName = ${sys:ls.logs}/logstash-${sys:ls.log.format}.log
appender.json_rolling.filePattern = ${sys:ls.logs}/logstash-${sys:ls.log.format}-%d{yyyy-MM-dd}.log
appender.json_rolling.policies.type = Policies
appender.json_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.json_rolling.policies.time.interval = 1
appender.json_rolling.policies.time.modulate = true
appender.json_rolling.layout.type = JSONLayout
appender.json_rolling.layout.compact = true
appender.json_rolling.layout.eventEol = true
rootLogger.level = ${sys:ls.log.level}
rootLogger.appenderRef.rolling.ref = ${sys:ls.log.format}_rolling
Caveats:
When connecting to MongoDB, make sure the target database has an admin user, or the connection will fail; worse, Logstash will quite possibly not log an error.
use 數據庫名
db.createUser(
{
user: "admin",
pwd: "admin",
roles:
[
{
role: "userAdminAnyDatabase",
db: "admin"
}
]
}
)
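Before pointing Logstash at MongoDB, it's worth confirming the credentials work from the mongo shell (database, user, and password as created above):

```shell
mongo engine -u admin -p admin --eval "db.stats()"
```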