1. The IK + pinyin analyzer
A previous post covered setting up the IK analyzer; this one shows how to make Elasticsearch use two analyzers together, largely following this post: http://blog.csdn.net/napoay/article/details/53907921.
I am still using Sense here, mainly because I have not yet worked out how to do the same thing in Kibana.
PUT /medcl/
{
  "index": {
    "analysis": {
      "analyzer": {
        "ik_pinyin_analyzer": {
          "type": "custom",
          "tokenizer": "ik_smart",
          "filter": ["my_pinyin", "word_delimiter"]
        }
      },
      "filter": {
        "my_pinyin": {
          "type": "pinyin",
          "first_letter": "prefix",
          "padding_char": " "
        }
      }
    }
  }
}
A quick explanation of this request: it creates an index named medcl and defines a custom analyzer called ik_pinyin_analyzer, which uses ik_smart as the tokenizer and adds two token filters: my_pinyin, a custom filter built on the pinyin analysis plugin, and word_delimiter, which ships with Elasticsearch. Elasticsearch comes with many built-in token filters; this post describes them in detail: http://blog.csdn.net/i6448038/article/details/50625397.
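Conceptually, a custom analyzer is just a tokenizer followed by a chain of token filters. The following plain-Python sketch only illustrates that idea; the whitespace tokenizer and the tiny hard-coded pinyin table are toy stand-ins for ik_smart and the pinyin plugin, not the real implementations:

```python
# Toy model of a custom analyzer: tokenizer + token filter chain.
# The tokenizer and the pinyin table below are stand-ins, not the real plugins.

TOY_PINYIN = {"劉德華": "liu de hua", "國歌": "guo ge"}  # hypothetical lookup table

def toy_tokenizer(text):
    # stands in for ik_smart; here it just splits on whitespace
    return text.split()

def toy_pinyin_filter(tokens):
    # stands in for the pinyin token filter: emit the original token
    # plus its pinyin form when the table knows it
    out = []
    for t in tokens:
        out.append(t)
        if t in TOY_PINYIN:
            out.append(TOY_PINYIN[t])
    return out

def analyze(text, tokenizer, filters):
    # a "custom analyzer": run the tokenizer, then each filter in order
    tokens = tokenizer(text)
    for f in filters:
        tokens = f(tokens)
    return tokens

print(analyze("劉德華 國歌", toy_tokenizer, [toy_pinyin_filter]))
# ['劉德華', 'liu de hua', '國歌', 'guo ge']
```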
Next, create a mapping for a type named folks, using the ik_pinyin_analyzer we just defined as the analyzer.
POST /medcl/folks/_mapping
{
  "folks": {
    "properties": {
      "name": {
        "type": "keyword",
        "fields": {
          "pinyin": {
            "type": "text",
            "store": false,
            "term_vector": "with_positions_offsets",
            "analyzer": "ik_pinyin_analyzer"
          }
        }
      }
    }
  }
}
Index two documents:
POST /medcl/folks/andy
{"name":["劉德華","劉邦"]}
POST /medcl/folks/tina
{"name":"中華人民共和國國歌"}
Now test whether they can be found:
POST /medcl/folks/_search
{
  "query": {
    "match": {
      "name.pinyin": "國歌"  # change this to zhonghua to check that pinyin matching works
    }
  },
  "highlight": {
    "fields": {
      "*": {}
    }
  }
}
The result looks like this:
{
  "took": 7,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "failed": 0
  },
  "hits": {
    "total": 2,
    "max_score": 2.6638038,
    "hits": [
      {
        "_index": "medcl",
        "_type": "folks",
        "_id": "tina",
        "_score": 2.6638038,
        "_source": {
          "name": "中華人民共和國國歌"
        },
        "highlight": {
          "name.pinyin": [
            "<em>中華人民共和國</em><em>國歌</em>"
          ]
        }
      },
      {
        "_index": "medcl",
        "_type": "folks",
        "_id": "andy",
        "_score": 0.22009256,
        "_source": {
          "name": [
            "劉德華",
            "劉邦"
          ]
        },
        "highlight": {
          "name.pinyin": [
            "<em>劉德華</em>"
          ]
        }
      }
    ]
  }
}
2. Configuring synonyms
The section above is just background; the real subject of this post is configuring synonyms.
Under the config directory, create a subdirectory named analysis to hold synonym files, and create a file synonyms.txt in it. There are two ways to write synonyms (note that the separators must be ASCII commas, not full-width Chinese commas):
One more thing to watch out for is the file encoding, especially if you edit the file locally and then upload it to the server: save it as UTF-8, or Elasticsearch will throw an error.
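If you are not sure what encoding your local editor used (a GBK save on a Chinese Windows machine is a common culprit), a small Python sketch like this can re-save the file as UTF-8 before uploading; the source encoding and path here are only examples, adjust them to your situation:

```python
# Re-save a synonym file as UTF-8 in place. The source encoding "gbk"
# is an assumption -- change it to whatever your editor actually used.
def to_utf8(path, src_encoding="gbk"):
    with open(path, "rb") as f:
        raw = f.read()
    text = raw.decode(src_encoding)   # raises UnicodeDecodeError if guessed wrong
    with open(path, "wb") as f:
        f.write(text.encode("utf-8"))

# Example: to_utf8("analysis/synonyms.txt")
```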
The first form:
中文,漢語
The second form:
中文,漢語=>中文
With the first form, wherever "中文" appears it is expanded to both "中文" and "漢語" at analysis time, and both terms are stored in the index.
With the second form, both "中文" and "漢語" are rewritten to "中文", and only "中文" is stored in the index.
The two forms achieve the same search behavior; I use the first one, which I find easier to maintain. After defining the synonym file, restart Elasticsearch.
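The file uses the Solr synonyms format. As a rough sketch of how the two kinds of lines behave (this only mimics the rules for illustration; it is not the actual Lucene synonym machinery):

```python
# Mimic Solr-format synonym lines: "a,b" expands either term to all of
# them; "a,b=>c" rewrites a or b to c. Illustration only.

def parse_synonym_line(line):
    if "=>" in line:
        # explicit mapping: every source term maps to the target terms
        lhs, rhs = line.split("=>")
        sources = [s.strip() for s in lhs.split(",")]
        targets = [t.strip() for t in rhs.split(",")]
        return {s: targets for s in sources}
    # equivalent set: every term expands to the whole set
    terms = [t.strip() for t in line.split(",")]
    return {t: terms for t in terms}

# First form: both terms expand to the full set
rule1 = parse_synonym_line("中文,漢語")
# Second form: both terms are rewritten to 中文 only
rule2 = parse_synonym_line("中文,漢語=>中文")
```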
In the same way as the ik_pinyin analyzer above, create an index named xjs that defines two analyzers, by_smart and by_max_word, both of which use the custom synonym filter.
PUT /xjs
{
  "index": {
    "analysis": {
      "analyzer": {
        "by_smart": {
          "type": "custom",
          "tokenizer": "ik_smart",
          "filter": ["by_sfr"]
        },
        "by_max_word": {
          "type": "custom",
          "tokenizer": "ik_max_word",
          "filter": ["by_sfr"]
        }
      },
      "filter": {
        "by_sfr": {
          "type": "synonym",
          "synonyms_path": "analysis/synonyms.txt"  # location of the synonym file
        }
      }
    }
  }
}
Then create the mapping, using the finest-grained by_max_word at index time and by_smart at search time:
PUT /xjs/typename/_mapping
{
  "properties": {
    "title": {
      "type": "text",
      "analyzer": "by_max_word",
      "search_analyzer": "by_smart"
    }
  }
}
Now test whether the synonyms take effect:
POST /xjs/_analyze?pretty=true&analyzer=by_smart
{"text":"中文"}
The response below shows clearly that "中文" was analyzed into two tokens, 中文 and 漢語:
{
  "tokens": [
    {
      "token": "中文",
      "start_offset": 0,
      "end_offset": 2,
      "type": "CN_WORD",
      "position": 0
    },
    {
      "token": "漢語",
      "start_offset": 0,
      "end_offset": 2,
      "type": "SYNONYM",
      "position": 0
    }
  ]
}
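Note that 漢語 comes out with type SYNONYM at the same position and offsets as 中文: the synonym is injected alongside the original token, not appended after it. A toy sketch of why index-time expansion makes either word match (a hypothetical mini inverted index, not real Lucene):

```python
# Toy inverted index: each document is analyzed into (token, position)
# pairs; synonym tokens share the position of the original token.
index = {}  # token -> set of doc ids

def index_doc(doc_id, tokens_with_positions):
    for token, _pos in tokens_with_positions:
        index.setdefault(token, set()).add(doc_id)

# Both docs were analyzed with synonym expansion, so 中文 and 漢語
# are both present at position 0 in each.
index_doc(1, [("漢語", 0), ("中文", 0), ("的", 1), ("重要性", 2)])
index_doc(2, [("中文", 0), ("漢語", 0), ("其實", 1)])

def search(token):
    return sorted(index.get(token, set()))

# Searching for either word now finds both documents.
```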
Index two test documents:
POST /xjs/title/1
{"title":"漢語的重要性"}
POST /xjs/title/2
{"title":"中文其實很好學的"}
Then try a search:
POST /xjs/title/_search
{
  "query": { "match": { "title": "中文" } },
  "highlight": {
    "pre_tags": ["<tag1>"],
    "post_tags": ["</tag1>"],
    "fields": {
      "title": {}
    }
  }
}
Both documents come back:
{
  "took": 2,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "failed": 0
  },
  "hits": {
    "total": 2,
    "max_score": 0.60057575,
    "hits": [
      {
        "_index": "xjs",
        "_type": "title",
        "_id": "1",
        "_score": 0.60057575,
        "_source": {
          "title": "漢語的重要性"
        }
      },
      {
        "_index": "xjs",
        "_type": "title",
        "_id": "2",
        "_score": 0.5930795,
        "_source": {
          "title": "中文其實很好學的"
        }
      }
    ]
  }
}
3. Setting the analyzer and synonym filter for Logstash
By default, Logstash indexes with the standard analyzer, which splits Chinese text into single characters. You can see this when testing highlighting: every individual character gets highlighted instead of whole words.
Before Elasticsearch 5.x, configuring IK as the analyzer in Elasticsearch was enough for Logstash-indexed data to use it, but 5.x removed that behavior, which makes things more awkward.
In the logstash-5.5.0 directory, create a template folder and in it a file named logstash.json. Be very careful when writing this file, because Logstash will not report any errors in it at runtime. Its contents are essentially the same as the two configurations above, so I will not explain them again.
{
  "template": "*",
  "version": 50001,
  "settings": {
    "index.refresh_interval": "5s",
    "index": {
      "analysis": {
        "analyzer": {
          "by_smart": {
            "type": "custom",
            "tokenizer": "ik_smart",
            "filter": ["by_sfr"]
          },
          "by_max_word": {
            "type": "custom",
            "tokenizer": "ik_max_word",
            "filter": ["by_sfr"]
          }
        },
        "filter": {
          "by_sfr": {
            "type": "synonym",
            "synonyms_path": "/usr/local/elasticsearch-5.5.0/config/analysis/synonyms.txt"
          }
        }
      }
    }
  },
  "mappings": {
    "_default_": {
      "_all": {
        "enabled": true,
        "norms": false,
        "analyzer": "by_max_word",
        "search_analyzer": "by_smart"
      },
      "dynamic_templates": [
        {
          "message_field": {
            "path_match": "message",
            "match_mapping_type": "string",
            "mapping": {
              "type": "text",
              "norms": false,
              "analyzer": "by_max_word",
              "search_analyzer": "by_smart"
            }
          }
        },
        {
          "string_fields": {
            "match": "*",
            "match_mapping_type": "string",
            "mapping": {
              "type": "text",
              "norms": false,
              "analyzer": "by_max_word",
              "search_analyzer": "by_smart",
              "fields": {
                "keyword": {
                  "type": "keyword"
                }
              }
            }
          }
        }
      ],
      "properties": {
        "@timestamp": {
          "type": "date",
          "include_in_all": false
        },
        "@version": {
          "type": "keyword",
          "include_in_all": false
        }
      }
    }
  }
}
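Since Logstash stays silent about a malformed template file, it is worth validating the JSON yourself before restarting. A minimal check (the path in the example call is just where I put the file):

```python
import json

# Fail fast with a clear message if the template file is not valid JSON.
def check_template(path):
    with open(path) as f:
        json.load(f)  # raises a ValueError with line/column info if broken
    return True

# Example: check_template("/usr/local/logstash-5.5.0/template/logstash.json")
```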
Then open the conf file that syncs Logstash with the MySQL database and add two lines:
input {
  stdin {
  }
  jdbc {
    # mysql jdbc connection string to our backup database
    jdbc_connection_string => "jdbc:mysql://localhost:3306/test01"
    # the user we wish to execute our statement as
    jdbc_user => "root"
    jdbc_password => "123456"
    # the path to our downloaded jdbc driver
    jdbc_driver_library => "/usr/local/logstash-5.5.0/mysql-connector-java-6.0.6.jar"
    # the name of the driver class for mysql
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "50000"
    #statement_filepath => "config-mysql/test02.sql"
    statement => "select * from test02"
    schedule => "* * * * *"
    type => "test02"
  }
}
filter {
  json {
    source => "message"
    remove_field => ["message"]
  }
}
output {
  elasticsearch {
    hosts => "127.0.0.1:9200"
    index => "test01"
    document_id => "%{id}"
    # the two lines to add
    template_overwrite => true
    # location of the template file
    template => "/usr/local/logstash-5.5.0/template/logstash.json"
  }
  stdout {
    codec => json_lines
  }
}
Restart Logstash and the template will be loaded.
You can inspect the installed templates with:
GET /_template
The response should look roughly like the following, essentially identical to the template we just configured; if it differs, there is a mistake in the JSON file.
{
  "logstash": {
    "order": 0,
    "version": 50001,
    "template": "*",
    "settings": {
      "index": {
        "analysis": {
          "filter": {
            "by_sfr": {
              "type": "synonym",
              "synonyms_path": "/usr/local/elasticsearch-5.5.0/config/analysis/synonyms.txt"
            }
          },
          "analyzer": {
            "by_smart": {
              "filter": ["by_sfr"],
              "type": "custom",
              "tokenizer": "ik_smart"
            },
            "by_max_word": {
              "filter": ["by_sfr"],
              "type": "custom",
              "tokenizer": "ik_max_word"
            }
          }
        },
        "refresh_interval": "5s"
      }
    },
    "mappings": {
      "_default_": {
        "dynamic_templates": [
          {
            "message_field": {
              "path_match": "message",
              "mapping": {
                "search_analyzer": "by_smart",
                "norms": false,
                "analyzer": "by_max_word",
                "type": "text"
              },
              "match_mapping_type": "string"
            }
          },
          {
            "string_fields": {
              "mapping": {
                "search_analyzer": "by_smart",
                "norms": false,
                "analyzer": "by_max_word",
                "type": "text",
                "fields": {
                  "keyword": {
                    "type": "keyword"
                  }
                }
              },
              "match_mapping_type": "string",
              "match": "*"
            }
          }
        ],
        "_all": {
          "search_analyzer": "by_smart",
          "norms": false,
          "analyzer": "by_max_word",
          "enabled": true
        },
        "properties": {
          "@timestamp": {
            "include_in_all": false,
            "type": "date"
          },
          "@version": {
            "include_in_all": false,
            "type": "keyword"
          }
        }
      }
    },
    "aliases": {}
  }
}
Now add some data to test against, based on what is in your own database. For example, since we configured 中文 and 漢語 as synonyms, add records containing each of these two words to the database; a search for either 中文 or 漢語 should then return both newly added records.