Distributed search is executed in two phases: query and fetch.
query phase
During the initial query phase, the query is broadcast to a shard copy (a primary or replica shard) of every shard in the index. Each shard executes the search locally and builds a priority queue of matching documents.
Figure 14. Query phase of distributed search
The query phase consists of the following three steps:
- The client sends a search request to Node 3, which creates an empty priority queue of size from + size.
- Node 3 forwards the search request to a primary or replica copy of every shard in the index. Each shard executes the query locally and adds the results into a local sorted priority queue of size from + size.
- Each shard returns the doc IDs and sort values of all the docs in its priority queue to the coordinating node, Node 3, which merges these values into its own priority queue to produce a globally sorted list of results.
When a search request is sent to a node, that node becomes the coordinating node. It is the job of this node to broadcast the search request to all involved shards, and to gather their responses into a globally sorted result set that it can return to the client.
The first step is to broadcast the request to a shard copy of every shard in the index. Just like document GET requests, search requests can be handled by a primary shard or by any of its replicas. This is how more replicas (when combined with more hardware) can increase search throughput. A coordinating node will round-robin through all shard copies on subsequent requests in order to spread the load.
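For example, a simple paginated request like the following (a minimal sketch with no explicit query, so it matches all documents) makes every shard build a priority queue of from + size = 100 entries and return only the doc IDs and sort values of those entries:

```
GET /_search
{
    "from": 90,
    "size": 10
}
```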
fetch phase
The fetch phase consists of the following steps:
- The coordinating node identifies which documents need to be fetched and issues a multi-get request to the relevant shards.
- Each shard loads the documents and enriches them, if required, and then returns the documents to the coordinating node.
- Once all documents have been fetched, the coordinating node returns the results to the client.
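The multi-get the coordinating node issues internally is the same mechanism exposed to clients by the mget API; conceptually it looks like this (index, type, and IDs are hypothetical):

```
GET /_mget
{
    "docs": [
        { "_index": "my_index", "_type": "my_type", "_id": 1 },
        { "_index": "my_index", "_type": "my_type", "_id": 12 }
    ]
}
```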
Deep Pagination
The query-then-fetch process supports pagination with the from and size parameters, but within limits. Remember that each shard must build a priority queue of length from + size, all of which need to be passed back to the coordinating node. And the coordinating node needs to sort through number_of_shards * (from + size) documents in order to find the correct size documents.
Depending on the size of your documents, the number of shards, and the hardware you are using, paging 10,000 to 50,000 results (1,000 to 5,000 pages) deep should be perfectly doable. But with big-enough from values, the sorting process can become very heavy indeed, using vast amounts of CPU, memory, and bandwidth. For this reason, we strongly advise against deep paging.
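For example, with from = 10000 and size = 10 on an index with 5 primary shards, each shard builds a priority queue of 10,010 entries, and the coordinating node must sort through 5 * (10000 + 10) = 50,050 entries just to return 10 documents:

```
GET /_search
{
    "from": 10000,
    "size": 10
}
```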
In practice, “deep pagers” are seldom human anyway. A human will stop paging after two or three pages and will change the search criteria. The culprits are usually bots or web spiders that tirelessly keep fetching page after page until your servers crumble at the knees.
If you do need to fetch large numbers of docs from your cluster, you can do so efficiently by disabling sorting with the scan search type, which we discuss later in this chapter.
search options
The preference parameter allows you to control which shards or nodes are used to handle the search request. It accepts values such as _primary, _primary_first, _local, _only_node:xyz, _prefer_node:xyz, and _shards:2,3.
Bouncing Results
Imagine that you are sorting your results by a timestamp field, and two documents have the same timestamp. Because search requests are round-robined between all available shard copies, these two documents may be returned in one order when the request is served by the primary, and in another order when served by the replica.
This is known as the bouncing results problem: every time the user refreshes the page, the results appear in a different order. The problem can be avoided by always using the same shards for the same user, which can be done by setting the preference parameter to an arbitrary string like the user's session ID.
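For example, routing every request from one user session to the same shard copies (the session string and field name here are arbitrary):

```
GET /_search?preference=xyzabc123
{
    "query": {
        "match": { "title": "elasticsearch" }
    }
}
```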
search_type
While query_then_fetch is the default search type, other search types can be specified for particular purposes, for example:
GET /_search?search_type=count
- count: The count search type has only a query phase. It can be used when you don't need search results, just a document count or aggregations on documents matching the query.
- query_and_fetch: The query_and_fetch search type combines the query and fetch phases into a single step. This is an internal optimization that is used when a search request targets a single shard only, such as when a routing value has been specified. While you can choose to use this search type manually, it is almost never useful to do so.
- dfs_query_then_fetch and dfs_query_and_fetch: The dfs search types have a prequery phase that fetches the term frequencies from all involved shards in order to calculate global term frequencies. We discuss this further in Relevance Is Broken!.
- scan: The scan search type is used in conjunction with the scroll API to retrieve large numbers of results efficiently. It does this by disabling sorting. We discuss scan-and-scroll in the next section.
scan and scroll
Scan and scroll are used to retrieve large amounts of data while avoiding the inefficiency of deep pagination. The cost of from/size paging is the sorting of deep result sets; by disabling sorting, bulk retrieval becomes cheap.
- scroll is similar to a cursor in a relational database and can be scrolled forward. It is only a view of the data: when the scan query is initialized, the scroll sees a snapshot of the data at that point in time, and later updates will not appear in it.
- scan disables sorting and simply returns matching documents.
- Open the cursor with an expiry time, for example 1m; after one minute of inactivity the cursor expires.
- A _scroll_id is returned; use it to scroll forward through the results.
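Putting this together, a scan request opens the scroll (index name hypothetical), and the returned _scroll_id is then passed back repeatedly until no more hits are returned:

```
GET /old_index/_search?search_type=scan&scroll=1m
{
    "query": { "match_all": {} },
    "size": 1000
}

GET /_search/scroll?scroll=1m
<_scroll_id from the previous response>
```

Note that with scan, size is applied per shard, so each batch can return up to size * number_of_primary_shards documents.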
index management
In fact, if you want to, you can prevent the automatic creation of indices by adding the following setting to the config/elasticsearch.yml file on each node:
action.auto_create_index: false
Specify the index mapping when creating the index (type and field names here are placeholders; the raw sub-field uses the multi-field fields syntax):
PUT /my_index
{
    "settings": {
        "number_of_shards": 1,
        "number_of_replicas": 0
    },
    "mappings": {
        "type": {
            "properties": {
                "field": {
                    "type": "string",
                    "index": "analyzed",
                    "analyzer": "ik",
                    "fields": {
                        "raw": {
                            "type": "string",
                            "index": "no"
                        }
                    }
                }
            }
        }
    }
}
-
Create an analyzer named es_std whose scope is this index:
PUT /spanish_docs
{
"settings": {
"analysis": {
"analyzer": {
"es_std": {
"type": "standard",
"stopwords": "_spanish_"
}
}
}
}
}
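The new analyzer can be checked with the analyze API (the sample text is arbitrary Spanish):

```
GET /spanish_docs/_analyze?analyzer=es_std
El veloz zorro marrón
```

The Spanish stopword El is removed, while veloz, zorro, and marrón remain as tokens.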
the root object
- _id: The string ID of the document
- _type: The type name of the document
- _index: The index where the document lives
- _uid: The _type and _id concatenated together as type#id
The _id field does have one setting that you may want to use: the path setting tells Elasticsearch that it should extract the value for the _id from a field within the document itself. Here the document's _id comes from a value that already exists in the document:
PUT /my_index
{
"mappings": {
"my_type": {
"_id": {
"path": "doc_id"
},
"properties": {
"doc_id": {
"type": "string",
"index": "not_analyzed"
}
}
}
}
}
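With this mapping in place, indexing a document that contains a doc_id field sets the document's _id automatically (the value is hypothetical):

```
POST /my_index/my_type
{
    "doc_id": "123"
}
```

The indexing response reports "_id": "123", extracted from the doc_id field.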
dynamic mapping
- true: Add new fields dynamically (the default)
- false: Ignore new fields
- strict: Throw an exception if an unknown field is encountered
PUT /my_index
{
"mappings": {
"my_type": {
"dynamic": "strict",
"properties": {
"title": { "type": "string"},
"stash": {
"type": "object",
"dynamic": true
}
}
}
}
}
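With this mapping, an unknown field inside stash is accepted, while an unknown top-level field is rejected (field values here are hypothetical):

```
PUT /my_index/my_type/1
{
    "title": "This doc adds a new field",
    "stash": { "new_field": "Success!" }
}
```

Indexing this document succeeds, because new_field lives inside the dynamic stash object; adding a new top-level field instead would throw an exception because of "dynamic": "strict".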
Setting dynamic to false doesn't alter the contents of the _source field at all. The _source will still contain the whole JSON document that you indexed. However, any unknown fields will not be added to the mapping and will not be searchable.
customizing dynamic mapping
date_detection
By default, Elasticsearch checks new string fields to see whether their contents match a date pattern and, if so, maps them as dates. This behavior can be turned off by setting date_detection to false:
PUT /my_index
{
"mappings": {
"my_type": {
"date_detection": false
}
}
}
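With date detection disabled, a string that looks like a date stays a string (the field name is hypothetical):

```
PUT /my_index/my_type/1
{ "note": "2014-01-01" }
```

The note field is mapped as type string, not date.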
default mappings
The _default_ mapping provides defaults for all types in an index; settings in a specific type mapping override it. Here the _all field is disabled for every type except blog:
PUT /my_index
{
"mappings": {
"_default_": {
"_all": { "enabled": false }
},
"blog": {
"_all": { "enabled": true }
}
}
}
reindexing your data
To change the mapping of existing data, you must reindex: retrieve the documents from the old index with scan-and-scroll and use the bulk API to push them into the new index.
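A reindex pass can be sketched as a scan-and-scroll read from the old index followed by bulk writes into the new one (index, type, and field names are hypothetical):

```
GET /old_index/_search?search_type=scan&scroll=1m
{
    "query": { "match_all": {} },
    "size": 1000
}

POST /_bulk
{ "index": { "_index": "new_index", "_type": "my_type", "_id": "1" }}
{ "title": "doc copied from old_index" }
```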
index aliases and zero downtime
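Index aliases make a zero-downtime switch possible: clients talk to the alias while the real index behind it is swapped atomically (index names are hypothetical):

```
PUT /my_index_v1
PUT /my_index_v1/_alias/my_index

POST /_aliases
{
    "actions": [
        { "remove": { "index": "my_index_v1", "alias": "my_index" }},
        { "add":    { "index": "my_index_v2", "alias": "my_index" }}
    ]
}
```

Because both actions run in a single request, there is no moment when the alias points at neither index.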