Using a cache tier

The cache tier supports several modes:

  • Writeback Mode: When admins configure tiers with writeback mode, Ceph clients write data to the cache tier and receive an ACK from the cache tier. In time, the data written to the cache tier migrates to the storage tier and gets flushed from the cache tier. Conceptually, the cache tier is overlaid “in front” of the backing storage tier. When a Ceph client needs data that resides in the storage tier, the cache tiering agent migrates the data to the cache tier on read, then it is sent to the Ceph client. Thereafter, the Ceph client can perform I/O using the cache tier, until the data becomes inactive. This is ideal for mutable data (e.g., photo/video editing, transactional data, etc.).

  • Read-only Mode: When admins configure tiers with readonly mode, Ceph clients write data to the backing tier. On read, Ceph copies the requested object(s) from the backing tier to the cache tier. Stale objects get removed from the cache tier based on the defined policy. This approach is ideal for immutable data (e.g., presenting pictures/videos on a social network, DNA data, X-Ray imaging, etc.), because reading data from a cache pool that might contain out-of-date data provides weak consistency. Do not use readonly mode for mutable data.

The modes above are complemented by two variants that adapt them to different configurations (see the example after this list):

  • Read-forward Mode: this mode behaves the same as writeback mode when serving write requests. But when a Ceph client tries to read an object that has not yet been copied to the cache tier, Ceph forwards the request to the backing tier by replying with a “redirect” message, and the client then fetches the data from the backing tier directly. If the read performance of the backing tier is on a par with that of its cache tier, while its write performance or endurance falls far behind, this mode might be a better choice.

  • Read-proxy Mode: this mode is similar to read-forward mode: neither of them promotes/copies the data when the requested object does not exist in the cache tier. But instead of redirecting the Ceph client to the backing tier on a cache miss, the cache tier reads from the backing tier on behalf of the client. Under some circumstances, this mode can help to reduce latency.
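
For reference, a minimal sketch of how a mode is selected, assuming a cache pool named cache that is already attached as a tier. On the releases this post targets, the keywords accepted by the cache-mode command are writeback, readonly, readforward and readproxy (some of these have been deprecated or removed in later Ceph versions, so check your release's documentation):

root@ceph:~# ceph osd tier cache-mode cache readforward
root@ceph:~# ceph osd tier cache-mode cache readproxy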

Usage:

1. Add a cache hierarchy to the CRUSH map:

  • Create a CRUSH bucket of type root:
root@ceph:~# ceph osd crush add-bucket cache root
added bucket cache type root to crush map
  • Create a CRUSH bucket of type host:
root@ceph:~# ceph osd crush add-bucket host-cache host
added bucket host-cache type host to crush map
  • Move the new host bucket under the cache root in the CRUSH map:
root@ceph:~# ceph osd crush move host-cache root=cache
moved item id -6 name 'host-cache' to location {root=cache} in crush map
  • Add an OSD to host-cache, or move an existing OSD into it (see the note after the tree output below):
root@ceph:~# ceph osd crush create-or-move osd.2 0.03899 host=host-cache
create-or-move updating item name 'osd.2' weight 0.03899 at location {host=host-cache} to crush map
  • Check the CRUSH map:
root@ceph:~# ceph osd tree
ID WEIGHT  TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-5 0.03899 root cache                                            
-6 0.03899     host host-cache                                   
 2 0.03899         osd.2            up  1.00000          1.00000 
-1 0.08780 root default                                          
-2 0.08780     host ceph                                         
 1 0.08780         osd.1            up  1.00000          1.00000 
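
One practical caveat, not part of the original transcript: by default an OSD updates its own CRUSH location when it starts, based on the host it runs on, which can move osd.2 back out of host-cache after a restart. A common guard, sketched here under the assumption that ceph.conf is edited on the node running the cache OSD, is to disable that behaviour for it:

# /etc/ceph/ceph.conf on the cache OSD's node (sketch; the per-OSD section is an assumption)
[osd.2]
osd crush update on start = false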

2. Add a CRUSH rule:

  • Create a CRUSH rule based on the cache bucket:
root@ceph:~# ceph osd crush rule create-simple cache cache host
  • List the current rules (a quick cross-check follows the dump):
root@ceph:~# ceph osd crush rule dump
    {
        "rule_id": 2,
        "rule_name": "cache",
        "ruleset": 2,
        "type": 1,
        "min_size": 1,
        "max_size": 10,
        "steps": [
            {
                "op": "take",
                "item": -5,
                "item_name": "cache"
            },
            {
                "op": "chooseleaf_firstn",
                "num": 0,
                "type": "host"
            },
            {
                "op": "emit"
            }
        ]
    }
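
The ruleset number shown above (2) is what a pool references through its crush_ruleset property. As a quick cross-check, and assuming the pre-Luminous CLI used throughout this post, a pool's current ruleset can be read back with:

root@ceph:~# ceph osd pool get data crush_ruleset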

3. Create the cache pool:

  • Create the cache pool (see the sizing note after this step):
root@ceph:~# ceph osd pool create cache 64 64
pool 'cache' created
  • Change the pool's CRUSH ruleset:
root@ceph:~# ceph osd pool set cache crush_ruleset 2
set pool 3 crush_ruleset to 2
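
Note that in this example the cache root contains only one host with one OSD, while a newly created replicated pool usually defaults to a higher replica count, so its PGs cannot become fully active. A sketch for this lab-style setup (clearly not a durable configuration for production) is to match the replica count to the number of available cache hosts:

root@ceph:~# ceph osd pool set cache size 1
root@ceph:~# ceph osd pool set cache min_size 1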

4. Create the cache tier:

  • Bind the data pool and the cache pool:
root@ceph:~# ceph osd tier add data cache
pool 'cache' is now (or already was) a tier of 'data'
  • Set the cache tier mode:
root@ceph:~# ceph osd tier cache-mode cache writeback
set cache-mode for pool 'cache' to writeback
  • Set the overlay:
root@ceph:~# ceph osd tier set-overlay data cache
overlay for 'data' is now (or already was) 'cache' (WARNING: overlay pool cache_mode is still NONE)
  • Set the cache pool parameters (interpreted briefly after this list):
root@ceph:~# ceph osd pool set cache hit_set_type bloom
root@ceph:~# ceph osd pool set cache hit_set_count 1
root@ceph:~# ceph osd pool set cache hit_set_period 600
root@ceph:~# ceph osd pool set cache target_max_bytes 10000000000
root@ceph:~# ceph osd pool set cache target_max_objects 300000
root@ceph:~# ceph osd pool set cache cache_min_flush_age 600
root@ceph:~# ceph osd pool set cache cache_min_evict_age 600
root@ceph:~# ceph osd pool set cache cache_target_dirty_ratio 0.4
root@ceph:~# ceph osd pool set cache cache_target_dirty_high_ratio 0.6
root@ceph:~# ceph osd pool set cache cache_target_full_ratio 0.8
root@ceph:~# ceph osd pool set cache min_read_recency_for_promote 1
root@ceph:~# ceph osd pool set cache min_write_recency_for_promote 1
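
With the values above, and assuming target_max_bytes is the binding limit, the tiering agent treats the cache as logically full at 10 GB (or 300,000 objects): flushing of dirty objects starts at roughly 40% of that capacity (about 4 GB), speeds up at 60%, eviction keeps usage below 80%, and objects touched within the last 600 seconds are neither flushed nor evicted. Each setting can be read back the same way it was set, for example:

root@ceph:~# ceph osd pool get cache target_max_bytes
root@ceph:~# ceph osd pool get cache cache_target_dirty_ratio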