Comparison of Java Cache Libraries, and an OHCache Summary

On-heap caches:

  1. LinkedHashMap: a JDK built-in class with built-in support for LRU eviction (access order); callers must handle synchronization themselves for multi-threaded access (see the LRU sketch after this list).
  2. Guava Cache: the cache implementation in Google's Guava toolkit; supports LRU eviction, concurrent multi-threaded access and time-based expiration, but expired entries are only cleaned up when the cache is accessed.
  3. Ehcache: supports several eviction policies (LFU, LRU, FIFO) as well as persistence and clustering; performance is roughly on par with Guava Cache.
  4. Caffeine: uses the W-TinyLFU eviction policy; benchmarks show read/write performance roughly 6x that of Guava Cache.
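
As a concrete example of the first item, a minimal access-order LRU cache built on LinkedHashMap could look like the sketch below (the class name LruCache and the size limit are illustrative, not from any library):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: an access-ordered LinkedHashMap that evicts the eldest
// entry once the size limit is exceeded. Not thread-safe; wrap it with
// Collections.synchronizedMap(...) or use external locking for concurrent access.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true gives LRU iteration order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }
}
```

removeEldestEntry() is consulted by LinkedHashMap after every insertion, so eviction happens automatically on put.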

Off-heap caches:

  1. OHCache: supports eviction and expiration (the off-heap cache library used by Cassandra)
  2. ChronicleMap: hash-based structure, good performance, no eviction support
  3. MapDB: tree-based structure, supports sequential scans, no eviction support
  4. Ehcache 3: off-heap storage (BigMemory) is a paid commercial offering

When the amount of cached data is very large, GC pressure can make the service respond slowly or even crash. To mitigate this problem in the HugeGraph graph database, an off-heap cache had to be introduced, and OHCache was chosen.

OHCache off-heap cache summary

Open source: https://github.com/snazy/ohc

Features:

  • There are two underlying implementations: the linked implementation, the default, is suited to medium and large entries and supports most features; the chunked implementation is suited to small entries and is enabled by setting chunkSize or fixedEntrySize.
  • The eviction policy is set via eviction(); three policies are supported: LRU, W_TINY_LFU and NONE, and the latter two are only supported by the linked implementation.
  • capacity() specifies the cache capacity in bytes, not in number of entries.
  • hashTableSize() sets the initial size of each segment's hash table, i.e. roughly how many entries a segment holds before rehashing (rehashing occurs at the default load factor of 0.75).
  • Custom keySerializer and valueSerializer serializers must be provided, implementing serialize() and deserialize(); note that serializedSize() must also be implemented, and its return value must exactly match the number of bytes actually written, otherwise errors or corrupted data will result (see the serializer sketch after this list).
  • Per-entry expiration is supported and enabled via timeouts(); a default TTL can be set with defaultTTLmillis(), and an explicit expireAt timestamp can also be passed to put(). Note: expiration is only supported by the linked implementation, because the chunked implementation evicts whole chunks and cannot work at entry granularity; also, there is no dedicated thread reclaiming expired entries, they are only checked and removed during get or put operations (see get() and ensureFreeSpaceForNewEntry(); the details are in Timeouts.removeExpired()).
  • getWithLoader() can be used to load data asynchronously through a thread pool.
  • Using jemalloc is recommended to reduce memory fragmentation.
  • Actual memory usage: data_capacity + segment_count * hash_table_size * 8. A cache consists of segment_count hash tables (segment_count defaults to 2 * CPUs), and each hash table stores an 8-byte entry address per bucket; for example, with the defaults on an 8-core machine that is 16 * 8192 * 8 bytes, about 1 MB of hash-table overhead on top of the configured capacity. Every entry also carries a header (64 bytes in the linked implementation; 16 bytes in the chunked implementation for fixed-size KV, otherwise 24 bytes), followed by the key and value data; the header holds the previous/next pointers, key/value sizes, reference count, etc.
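
A minimal sketch of such a serializer, assuming the ByteBuffer-based CacheSerializer interface of OHC 0.7.x (the javadoc linked below); the class name StringSerializer and the wire format are illustrative:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import org.caffinitas.ohc.CacheSerializer;

// Serializes a String as a 4-byte length prefix followed by its UTF-8 bytes.
public class StringSerializer implements CacheSerializer<String> {
    @Override
    public void serialize(String value, ByteBuffer buf) {
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
        buf.putInt(bytes.length);
        buf.put(bytes);
    }

    @Override
    public String deserialize(ByteBuffer buf) {
        byte[] bytes = new byte[buf.getInt()];
        buf.get(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }

    @Override
    public int serializedSize(String value) {
        // Must report exactly what serialize() writes: 4-byte length prefix + UTF-8 payload.
        return 4 + value.getBytes(StandardCharsets.UTF_8).length;
    }
}
```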

OHCacheBuilder configures and builds an OHC instance. Its configuration fields and their defaults:

  • keySerializer: serializer implementation used for keys. Default: must be configured.
  • valueSerializer: serializer implementation used for values. Default: must be configured.
  • executorService: executor service required for get operations using a cache loader, e.g. OHCache.getWithLoaderAsync(Object, CacheLoader). Default: not configured, meaning get operations with a cache loader are not supported.
  • segmentCount: number of segments. Default: 2 * number of CPUs (java.lang.Runtime.availableProcessors()).
  • hashTableSize: initial size of each segment's hash table. Default: 8192.
  • loadFactor: hash table load factor, i.e. determines when rehashing occurs. Default: .75f.
  • capacity: capacity of the cache in bytes. Default: 16 MB * number of CPUs (java.lang.Runtime.availableProcessors()), minimum 64 MB.
  • chunkSize: if set and positive, the chunked implementation is used and each segment is divided into this number of chunks. Default: 0, i.e. the linked implementation is used.
  • fixedEntrySize: if set and positive, the chunked implementation with fixed-size entries is used; chunkSize must also be set for fixed-size entries. Default: 0, i.e. the linked implementation is used if chunkSize is also 0.
  • maxEntrySize: maximum size of a hash entry (including header, serialized key and serialized value). Default: not set, i.e. capacity divided by the number of segments.
  • throwOOME: throw OutOfMemoryError if an off-heap allocation fails. Default: false.
  • hashAlgorithm: hash algorithm used internally; valid options are XX (xx-hash), MURMUR3 or CRC32. Note: this setting may only help throughput in rare situations, i.e. if the key is very long and you have proven that it really improves performance. Default: MURMUR3.
  • unlocked: if set to true, implementations perform no locking and the calling code has to take care of synchronized access; to create an instance for a thread-per-core design, also set segmentCount=1. Default: false.
  • defaultTTLmillis: if set to a value > 0, implementations supporting TTLs tag all entries with the given TTL in milliseconds. Default: 0.
  • timeoutsSlots: the number of timeout slots per segment (compare with a hashed wheel timer). Default: 64.
  • timeoutsPrecision: the amount of time in milliseconds covered by each timeout slot. Default: 128.
  • ticker: indirection for the current time, used for unit tests. Default: a ticker based on System.nanoTime() and System.currentTimeMillis().
  • eviction: the eviction algorithm to use. Available: LRU (plain LRU, the least recently used entry is subject to eviction), W_TINY_LFU (Window TinyLFU; the size of the frequency sketch, the "admission filter", is set to the value of hashTableSize), NONE (no entries are evicted, which effectively provides a capacity-bounded off-heap map). Default: LRU.
  • frequencySketchSize: size of the frequency sketch used by W_TINY_LFU. Default: hashTableSize.
  • edenSize: size of the eden generation used by W_TINY_LFU, relative to a segment's size. Default: 0.2.
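
Putting several of these fields together, a hedged usage sketch (assuming OHC 0.7.x and the illustrative StringSerializer from the sketch above; key names and sizes are arbitrary):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.caffinitas.ohc.Eviction;
import org.caffinitas.ohc.OHCache;
import org.caffinitas.ohc.OHCacheBuilder;

public class OhcExample {
    public static void main(String[] args) throws Exception {
        OHCache<String, String> cache = OHCacheBuilder.<String, String>newBuilder()
                .keySerializer(new StringSerializer())   // serializers are mandatory
                .valueSerializer(new StringSerializer())
                .capacity(64L * 1024 * 1024)             // capacity is in bytes, not entries
                .eviction(Eviction.LRU)                  // LRU, W_TINY_LFU or NONE
                .timeouts(true)                          // enable per-entry expiry (linked implementation only)
                .defaultTTLmillis(60_000)                // default TTL for entries without an explicit expireAt
                .executorService(Executors.newScheduledThreadPool(2)) // used by getWithLoaderAsync()
                .build();

        cache.put("k1", "v1");                                     // expires after the default TTL
        cache.put("k2", "v2", System.currentTimeMillis() + 5_000); // explicit absolute expireAt

        // Load-through read: the loader runs on the configured executor when the key is absent.
        Future<String> v3 = cache.getWithLoaderAsync("k3", key -> "loaded-" + key);
        System.out.println(cache.get("k1") + " " + v3.get());

        cache.close(); // releases the off-heap memory
    }
}
```

LRU is used here for simplicity; with W_TINY_LFU the frequencySketchSize and edenSize fields above also become relevant.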

Documentation: https://javadoc.io/static/org.caffinitas.ohc/ohc-core/0.7.0/org/caffinitas/ohc/OHCacheBuilder.html
