ConcurrentHashMap Analysis
ConcurrentHashMap Definition
/**
 * A hash table supporting full concurrency of retrievals and high expected
 * concurrency for updates. This class obeys the same functional specification as
 * Hashtable, and includes versions of methods corresponding to each method of
 * Hashtable. However, even though all operations are thread-safe, retrieval
 * operations do not entail locking, and there is no support for locking the
 * entire table in a way that prevents all access. This class is fully
 * interoperable with Hashtable in programs that rely on its thread safety but
 * not on its synchronization details.
 *
 * Retrieval operations (including get) generally do not block, so they may
 * overlap with update operations (including put and remove). Retrievals reflect
 * the results of the most recently completed update operations (more formally,
 * an update operation for a given key bears a happens-before relation with any
 * (non-null) retrieval for that key reporting the updated value).
 * For aggregate operations such as putAll and clear, concurrent retrievals may
 * reflect insertion or removal of only some entries.
 * Similarly, Iterators, Spliterators and Enumerations return elements reflecting
 * the state of the hash table at some point at or since the creation of the
 * iterator/enumeration. They do not throw ConcurrentModificationException;
 * however, iterators are designed to be used by only one thread at a time.
 * Bear in mind that the results of aggregate status methods (size, isEmpty,
 * containsValue) are typically useful only when a map is not undergoing
 * concurrent updates in other threads. Otherwise the results of these methods
 * reflect transient states that may be adequate for monitoring or estimation
 * purposes, but not for program control.
 *
 * The table is dynamically expanded when there are too many collisions (i.e.,
 * keys that have distinct hash codes but fall into the same slot modulo the
 * table size), with the expected average effect of maintaining roughly two bins
 * per mapping (corresponding to a 0.75 load-factor threshold for resizing).
 * There may be much variance around this average as mappings are added and
 * removed, but overall this maintains a commonly accepted time/space tradeoff
 * for hash tables. However, resizing this or any other kind of hash table may be
 * a relatively slow operation. When possible, it is a good idea to provide a
 * size estimate as an optional initialCapacity constructor argument.
 * An additional optional loadFactor constructor argument provides a further
 * means of customizing initial table capacity by specifying the table density
 * to be used in calculating the amount of space to allocate for the given number
 * of elements. Also, for compatibility with previous versions of this class,
 * constructors may optionally specify an expected concurrencyLevel as an
 * additional hint for internal sizing.
 * Note: using many keys with exactly the same hashCode() is a sure way to slow
 * down performance of any hash table. To ameliorate impact, when keys are
 * Comparable, this class may use comparison order among keys to help break ties.
 *
 * A Set projection of a ConcurrentHashMap may be created (using newKeySet() or
 * newKeySet(int)), or viewed (using keySet(mappedValue)) when only keys are of
 * interest, and the mapped values are (perhaps transiently) not used or all take
 * the same mapping value.
 *
 * A ConcurrentHashMap can be used as a scalable frequency map (a form of
 * histogram or multiset) by using LongAdder values and initializing via
 * computeIfAbsent. For example, to add a count to a
 * ConcurrentHashMap<String,LongAdder> freqs, you can use
 * freqs.computeIfAbsent(key, k -> new LongAdder()).increment();
 *
 * This class and its views and iterators implement all of the optional methods
 * of the Map and Iterator interfaces.
 *
 * Like Hashtable but unlike HashMap, this class does not allow null to be used
 * as a key or value.
 *
 * ConcurrentHashMaps support a set of sequential and parallel bulk operations
 * that, unlike most Stream methods, are designed to be safely, and often
 * sensibly, applied even with maps that are being concurrently updated by other
 * threads; for example, when computing a snapshot summary of the values in a
 * shared registry. There are three kinds of operation, each with four forms,
 * accepting functions with keys, values, entries, and (key, value) pairs as
 * arguments and/or return values. Because the elements of a ConcurrentHashMap
 * are not ordered in any particular way, and may be processed in different
 * orders in different parallel executions, the correctness of supplied functions
 * should not depend on any ordering, or on any other objects or values that may
 * transiently change while computation is in progress; and except for forEach
 * actions, should ideally be side-effect-free. Bulk operations on Map.Entry
 * objects do not support method setValue.
 *
 * 1. forEach: performs a given action on each element. A variant form applies a
 *    given transformation on each element before performing the action.
 *
 * 2. search: returns the first available non-null result of applying a given
 *    function on each element; skipping further search when a result is found.
 *
 * 3. reduce: accumulates each element. The supplied reduction function cannot
 *    rely on ordering (more formally, it should be both associative and
 *    commutative). There are five variants:
 *    1. Plain reductions. (There is no form of this method for (key, value)
 *       function arguments since there is no corresponding return type.)
 *    2. Mapped reductions that accumulate the results of a given function
 *       applied to each element.
 *    3. Reductions to scalar doubles, longs, and ints, using a given basis
 *       value.
 *
 * These bulk operations accept a parallelismThreshold argument. Methods proceed
 * sequentially if the current map size is estimated to be less than the given
 * threshold. Using a value of Long.MAX_VALUE suppresses all parallelism. Using a
 * value of 1 results in maximal parallelism by partitioning into enough subtasks
 * to fully utilize the ForkJoinPool.commonPool() that is used for all parallel
 * computations. Normally, you would initially choose one of these extreme
 * values, and then measure performance of using in-between values that trade off
 * overhead versus throughput.
 *
 * The concurrency properties of bulk operations follow from those of
 * ConcurrentHashMap: any non-null result returned from get(key) and related
 * access methods bears a happens-before relation with the associated insertion
 * or update. The result of any bulk operation reflects the composition of these
 * per-element relations (but is not necessarily atomic with respect to the map
 * as a whole unless it is somehow known to be quiescent).
 * Conversely, because keys and values in the map are never null, null serves as
 * a reliable atomic indicator of the current lack of any result.
 * To maintain this property, null serves as an implicit basis for all non-scalar
 * reduction operations (more formally, it should be the identity element for the
 * reduction). Most common reductions have these properties; for example,
 * computing a sum with basis 0 or a minimum with basis MAX_VALUE.
 *
 * Search and transformation functions provided as arguments should similarly
 * return null to indicate the lack of any result (in which case it is not used).
 * In the case of mapped reductions, this also enables transformations to serve
 * as filters, returning null (or, in the case of primitive specializations, the
 * identity basis) if the element should not be combined. You can create compound
 * transformations and filterings by composing them yourself under this "null
 * means there is nothing there now" rule before using them in search or reduce
 * operations.
 *
 * Methods accepting and/or returning Entry arguments maintain key-value
 * associations. They may be useful for example when finding the key for the
 * greatest value.
 * Note: "plain" Entry arguments can be supplied using
 * new AbstractMap.SimpleEntry(k, v).
 *
 * Bulk operations may complete abruptly, throwing an exception encountered in
 * the application of a supplied function. Bear in mind when handling such
 * exceptions that other concurrently executing functions could also have thrown
 * exceptions, or would have done so if the first exception had not occurred.
 *
 * Speedups for parallel compared to sequential forms are common but not
 * guaranteed. Parallel operations involving brief functions on small maps may
 * execute more slowly than sequential forms if the underlying work to
 * parallelize the computation is more expensive than the computation itself.
 * Similarly, parallelization may not lead to much actual parallelism if all
 * processors are busy performing unrelated tasks.
 *
 * All arguments to all task methods must be non-null.
 *
 * This class is a member of the Java Collections Framework.
 *
* @since 1.5
* @author Doug Lea
* @param <K> the type of keys maintained by this map
* @param <V> the type of mapped values
*/
public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
implements ConcurrentMap<K,V>, Serializable {
private static final long serialVersionUID = 7249069246763182397L;
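The scalable frequency-map idiom mentioned in the class Javadoc can be sketched as follows (class and variable names here are illustrative, not part of the JDK source):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class FreqDemo {
    public static void main(String[] args) {
        // One LongAdder per key; computeIfAbsent installs it atomically,
        // so concurrent callers never lose an increment.
        ConcurrentHashMap<String, LongAdder> freqs = new ConcurrentHashMap<>();
        for (String word : new String[] {"a", "b", "a", "a"}) {
            freqs.computeIfAbsent(word, k -> new LongAdder()).increment();
        }
        System.out.println(freqs.get("a").sum()); // prints 3
        System.out.println(freqs.get("b").sum()); // prints 1
    }
}
```

Unlike `map.merge(word, 1L, Long::sum)`, the LongAdder variant keeps update contention low because increments are striped across cells rather than CASing a single boxed value.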
ConcurrentHashMap implementation:
Overview
/*
 * Overview:
 *
 * The primary design goal of this hash table is to maintain concurrent
 * readability (typically method get(), but also iterators and related methods)
 * while minimizing update contention. Secondary goals are to keep space
 * consumption about the same or better than HashMap, and to support high
 * initial insertion rates on an empty table by many threads.
 *
 * This map usually acts as a binned (bucketed) hash table. Each key-value
 * mapping is held in a Node. Most nodes have hash, key, value, and next
 * fields. However, various subclasses exist:
 * TreeNode: arranged in balanced trees, not lists.
 * TreeBin: holds the root of a set of TreeNodes.
 * ForwardingNode: placed at the head of a bin during resizing.
 * ReservationNode: used as a placeholder while establishing values in
 * computeIfAbsent and related methods.
 * TreeBin, ForwardingNode, and ReservationNode do not hold normal keys,
 * values, or hashes, and are readily distinguishable during search because
 * they have negative hash fields and null key and value fields.
 * (These special nodes are either uncommon or transient, so the impact of
 * carrying around some unused fields is insignificant.)
 *
 * The table is lazily initialized to a power-of-two size upon the first
 * insertion. Each bin in the table normally contains a list of Nodes (most
 * often, the list has only zero or one Node). Table accesses require
 * volatile/atomic reads, writes, and CASes. Because there is no other way to
 * arrange this without adding further indirections, we use intrinsics
 * (Unsafe) operations.
 *
 * We use the top (sign) bit of Node hash fields for control purposes -- it is
 * available anyway because of addressing constraints. Nodes with negative
 * hash fields are specially handled or ignored in map methods.
 *
 * Insertion (via put or its variants) of the first node in an empty bin is
 * performed by just CASing it to the bin. This is by far the most common case
 * for put operations under most key/hash distributions. Other update
 * operations (insert, delete, and replace) require locks. We do not want to
 * waste the space required to associate a distinct lock object with each bin,
 * so instead use the first node of a bin list itself as the lock. Locking
 * support for these locks relies on the builtin "synchronized" monitors.
 *
 * Using the first node of a list as a lock does not by itself suffice though:
 * when a node is locked, any update must first validate that it is still the
 * first node after locking it, and retry if not. Because new nodes are always
 * appended to lists, once a node is first in a bin, it remains first until
 * deleted or the bin becomes invalidated (upon resizing).
 *
 * The main disadvantage of per-bin locks is that other update operations on
 * other nodes in a bin list protected by the same lock can stall, for example
 * when user equals() or mapping functions take a long time. However,
 * statistically, under random hash codes, this is not a common problem.
 * Ideally, the frequency of nodes in bins follows a Poisson distribution
 * (http://en.wikipedia.org/wiki/Poisson_distribution) with a parameter of
 * about 0.5 on average, given the resizing threshold of 0.75, although with a
 * large variance because of resizing granularity. Ignoring variance, the
 * expected occurrences of list size k are
 * (exp(-0.5) * pow(0.5, k) / factorial(k)). The first values are:
 *
 * 0: 0.60653066
 * 1: 0.30326533
 * 2: 0.07581633
 * 3: 0.01263606
 * 4: 0.00157952
 * 5: 0.00015795
 * 6: 0.00001316
 * 7: 0.00000094
 * 8: 0.00000006
 * more: less than 1 in ten million
 *
 * Under random hashes, the lock contention probability for two threads
 * accessing distinct elements is roughly 1 / (8 * #elements).
 *
 * Actual hash code distributions encountered in practice sometimes deviate
 * significantly from uniform randomness. This includes the case when
 * N > (1 << 30), so some keys MUST collide. Similarly for dumb or hostile
 * usages in which multiple keys are designed to have identical hash codes or
 * ones that differ only in masked-out high bits. So we use a secondary
 * strategy that applies when the number of nodes in a bin exceeds a
 * threshold. These TreeBins use a balanced tree to hold nodes (a specialized
 * form of red-black trees), bounding search time to O(log N). Each search
 * step in a TreeBin is at least twice as slow as in a regular list, but given
 * that N cannot exceed (1 << 64) (before running out of addresses) this
 * bounds search steps, lock hold times, etc, to reasonable constants (roughly
 * 100 nodes inspected per operation worst case) so long as keys are
 * Comparable (which is very common -- String, Long, etc).
 * TreeBin nodes (TreeNodes) also maintain the same "next" traversal pointers
 * as regular nodes, so can be traversed in iterators in the same way.
 *
 * The table is resized when occupancy exceeds a percentage threshold
 * (nominally, 0.75, but see below). Any thread noticing an overfull bin may
 * assist in resizing after the initiating thread allocates and sets up the
 * replacement array. However, rather than stalling, these other threads may
 * proceed with insertions etc. The use of TreeBins shields us from the worst
 * case effects of overfilling while resizes are in progress.
 * Resizing proceeds by transferring bins, one by one, from the table to the
 * next table. However, threads claim small blocks of indices to transfer
 * (via field transferIndex) before doing so, reducing contention. A
 * generation stamp in field sizeCtl ensures that resizings do not overlap.
 * Because we are using power-of-two expansion, the elements from each bin
 * must either stay at the same index, or move with a power-of-two offset. We
 * eliminate unnecessary node creation by catching cases where old nodes can
 * be reused because their next fields won't change. On average, only about
 * one-sixth of them need cloning when a table doubles. The nodes they replace
 * will be garbage-collectable as soon as they are no longer referenced by any
 * reader thread that may be in the midst of concurrently traversing the
 * table. Upon transfer, the old table bin contains only a special forwarding
 * node (with hash field "MOVED") that contains the next table as its key. On
 * encountering a forwarding node, access and update operations restart, using
 * the new table.
 *
 * Each bin transfer requires its bin lock, which can stall waiting for locks
 * while resizing. However, because other threads can join in and help resize
 * rather than contend for locks, average aggregate waits become shorter as
 * resizing progresses. The transfer operation must also ensure that all
 * accessible bins in both the old and new table are usable by any traversal.
 * This is arranged in part by proceeding from the last bin (table.length - 1)
 * up towards the first. Upon seeing a forwarding node, traversals (see class
 * Traverser) arrange to move to the new table without revisiting nodes. To
 * ensure that no intervening nodes are skipped even when moved out of order,
 * a stack (see class TableStack) is created on first encounter of a
 * forwarding node during a traversal, to maintain its place if later
 * processing the current table. The need for these save/restore mechanics is
 * relatively rare, but when one forwarding node is encountered, typically
 * many more will be. So Traversers use a simple caching scheme to avoid
 * creating so many new TableStack nodes.
 *
 *
 * The traversal scheme also applies to partial traversals of ranges of bins
 * (via an alternate Traverser constructor) to support partitioned aggregate
 * operations. Also, read-only operations give up if ever forwarded to a null
 * table, which provides support for shutdown-style clearing, which is also
 * not currently implemented.
 *
 * Lazy table initialization minimizes footprint until first use, and also
 * avoids resizings when the first operation is from a putAll, a constructor
 * with a map argument, or deserialization. These cases attempt to override
 * the initial capacity settings, but harmlessly fail to take effect in cases
 * of races.
 *
 * The element count is maintained using a specialization of LongAdder. We
 * need to incorporate a specialization rather than just use a LongAdder in
 * order to access implicit contention-sensing that leads to creation of
 * multiple CounterCells. The counter mechanics avoid contention on updates
 * but can encounter cache thrashing if read too frequently during concurrent
 * access. To avoid reading so often, resizing under contention is attempted
 * only upon adding to a bin already holding two or more nodes. Under uniform
 * hash distributions, the probability of this occurring at threshold is
 * around 13%, meaning that only about 1 in 8 puts check threshold (and after
 * resizing, many fewer do so).
 *
 * TreeBins use a special form of comparison for search and related operations
 * (which is the main reason we cannot use existing collections such as
 * TreeMap). TreeBins contain Comparable elements, but may contain others, as
 * well as elements that are Comparable but not necessarily Comparable for the
 * same T, so we cannot invoke compareTo among them. To handle this, the tree
 * is ordered primarily by hash value, then by Comparable.compareTo order if
 * applicable. On lookup at a node, if elements are not comparable or compare
 * as 0 then both left and right children may need to be searched in the case
 * of tied hash values. (This corresponds to the full list search that would
 * be necessary if all elements were non-Comparable and had tied hashes.) On
 * insertion, to keep a total ordering (or as close as is required here)
 * across rebalancings, we compare classes and identityHashCodes as
 * tie-breakers. The red-black balancing code is updated from CLR
 * ("Introduction to Algorithms"):
 * (http://gee.cs.oswego.edu/dl/classes/collections/RBCell.java)
 *
 * TreeBins also require an additional locking mechanism. While list traversal
 * is always possible by readers even during updates, tree traversal is not,
 * mainly because of tree rotations that may change the root node and/or its
 * linkages. TreeBins include a simple read/write lock mechanism parasitic on
 * the main bin-synchronization strategy: structural adjustments associated
 * with an insertion or removal are already bin-locked (and so cannot conflict
 * with other writers) but must wait for ongoing readers to finish. Since
 * there can be only one such waiter, we use a simple scheme using a single
 * "waiter" field to block writers. However, readers need never block. If the
 * root lock is held, they proceed along the slow traversal path (via
 * next-pointers) until the lock becomes available or the list is exhausted,
 * whichever comes first. These cases are not fast, but maximize aggregate
 * expected throughput.
 *
 * Maintaining API and serialization compatibility with previous versions of
 * this class introduces several oddities. Mainly: we keep the unmodified but
 * unused concurrencyLevel constructor argument. We accept a loadFactor
 * constructor argument, but apply it only to the initial table capacity (the
 * only time that we can guarantee to honor it). We also declare an unused
 * "Segment" class that is instantiated in minimal form only when serializing.
 *
 * Also, solely for compatibility with previous versions of this class, it
 * extends AbstractMap, even though all of its methods are overridden, so it
 * is just useless baggage.
 *
 * This file is organized to make things a little easier to follow while
 * reading than they otherwise might be:
 * 1. Main static declarations and utility methods
 * 2. Fields, followed by the main public methods (with a few factorings of
 *    multiple public methods into internal ones)
 * 3. Sizing methods, trees, traversers, and bulk operations
 */
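The Poisson probabilities tabulated in the comment above can be reproduced with a short calculation. This is a sketch; the parameter 0.5 is the average bin occupancy at the 0.75 resize threshold, as stated in the comment:

```java
public class PoissonBins {
    public static void main(String[] args) {
        double lambda = 0.5;          // mean nodes per bin at the resize threshold
        double p = Math.exp(-lambda); // P(k = 0), probability of an empty bin
        for (int k = 0; k <= 8; k++) {
            System.out.printf("%d: %.8f%n", k, p);
            // Recurrence: P(k+1) = P(k) * lambda / (k + 1)
            p = p * lambda / (k + 1);
        }
    }
}
```

Running this prints the same values as the table (0.60653066 for k = 0, 0.30326533 for k = 1, and so on), confirming why bins of size 8 or more are vanishingly rare under random hashes.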
Constants
/**
 * The largest possible table capacity. This value must be exactly 1 << 30 to
 * stay within Java array allocation and indexing bounds for power-of-two
 * table sizes, and is further required because the top two bits of 32-bit
 * hash fields are used for control purposes.
 */
private static final int MAXIMUM_CAPACITY = 1 << 30;
/**
 * The default initial table capacity. Must be a power of 2 (i.e., at least 1)
 * and at most MAXIMUM_CAPACITY.
 */
private static final int DEFAULT_CAPACITY = 16;
/**
 * The largest possible (non-power-of-two) array size. Needed by toArray and
 * related methods.
 */
static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
/**
 * The default concurrency level. Unused, but defined for compatibility with
 * previous versions of this class.
 */
private static final int DEFAULT_CONCURRENCY_LEVEL = 16;
/**
 * The load factor for this table. Overrides of this value in constructors
 * affect only the initial table capacity. The actual floating-point value is
 * not normally used -- it is simpler to use expressions such as
 * n - (n >>> 2) for the associated resizing threshold.
 */
private static final float LOAD_FACTOR = 0.75f;
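The `n - (n >>> 2)` expression mentioned above computes `0.75 * n` using only integer operations, since `n >>> 2` is `n / 4` for any non-negative power-of-two table size. A quick illustration:

```java
public class ThresholdDemo {
    public static void main(String[] args) {
        int n = 16;
        // n - n/4 == 0.75 * n: the element count at which a table of
        // n bins is resized, with no floating-point arithmetic involved.
        int threshold = n - (n >>> 2);
        System.out.println(threshold); // prints 12
    }
}
```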
/**
 * The bin count threshold for using a tree rather than a list for a bin.
 * Bins are converted to trees when adding an element to a bin with at least
 * this many nodes. The value must be greater than 2, and should be at least
 * 8 to mesh with assumptions in tree removal about conversion back to plain
 * bins upon shrinkage.
 */
static final int TREEIFY_THRESHOLD = 8;
/**
 * The bin count threshold for untreeifying a (split) bin during a resize
 * operation. Should be less than TREEIFY_THRESHOLD, and at most 6 to mesh
 * with shrinkage detection under removal.
 */
static final int UNTREEIFY_THRESHOLD = 6;
/**
 * The smallest table capacity for which bins may be treeified. (Otherwise
 * the table is resized if too many nodes are in a bin.) The value should be
 * at least 4 * TREEIFY_THRESHOLD to avoid conflicts between resizing and
 * treeification thresholds.
 */
static final int MIN_TREEIFY_CAPACITY = 64;
/**
 * Minimum number of rebinnings per transfer step. Ranges are subdivided to
 * allow multiple resizer threads; this value serves as a lower bound to
 * avoid resizers encountering excessive memory contention. The value should
 * be at least DEFAULT_CAPACITY.
 */
private static final int MIN_TRANSFER_STRIDE = 16;
/**
 * The number of bits used for the generation stamp in sizeCtl. Must be at
 * least 6 for 32-bit arrays.
 */
private static final int RESIZE_STAMP_BITS = 16;
/**
 * The maximum number of threads that can help resize.
 * Must fit in 32 - RESIZE_STAMP_BITS bits.
 */
private static final int MAX_RESIZERS = (1 << (32 - RESIZE_STAMP_BITS)) - 1;
/**
 * The bit shift for recording the size stamp in sizeCtl.
 */
private static final int RESIZE_STAMP_SHIFT = 32 - RESIZE_STAMP_BITS;
/*
 * Encodings for Node hash fields. See above for explanation.
 */
static final int MOVED = -1; // hash for forwarding nodes
static final int TREEBIN = -2; // hash for roots of trees
static final int RESERVED = -3; // hash for transient reservations
static final int HASH_BITS = 0x7fffffff; // usable bits of a normal node hash
/** Number of CPUs, to place bounds on some sizings */
static final int NCPU = Runtime.getRuntime().availableProcessors();
/**
 * Serialized pseudo-fields, provided only for jdk7 compatibility.
 * @serialField segments Segment[]
 *   The segments, each of which is a specialized hash table.
 * @serialField segmentMask int
 *   Mask value for indexing into segments. The upper bits of a key's hash
 *   code are used to choose the segment.
 * @serialField segmentShift int
 *   Shift value for indexing within segments.
 */
private static final ObjectStreamField[] serialPersistentFields = {
new ObjectStreamField("segments", Segment[].class),
new ObjectStreamField("segmentMask", Integer.TYPE),
new ObjectStreamField("segmentShift", Integer.TYPE),
};
Nodes
/**
 * Key-value entry. This class is never exported out as a user-mutable
 * Map.Entry (i.e., one supporting setValue; see MapEntry below), but can be
 * used for read-only traversals used in bulk tasks. Subclasses of Node with
 * a negative hash field are special, and contain null keys and values (but
 * are never exported). Otherwise, keys and vals are never null.
 */
static class Node<K,V> implements Map.Entry<K,V> {
final int hash; // hash of this entry
final K key; // the key
volatile V val; // the value
volatile Node<K,V> next; // link to the next node in the bin
Node(int hash, K key, V val) {
this.hash = hash;
this.key = key;
this.val = val;
}
Node(int hash, K key, V val, Node<K,V> next) {
this(hash, key, val);
this.next = next;
}
public final K getKey() { return key; }
public final V getValue() { return val; }
public final int hashCode() { return key.hashCode() ^ val.hashCode(); }
public final String toString() {
return Helpers.mapEntryToString(key, val);
}
public final V setValue(V value) {
throw new UnsupportedOperationException();
}
public final boolean equals(Object o) {
Object k, v, u; Map.Entry<?,?> e;
return ((o instanceof Map.Entry) &&
(k = (e = (Map.Entry<?,?>)o).getKey()) != null &&
(v = e.getValue()) != null &&
(k == key || k.equals(key)) &&
(v == (u = val) || v.equals(u)));
}
/**
 * Virtualized support for map.get(); overridden in subclasses.
 */
Node<K,V> find(int h, Object k) {
Node<K,V> e = this;
if (k != null) {
do { // check whether the current node e's key equals the given k
K ek;
if (e.hash == h &&
((ek = e.key) == k || (ek != null && k.equals(ek))))
return e;
} while ((e = e.next) != null); // keep traversing until next is null
}
return null;
}
}
Static utilities
/**
 * Spreads (XORs) higher bits of hash to lower and also forces the top bit to
 * 0. Because the table uses power-of-two masking, sets of hashes that vary
 * only in bits above the current mask will always collide. (Among known
 * examples are sets of Float keys holding consecutive whole numbers in small
 * tables.) So we apply a transform that spreads the impact of higher bits
 * downward. There is a tradeoff between speed, utility, and quality of
 * bit-spreading. Because many common sets of hashes are already reasonably
 * distributed (so don't benefit from spreading), and because we use trees to
 * handle large sets of collisions in bins, we just XOR some shifted bits in
 * the cheapest possible way to reduce systematic lossage, as well as to
 * incorporate the impact of the highest bits that would otherwise never be
 * used in index calculations because of table bounds.
 */
static final int spread(int h) {
return (h ^ (h >>> 16)) & HASH_BITS;
}
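A small demonstration of what spread() buys us, with the method copied out of the class so it can run standalone (the hash values are chosen for illustration):

```java
public class SpreadDemo {
    static final int HASH_BITS = 0x7fffffff; // usable bits of a normal hash

    // Same transform as ConcurrentHashMap.spread: XOR the high half
    // into the low half, then clear the sign bit.
    static int spread(int h) {
        return (h ^ (h >>> 16)) & HASH_BITS;
    }

    public static void main(String[] args) {
        // Two hashes differing only above bit 16 collide in a 16-bin
        // table under a plain (h & 15) mask...
        int h1 = 0x10000, h2 = 0x20000;
        System.out.println(h1 & 15);         // 0
        System.out.println(h2 & 15);         // 0
        // ...but land in different bins after spreading:
        System.out.println(spread(h1) & 15); // 1
        System.out.println(spread(h2) & 15); // 2
        // The sign bit is always forced to 0, keeping it free for
        // the MOVED/TREEBIN/RESERVED control encodings:
        System.out.println(spread(-1) >= 0); // true
    }
}
```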
/**
 * Returns a power-of-two table size for the given desired capacity.
 * See Hackers Delight, sec 3.2.
 */
private static final int tableSizeFor(int c) {
int n = -1 >>> Integer.numberOfLeadingZeros(c - 1);
return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
}
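tableSizeFor rounds a requested capacity up to the next power of two. A quick check of its edge behavior, with the method and constant copied out so the sketch is self-contained (note that in Java a shift by 32 is a no-op, which is why the `n < 0` branch handles `c = 1`):

```java
public class TableSizeForDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    static int tableSizeFor(int c) {
        // Fill every bit below the highest set bit of (c - 1), then add 1.
        int n = -1 >>> Integer.numberOfLeadingZeros(c - 1);
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        System.out.println(tableSizeFor(1));  // 1  (shift by 32 leaves n = -1)
        System.out.println(tableSizeFor(16)); // 16 (already a power of two)
        System.out.println(tableSizeFor(17)); // 32
    }
}
```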
/**
 * Returns x's Class if it is of the form "class C implements
 * Comparable<C>", else null.
 */
static Class<?> comparableClassFor(Object x) {
if (x instanceof Comparable) {
Class<?> c; Type[] ts, as; ParameterizedType p;
if ((c = x.getClass()) == String.class) // bypass checks
return c;
if ((ts = c.getGenericInterfaces()) != null) {
for (Type t : ts) {
if ((t instanceof ParameterizedType) &&
((p = (ParameterizedType)t).getRawType() ==
Comparable.class) &&
(as = p.getActualTypeArguments()) != null &&
as.length == 1 && as[0] == c) // type arg is c
return c;
}
}
}
return null;
}
/**
 * Returns k.compareTo(x) if x matches kc (k's screened comparable class),
 * else 0.
 */
@SuppressWarnings({"rawtypes","unchecked"}) // for cast to Comparable
static int compareComparables(Class<?> kc, Object k, Object x) {
return (x == null || x.getClass() != kc ? 0 :
((Comparable)k).compareTo(x));
}
/* ---------------- Table element access -------------- */
/*
 * Atomic access methods are used for table elements as well as elements of
 * an in-progress next table while resizing. All callers must check that the
 * tab argument is non-null. All callers also paranoically precheck that
 * tab's length is not zero (or an equivalent check), thus ensuring that any
 * index argument taking the form of a hash value anded with (length - 1) is
 * a valid index.
 * Note: to be correct wrt arbitrary concurrency errors by users, these
 * checks must operate on local variables, which accounts for some
 * odd-looking inline assignments below.
 * Note: calls to setTabAt always occur within locked regions, and so
 * require only release ordering.
 */
@SuppressWarnings("unchecked")
// atomic (acquire) read of tab[i]
static final <K,V> Node<K,V> tabAt(Node<K,V>[] tab, int i) {
return (Node<K,V>)U.getObjectAcquire(tab, ((long)i << ASHIFT) + ABASE);
}
// atomic CAS of tab[i]
static final <K,V> boolean casTabAt(Node<K,V>[] tab, int i,
Node<K,V> c, Node<K,V> v) {
return U.compareAndSetObject(tab, ((long)i << ASHIFT) + ABASE, c, v);
}
// ordered (release) write of tab[i]
static final <K,V> void setTabAt(Node<K,V>[] tab, int i, Node<K,V> v) {
U.putObjectRelease(tab, ((long)i << ASHIFT) + ABASE, v);
}
Fields & Public Methods
/* ---------------- Fields -------------- */
/**
 * The array of bins. Lazily initialized upon the first insertion. Size is
 * always a power of two. Accessed directly by iterators.
 */
transient volatile Node<K,V>[] table;
/**
 * The next table to use; non-null only while resizing.
 */
private transient volatile Node<K,V>[] nextTable;
/**
 * Base counter value, used mainly when there is no contention, but also as
 * a fallback during table initialization races. Updated via CAS.
 */
private transient volatile long baseCount;
/**
 * Table initialization and resizing control. When negative, the table is
 * being initialized or resized: -1 for initialization, else -(1 + the
 * number of active resizing threads). Otherwise, while the table is null,
 * holds the initial table size to use upon creation, or 0 for the default.
 * After initialization, holds the next element count at which to resize the
 * table.
 */
private transient volatile int sizeCtl;
/**
 * The next table index (plus one) to split while resizing.
 */
private transient volatile int transferIndex;
/**
* Spinlock (locked via CAS) used when resizing and/or creating CounterCells.
*/
private transient volatile int cellsBusy;
/**
* Table of counter cells. When non-null, size is a power of 2.
*/
private transient volatile CounterCell[] counterCells;
// views
private transient KeySetView<K,V> keySet;
private transient ValuesView<K,V> values;
private transient EntrySetView<K,V> entrySet;
/* ---------------- Public operations -------------- */
/**
* Creates a new, empty map with the default initial table size (16).
*/
public ConcurrentHashMap() {
}
/**
* Creates a new, empty map with an initial table size
* accommodating the specified number of elements without the need
* to dynamically resize.
*
* @param initialCapacity The implementation performs internal
* sizing to accommodate this many elements.
* @throws IllegalArgumentException if the initial capacity of
* elements is negative
*/
public ConcurrentHashMap(int initialCapacity) {
this(initialCapacity, LOAD_FACTOR, 1);
}
/**
* Creates a new map with the same mappings as the given map.
*
* @param m the map
*/
public ConcurrentHashMap(Map<? extends K, ? extends V> m) {
this.sizeCtl = DEFAULT_CAPACITY;
putAll(m);
}
/**
* Creates a new, empty map with an initial table size based on
* the given number of elements ({@code initialCapacity}) and
* initial table density ({@code loadFactor}).
*
* @param initialCapacity the initial capacity. The implementation
* performs internal sizing to accommodate this many elements,
* given the specified load factor.
* @param loadFactor the load factor (table density) for
* establishing the initial table size
* @throws IllegalArgumentException if the initial capacity of
* elements is negative or the load factor is nonpositive
*
* @since 1.6
*/
public ConcurrentHashMap(int initialCapacity, float loadFactor) {
this(initialCapacity, loadFactor, 1);
}
/**
* Creates a new, empty map with an initial table size based on
* the given number of elements ({@code initialCapacity}), initial
* table density ({@code loadFactor}), and number of concurrently
* updating threads ({@code concurrencyLevel}).
*
* @param initialCapacity the initial capacity. The implementation
* performs internal sizing to accommodate this many elements,
* given the specified load factor.
* @param loadFactor the load factor (table density) for
* establishing the initial table size
* @param concurrencyLevel the estimated number of concurrently
* updating threads. The implementation may use this value as
* a sizing hint.
* @throws IllegalArgumentException if the initial capacity is
* negative or the load factor or concurrencyLevel are
* nonpositive
*/
public ConcurrentHashMap(int initialCapacity,
float loadFactor, int concurrencyLevel) {
if (!(loadFactor > 0.0f) || initialCapacity < 0 || concurrencyLevel <= 0)
throw new IllegalArgumentException();
if (initialCapacity < concurrencyLevel) // Use at least as many bins
initialCapacity = concurrencyLevel; // as estimated threads
long size = (long)(1.0 + (long)initialCapacity / loadFactor);
int cap = (size >= (long)MAXIMUM_CAPACITY) ?
MAXIMUM_CAPACITY : tableSizeFor((int)size);
this.sizeCtl = cap;
}
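Tracing the sizing arithmetic in the constructor above: for initialCapacity = 16 and loadFactor = 0.75, size = (long)(1.0 + 16 / 0.75) = 22, and tableSizeFor(22) yields 32, so sizeCtl records 32 bins — enough to hold 16 elements without crossing the 0.75 threshold. A sketch of the same arithmetic (helper inlined; the MAXIMUM_CAPACITY clamp is omitted for brevity):

```java
public class CtorSizing {
    static int tableSizeFor(int c) {
        int n = -1 >>> Integer.numberOfLeadingZeros(c - 1);
        return (n < 0) ? 1 : n + 1;
    }

    public static void main(String[] args) {
        int initialCapacity = 16;
        float loadFactor = 0.75f;
        // Same expression as the constructor: request enough bins so that
        // initialCapacity elements fit without triggering a resize.
        long size = (long)(1.0 + (long)initialCapacity / loadFactor);
        System.out.println(size);                    // 22
        System.out.println(tableSizeFor((int)size)); // 32
    }
}
```

This also explains why `new ConcurrentHashMap<>(16)` allocates a 32-bin table, whereas `new HashMap<>(16)` allocates 16 bins: ConcurrentHashMap interprets the argument as an element count, not a bin count.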
// Original (since JDK1.2) Map methods
/**
* {@inheritDoc}
*/
public int size() {
long n = sumCount();
return ((n < 0L) ? 0 :
(n > (long)Integer.MAX_VALUE) ? Integer.MAX_VALUE :
(int)n);
}
/**
* {@inheritDoc}
*/
public boolean isEmpty() {
return sumCount() <= 0L; // ignore transient negative values
}
/**
* Returns the value to which the specified key is mapped,
* or {@code null} if this map contains no mapping for the key.
*
* <p>More formally, if this map contains a mapping from a key
* {@code k} to a value {@code v} such that {@code key.equals(k)},
* then this method returns {@code v}; otherwise it returns
* {@code null}. (There can be at most one such mapping.)
*
* @throws NullPointerException if the specified key is null
*/
public V get(Object key) {
Node<K,V>[] tab; Node<K,V> e, p; int n, eh; K ek;
int h = spread(key.hashCode());
if ((tab = table) != null && (n = tab.length) > 0 &&
(e = tabAt(tab, (n - 1) & h)) != null) {
if ((eh = e.hash) == h) {
if ((ek = e.key) == key || (ek != null && key.equals(ek)))
return e.val;
}
else if (eh < 0)
return (p = e.find(h, key)) != null ? p.val : null;
while ((e = e.next) != null) {
if (e.hash == h &&
((ek = e.key) == key || (ek != null && key.equals(ek))))
return e.val;
}
}
return null;
}
/**
* Tests if the specified object is a key in this table.
*
* @param key possible key
* @return {@code true} if and only if the specified object
* is a key in this table, as determined by the
* {@code equals} method; {@code false} otherwise
* @throws NullPointerException if the specified key is null
*/
public boolean containsKey(Object key) {
return get(key) != null;
}
/**
* Returns {@code true} if this map maps one or more keys to the
* specified value. Note: This method may require a full traversal
* of the map, and is much slower than method {@code containsKey}.
*
* @param value value whose presence in this map is to be tested
* @return {@code true} if this map maps one or more keys to the
* specified value
* @throws NullPointerException if the specified value is null
*/
public boolean containsValue(Object value) {
if (value == null)
throw new NullPointerException();
Node<K,V>[] t;
if ((t = table) != null) {
Traverser<K,V> it = new Traverser<K,V>(t, t.length, 0, t.length);
for (Node<K,V> p; (p = it.advance()) != null; ) {
V v;
if ((v = p.val) == value || (v != null && value.equals(v)))
return true;
}
}
return false;
}
/**
* Maps the specified key to the specified value in this table.
* Neither the key nor the value can be null.
*
* <p>The value can be retrieved by calling the {@code get} method
* with a key that is equal to the original key.
*
* @param key key with which the specified value is to be associated
* @param value value to be associated with the specified key
* @return the previous value associated with {@code key}, or
* {@code null} if there was no mapping for {@code key}
* @throws NullPointerException if the specified key or value is null
*/
public V put(K key, V value) {
return putVal(key, value, false);
}
/** Implementation for put and putIfAbsent */
final V putVal(K key, V value, boolean onlyIfAbsent) {
if (key == null || value == null) throw new NullPointerException();
int hash = spread(key.hashCode());
int binCount = 0;
for (Node<K,V>[] tab = table;;) {
Node<K,V> f; int n, i, fh; K fk; V fv;
if (tab == null || (n = tab.length) == 0)
tab = initTable();
else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
if (casTabAt(tab, i, null, new Node<K,V>(hash, key, value)))
break; // no lock when adding to empty bin
}
else if ((fh = f.hash) == MOVED)
tab = helpTransfer(tab, f);
else if (onlyIfAbsent // check first node without acquiring lock
&& fh == hash
&& ((fk = f.key) == key || (fk != null && key.equals(fk)))
&& (fv = f.val) != null)
return fv;
else {
V oldVal = null;
synchronized (f) {
if (tabAt(tab, i) == f) {
if (fh >= 0) {
binCount = 1;
for (Node<K,V> e = f;; ++binCount) {
K ek;
if (e.hash == hash &&
((ek = e.key) == key ||
(ek != null && key.equals(ek)))) {
oldVal = e.val;
if (!onlyIfAbsent)
e.val = value;
break;
}
Node<K,V> pred = e;
if ((e = e.next) == null) {
pred.next = new Node<K,V>(hash, key, value);
break;
}
}
}
else if (f instanceof TreeBin) {
Node<K,V> p;
binCount = 2;
if ((p = ((TreeBin<K,V>)f).putTreeVal(hash, key,
value)) != null) {
oldVal = p.val;
if (!onlyIfAbsent)
p.val = value;
}
}
else if (f instanceof ReservationNode)
throw new IllegalStateException("Recursive update");
}
}
if (binCount != 0) {
if (binCount >= TREEIFY_THRESHOLD)
treeifyBin(tab, i);
if (oldVal != null)
return oldVal;
break;
}
}
}
addCount(1L, binCount);
return null;
}
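The onlyIfAbsent flag in putVal is what distinguishes put from putIfAbsent at the public API level. Their observable difference in return values:

```java
import java.util.concurrent.ConcurrentHashMap;

public class PutDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        System.out.println(map.put("k", 1));         // null (no previous mapping)
        System.out.println(map.put("k", 2));         // 1 (old value; replaced)
        System.out.println(map.putIfAbsent("k", 3)); // 2 (existing value kept)
        System.out.println(map.get("k"));            // 2
    }
}
```

Note the fast path in putVal: when onlyIfAbsent is true and the first node of the bin already matches, the existing value is returned without acquiring the bin lock at all.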
/**
* Copies all of the mappings from the specified map to this one.
* These mappings replace any mappings that this map had for any of the
* keys currently in the specified map.
*
* @param m mappings to be stored in this map
*/
public void putAll(Map<? extends K, ? extends V> m) {
tryPresize(m.size());
for (Map.Entry<? extends K, ? extends V> e : m.entrySet())
putVal(e.getKey(), e.getValue(), false);
}
/**
* Removes the key (and its corresponding value) from this map.
* This method does nothing if the key is not in the map.
*
* @param key the key that needs to be removed
* @return the previous value associated with {@code key}, or
* {@code null} if there was no mapping for {@code key}
* @throws NullPointerException if the specified key is null
*/
public V remove(Object key) {
return replaceNode(key, null, null);
}
/**
* Implementation for the four public remove/replace methods:
* Replaces node value with v, conditional upon match of cv if
* non-null. If resulting value is null, delete.
*/
final V replaceNode(Object key, V value, Object cv) {
int hash = spread(key.hashCode());
for (Node<K,V>[] tab = table;;) {
Node<K,V> f; int n, i, fh;
if (tab == null || (n = tab.length) == 0 ||
(f = tabAt(tab, i = (n - 1) & hash)) == null)
break;
else if ((fh = f.hash) == MOVED)
tab = helpTransfer(tab, f);
else {
V oldVal = null;
boolean validated = false;
synchronized (f) {
if (tabAt(tab, i) == f) {
if (fh >= 0) {
validated = true;
for (Node<K,V> e = f, pred = null;;) {
K ek;
if (e.hash == hash &&
((ek = e.key) == key ||
(ek != null && key.equals(ek)))) {
V ev = e.val;
if (cv == null || cv == ev ||
(ev != null && cv.equals(ev))) {
oldVal = ev;
if (value != null)
e.val = value;
else if (pred != null)
pred.next = e.next;
else
setTabAt(tab, i, e.next);
}
break;
}
pred = e;
if ((e = e.next) == null)
break;
}
}
else if (f instanceof TreeBin) {
validated = true;
TreeBin<K,V> t = (TreeBin<K,V>)f;
TreeNode<K,V> r, p;
if ((r = t.root) != null &&
(p = r.findTreeNode(hash, key, null)) != null) {
V pv = p.val;
if (cv == null || cv == pv ||
(pv != null && cv.equals(pv))) {
oldVal = pv;
if (value != null)
p.val = value;
else if (t.removeTreeNode(p))
setTabAt(tab, i, untreeify(t.first));
}
}
}
else if (f instanceof ReservationNode)
throw new IllegalStateException("Recursive update");
}
}
if (validated) {
if (oldVal != null) {
if (value == null)
addCount(-1L, -1);
return oldVal;
}
break;
}
}
}
return null;
}
/**
* Removes all of the mappings from this map.
*/
public void clear() {
long delta = 0L; // negative number of deletions
int i = 0;
Node<K,V>[] tab = table;
while (tab != null && i < tab.length) {
int fh;
Node<K,V> f = tabAt(tab, i);
if (f == null)
++i;
else if ((fh = f.hash) == MOVED) {
tab = helpTransfer(tab, f);
i = 0; // restart
}
else {
synchronized (f) {
if (tabAt(tab, i) == f) {
Node<K,V> p = (fh >= 0 ? f :
(f instanceof TreeBin) ?
((TreeBin<K,V>)f).first : null);
while (p != null) {
--delta;
p = p.next;
}
setTabAt(tab, i++, null);
}
}
}
}
if (delta != 0L)
addCount(delta, -1);
}
/**
* Returns a {@link Set} view of the keys contained in this map.
* The set is backed by the map, so changes to the map are
* reflected in the set, and vice-versa. The set supports element
* removal, which removes the corresponding mapping from this map,
* via the {@code Iterator.remove}, {@code Set.remove},
* {@code removeAll}, {@code retainAll}, and {@code clear}
* operations. It does not support the {@code add} or
* {@code addAll} operations.
*
* <p>The view's iterators and spliterators are
* <a href="package-summary.html#Weakly"><i>weakly consistent</i></a>.
*
* <p>The view's {@code spliterator} reports {@link Spliterator#CONCURRENT},
* {@link Spliterator#DISTINCT}, and {@link Spliterator#NONNULL}.
*
* @return the set view
*/
public KeySetView<K,V> keySet() {
KeySetView<K,V> ks;
if ((ks = keySet) != null) return ks;
return keySet = new KeySetView<K,V>(this, null);
}
/**
* Returns a {@link Collection} view of the values contained in this map.
* The collection is backed by the map, so changes to the map are
* reflected in the collection, and vice-versa. The collection
* supports element removal, which removes the corresponding
* mapping from this map, via the {@code Iterator.remove},
* {@code Collection.remove}, {@code removeAll},
* {@code retainAll}, and {@code clear} operations. It does not
* support the {@code add} or {@code addAll} operations.
*
* <p>The view's iterators and spliterators are
* <a href="package-summary.html#Weakly"><i>weakly consistent</i></a>.
*
* <p>The view's {@code spliterator} reports {@link Spliterator#CONCURRENT}
* and {@link Spliterator#NONNULL}.
*
* @return the collection view
*/
public Collection<V> values() {
ValuesView<K,V> vs;
if ((vs = values) != null) return vs;
return values = new ValuesView<K,V>(this);
}
/**
* Returns a {@link Set} view of the mappings contained in this map.
* The set is backed by the map, so changes to the map are
* reflected in the set, and vice-versa. The set supports element
* removal, which removes the corresponding mapping from the map,
* via the {@code Iterator.remove}, {@code Set.remove},
* {@code removeAll}, {@code retainAll}, and {@code clear}
* operations.
*
* <p>The view's iterators and spliterators are
* <a href="package-summary.html#Weakly"><i>weakly consistent</i></a>.
*
* <p>The view's {@code spliterator} reports {@link Spliterator#CONCURRENT},
* {@link Spliterator#DISTINCT}, and {@link Spliterator#NONNULL}.
*
* @return the set view
*/
public Set<Map.Entry<K,V>> entrySet() {
EntrySetView<K,V> es;
if ((es = entrySet) != null) return es;
return entrySet = new EntrySetView<K,V>(this);
}
/**
* Returns the hash code value for this {@link Map}, i.e.,
* the sum of, for each key-value pair in the map,
* {@code key.hashCode() ^ value.hashCode()}.
*
* @return the hash code value for this map
*/
public int hashCode() {
int h = 0;
Node<K,V>[] t;
if ((t = table) != null) {
Traverser<K,V> it = new Traverser<K,V>(t, t.length, 0, t.length);
for (Node<K,V> p; (p = it.advance()) != null; )
h += p.key.hashCode() ^ p.val.hashCode();
}
return h;
}
/**
* Returns a string representation of this map. The string
* representation consists of a list of key-value mappings (in no
* particular order) enclosed in braces ("{@code {}}"). Adjacent
* mappings are separated by the characters {@code ", "} (comma
* and space). Each key-value mapping is rendered as the key
* followed by an equals sign ("{@code =}") followed by the
* associated value.
*
* @return a string representation of this map
*/
public String toString() {
Node<K,V>[] t;
int f = (t = table) == null ? 0 : t.length;
Traverser<K,V> it = new Traverser<K,V>(t, f, 0, f);
StringBuilder sb = new StringBuilder();
sb.append('{');
Node<K,V> p;
if ((p = it.advance()) != null) {
for (;;) {
K k = p.key;
V v = p.val;
sb.append(k == this ? "(this Map)" : k);
sb.append('=');
sb.append(v == this ? "(this Map)" : v);
if ((p = it.advance()) == null)
break;
sb.append(',').append(' ');
}
}
return sb.append('}').toString();
}
/**
* Compares the specified object with this map for equality.
* Returns {@code true} if the given object is a map with the same
* mappings as this map. This operation may return misleading
* results if either map is concurrently modified during execution
* of this method.
*
* @param o object to be compared for equality with this map
* @return {@code true} if the specified object is equal to this map
*/
public boolean equals(Object o) {
if (o != this) {
if (!(o instanceof Map))
return false;
Map<?,?> m = (Map<?,?>) o;
Node<K,V>[] t;
int f = (t = table) == null ? 0 : t.length;
Traverser<K,V> it = new Traverser<K,V>(t, f, 0, f);
for (Node<K,V> p; (p = it.advance()) != null; ) {
V val = p.val;
Object v = m.get(p.key);
if (v == null || (v != val && !v.equals(val)))
return false;
}
for (Map.Entry<?,?> e : m.entrySet()) {
Object mk, mv, v;
if ((mk = e.getKey()) == null ||
(mv = e.getValue()) == null ||
(v = get(mk)) == null ||
(mv != v && !mv.equals(v)))
return false;
}
}
return true;
}
/**
* Stripped-down version of helper class used in previous version,
* declared for the sake of serialization compatibility.
*/
static class Segment<K,V> extends ReentrantLock implements Serializable {
private static final long serialVersionUID = 2249069246763182397L;
final float loadFactor;
Segment(float lf) { this.loadFactor = lf; }
}
/**
* Saves this map to a stream (that is, serializes it).
*
* @param s the stream
* @throws java.io.IOException if an I/O error occurs
* @serialData
* the serialized fields, followed by the key (Object) and value
* (Object) for each key-value mapping, followed by a null pair.
* The key-value mappings are emitted in no particular order.
*/
private void writeObject(java.io.ObjectOutputStream s)
throws java.io.IOException {
// For serialization compatibility
// Emulate segment calculation from previous version of this class
int sshift = 0;
int ssize = 1;
while (ssize < DEFAULT_CONCURRENCY_LEVEL) {
++sshift;
ssize <<= 1;
}
int segmentShift = 32 - sshift;
int segmentMask = ssize - 1;
@SuppressWarnings("unchecked")
Segment<K,V>[] segments = (Segment<K,V>[])
new Segment<?,?>[DEFAULT_CONCURRENCY_LEVEL];
for (int i = 0; i < segments.length; ++i)
segments[i] = new Segment<K,V>(LOAD_FACTOR);
java.io.ObjectOutputStream.PutField streamFields = s.putFields();
streamFields.put("segments", segments);
streamFields.put("segmentShift", segmentShift);
streamFields.put("segmentMask", segmentMask);
s.writeFields();
Node<K,V>[] t;
if ((t = table) != null) {
Traverser<K,V> it = new Traverser<K,V>(t, t.length, 0, t.length);
for (Node<K,V> p; (p = it.advance()) != null; ) {
s.writeObject(p.key);
s.writeObject(p.val);
}
}
s.writeObject(null);
s.writeObject(null);
}
/**
* Reconstitutes this map from a stream (that is, deserializes it).
* @param s the stream
* @throws ClassNotFoundException if the class of a serialized object
* could not be found
* @throws java.io.IOException if an I/O error occurs
*/
private void readObject(java.io.ObjectInputStream s)
throws java.io.IOException, ClassNotFoundException {
/*
* To improve performance in typical cases, we create nodes
* while reading, then place in table once size is known.
* However, we must also validate uniqueness and deal with
* overpopulated bins while doing so, which requires
* specialized versions of putVal mechanics.
*/
sizeCtl = -1; // force exclusion for table construction
s.defaultReadObject();
long size = 0L;
Node<K,V> p = null;
for (;;) {
@SuppressWarnings("unchecked")
K k = (K) s.readObject();
@SuppressWarnings("unchecked")
V v = (V) s.readObject();
if (k != null && v != null) {
p = new Node<K,V>(spread(k.hashCode()), k, v, p);
++size;
}
else
break;
}
if (size == 0L)
sizeCtl = 0;
else {
long ts = (long)(1.0 + size / LOAD_FACTOR);
int n = (ts >= (long)MAXIMUM_CAPACITY) ?
MAXIMUM_CAPACITY : tableSizeFor((int)ts);
@SuppressWarnings("unchecked")
Node<K,V>[] tab = (Node<K,V>[])new Node<?,?>[n];
int mask = n - 1;
long added = 0L;
while (p != null) {
boolean insertAtFront;
Node<K,V> next = p.next, first;
int h = p.hash, j = h & mask;
if ((first = tabAt(tab, j)) == null)
insertAtFront = true;
else {
K k = p.key;
if (first.hash < 0) {
TreeBin<K,V> t = (TreeBin<K,V>)first;
if (t.putTreeVal(h, k, p.val) == null)
++added;
insertAtFront = false;
}
else {
int binCount = 0;
insertAtFront = true;
Node<K,V> q; K qk;
for (q = first; q != null; q = q.next) {
if (q.hash == h &&
((qk = q.key) == k ||
(qk != null && k.equals(qk)))) {
insertAtFront = false;
break;
}
++binCount;
}
if (insertAtFront && binCount >= TREEIFY_THRESHOLD) {
insertAtFront = false;
++added;
p.next = first;
TreeNode<K,V> hd = null, tl = null;
for (q = p; q != null; q = q.next) {
TreeNode<K,V> t = new TreeNode<K,V>
(q.hash, q.key, q.val, null, null);
if ((t.prev = tl) == null)
hd = t;
else
tl.next = t;
tl = t;
}
setTabAt(tab, j, new TreeBin<K,V>(hd));
}
}
}
if (insertAtFront) {
++added;
p.next = first;
setTabAt(tab, j, p);
}
p = next;
}
table = tab;
sizeCtl = n - (n >>> 2);
baseCount = added;
}
}
// ConcurrentMap methods
/**
* {@inheritDoc}
*
* @return the previous value associated with the specified key,
* or {@code null} if there was no mapping for the key
* @throws NullPointerException if the specified key or value is null
*/
public V putIfAbsent(K key, V value) {
return putVal(key, value, true);
}
/**
* {@inheritDoc}
*
* @throws NullPointerException if the specified key is null
*/
public boolean remove(Object key, Object value) {
if (key == null)
throw new NullPointerException();
return value != null && replaceNode(key, null, value) != null;
}
/**
* {@inheritDoc}
*
* @throws NullPointerException if any of the arguments are null
*/
public boolean replace(K key, V oldValue, V newValue) {
if (key == null || oldValue == null || newValue == null)
throw new NullPointerException();
return replaceNode(key, newValue, oldValue) != null;
}
/**
* {@inheritDoc}
*
* @return the previous value associated with the specified key,
* or {@code null} if there was no mapping for the key
* @throws NullPointerException if the specified key or value is null
*/
public V replace(K key, V value) {
if (key == null || value == null)
throw new NullPointerException();
return replaceNode(key, value, null);
}
// Overrides of JDK8+ Map extension method defaults
/**
* Returns the value to which the specified key is mapped, or the
* given default value if this map contains no mapping for the
* key.
*
* @param key the key whose associated value is to be returned
* @param defaultValue the value to return if this map contains
* no mapping for the given key
* @return the mapping for the key, if present; else the default value
* @throws NullPointerException if the specified key is null
*/
public V getOrDefault(Object key, V defaultValue) {
V v;
return (v = get(key)) == null ? defaultValue : v;
}
public void forEach(BiConsumer<? super K, ? super V> action) {
if (action == null) throw new NullPointerException();
Node<K,V>[] t;
if ((t = table) != null) {
Traverser<K,V> it = new Traverser<K,V>(t, t.length, 0, t.length);
for (Node<K,V> p; (p = it.advance()) != null; ) {
action.accept(p.key, p.val);
}
}
}
public void replaceAll(BiFunction<? super K, ? super V, ? extends V> function) {
if (function == null) throw new NullPointerException();
Node<K,V>[] t;
if ((t = table) != null) {
Traverser<K,V> it = new Traverser<K,V>(t, t.length, 0, t.length);
for (Node<K,V> p; (p = it.advance()) != null; ) {
V oldValue = p.val;
for (K key = p.key;;) {
V newValue = function.apply(key, oldValue);
if (newValue == null)
throw new NullPointerException();
if (replaceNode(key, newValue, oldValue) != null ||
(oldValue = get(key)) == null)
break;
}
}
}
}
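A short usage sketch for `replaceAll` (demo class name is hypothetical). Internally, each entry goes through the `replaceNode` retry loop shown above, so a value that another thread changes mid-traversal is re-read and remapped again rather than overwritten blindly.

```java
import java.util.concurrent.ConcurrentHashMap;

public class ReplaceAllDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> prices = new ConcurrentHashMap<>();
        prices.put("a", 100);
        prices.put("b", 200);
        // Remap every current value; each replacement is applied atomically
        // per entry via the replaceNode(key, newValue, oldValue) CAS-style loop.
        prices.replaceAll((k, v) -> v + 10);
        System.out.println(prices.get("a") + " " + prices.get("b")); // 110 210
    }
}
```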
/**
* Helper method for EntrySetView.removeIf.
*/
boolean removeEntryIf(Predicate<? super Entry<K,V>> function) {
if (function == null) throw new NullPointerException();
Node<K,V>[] t;
boolean removed = false;
if ((t = table) != null) {
Traverser<K,V> it = new Traverser<K,V>(t, t.length, 0, t.length);
for (Node<K,V> p; (p = it.advance()) != null; ) {
K k = p.key;
V v = p.val;
Map.Entry<K,V> e = new AbstractMap.SimpleImmutableEntry<>(k, v);
if (function.test(e) && replaceNode(k, null, v) != null)
removed = true;
}
}
return removed;
}
/**
* Helper method for ValuesView.removeIf.
*/
boolean removeValueIf(Predicate<? super V> function) {
if (function == null) throw new NullPointerException();
Node<K,V>[] t;
boolean removed = false;
if ((t = table) != null) {
Traverser<K,V> it = new Traverser<K,V>(t, t.length, 0, t.length);
for (Node<K,V> p; (p = it.advance()) != null; ) {
K k = p.key;
V v = p.val;
if (function.test(v) && replaceNode(k, null, v) != null)
removed = true;
}
}
return removed;
}
/**
* If the specified key is not already associated with a value,
* attempts to compute its value using the given mapping function
* and enters it into this map unless {@code null}. The entire
* method invocation is performed atomically, so the function is
* applied at most once per key. Some attempted update operations
* on this map by other threads may be blocked while computation
* is in progress, so the computation should be short and simple,
* and must not attempt to update any other mappings of this map.
*
* @param key key with which the specified value is to be associated
* @param mappingFunction the function to compute a value
* @return the current (existing or computed) value associated with
* the specified key, or null if the computed value is null
* @throws NullPointerException if the specified key or mappingFunction
* is null
* @throws IllegalStateException if the computation detectably
* attempts a recursive update to this map that would
* otherwise never complete
* @throws RuntimeException or Error if the mappingFunction does so,
* in which case the mapping is left unestablished
*/
public V computeIfAbsent(K key, Function<? super K, ? extends V> mappingFunction) {
if (key == null || mappingFunction == null)
throw new NullPointerException();
int h = spread(key.hashCode());
V val = null;
int binCount = 0;
for (Node<K,V>[] tab = table;;) {
Node<K,V> f; int n, i, fh; K fk; V fv;
if (tab == null || (n = tab.length) == 0)
tab = initTable();
else if ((f = tabAt(tab, i = (n - 1) & h)) == null) {
Node<K,V> r = new ReservationNode<K,V>();
synchronized (r) {
if (casTabAt(tab, i, null, r)) {
binCount = 1;
Node<K,V> node = null;
try {
if ((val = mappingFunction.apply(key)) != null)
node = new Node<K,V>(h, key, val);
} finally {
setTabAt(tab, i, node);
}
}
}
if (binCount != 0)
break;
}
else if ((fh = f.hash) == MOVED)
tab = helpTransfer(tab, f);
else if (fh == h // check first node without acquiring lock
&& ((fk = f.key) == key || (fk != null && key.equals(fk)))
&& (fv = f.val) != null)
return fv;
else {
boolean added = false;
synchronized (f) {
if (tabAt(tab, i) == f) {
if (fh >= 0) {
binCount = 1;
for (Node<K,V> e = f;; ++binCount) {
K ek;
if (e.hash == h &&
((ek = e.key) == key ||
(ek != null && key.equals(ek)))) {
val = e.val;
break;
}
Node<K,V> pred = e;
if ((e = e.next) == null) {
if ((val = mappingFunction.apply(key)) != null) {
if (pred.next != null)
throw new IllegalStateException("Recursive update");
added = true;
pred.next = new Node<K,V>(h, key, val);
}
break;
}
}
}
else if (f instanceof TreeBin) {
binCount = 2;
TreeBin<K,V> t = (TreeBin<K,V>)f;
TreeNode<K,V> r, p;
if ((r = t.root) != null &&
(p = r.findTreeNode(h, key, null)) != null)
val = p.val;
else if ((val = mappingFunction.apply(key)) != null) {
added = true;
t.putTreeVal(h, key, val);
}
}
else if (f instanceof ReservationNode)
throw new IllegalStateException("Recursive update");
}
}
if (binCount != 0) {
if (binCount >= TREEIFY_THRESHOLD)
treeifyBin(tab, i);
if (!added)
return val;
break;
}
}
}
if (val != null)
addCount(1L, binCount);
return val;
}
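The class Javadoc itself recommends `computeIfAbsent` plus `LongAdder` as a scalable frequency map; a minimal sketch of that idiom (demo class name is mine):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class FreqDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, LongAdder> freqs = new ConcurrentHashMap<>();
        // computeIfAbsent installs the LongAdder at most once per key
        // (the ReservationNode path above guards the first insertion),
        // so concurrent callers never lose an increment.
        for (String w : new String[]{"a", "b", "a", "a"})
            freqs.computeIfAbsent(w, k -> new LongAdder()).increment();
        System.out.println(freqs.get("a").sum()); // 3
    }
}
```

Note the "check first node without acquiring lock" fast path in the source: once a key is present, repeated `computeIfAbsent` calls for it usually return without locking the bin at all.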
/**
* If the value for the specified key is present, attempts to
* compute a new mapping given the key and its current mapped
* value. The entire method invocation is performed atomically.
* Some attempted update operations on this map by other threads
* may be blocked while computation is in progress, so the
* computation should be short and simple, and must not attempt to
* update any other mappings of this map.
*
* @param key key with which a value may be associated
* @param remappingFunction the function to compute a value
* @return the new value associated with the specified key, or null if none
* @throws NullPointerException if the specified key or remappingFunction
* is null
* @throws IllegalStateException if the computation detectably
* attempts a recursive update to this map that would
* otherwise never complete
* @throws RuntimeException or Error if the remappingFunction does so,
* in which case the mapping is unchanged
*/
public V computeIfPresent(K key, BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
if (key == null || remappingFunction == null)
throw new NullPointerException();
int h = spread(key.hashCode());
V val = null;
int delta = 0;
int binCount = 0;
for (Node<K,V>[] tab = table;;) {
Node<K,V> f; int n, i, fh;
if (tab == null || (n = tab.length) == 0)
tab = initTable();
else if ((f = tabAt(tab, i = (n - 1) & h)) == null)
break;
else if ((fh = f.hash) == MOVED)
tab = helpTransfer(tab, f);
else {
synchronized (f) {
if (tabAt(tab, i) == f) {
if (fh >= 0) {
binCount = 1;
for (Node<K,V> e = f, pred = null;; ++binCount) {
K ek;
if (e.hash == h &&
((ek = e.key) == key ||
(ek != null && key.equals(ek)))) {
val = remappingFunction.apply(key, e.val);
if (val != null)
e.val = val;
else {
delta = -1;
Node<K,V> en = e.next;
if (pred != null)
pred.next = en;
else
setTabAt(tab, i, en);
}
break;
}
pred = e;
if ((e = e.next) == null)
break;
}
}
else if (f instanceof TreeBin) {
binCount = 2;
TreeBin<K,V> t = (TreeBin<K,V>)f;
TreeNode<K,V> r, p;
if ((r = t.root) != null &&
(p = r.findTreeNode(h, key, null)) != null) {
val = remappingFunction.apply(key, p.val);
if (val != null)
p.val = val;
else {
delta = -1;
if (t.removeTreeNode(p))
setTabAt(tab, i, untreeify(t.first));
}
}
}
else if (f instanceof ReservationNode)
throw new IllegalStateException("Recursive update");
}
}
if (binCount != 0)
break;
}
}
if (delta != 0)
addCount((long)delta, binCount);
return val;
}
/**
* Attempts to compute a mapping for the specified key and its
* current mapped value (or {@code null} if there is no current
* mapping). The entire method invocation is performed atomically.
* Some attempted update operations on this map by other threads
* may be blocked while computation is in progress, so the
* computation should be short and simple, and must not attempt to
* update any other mappings of this Map.
*
* @param key key with which the specified value is to be associated
* @param remappingFunction the function to compute a value
* @return the new value associated with the specified key, or null if none
* @throws NullPointerException if the specified key or remappingFunction
* is null
* @throws IllegalStateException if the computation detectably
* attempts a recursive update to this map that would
* otherwise never complete
* @throws RuntimeException or Error if the remappingFunction does so,
* in which case the mapping is unchanged
*/
public V compute(K key,
BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
if (key == null || remappingFunction == null)
throw new NullPointerException();
int h = spread(key.hashCode());
V val = null;
int delta = 0;
int binCount = 0;
for (Node<K,V>[] tab = table;;) {
Node<K,V> f; int n, i, fh;
if (tab == null || (n = tab.length) == 0)
tab = initTable();
else if ((f = tabAt(tab, i = (n - 1) & h)) == null) {
Node<K,V> r = new ReservationNode<K,V>();
synchronized (r) {
if (casTabAt(tab, i, null, r)) {
binCount = 1;
Node<K,V> node = null;
try {
if ((val = remappingFunction.apply(key, null)) != null) {
delta = 1;
node = new Node<K,V>(h, key, val);
}
} finally {
setTabAt(tab, i, node);
}
}
}
if (binCount != 0)
break;
}
else if ((fh = f.hash) == MOVED)
tab = helpTransfer(tab, f);
else {
synchronized (f) {
if (tabAt(tab, i) == f) {
if (fh >= 0) {
binCount = 1;
for (Node<K,V> e = f, pred = null;; ++binCount) {
K ek;
if (e.hash == h &&
((ek = e.key) == key ||
(ek != null && key.equals(ek)))) {
val = remappingFunction.apply(key, e.val);
if (val != null)
e.val = val;
else {
delta = -1;
Node<K,V> en = e.next;
if (pred != null)
pred.next = en;
else
setTabAt(tab, i, en);
}
break;
}
pred = e;
if ((e = e.next) == null) {
val = remappingFunction.apply(key, null);
if (val != null) {
if (pred.next != null)
throw new IllegalStateException("Recursive update");
delta = 1;
pred.next = new Node<K,V>(h, key, val);
}
break;
}
}
}
else if (f instanceof TreeBin) {
binCount = 1;
TreeBin<K,V> t = (TreeBin<K,V>)f;
TreeNode<K,V> r, p;
if ((r = t.root) != null)
p = r.findTreeNode(h, key, null);
else
p = null;
V pv = (p == null) ? null : p.val;
val = remappingFunction.apply(key, pv);
if (val != null) {
if (p != null)
p.val = val;
else {
delta = 1;
t.putTreeVal(h, key, val);
}
}
else if (p != null) {
delta = -1;
if (t.removeTreeNode(p))
setTabAt(tab, i, untreeify(t.first));
}
}
else if (f instanceof ReservationNode)
throw new IllegalStateException("Recursive update");
}
}
if (binCount != 0) {
if (binCount >= TREEIFY_THRESHOLD)
treeifyBin(tab, i);
break;
}
}
}
if (delta != 0)
addCount((long)delta, binCount);
return val;
}
/**
* If the specified key is not already associated with a
* (non-null) value, associates it with the given value.
* Otherwise, replaces the value with the results of the given
* remapping function, or removes if {@code null}. The entire
* method invocation is performed atomically. Some attempted
* update operations on this map by other threads may be blocked
* while computation is in progress, so the computation should be
* short and simple, and must not attempt to update any other
* mappings of this Map.
*
* @param key key with which the specified value is to be associated
* @param value the value to use if absent
* @param remappingFunction the function to recompute a value if present
* @return the new value associated with the specified key, or null if none
* @throws NullPointerException if the specified key or the
* remappingFunction is null
* @throws RuntimeException or Error if the remappingFunction does so,
* in which case the mapping is unchanged
*/
public V merge(K key, V value, BiFunction<? super V, ? super V, ? extends V> remappingFunction) {
if (key == null || value == null || remappingFunction == null)
throw new NullPointerException();
int h = spread(key.hashCode());
V val = null;
int delta = 0;
int binCount = 0;
for (Node<K,V>[] tab = table;;) {
Node<K,V> f; int n, i, fh;
if (tab == null || (n = tab.length) == 0)
tab = initTable();
else if ((f = tabAt(tab, i = (n - 1) & h)) == null) {
if (casTabAt(tab, i, null, new Node<K,V>(h, key, value))) {
delta = 1;
val = value;
break;
}
}
else if ((fh = f.hash) == MOVED)
tab = helpTransfer(tab, f);
else {
synchronized (f) {
if (tabAt(tab, i) == f) {
if (fh >= 0) {
binCount = 1;
for (Node<K,V> e = f, pred = null;; ++binCount) {
K ek;
if (e.hash == h &&
((ek = e.key) == key ||
(ek != null && key.equals(ek)))) {
val = remappingFunction.apply(e.val, value);
if (val != null)
e.val = val;
else {
delta = -1;
Node<K,V> en = e.next;
if (pred != null)
pred.next = en;
else
setTabAt(tab, i, en);
}
break;
}
pred = e;
if ((e = e.next) == null) {
delta = 1;
val = value;
pred.next = new Node<K,V>(h, key, val);
break;
}
}
}
else if (f instanceof TreeBin) {
binCount = 2;
TreeBin<K,V> t = (TreeBin<K,V>)f;
TreeNode<K,V> r = t.root;
TreeNode<K,V> p = (r == null) ? null :
r.findTreeNode(h, key, null);
val = (p == null) ? value :
remappingFunction.apply(p.val, value);
if (val != null) {
if (p != null)
p.val = val;
else {
delta = 1;
t.putTreeVal(h, key, val);
}
}
else if (p != null) {
delta = -1;
if (t.removeTreeNode(p))
setTabAt(tab, i, untreeify(t.first));
}
}
else if (f instanceof ReservationNode)
throw new IllegalStateException("Recursive update");
}
}
if (binCount != 0) {
if (binCount >= TREEIFY_THRESHOLD)
treeifyBin(tab, i);
break;
}
}
}
if (delta != 0)
addCount((long)delta, binCount);
return val;
}
// Hashtable legacy methods
/**
* Tests if some key maps into the specified value in this table.
*
* <p>Note that this method is identical in functionality to
* {@link #containsValue(Object)}, and exists solely to ensure
* full compatibility with class {@link java.util.Hashtable},
* which supported this method prior to introduction of the
* Java Collections Framework.
*
* @param value a value to search for
* @return {@code true} if and only if some key maps to the
* {@code value} argument in this table as
* determined by the {@code equals} method;
* {@code false} otherwise
* @throws NullPointerException if the specified value is null
*/
public boolean contains(Object value) {
return containsValue(value);
}
/**
* Returns an enumeration of the keys in this table.
*
* @return an enumeration of the keys in this table
* @see #keySet()
*/
public Enumeration<K> keys() {
Node<K,V>[] t;
int f = (t = table) == null ? 0 : t.length;
return new KeyIterator<K,V>(t, f, 0, f, this);
}
/**
* Returns an enumeration of the values in this table.
*
* @return an enumeration of the values in this table
* @see #values()
*/
public Enumeration<V> elements() {
Node<K,V>[] t;
int f = (t = table) == null ? 0 : t.length;
return new ValueIterator<K,V>(t, f, 0, f, this);
}
// ConcurrentHashMap-only methods
/**
* Returns the number of mappings. This method should be used
* instead of {@link #size} because a ConcurrentHashMap may
* contain more mappings than can be represented as an int. The
* value returned is an estimate; the actual count may differ if
* there are concurrent insertions or removals.
*
* @return the number of mappings
* @since 1.8
*/
public long mappingCount() {
long n = sumCount();
return (n < 0L) ? 0L : n; // ignore transient negative values
}
/**
* Creates a new {@link Set} backed by a ConcurrentHashMap
* from the given type to {@code Boolean.TRUE}.
*
* @param <K> the element type of the returned set
* @return the new set
* @since 1.8
*/
public static <K> KeySetView<K,Boolean> newKeySet() {
return new KeySetView<K,Boolean>
(new ConcurrentHashMap<K,Boolean>(), Boolean.TRUE);
}
/**
* Creates a new {@link Set} backed by a ConcurrentHashMap
* from the given type to {@code Boolean.TRUE}.
*
* @param initialCapacity The implementation performs internal
* sizing to accommodate this many elements.
* @param <K> the element type of the returned set
* @return the new set
* @throws IllegalArgumentException if the initial capacity of
* elements is negative
* @since 1.8
*/
public static <K> KeySetView<K,Boolean> newKeySet(int initialCapacity) {
return new KeySetView<K,Boolean>
(new ConcurrentHashMap<K,Boolean>(initialCapacity), Boolean.TRUE);
}
/**
* Returns a {@link Set} view of the keys in this map, using the
* given common mapped value for any additions (i.e., {@link
* Collection#add} and {@link Collection#addAll(Collection)}).
* This is of course only appropriate if it is acceptable to use
* the same value for all additions from this view.
*
* @param mappedValue the mapped value to use for any additions
* @return the set view
* @throws NullPointerException if the mappedValue is null
*/
public KeySetView<K,V> keySet(V mappedValue) {
if (mappedValue == null)
throw new NullPointerException();
return new KeySetView<K,V>(this, mappedValue);
}
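A sketch tying the two Set projections together (demo class name is mine): `newKeySet()` gives a standalone concurrent `Set` backed by a map to `Boolean.TRUE`, while `keySet(mappedValue)` is a live view of an existing map whose `add` inserts the fixed value supplied here.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentHashMap.KeySetView;

public class KeySetDemo {
    public static void main(String[] args) {
        // Standalone concurrent set: duplicates are ignored.
        Set<String> seen = ConcurrentHashMap.newKeySet();
        seen.add("a");
        seen.add("a");
        System.out.println(seen.size()); // 1

        // View over an existing map: add("b") maps "b" to the fixed value 0.
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        KeySetView<String, Integer> keys = map.keySet(0);
        keys.add("b");
        System.out.println(map.get("b")); // 0
    }
}
```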