Changes in ConcurrentHashMap from 1.7 to 1.8

ConcurrentHashMap changed substantially between Java 1.7 and Java 1.8.

Structural changes in ConcurrentHashMap

The 1.7 structure

A ConcurrentHashMap contains a Segment<K,V>[] segments array.
Each Segment object contains a HashEntry<K,V>[] table array.
Each HashEntry object holds a hash value, a key, a value, and a reference to the next HashEntry.
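
Put together, the 1.7 layout can be sketched like this (a simplified illustration, not the actual JDK source; only the fields mentioned above are shown):

import java.util.concurrent.locks.ReentrantLock;

// Simplified sketch of the 1.7 layout: a lock-striped Segment array,
// each Segment owning its own small hash table of HashEntry chains.
class ConcurrentHashMap17Sketch<K, V> {
  Segment<K, V>[] segments;            // top level: one segment per lock stripe

  static class Segment<K, V> extends ReentrantLock {
    HashEntry<K, V>[] table;           // per-segment bucket array
  }

  static class HashEntry<K, V> {
    final int hash;
    final K key;
    volatile V value;
    volatile HashEntry<K, V> next;     // singly linked collision chain

    HashEntry(int hash, K key, V value, HashEntry<K, V> next) {
      this.hash = hash;
      this.key = key;
      this.value = value;
      this.next = next;
    }
  }
}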

Segment extends ReentrantLock (Segment<K,V> extends ReentrantLock), so every Segment object is itself a reentrant lock.

With ConcurrentHashMap's default constructor, the initial capacity is 16, the load factor is 0.75, and the concurrencyLevel is 16.

public ConcurrentHashMap(int initialCapacity,
                         float loadFactor, int concurrencyLevel) {
  if (!(loadFactor > 0) || initialCapacity < 0 || concurrencyLevel <= 0)
    throw new IllegalArgumentException();
  if (concurrencyLevel > MAX_SEGMENTS)// MAX_SEGMENTS = 1 << 16 = 65536
    concurrencyLevel = MAX_SEGMENTS;
  // Find power-of-two sizes best matching arguments
  int sshift = 0;
  int ssize = 1;
  while (ssize < concurrencyLevel) {
    ++sshift; // count how many times we shift
    ssize <<= 1;// start at 1 and double each pass, up to MAX_SEGMENTS
  }
  this.segmentShift = 32 - sshift;
  this.segmentMask = ssize - 1;
  if (initialCapacity > MAXIMUM_CAPACITY)
    initialCapacity = MAXIMUM_CAPACITY;//MAXIMUM_CAPACITY = 1 << 30
  int c = initialCapacity / ssize;
  if (c * ssize < initialCapacity)
    ++c;
  int cap = MIN_SEGMENT_TABLE_CAPACITY;
  while (cap < c)
    cap <<= 1;
  // create segments and segments[0]
  Segment<K,V> s0 =
    new Segment<K,V>(loadFactor, (int)(cap * loadFactor),
                     (HashEntry<K,V>[])new HashEntry[cap]);
  Segment<K,V>[] ss = (Segment<K,V>[])new Segment[ssize];// ssize is the smallest power of two >= concurrencyLevel, so the default concurrencyLevel of 16 gives 16 segments
  UNSAFE.putOrderedObject(ss, SBASE, s0); // ordered write of segments[0]
  this.segments = ss;
}
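
In the JDK 1.7 source, put() then uses these two fields to pick a segment via (hash >>> segmentShift) & segmentMask. A small stand-alone illustration with the default values (SegmentIndexDemo and the sample key are made up for this sketch; the real hash() mixes the bits further):

// Illustration of how segmentShift/segmentMask select a segment in 1.7.
public class SegmentIndexDemo {
  public static void main(String[] args) {
    int concurrencyLevel = 16;          // default
    int sshift = 0, ssize = 1;
    while (ssize < concurrencyLevel) {  // same loop as the constructor above
      ++sshift;
      ssize <<= 1;
    }
    int segmentShift = 32 - sshift;     // 28 with the defaults
    int segmentMask = ssize - 1;        // 0xF with the defaults

    int hash = "someKey".hashCode();
    int segmentIndex = (hash >>> segmentShift) & segmentMask; // top 4 bits pick the segment
    System.out.println("segmentShift=" + segmentShift
        + ", segmentMask=" + segmentMask
        + ", segmentIndex=" + segmentIndex);
  }
}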

The 1.8 structure

A ConcurrentHashMap contains a Node<K,V>[] table array.
Each Node<K,V> object holds a hash value, a key, a value, and a reference to the next Node<K,V>.
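
A matching sketch of the 1.8 layout (again simplified, not the actual JDK source):

// Simplified sketch of the 1.8 layout: the Segment layer is gone,
// leaving a single lazily initialized Node table.
class ConcurrentHashMap18Sketch<K, V> {
  volatile Node<K, V>[] table;     // bucket array, created on first put

  static class Node<K, V> {
    final int hash;
    final K key;
    volatile V val;
    volatile Node<K, V> next;      // collision chain (a long chain becomes a TreeBin)

    Node(int hash, K key, V val, Node<K, V> next) {
      this.hash = hash;
      this.key = key;
      this.val = val;
      this.next = next;
    }
  }
}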

In 1.8 the Segment structure is gone: the default ConcurrentHashMap constructor is empty, and the real initialization happens on the first put.

public V put(K key, V value) {
  return putVal(key, value, false);
}

/** Implementation for put and putIfAbsent */
final V putVal(K key, V value, boolean onlyIfAbsent) {
  if (key == null || value == null) throw new NullPointerException();
  int hash = spread(key.hashCode());// re-mix the hash bits for a more uniform distribution
  int binCount = 0;
  for (Node<K,V>[] tab = table;;) {
    Node<K,V> f; int n, i, fh;
    if (tab == null || (n = tab.length) == 0)
      tab = initTable();// table not created yet: initialize it lazily
    else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {// bin is empty: CAS in a new head node
      if (casTabAt(tab, i, null,
                   new Node<K,V>(hash, key, value, null)))
        break;                   // no lock when adding to empty bin
    }
    else if ((fh = f.hash) == MOVED)
      tab = helpTransfer(tab, f);
    else {
      V oldVal = null;
      synchronized (f) {// lock only the head node of this bin
        if (tabAt(tab, i) == f) {
          if (fh >= 0) {
            binCount = 1;
            for (Node<K,V> e = f;; ++binCount) {
              K ek;
              if (e.hash == hash &&
                  ((ek = e.key) == key ||
                   (ek != null && key.equals(ek)))) {
                oldVal = e.val;
                if (!onlyIfAbsent)
                  e.val = value;
                break;
              }
              Node<K,V> pred = e;
              if ((e = e.next) == null) {
                pred.next = new Node<K,V>(hash, key,
                                          value, null);
                break;
              }
            }
          }
          else if (f instanceof TreeBin) {// bin is already a red-black tree: insert into it
            Node<K,V> p;
            binCount = 2;
            if ((p = ((TreeBin<K,V>)f).putTreeVal(hash, key,
                                                  value)) != null) {
              oldVal = p.val;
              if (!onlyIfAbsent)
                p.val = value;
            }
          }
        }
      }
      if (binCount != 0) {
        if (binCount >= TREEIFY_THRESHOLD)
          treeifyBin(tab, i);
        if (oldVal != null)
          return oldVal;
        break;
      }
    }
  }
  addCount(1L, binCount);
  return null;
}
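
For reference, the spread() call at the top of putVal mixes the high 16 bits of hashCode() into the low bits and clears the sign bit, since negative hashes are reserved for special nodes such as MOVED and TREEBIN. A small stand-alone reproduction (the demo class, key and table length are made up; the formula follows the JDK 8 source):

// Stand-alone reproduction of the hash mixing used by putVal above.
public class SpreadDemo {
  static final int HASH_BITS = 0x7fffffff; // usable bits of a normal node hash

  static int spread(int h) {
    return (h ^ (h >>> 16)) & HASH_BITS;   // fold high bits into low bits, keep result non-negative
  }

  public static void main(String[] args) {
    int h = "example".hashCode();
    int n = 16;                            // table length (always a power of two)
    int index = (n - 1) & spread(h);       // bin index, as in putVal above
    System.out.println("hash=" + h + ", spread=" + spread(h) + ", index=" + index);
  }
}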

private final Node<K,V>[] initTable() {
  Node<K,V>[] tab; int sc;
  while ((tab = table) == null || tab.length == 0) {
    if ((sc = sizeCtl) < 0)
      Thread.yield(); // lost initialization race; just spin
    else if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
      try {
        if ((tab = table) == null || tab.length == 0) {
          int n = (sc > 0) ? sc : DEFAULT_CAPACITY;// default capacity is 16
          @SuppressWarnings("unchecked")
          Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
          table = tab = nt;
          sc = n - (n >>> 2);// 0.75 * n, the next resize threshold
        }
      } finally {
        sizeCtl = sc;
      }
      break;
    }
  }
  return tab;
}
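
The assignment sc = n - (n >>> 2) stores the next resize threshold in sizeCtl: 0.75 * n, the hard-coded equivalent of a 0.75 load factor. A tiny illustration (SizeCtlDemo is made up for this sketch):

// Illustration of initTable's threshold arithmetic: n - (n >>> 2) == 0.75 * n.
public class SizeCtlDemo {
  public static void main(String[] args) {
    for (int n = 16; n <= 256; n <<= 1) {
      int sc = n - (n >>> 2);   // next resize threshold for a table of length n
      System.out.println("capacity " + n + " -> resize threshold " + sc);
    }
  }
}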

Summary of the structural changes

Structurally, the map went from "Segment[] + linked lists" to "Node[] + linked lists / red-black trees".

In 1.7 the Segment[] array has 16 segments by default, one lock per segment, so by default at most 16 threads can write concurrently. In 1.8 the lock granularity shrinks to a single bin of the Node[] array, so the number of writes that can proceed in parallel is far greater than 16.

1.8 uses synchronized, taking the monitor of the head Node in the affected bin of the Node[] array. synchronized can also be optimized well by the JVM.

1.8 also converts a bin's linked list into a red-black tree once it grows past 8 nodes (provided the table already has at least 64 bins; below that, treeifyBin simply resizes the table), which greatly reduces lookup time when many keys hash to the same bin.

Differences in the size method

When computing size, 1.7 first counts without locking, comparing the sum of the segments' modCounts between consecutive passes; if the sums still differ after a few retries, it acquires every Segment's lock and counts again.

public int size() {
  // Try a few times to get accurate count. On failure due to
  // continuous async changes in table, resort to locking.
  final Segment<K,V>[] segments = this.segments;
  int size;
  boolean overflow; // true if size overflows 32 bits
  long sum;         // sum of modCounts
  long last = 0L;   // previous sum
  int retries = -1; // first iteration isn't retry
  try {
    for (;;) {
      if (retries++ == RETRIES_BEFORE_LOCK) { //RETRIES_BEFORE_LOCK = 2
        for (int j = 0; j < segments.length; ++j)
          ensureSegment(j).lock(); // force creation
      }
      sum = 0L;
      size = 0;
      overflow = false;
      for (int j = 0; j < segments.length; ++j) {
        Segment<K,V> seg = segmentAt(segments, j);
        if (seg != null) {
          sum += seg.modCount;
          int c = seg.count;
          if (c < 0 || (size += c) < 0)
            overflow = true;
        }
      }
      if (sum == last) // two consecutive passes saw the same modCount sum: the result is stable
        break;
      last = sum;
    }
  } finally {
    if (retries > RETRIES_BEFORE_LOCK) {
      for (int j = 0; j < segments.length; ++j)
        segmentAt(segments, j).unlock();
    }
  }
  return overflow ? Integer.MAX_VALUE : size;
}

1.8 treats size as a momentary snapshot. Every update maintains a striped counter held in fields of the ConcurrentHashMap itself (baseCount plus a CounterCell[] array, much like LongAdder), so size() just sums those cells, which is considerably simpler than the 1.7 approach.

public int size() {
  long n = sumCount();
  return ((n < 0L) ? 0 :
          (n > (long)Integer.MAX_VALUE) ? Integer.MAX_VALUE :
          (int)n);
}

final long sumCount() {
  CounterCell[] as = counterCells; CounterCell a;
  long sum = baseCount;
  if (as != null) {
    for (int i = 0; i < as.length; ++i) {
      if ((a = as[i]) != null)
        sum += a.value;
    }
  }
  return sum;
}

private transient volatile CounterCell[] counterCells;
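
Because the count lives in baseCount plus the striped CounterCell[] and is summed without any locking, the value returned while other threads are writing is only a moment-in-time estimate; Java 8 also added mappingCount(), which returns the same estimate as a long. A small usage sketch (thread count and key range are arbitrary):

import java.util.concurrent.ConcurrentHashMap;

// Reading size() / mappingCount() while writers are still updating the map.
public class SizeDemo {
  public static void main(String[] args) throws InterruptedException {
    ConcurrentHashMap<Integer, Integer> map = new ConcurrentHashMap<>();

    Runnable writer = () -> {
      for (int i = 0; i < 100_000; i++) {
        map.put((int) (Math.random() * 1_000_000), i);
      }
    };
    Thread t1 = new Thread(writer);
    Thread t2 = new Thread(writer);
    t1.start();
    t2.start();

    // Snapshot values while writers are running: estimates, not exact counts.
    System.out.println("size while writing: " + map.size());
    System.out.println("mappingCount while writing: " + map.mappingCount());

    t1.join();
    t2.join();
    System.out.println("final size: " + map.size());
  }
}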