Changes in ConcurrentHashMap from Java 1.7 to 1.8

ConcurrentHashMap changed significantly from Java 1.7 to 1.8.

Structural changes in ConcurrentHashMap

The 1.7 structure

A ConcurrentHashMap contains a Segment<K,V>[] segments array.
Each Segment contains a HashEntry<K,V>[] table array.
Each HashEntry holds a hash, a key, a value, and a reference to the next HashEntry.
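For reference, the node type in the JDK 1.7 source looks roughly like this (abridged): value and next are volatile, which is what lets readers traverse a bin without taking the segment lock.

static final class HashEntry<K,V> {
  final int hash;
  final K key;
  volatile V value;
  volatile HashEntry<K,V> next;
}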

Segment extends ReentrantLock (Segment<K,V> extends ReentrantLock), so every Segment object is itself a reentrant lock.
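An abridged sketch of Segment from the JDK 1.7 source: each segment is a small hash table guarded by its own lock, with its own count and modCount (which size() relies on later).

static final class Segment<K,V> extends ReentrantLock implements Serializable {
  transient volatile HashEntry<K,V>[] table; // this segment's bucket array
  transient int count;    // number of entries in this segment
  transient int modCount; // bumped on every structural change; read by size()
  transient int threshold;
  final float loadFactor;
}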

In ConcurrentHashMap's default constructor, the initial capacity is 16, the load factor is 0.75, and the concurrencyLevel is 16.

public ConcurrentHashMap(int initialCapacity,
                         float loadFactor, int concurrencyLevel) {
  if (!(loadFactor > 0) || initialCapacity < 0 || concurrencyLevel <= 0)
    throw new IllegalArgumentException();
  if (concurrencyLevel > MAX_SEGMENTS) // MAX_SEGMENTS = 1 << 16, i.e. 65536
    concurrencyLevel = MAX_SEGMENTS;
  // Find power-of-two sizes best matching arguments
  int sshift = 0;
  int ssize = 1;
  while (ssize < concurrencyLevel) {
    ++sshift; // count how many shifts it took
    ssize <<= 1; // start at 1 and double until ssize >= concurrencyLevel
  }
  this.segmentShift = 32 - sshift;
  this.segmentMask = ssize - 1;
  if (initialCapacity > MAXIMUM_CAPACITY)
    initialCapacity = MAXIMUM_CAPACITY;//MAXIMUM_CAPACITY = 1 << 30
  int c = initialCapacity / ssize;
  if (c * ssize < initialCapacity)
    ++c;
  int cap = MIN_SEGMENT_TABLE_CAPACITY;
  while (cap < c)
    cap <<= 1;
  // create segments and segments[0]
  Segment<K,V> s0 =
    new Segment<K,V>(loadFactor, (int)(cap * loadFactor),
                     (HashEntry<K,V>[])new HashEntry[cap]);
  Segment<K,V>[] ss = (Segment<K,V>[])new Segment[ssize]; // the segment array defaults to 16 slots and is capped at MAX_SEGMENTS (65536)
  UNSAFE.putOrderedObject(ss, SBASE, s0); // ordered write of segments[0]
  this.segments = ss;
}
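The segmentShift and segmentMask computed above are what route each key to its segment. Abridged from the JDK 1.7 put method: the high bits of the hash select the segment, and the per-segment put then locks only that segment.

public V put(K key, V value) {
  Segment<K,V> s;
  if (value == null)
    throw new NullPointerException();
  int hash = hash(key);
  int j = (hash >>> segmentShift) & segmentMask; // high bits pick the segment
  if ((s = (Segment<K,V>)UNSAFE.getObject(segments, (j << SSHIFT) + SBASE)) == null)
    s = ensureSegment(j); // lazily create the segment if needed
  return s.put(key, hash, value, false);
}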

The 1.8 structure

A ConcurrentHashMap contains a Node<K,V>[] table array.
Each Node<K,V> holds a hash, a key, a value, and a reference to the next Node<K,V>.
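Abridged from the JDK 1.8 source; as with 1.7's HashEntry, val and next are volatile, so reads never need a lock:

static class Node<K,V> implements Map.Entry<K,V> {
  final int hash;
  final K key;
  volatile V val;
  volatile Node<K,V> next;
}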

In 1.8 the Segment structure is gone. ConcurrentHashMap's default constructor does nothing; the real initialization happens on the first put.
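The no-arg constructor in the JDK 1.8 source really is empty:

public ConcurrentHashMap() {
}

The put path below shows where the table actually gets created: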

public V put(K key, V value) {
  return putVal(key, value, false);
}

/** Implementation for put and putIfAbsent */
final V putVal(K key, V value, boolean onlyIfAbsent) {
  if (key == null || value == null) throw new NullPointerException();
  int hash = spread(key.hashCode()); // re-mix the hash for a more uniform spread
  int binCount = 0;
  for (Node<K,V>[] tab = table;;) {
    Node<K,V> f; int n, i, fh;
    if (tab == null || (n = tab.length) == 0)
      tab = initTable(); // table not created yet: initialize it lazily
    else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) { // bin is empty: CAS in a new head node
      if (casTabAt(tab, i, null,
                   new Node<K,V>(hash, key, value, null)))
        break;                   // no lock when adding to empty bin
    }
    else if ((fh = f.hash) == MOVED) // a forwarding node: the table is resizing
      tab = helpTransfer(tab, f); // help move bins, then retry
    else {
      V oldVal = null;
      synchronized (f) { // lock only the head node of this bin
        if (tabAt(tab, i) == f) { // recheck that the head has not changed since we locked
          if (fh >= 0) { // fh >= 0 means an ordinary linked-list node
            binCount = 1;
            for (Node<K,V> e = f;; ++binCount) {
              K ek;
              if (e.hash == hash &&
                  ((ek = e.key) == key ||
                   (ek != null && key.equals(ek)))) {
                oldVal = e.val;
                if (!onlyIfAbsent)
                  e.val = value;
                break;
              }
              Node<K,V> pred = e;
              if ((e = e.next) == null) {
                pred.next = new Node<K,V>(hash, key,
                                          value, null);
                break;
              }
            }
          }
          else if (f instanceof TreeBin) { // the bin is already a red-black tree: insert into it
            Node<K,V> p;
            binCount = 2;
            if ((p = ((TreeBin<K,V>)f).putTreeVal(hash, key,
                                                  value)) != null) {
              oldVal = p.val;
              if (!onlyIfAbsent)
                p.val = value;
            }
          }
        }
      }
      if (binCount != 0) {
        if (binCount >= TREEIFY_THRESHOLD)
          treeifyBin(tab, i);
        if (oldVal != null)
          return oldVal;
        break;
      }
    }
  }
  addCount(1L, binCount);
  return null;
}
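The spread call at the top of putVal XORs the high 16 bits of the hash into the low 16, so keys whose hashes differ only in the high bits still land in different bins, and it clears the sign bit, since negative hashes are reserved for special nodes such as MOVED. From the JDK 1.8 source:

static final int HASH_BITS = 0x7fffffff; // usable bits of normal node hash

static final int spread(int h) {
  return (h ^ (h >>> 16)) & HASH_BITS;
}

initTable, shown next, creates the table lazily, using a CAS on sizeCtl so that exactly one thread does the work: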

private final Node<K,V>[] initTable() {
  Node<K,V>[] tab; int sc;
  while ((tab = table) == null || tab.length == 0) {
    if ((sc = sizeCtl) < 0)
      Thread.yield(); // lost initialization race; just spin
    else if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
      try {
        if ((tab = table) == null || tab.length == 0) {
          int n = (sc > 0) ? sc : DEFAULT_CAPACITY; // default capacity is 16
          @SuppressWarnings("unchecked")
          Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
          table = tab = nt;
          sc = n - (n >>> 2);
        }
      } finally {
        sizeCtl = sc;
      }
      break;
    }
  }
  return tab;
}
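A note on sizeCtl: -1 marks the table as being initialized (threads that lose the CAS race just yield and retry), and after initialization it holds the next resize threshold. n - (n >>> 2) is a shift-based way of writing 0.75 * n: for the default capacity of 16 that is 16 - 4 = 12, matching the default load factor of 0.75.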

Summary of the structural changes

Structurally, the map went from "Segment[] + linked lists" to "Node[] + linked lists / red-black trees".

In 1.7 the Segment[] defaults to 16 slots (capped at MAX_SEGMENTS, 65536), so by default at most 16 writes can proceed concurrently. In 1.8 the lock granularity moves down to individual Node[] bins, so the number of writes that can proceed in parallel is far larger than 16.

1.8 locks with synchronized on the head node of the bin in the Node[] array, and synchronized blocks can be optimized well by the JVM (for example through biased locking and lock coarsening).

1.8 also converts a bin's linked list into a red-black tree once it grows past 8 nodes, which cuts the lookup cost for keys with colliding hashes from O(n) to O(log n). The relevant constants are shown below.
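From the JDK 1.8 source; note that treeifyBin only builds a tree once the table itself has at least MIN_TREEIFY_CAPACITY bins, and resizes instead when it is smaller:

static final int TREEIFY_THRESHOLD = 8;     // list -> tree above this bin size
static final int UNTREEIFY_THRESHOLD = 6;   // tree -> list when a bin shrinks below this
static final int MIN_TREEIFY_CAPACITY = 64; // minimum table size before treeifying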

Differences in the size method

When computing size, 1.7 first counts without locking, recording the sum of every Segment's modCount on each pass; if two consecutive passes produce different sums, the map was modified mid-count, and after a few retries it grabs every Segment's lock and counts again.

public int size() {
  // Try a few times to get accurate count. On failure due to
  // continuous async changes in table, resort to locking.
  final Segment<K,V>[] segments = this.segments;
  int size;
  boolean overflow; // true if size overflows 32 bits
  long sum;         // sum of modCounts
  long last = 0L;   // previous sum
  int retries = -1; // first iteration isn't retry
  try {
    for (;;) {
      if (retries++ == RETRIES_BEFORE_LOCK) { // RETRIES_BEFORE_LOCK = 2: lock all segments once the unlocked passes keep disagreeing
        for (int j = 0; j < segments.length; ++j)
          ensureSegment(j).lock(); // force creation
      }
      sum = 0L;
      size = 0;
      overflow = false;
      for (int j = 0; j < segments.length; ++j) {
        Segment<K,V> seg = segmentAt(segments, j);
        if (seg != null) {
          sum += seg.modCount;
          int c = seg.count;
          if (c < 0 || (size += c) < 0)
            overflow = true;
        }
      }
      if (sum == last) // two consecutive passes saw the same modCount sum, so the count is stable
        break;
      last = sum;
    }
  } finally {
    if (retries > RETRIES_BEFORE_LOCK) {
      for (int j = 0; j < segments.length; ++j)
        segmentAt(segments, j).unlock();
    }
  }
  return overflow ? Integer.MAX_VALUE : size;
}

1.8 treats size as a momentary snapshot, so instead of counting on demand it maintains the count incrementally in fields of the ConcurrentHashMap itself: a baseCount plus a CounterCell[] array. Calling size() just sums these counters, which is much simpler than the 1.7 approach.

public int size() {
  long n = sumCount();
  return ((n < 0L) ? 0 :
          (n > (long)Integer.MAX_VALUE) ? Integer.MAX_VALUE :
          (int)n);
}

final long sumCount() {
  CounterCell[] as = counterCells; CounterCell a;
  long sum = baseCount;
  if (as != null) {
    for (int i = 0; i < as.length; ++i) {
      if ((a = as[i]) != null)
        sum += a.value;
    }
  }
  return sum;
}

private transient volatile CounterCell[] counterCells;
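The baseCount/CounterCell pair is the same striped-counter idea as java.util.concurrent.atomic.LongAdder: uncontended threads CAS baseCount directly, contended threads fall back to per-thread cells, and sumCount simply adds everything up as a snapshot. A minimal, self-contained demo of that idea using LongAdder itself (illustrative only, not JDK-internal code):

import java.util.concurrent.atomic.LongAdder;

public class StripedCountDemo {
  public static void main(String[] args) throws InterruptedException {
    LongAdder counter = new LongAdder();
    Runnable task = () -> {
      for (int i = 0; i < 100_000; i++)
        counter.increment(); // contended increments spread across internal cells
    };
    Thread t1 = new Thread(task), t2 = new Thread(task);
    t1.start(); t2.start();
    t1.join(); t2.join();
    // sum() adds the base plus every cell -- a snapshot, just like size() in 1.8
    System.out.println(counter.sum()); // prints 200000
  }
}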