Commonly used collections:
- List: ArrayList, LinkedList, Stack
- Map: HashMap, LinkedHashMap, ConcurrentHashMap
- Set: HashSet, LinkedHashSet
- Queue: ConcurrentLinkedQueue
1. What are the differences between ArrayList and LinkedList?
The usual answer covers these points:
1) ArrayList is backed by a dynamic array, while LinkedList is backed by a doubly linked list;
2) For random access (get and set), ArrayList beats LinkedList, because LinkedList has to walk the node chain;
3) For add and remove operations, LinkedList is often said to beat ArrayList, because ArrayList has to shift elements.
Let's put that to the test:
package list;

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class ArrayAndLink {
    public static void main(String[] args) {
        test();
    }

    public static void test() {
        List<Integer> alist = new ArrayList<>();
        List<Integer> llist = new LinkedList<>();

        System.out.println("add:");
        long aAddStart = System.nanoTime();
        for (int i = 0; i < 1000; i++) {
            alist.add(i);
        }
        System.out.println("ArrayList : " + (System.nanoTime() - aAddStart));

        long lAddStart = System.nanoTime();
        for (int i = 0; i < 1000; i++) {
            llist.add(i);
        }
        System.out.println("LinkedList : " + (System.nanoTime() - lAddStart));

        System.out.println("remove:");
        long aRemoveStart = System.nanoTime();
        for (int i = 0; i < alist.size(); i++) {
            if (i % 2 == 0) {
                alist.remove(i);
            }
        }
        System.out.println("ArrayList : " + (System.nanoTime() - aRemoveStart));

        long lRemoveStart = System.nanoTime();
        for (int i = 0; i < llist.size(); i++) {
            if (i % 2 == 0) {
                llist.remove(i);
            }
        }
        System.out.println("LinkedList : " + (System.nanoTime() - lRemoveStart));
    }
}
Sample output:
add:
ArrayList : 482694
LinkedList : 386396
remove:
ArrayList : 267158
LinkedList : 577784
The numbers differ from run to run, but in this benchmark add is usually slower on ArrayList, and remove is slower on LinkedList. These conclusions only hold under certain conditions; under other conditions the timings can be exactly reversed. For details, see "Differences between ArrayList and LinkedList in Java".
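One caveat worth knowing when benchmarking with a List&lt;Integer&gt; (my own aside; the class name below is made up for illustration): remove is overloaded, so remove(int) removes by index while remove(Object) removes by value, and an int argument picks the index overload without boxing.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RemoveOverloads {
    public static void main(String[] args) {
        List<Integer> byIndex = new ArrayList<>(Arrays.asList(1, 2, 3));
        byIndex.remove(1);                    // remove(int index): drops the element at index 1
        System.out.println(byIndex);          // [1, 3]

        List<Integer> byValue = new ArrayList<>(Arrays.asList(1, 2, 3));
        byValue.remove(Integer.valueOf(1));   // remove(Object): drops the value 1
        System.out.println(byValue);          // [2, 3]
    }
}
```

So in the benchmark above, llist.remove(i) removes by index on both lists, which is what was intended.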
Now let's look at the source (excerpts):
- ArrayList:
public class ArrayList<E> extends AbstractList<E>
        implements List<E>, RandomAccess, Cloneable, java.io.Serializable
{
    /**
     * Default initial capacity.
     */
    private static final int DEFAULT_CAPACITY = 10;

    /**
     * Shared empty array instance used for empty instances.
     */
    private static final Object[] EMPTY_ELEMENTDATA = {};

    /**
     * Shared empty array instance used for default sized empty instances. We
     * distinguish this from EMPTY_ELEMENTDATA to know how much to inflate when
     * first element is added.
     */
    private static final Object[] DEFAULTCAPACITY_EMPTY_ELEMENTDATA = {};

    /**
     * The array buffer into which the elements of the ArrayList are stored.
     * The capacity of the ArrayList is the length of this array buffer. Any
     * empty ArrayList with elementData == DEFAULTCAPACITY_EMPTY_ELEMENTDATA
     * will be expanded to DEFAULT_CAPACITY when the first element is added.
     */
    transient Object[] elementData; // non-private to simplify nested class access

    /**
     * The size of the ArrayList (the number of elements it contains).
     *
     * @serial
     */
    private int size;

    /**
     * Appends the specified element to the end of this list.
     *
     * @param e element to be appended to this list
     * @return <tt>true</tt> (as specified by {@link Collection#add})
     */
    public boolean add(E e) {
        ensureCapacityInternal(size + 1);  // Increments modCount!!
        elementData[size++] = e;
        return true;
    }

    private void ensureCapacityInternal(int minCapacity) {
        if (elementData == DEFAULTCAPACITY_EMPTY_ELEMENTDATA) {
            minCapacity = Math.max(DEFAULT_CAPACITY, minCapacity);
        }
        ensureExplicitCapacity(minCapacity);
    }

    private void ensureExplicitCapacity(int minCapacity) {
        modCount++;
        // overflow-conscious code
        if (minCapacity - elementData.length > 0)
            grow(minCapacity);
    }

    public boolean remove(Object o) {
        if (o == null) {
            for (int index = 0; index < size; index++)
                if (elementData[index] == null) {
                    fastRemove(index);
                    return true;
                }
        } else {
            for (int index = 0; index < size; index++)
                if (o.equals(elementData[index])) {
                    fastRemove(index);
                    return true;
                }
        }
        return false;
    }

    /*
     * Private remove method that skips bounds checking and does not
     * return the value removed.
     */
    private void fastRemove(int index) {
        modCount++;
        int numMoved = size - index - 1;
        if (numMoved > 0)
            System.arraycopy(elementData, index+1, elementData, index,
                             numMoved);
        elementData[--size] = null; // clear to let GC do its work
    }

    public E remove(int index) {
        rangeCheck(index);
        modCount++;
        E oldValue = elementData(index);
        int numMoved = size - index - 1;
        if (numMoved > 0)
            System.arraycopy(elementData, index+1, elementData, index,
                             numMoved);
        elementData[--size] = null; // clear to let GC do its work
        return oldValue;
    }

    /**
     * Removes all of the elements from this list.  The list will
     * be empty after this call returns.
     */
    public void clear() {
        modCount++;
        // clear to let GC do its work
        for (int i = 0; i < size; i++)
            elementData[i] = null;
        size = 0;
    }

    @SuppressWarnings("unchecked")
    E elementData(int index) {
        return (E) elementData[index];
    }

    /**
     * Returns the element at the specified position in this list.
     *
     * @param index index of the element to return
     * @return the element at the specified position in this list
     * @throws IndexOutOfBoundsException {@inheritDoc}
     */
    public E get(int index) {
        rangeCheck(index);
        return elementData(index);
    }
remove(int index): uses System.arraycopy() to shift the trailing elements down, so it is roughly O(n).
For how System.arraycopy() copies, see "System.arraycopy() explained".
add(E e): appends at the end of the array, so it is amortized O(1).
get(int index): direct array indexing, O(1).
On capacity growth:
/**
 * Increases the capacity to ensure that it can hold at least the
 * number of elements specified by the minimum capacity argument.
 *
 * @param minCapacity the desired minimum capacity
 */
private void grow(int minCapacity) {
    // overflow-conscious code
    int oldCapacity = elementData.length;
    int newCapacity = oldCapacity + (oldCapacity >> 1);
    if (newCapacity - minCapacity < 0)
        newCapacity = minCapacity;
    if (newCapacity - MAX_ARRAY_SIZE > 0)
        newCapacity = hugeCapacity(minCapacity);
    // minCapacity is usually close to size, so this is a win:
    elementData = Arrays.copyOf(elementData, newCapacity);
}
java.util.Arrays.copyOf(elementData, newCapacity) copies the given array, truncating or padding with nulls if necessary, so that the copy has the requested length. For every index valid in both the original and the copy, the two arrays hold the same value; indices valid only in the copy (which exist if and only if the requested length exceeds the original length) hold null.
Each expansion computes int newCapacity = oldCapacity + (oldCapacity >> 1), i.e. the old capacity plus half of it, a 1.5x growth factor.
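To make that growth rule concrete, here is a small sketch (my own illustration, not JDK code; the class and method names are hypothetical) that replays newCapacity = oldCapacity + (oldCapacity >> 1) starting from the default capacity of 10:

```java
public class GrowthDemo {
    // Replays ArrayList's growth rule: new capacity = old + old/2 (1.5x).
    static int grow(int oldCapacity) {
        return oldCapacity + (oldCapacity >> 1);
    }

    public static void main(String[] args) {
        int cap = 10; // DEFAULT_CAPACITY
        StringBuilder sb = new StringBuilder(String.valueOf(cap));
        for (int i = 0; i < 4; i++) {
            cap = grow(cap);
            sb.append(" -> ").append(cap);
        }
        System.out.println(sb); // 10 -> 15 -> 22 -> 33 -> 49
    }
}
```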
- LinkedList
public class LinkedList<E>
    extends AbstractSequentialList<E>
    implements List<E>, Deque<E>, Cloneable, java.io.Serializable
{
    transient int size = 0;

    /**
     * Pointer to first node.
     * Invariant: (first == null && last == null) ||
     *            (first.prev == null && first.item != null)
     */
    transient Node<E> first;

    /**
     * Pointer to last node.
     * Invariant: (first == null && last == null) ||
     *            (last.next == null && last.item != null)
     */
    transient Node<E> last;

    /**
     * Appends the specified element to the end of this list.
     *
     * <p>This method is equivalent to {@link #addLast}.
     *
     * @param e element to be appended to this list
     * @return {@code true} (as specified by {@link Collection#add})
     */
    public boolean add(E e) {
        linkLast(e);
        return true;
    }

    public void add(int index, E element) {
        checkPositionIndex(index);
        if (index == size)
            linkLast(element);
        else
            linkBefore(element, node(index));
    }

    /**
     * Inserts element e before non-null Node succ.
     */
    void linkBefore(E e, Node<E> succ) {
        // assert succ != null;
        final Node<E> pred = succ.prev;
        final Node<E> newNode = new Node<>(pred, e, succ);
        succ.prev = newNode;
        if (pred == null)
            first = newNode;
        else
            pred.next = newNode;
        size++;
        modCount++;
    }

    /**
     * Links e as first element.
     */
    private void linkFirst(E e) {
        final Node<E> f = first;
        final Node<E> newNode = new Node<>(null, e, f);
        first = newNode;
        if (f == null)
            last = newNode;
        else
            f.prev = newNode;
        size++;
        modCount++;
    }

    /**
     * Links e as last element.
     */
    void linkLast(E e) {
        final Node<E> l = last;
        final Node<E> newNode = new Node<>(l, e, null);
        last = newNode;
        if (l == null)
            first = newNode;
        else
            l.next = newNode;
        size++;
        modCount++;
    }

    public boolean remove(Object o) {
        if (o == null) {
            for (Node<E> x = first; x != null; x = x.next) {
                if (x.item == null) {
                    unlink(x);
                    return true;
                }
            }
        } else {
            for (Node<E> x = first; x != null; x = x.next) {
                if (o.equals(x.item)) {
                    unlink(x);
                    return true;
                }
            }
        }
        return false;
    }

    /**
     * Unlinks non-null node x.
     */
    E unlink(Node<E> x) {
        // assert x != null;
        final E element = x.item;
        final Node<E> next = x.next;
        final Node<E> prev = x.prev;

        if (prev == null) {
            first = next;
        } else {
            prev.next = next;
            x.prev = null;
        }

        if (next == null) {
            last = prev;
        } else {
            next.prev = prev;
            x.next = null;
        }

        x.item = null;
        size--;
        modCount++;
        return element;
    }

    /**
     * Returns the element at the specified position in this list.
     *
     * @param index index of the element to return
     * @return the element at the specified position in this list
     * @throws IndexOutOfBoundsException {@inheritDoc}
     */
    public E get(int index) {
        checkElementIndex(index);
        return node(index).item;
    }

    /**
     * Returns the (non-null) Node at the specified element index.
     */
    Node<E> node(int index) {
        // assert isElementIndex(index);
        if (index < (size >> 1)) {
            Node<E> x = first;
            for (int i = 0; i < index; i++)
                x = x.next;
            return x;
        } else {
            Node<E> x = last;
            for (int i = size - 1; i > index; i--)
                x = x.prev;
            return x;
        }
    }

    private static class Node<E> {
        E item;
        Node<E> next;
        Node<E> prev;

        Node(Node<E> prev, E element, Node<E> next) {
            this.item = element;
            this.next = next;
            this.prev = prev;
        }
    }
LinkedList is built on a doubly linked list, so there are Node objects: each node holds a reference to the previous node (prev), the next node (next), and the element itself (item).
add(E e): appends at the tail. It first checks whether the old tail is null; if so, the list was empty and first is pointed at the new node, otherwise the node is simply linked after the old tail. O(1).
add(int index, E element): mainly delegates to linkBefore(E e, Node<E> succ), which creates a new node and links it in before the node currently at index.
remove(int index): calls unlink(Node<E> x), which detaches the node and nulls out its references. node(int index) locates the node at the index: it checks which half of the list the index falls in and walks from the head or from the tail, whichever is closer. O(n).
get(int index): also uses node(int index), the same method remove relies on, to locate the node. O(n).
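A practical consequence of get(int index) being O(n): an index-based loop over a LinkedList repeats the walk for every element, O(n^2) overall, while an iterator just follows next pointers once. A minimal sketch (my own example; the class name is hypothetical):

```java
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;

public class LinkedListTraversal {
    public static void main(String[] args) {
        List<Integer> llist = new LinkedList<>();
        for (int i = 0; i < 5; i++) llist.add(i);

        // Indexed loop: each get(i) walks from an end of the list -> O(n^2) overall.
        int indexedSum = 0;
        for (int i = 0; i < llist.size(); i++) indexedSum += llist.get(i);

        // Iterator: follows next pointers -> O(n) overall.
        int iteratedSum = 0;
        for (Iterator<Integer> it = llist.iterator(); it.hasNext(); ) iteratedSum += it.next();

        System.out.println(indexedSum + " " + iteratedSum); // 10 10
    }
}
```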
2. Using Stack
A stack is last-in, first-out (LIFO). How is that implemented? Let's find out from the source.
public class Stack<E> extends Vector<E> {
    /**
     * Creates an empty Stack.
     */
    public Stack() {
    }

    public E push(E item) {
        addElement(item);
        return item;
    }

    public synchronized E pop() {
        E obj;
        int len = size();
        obj = peek();
        removeElementAt(len - 1);
        return obj;
    }

    public synchronized E peek() {
        int len = size();
        if (len == 0)
            throw new EmptyStackException();
        return elementAt(len - 1);
    }

    public synchronized int search(Object o) {
        int i = lastIndexOf(o);
        if (i >= 0) {
            return size() - i;
        }
        return -1;
    }
push(E item): pushes onto the stack by calling the parent class's addElement(E obj).
Pushed elements are likewise recorded in a backing array, which grows in much the same spirit as ArrayList's (Vector doubles its capacity by default, or grows by capacityIncrement if one was given).
public synchronized void addElement(E obj) {
    modCount++;
    ensureCapacityHelper(elementCount + 1);
    elementData[elementCount++] = obj;
}
pop(): pops the stack; it calls peek() and then removeElementAt(len - 1). The parent's elementAt(int index) simply returns the element at the given index, here the last element of the array. O(1).
public synchronized E elementAt(int index) {
    if (index >= elementCount) {
        throw new ArrayIndexOutOfBoundsException(index + " >= " + elementCount);
    }
    return elementData(index);
}

@SuppressWarnings("unchecked")
E elementData(int index) {
    return (E) elementData[index];
}
search(Object o): finds the 1-based position of an element measured from the top of the stack, returning -1 if it is not found. It calls the parent's lastIndexOf(Object o), which is a linear for-loop scan, so O(n).
public synchronized int lastIndexOf(Object o) {
    return lastIndexOf(o, elementCount - 1);
}

public synchronized int lastIndexOf(Object o, int index) {
    if (index >= elementCount)
        throw new IndexOutOfBoundsException(index + " >= " + elementCount);

    if (o == null) {
        for (int i = index; i >= 0; i--)
            if (elementData[i] == null)
                return i;
    } else {
        for (int i = index; i >= 0; i--)
            if (o.equals(elementData[i]))
                return i;
    }
    return -1;
}
Stack's methods are all marked synchronized, so it is thread-safe and can be used in multithreaded environments.
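A quick usage sketch of the methods above (my own example; the class name is hypothetical):

```java
import java.util.Stack;

public class StackDemo {
    public static void main(String[] args) {
        Stack<String> stack = new Stack<>();
        stack.push("a");
        stack.push("b");
        stack.push("c");

        System.out.println(stack.peek());      // c  (top element, not removed)
        System.out.println(stack.pop());       // c  (top element, removed)
        System.out.println(stack.search("a")); // 2  (1-based distance from the top)
        System.out.println(stack.search("x")); // -1 (not found)
    }
}
```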
3. HashMap
HashMap is the most commonly used map. It extends the AbstractMap<K,V> abstract class and implements the Map<K,V>, Cloneable, and Serializable interfaces. HashMap stores data as key-value pairs and allows both null values and a null key. In interviews, the usual questions are:
1) How does the hash method work?
2) How does HashMap resolve collisions?
3) How are equals() and hashCode() used, and why do they matter in HashMap?
4) How does HashMap resize (what happens when the size exceeds the capacity defined by the load factor)?
5) How does Java 8 fix HashMap's multithreaded infinite-loop problem?
Before answering these, a few important HashMap fields:
/**
 * The default initial capacity (must be a power of two).
 */
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16

/**
 * The maximum capacity, used if a higher value is implicitly
 * specified by either of the constructors with arguments.
 */
static final int MAXIMUM_CAPACITY = 1 << 30;

/**
 * The load factor used when none is specified in a constructor.
 */
static final float DEFAULT_LOAD_FACTOR = 0.75f;

/**
 * The next size value at which to resize (capacity * load factor).
 */
int threshold;
Three more fields relate to red-black trees:
// Treeify threshold for a single bucket.
// When a bucket holds more than this many nodes, its linked list is
// replaced with red-black tree nodes.
// The value is 8; anything much lower would cause frequent, inefficient conversions.
static final int TREEIFY_THRESHOLD = 8;

// Untreeify threshold for a treeified bucket.
// During a resize, a treeified bucket whose element count falls below this
// value is converted (split) back into a linked list.
// It should be smaller than TREEIFY_THRESHOLD, at most 6, to avoid
// flip-flopping between the two forms.
static final int UNTREEIFY_THRESHOLD = 6;

// The smallest table capacity for which buckets may be treeified.
// Below this capacity, an overfull bucket triggers a resize instead of treeification.
// To avoid conflicts between resizing and treeification, this value must be
// at least 4 * TREEIFY_THRESHOLD.
static final int MIN_TREEIFY_CAPACITY = 64;
1) How the hash method works
static final int hash(Object key) {
    int h;
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}
This code is often called the "perturbation function". key.hashCode() invokes the hash function of the key's own type and returns an int. In theory, if we could index HashMap's main array directly with that value, a 32-bit signed int ranges from -2147483648 to 2147483647, roughly 4 billion possible slots; as long as the hash function spreads values reasonably uniformly, collisions would be very rare.
But a 4-billion-slot array does not fit in memory, and HashMap's table starts at only 16 slots, so the raw hash cannot be used directly. It must first be reduced modulo the table length, and the remainder is used as the array index.
public V put(K key, V value) {
    return putVal(hash(key), key, value, false, true);
}

final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
               boolean evict) {
    Node<K,V>[] tab; Node<K,V> p; int n, i;
    if ((tab = table) == null || (n = tab.length) == 0)
        n = (tab = resize()).length;
    if ((p = tab[i = (n - 1) & hash]) == null)
        tab[i] = newNode(hash, key, value, null);
    else {
        Node<K,V> e; K k;
        if (p.hash == hash &&
            ((k = p.key) == key || (key != null && key.equals(k))))
            e = p;
        else if (p instanceof TreeNode)
            e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
        else {
            for (int binCount = 0; ; ++binCount) {
                if ((e = p.next) == null) {
                    p.next = newNode(hash, key, value, null);
                    if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                        treeifyBin(tab, hash);
                    break;
                }
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    break;
                p = e;
            }
        }
        if (e != null) { // existing mapping for key
            V oldValue = e.value;
            if (!onlyIfAbsent || oldValue == null)
                e.value = value;
            afterNodeAccess(e);
            return oldValue;
        }
    }
    ++modCount;
    if (++size > threshold)
        resize();
    afterNodeInsertion(evict);
    return null;
}
The line if ((p = tab[i = (n - 1) & hash]) == null) tab[i] = newNode(hash, key, value, null) is where that modulo reduction happens: (n - 1) & hash is equivalent to hash % n when n is a power of two. Now look at get(Object key):
public V get(Object key) {
    Node<K,V> e;
    return (e = getNode(hash(key), key)) == null ? null : e.value;
}

final Node<K,V> getNode(int hash, Object key) {
    Node<K,V>[] tab; Node<K,V> first, e; int n; K k;
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (first = tab[(n - 1) & hash]) != null) {
        if (first.hash == hash && // always check first node
            ((k = first.key) == key || (key != null && key.equals(k))))
            return first;
        if ((e = first.next) != null) {
            if (first instanceof TreeNode)
                return ((TreeNode<K,V>)first).getTreeNode(hash, key);
            do {
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    return e;
            } while ((e = e.next) != null);
        }
    }
    return null;
}
The same index computation is used to locate the bucket.
This also explains why the capacity must be a power of two. The index is effectively hash % length, implemented as (length - 1) & hash. For length = 16 (binary 10000), length - 1 is 01111. Suppose hash = 17 (binary 10001). Then (length - 1) & hash is 01111 & 10001 = 1. Because the low bits of 01111 are all ones, the AND keeps exactly the low bits of the hash and discards the high bits, so the result always falls in 0-15, which is precisely the index range of a table of length 16.
    00000000 00000000 00010001   hash (17)
  & 00000000 00000000 00001111   16 - 1
  ------------------------------
    00000000 00000000 00000001   1
No matter what the high bits of the hash are, only the low bits count, and the largest possible index is 15. By the same logic, if the length grows to 32, hash & (32 - 1) always falls in 0-31.
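The masking trick can be checked directly. The snippet below (my own sketch; the class name is hypothetical) confirms that (n - 1) & hash equals hash % n for power-of-two n and non-negative hashes:

```java
public class MaskVsMod {
    public static void main(String[] args) {
        int[] lengths = {16, 32, 64};          // power-of-two table lengths
        int[] hashes  = {17, 12345, 0x7FFFABCD}; // arbitrary non-negative hashes
        for (int n : lengths) {
            for (int h : hashes) {
                // For power-of-two n and non-negative h, (n - 1) & h == h % n.
                System.out.println(n + ", " + h + " -> "
                        + ((n - 1) & h) + " == " + (h % n));
            }
        }
    }
}
```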
The point of the perturbation function is to reduce the collision rate, but collisions can never be avoided entirely.
2) How HashMap resolves collisions
HashMap resolves collisions with separate chaining: its internal data structure is an array of singly linked lists. The lists are made of nodes stored in Node<K,V>[] tab, and Node has four fields.
static class Node<K,V> implements Map.Entry<K,V> {
    final int hash;
    final K key;
    V value;
    Node<K,V> next;

    Node(int hash, K key, V value, Node<K,V> next) {
        this.hash = hash;
        this.key = key;
        this.value = value;
        this.next = next;
    }
I used to be confused about how the right value can be retrieved after a hash collision. Keys are unique; the hash only locates the array slot, and the actual values live in the chain, so lookup still goes through the key. Look at get(): when if ((e = first.next) != null), it loops with while ((e = e.next) != null), and a node counts as a hit only when ((k = e.key) == key || (key != null && key.equals(k))). Before the loop there is one more check, if (first instanceof TreeNode) return ((TreeNode<K,V>)first).getTreeNode(hash, key); for treeified buckets; see "An analysis of TreeNode.putTreeVal in the Java 8 HashMap".
Can two entries ever share a key? Definitely not. In the putVal(int hash, K key, V value, boolean onlyIfAbsent, boolean evict) source, if the computed hash matches and the key is equal, if (p.hash == hash && ((k = p.key) == key || (key != null && key.equals(k)))) e = p; and then comes this check:
if (e != null) { // existing mapping for key
    V oldValue = e.value;
    if (!onlyIfAbsent || oldValue == null)
        e.value = value;
    afterNodeAccess(e);
    return oldValue;
}
e.value = value overwrites the previous value. As for afterNodeAccess(e), it is a hook implemented by the subclass LinkedHashMap.
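That overwrite behavior is easy to observe from the public API (my own sketch; the class name is hypothetical): put returns the old value being replaced, or null if the key was absent.

```java
import java.util.HashMap;
import java.util.Map;

public class PutOverwrite {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        System.out.println(map.put("k", 1)); // null (no previous mapping)
        System.out.println(map.put("k", 2)); // 1    (old value returned, then replaced)
        System.out.println(map.get("k"));    // 2
        System.out.println(map.size());      // 1    (same key, so still one entry)
    }
}
```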
3) equals() and hashCode(), and why they matter in HashMap
As we all know, equals() and hashCode() are used to decide whether two objects are the same element, which raises the distinction between value equality and reference (memory address) equality. In HashMap they are mainly used to compare keys, because keys are required to be unique.
See also: "Java interview questions with answers (basics)" and "How HashMap works: hashCode vs equals".
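As a minimal illustration (my own sketch; the Point key class is hypothetical), a key type that overrides both equals() and hashCode() consistently lets a distinct-but-equal instance find the same entry; override only one of them and lookups can silently miss.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class KeyContract {
    // A hypothetical key type: equals() and hashCode() must agree,
    // otherwise two "equal" keys may land in different buckets.
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
        @Override public boolean equals(Object o) {
            return o instanceof Point && ((Point) o).x == x && ((Point) o).y == y;
        }
        @Override public int hashCode() { return Objects.hash(x, y); }
    }

    public static void main(String[] args) {
        Map<Point, String> map = new HashMap<>();
        map.put(new Point(1, 2), "first");
        // A distinct but equal instance hashes to the same bucket
        // and compares equal, so it finds the same entry:
        System.out.println(map.get(new Point(1, 2))); // first
    }
}
```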
4) Resizing HashMap (what happens when the size exceeds the capacity defined by the load factor?)
This question is about resizing. The putVal(int hash, K key, V value, boolean onlyIfAbsent, boolean evict) method calls resize(); here is the implementation:
final Node<K,V>[] resize() {
    Node<K,V>[] oldTab = table;
    int oldCap = (oldTab == null) ? 0 : oldTab.length;
    int oldThr = threshold;
    int newCap, newThr = 0;
    if (oldCap > 0) {
        if (oldCap >= MAXIMUM_CAPACITY) {
            threshold = Integer.MAX_VALUE;
            return oldTab;
        }
        else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                 oldCap >= DEFAULT_INITIAL_CAPACITY)
            newThr = oldThr << 1; // double threshold
    }
    else if (oldThr > 0) // initial capacity was placed in threshold
        newCap = oldThr;
    else {               // zero initial threshold signifies using defaults
        newCap = DEFAULT_INITIAL_CAPACITY;
        newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
    }
    if (newThr == 0) {
        float ft = (float)newCap * loadFactor;
        newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                  (int)ft : Integer.MAX_VALUE);
    }
    threshold = newThr;
    @SuppressWarnings({"rawtypes","unchecked"})
    Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
    table = newTab;
    if (oldTab != null) {
        for (int j = 0; j < oldCap; ++j) {
            Node<K,V> e;
            if ((e = oldTab[j]) != null) {
                oldTab[j] = null;
                if (e.next == null)
                    newTab[e.hash & (newCap - 1)] = e;
                else if (e instanceof TreeNode)
                    ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                else { // preserve order
                    Node<K,V> loHead = null, loTail = null;
                    Node<K,V> hiHead = null, hiTail = null;
                    Node<K,V> next;
                    do {
                        next = e.next;
                        if ((e.hash & oldCap) == 0) {
                            if (loTail == null)
                                loHead = e;
                            else
                                loTail.next = e;
                            loTail = e;
                        }
                        else {
                            if (hiTail == null)
                                hiHead = e;
                            else
                                hiTail.next = e;
                            hiTail = e;
                        }
                    } while ((e = next) != null);
                    if (loTail != null) {
                        loTail.next = null;
                        newTab[j] = loHead;
                    }
                    if (hiTail != null) {
                        hiTail.next = null;
                        newTab[j + oldCap] = hiHead;
                    }
                }
            }
        }
    }
    return newTab;
}
Resizing a HashMap is expensive (it requires allocating a new array, rehashing, and redistributing every entry), so the two factors tied to resizing, the capacity (the array length) and the load factor (what fraction of the capacity may be filled before a resize), are among the most important parts of HashMap: together they decide when it resizes. The default load factor is 0.75, a balance between time and space. If the factor is too large, collisions become more likely and lookups slow down; too small, and frequent resizes degrade performance. Why 0.75 rather than 0.6 or 0.8? See "Why is HashMap's loadFactor 0.75?".
Of course, the best-known Java 8 addition to HashMap is the red-black tree, which greatly improves worst-case performance.
For a more detailed introduction to HashMap, see "Understanding Java collections in depth (16): HashMap key features and method source walkthrough".
5) How Java 8 fixes HashMap's multithreaded infinite loop
For the Java 7 infinite loop, see "The old story: HashMap's infinite loop".
So what did Java 8 change? During a resize, it allocates a new array and recomputes each entry's index as newTab[e.hash & (newCap - 1)] = e. Each bucket's chain is split into a loHead (low head) list and a hiHead (high head) list: since the capacity doubles (newCap = oldCap << 1, and the threshold doubles with it via newThr = oldThr << 1), an entry either stays in the low index range 0 to N-1 or moves to the high range N to 2N-1. The Java 7 infinite loop arose because each new chain was built in the reverse of the old order; by preserving the original order of each chain, Java 8 avoids creating a cycle. (Note that this only fixes the infinite loop; HashMap is still not thread-safe under concurrent writes.)
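The lo/hi split can be verified with plain bit arithmetic (my own sketch; the class name is hypothetical): whether (hash & oldCap) is zero decides if an entry stays at index j or moves to j + oldCap, and the result matches recomputing hash & (newCap - 1) from scratch.

```java
public class ResizeSplit {
    public static void main(String[] args) {
        int oldCap = 16, newCap = 32;
        int[] hashes = {5, 21, 37, 53}; // all land in bucket 5 when the length is 16
        for (int h : hashes) {
            int oldIndex = h & (oldCap - 1);
            // One extra bit decides the new bucket: stay (lo) or move by oldCap (hi).
            int newIndex = ((h & oldCap) == 0) ? oldIndex : oldIndex + oldCap;
            System.out.println(h + ": " + oldIndex + " -> " + newIndex
                    + " (same as " + (h & (newCap - 1)) + ")");
        }
    }
}
```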
4. LinkedHashMap
LinkedHashMap is a subclass of HashMap; its distinguishing feature is that it stores entries in a predictable (insertion) order. How is that order maintained?
On put, LinkedHashMap overrides its parent's newNode(int hash, K key, V value, Node<K,V> e) method. Whenever a new key is added, HashMap does this:
Node<K,V> newNode(int hash, K key, V value, Node<K,V> next) {
    return new Node<>(hash, key, value, next);
}
while LinkedHashMap does this:
Node<K,V> newNode(int hash, K key, V value, Node<K,V> e) {
    LinkedHashMap.Entry<K,V> p =
        new LinkedHashMap.Entry<K,V>(hash, key, value, e);
    linkNodeLast(p);
    return p;
}

/**
 * HashMap.Node subclass for normal LinkedHashMap entries.
 */
static class Entry<K,V> extends HashMap.Node<K,V> {
    Entry<K,V> before, after;
    Entry(int hash, K key, V value, Node<K,V> next) {
        super(hash, key, value, next);
    }
}

// link at the end of list
private void linkNodeLast(LinkedHashMap.Entry<K,V> p) {
    LinkedHashMap.Entry<K,V> last = tail;
    tail = p;
    if (last == null)
        head = p;
    else {
        p.before = last;
        last.after = p;
    }
}
So when a new node is created, HashMap's table holds a Node<K,V>, while LinkedHashMap's holds an Entry<K,V>, which extends HashMap.Node<K,V> and adds two fields, Entry<K,V> before, after. In linkNodeLast(LinkedHashMap.Entry<K,V> p), the previous tail is stored in the new node's before and the new node in the old tail's after, completing the link.
Given the order, how is iteration done? Again by working with the before/after fields. Only keySet() is shown here; values() works on the same principle.
public Set<K> keySet() {
    Set<K> ks = keySet;
    if (ks == null) {
        ks = new LinkedKeySet();
        keySet = ks;
    }
    return ks;
}

final class LinkedKeySet extends AbstractSet<K> {
    public final int size()                 { return size; }
    public final void clear()               { LinkedHashMap.this.clear(); }
    public final Iterator<K> iterator() {
        return new LinkedKeyIterator();
    }
    public final boolean contains(Object o) { return containsKey(o); }
    public final boolean remove(Object key) {
        return removeNode(hash(key), key, null, false, true) != null;
    }
    public final Spliterator<K> spliterator()  {
        return Spliterators.spliterator(this, Spliterator.SIZED |
                                        Spliterator.ORDERED |
                                        Spliterator.DISTINCT);
    }
    public final void forEach(Consumer<? super K> action) {
        if (action == null)
            throw new NullPointerException();
        int mc = modCount;
        for (LinkedHashMap.Entry<K,V> e = head; e != null; e = e.after)
            action.accept(e.key);
        if (modCount != mc)
            throw new ConcurrentModificationException();
    }
}
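The effect of the before/after chain is easy to see from the outside (my own sketch; the class name is hypothetical): iteration order matches insertion order, unlike plain HashMap, which makes no ordering guarantee.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class InsertionOrder {
    public static void main(String[] args) {
        Map<String, Integer> map = new LinkedHashMap<>();
        map.put("one", 1);
        map.put("two", 2);
        map.put("three", 3);
        // Iteration follows the before/after chain, i.e. insertion order.
        System.out.println(map.keySet()); // [one, two, three]
    }
}
```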
5. ConcurrentHashMap
As everyone knows, ConcurrentHashMap is a thread-safe map, commonly used in multithreaded environments. Its internals are fairly involved; for a thorough walkthrough see "A detailed ConcurrentHashMap source analysis (JDK 8)".
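As a small taste before diving into that source (my own sketch; the class name is hypothetical), ConcurrentHashMap's per-key atomic operations such as merge make concurrent counting safe without external locking:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ConcurrentCount {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentMap<String, Integer> counts = new ConcurrentHashMap<>();
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                counts.merge("hits", 1, Integer::sum); // atomic per key
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counts.get("hits")); // 2000, no updates lost
    }
}
```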
6. HashSet and LinkedHashSet
HashSet is commonly used for de-duplication. To my surprise, it turns out that HashSet simply relies on the uniqueness of HashMap keys. Straight to the source:
public class HashSet<E>
    extends AbstractSet<E>
    implements Set<E>, Cloneable, java.io.Serializable
{
    static final long serialVersionUID = -5024744406713321676L;

    private transient HashMap<E,Object> map;

    // Dummy value to associate with an Object in the backing Map
    private static final Object PRESENT = new Object();

    /**
     * Constructs a new, empty set; the backing <tt>HashMap</tt> instance has
     * default initial capacity (16) and load factor (0.75).
     */
    public HashSet() {
        map = new HashMap<>();
    }

    /**
     * Constructs a new set containing the elements in the specified
     * collection.  The <tt>HashMap</tt> is created with default load factor
     * (0.75) and an initial capacity sufficient to contain the elements in
     * the specified collection.
     *
     * @param c the collection whose elements are to be placed into this set
     * @throws NullPointerException if the specified collection is null
     */
    public HashSet(Collection<? extends E> c) {
        map = new HashMap<>(Math.max((int) (c.size()/.75f) + 1, 16));
        addAll(c);
    }

    public Iterator<E> iterator() {
        return map.keySet().iterator();
    }

    public boolean add(E e) {
        return map.put(e, PRESENT)==null;
    }
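So de-duplication with HashSet is just HashMap.put returning null or not (my own sketch; the class name is hypothetical): add reports whether the element was actually new.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class DedupDemo {
    public static void main(String[] args) {
        Set<String> seen = new HashSet<>();
        System.out.println(seen.add("a")); // true  (map.put returned null)
        System.out.println(seen.add("a")); // false (key already present)

        // De-duplicating a collection in one step:
        Set<String> unique = new HashSet<>(Arrays.asList("x", "y", "x", "y"));
        System.out.println(unique.size()); // 2
    }
}
```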
LinkedHashSet extends HashSet but additionally keeps insertion order. HashSet provides this package-private constructor for it:
HashSet(int initialCapacity, float loadFactor, boolean dummy) {
    map = new LinkedHashMap<>(initialCapacity, loadFactor);
}
LinkedHashSet declares only five methods in total, four of which are overloaded constructors:
public class LinkedHashSet<E>
    extends HashSet<E>
    implements Set<E>, Cloneable, java.io.Serializable {

    private static final long serialVersionUID = -2851667679971038690L;

    /**
     * Constructs a new, empty linked hash set with the specified initial
     * capacity and load factor.
     *
     * @param      initialCapacity the initial capacity of the linked hash set
     * @param      loadFactor      the load factor of the linked hash set
     * @throws     IllegalArgumentException  if the initial capacity is less
     *             than zero, or if the load factor is nonpositive
     */
    public LinkedHashSet(int initialCapacity, float loadFactor) {
        super(initialCapacity, loadFactor, true);
    }

    /**
     * Constructs a new, empty linked hash set with the specified initial
     * capacity and the default load factor (0.75).
     *
     * @param   initialCapacity   the initial capacity of the LinkedHashSet
     * @throws  IllegalArgumentException if the initial capacity is less
     *          than zero
     */
    public LinkedHashSet(int initialCapacity) {
        super(initialCapacity, .75f, true);
    }

    /**
     * Constructs a new, empty linked hash set with the default initial
     * capacity (16) and load factor (0.75).
     */
    public LinkedHashSet() {
        super(16, .75f, true);
    }

    /**
     * Constructs a new linked hash set with the same elements as the
     * specified collection.  The linked hash set is created with an initial
     * capacity sufficient to hold the elements in the specified collection
     * and the default load factor (0.75).
     *
     * @param c  the collection whose elements are to be placed into
     *           this set
     * @throws NullPointerException if the specified collection is null
     */
    public LinkedHashSet(Collection<? extends E> c) {
        super(Math.max(2*c.size(), 11), .75f, true);
        addAll(c);
    }

    /**
     * Creates a <em><a href="Spliterator.html#binding">late-binding</a></em>
     * and <em>fail-fast</em> {@code Spliterator} over the elements in this set.
     *
     * <p>The {@code Spliterator} reports {@link Spliterator#SIZED},
     * {@link Spliterator#DISTINCT}, and {@code ORDERED}.  Implementations
     * should document the reporting of additional characteristic values.
     *
     * @implNote
     * The implementation creates a
     * <em><a href="Spliterator.html#binding">late-binding</a></em> spliterator
     * from the set's {@code Iterator}.  The spliterator inherits the
     * <em>fail-fast</em> properties of the set's iterator.
     * The created {@code Spliterator} additionally reports
     * {@link Spliterator#SUBSIZED}.
     *
     * @return a {@code Spliterator} over the elements in this set
     * @since 1.8
     */
    @Override
    public Spliterator<E> spliterator() {
        return Spliterators.spliterator(this, Spliterator.DISTINCT | Spliterator.ORDERED);
    }
7. ConcurrentLinkedQueue
This collection is often used in multithreaded code to model a message queue, so it is worth a look.
For details, see "ConcurrentLinkedQueue source analysis (based on Java 8)".
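A minimal usage sketch (my own example; the class name is hypothetical): offer appends without blocking using CAS internally, and poll returns null rather than blocking when the queue is empty.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class QueueDemo {
    public static void main(String[] args) throws InterruptedException {
        Queue<Integer> queue = new ConcurrentLinkedQueue<>();
        Thread producer = new Thread(() -> {
            for (int i = 0; i < 3; i++) queue.offer(i); // non-blocking, lock-free append
        });
        producer.start();
        producer.join();

        // poll() removes and returns the head, or returns null when empty.
        System.out.println(queue.poll()); // 0
        System.out.println(queue.poll()); // 1
        System.out.println(queue.poll()); // 2
        System.out.println(queue.poll()); // null
    }
}
```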
Afterword:
1. I had originally planned to analyze all of this source myself, but others have already done so in great detail, so I linked to their write-ups instead; when I get the chance I will go back and fill in the gaps.
2. I am still a beginner, so if anything here is wrong or poorly put, corrections in the comments are very welcome.