Java Source Code Analysis: HashMap

Preface

After working in development for quite a while, I have focused mostly on business code and neglected the fundamentals of Java itself. So I am starting to read through the JDK source code, beginning with the collections framework. This first article looks at the source of HashMap.


The Java Collections Framework

This section quotes from the excellent article at http://blog.csdn.net/ns_code/article/details/35564663.

The Java collections toolkit lives in the java.util package and contains many common data structures, such as arrays, linked lists, stacks, queues, sets, and hash tables. Learning the Java collections framework can be roughly divided into five parts: List, Set, Map, iterators (Iterator, Enumeration), and utility classes (Arrays, Collections).

The overall structure of the Java collection classes is as follows:


[Diagram: the Java collections class hierarchy — image not reproduced here]

As the diagram shows, the collection classes fall into two major groups: Collection and Map.

Collection is the interface abstracted over collections such as List and Set; it defines their basic operations and is divided into two main branches: List and Set.

The List interface represents a sequence (backed by an array, linked list, and so on) whose elements may repeat. Its common implementations are ArrayList and LinkedList, plus the rarely used Vector. LinkedList also implements the Queue interface, so it can be used as a queue as well.

The Set interface represents a collection whose elements may not repeat (uniqueness is enforced through hashCode and equals). Common implementations are HashSet and TreeSet: HashSet is built on top of HashMap, and TreeSet on top of TreeMap. TreeSet also implements the SortedSet interface and is therefore ordered (its elements must implement Comparable, or a Comparator must be supplied).
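To make the hashCode/equals contract concrete, here is a minimal sketch (the class name and values are mine, purely illustrative) showing HashSet rejecting duplicates and TreeSet keeping its elements sorted:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class SetDemo {
    // Adding a duplicate leaves the set unchanged: uniqueness is
    // decided by hashCode() and equals()
    static Set<String> dedup(String... values) {
        Set<String> set = new HashSet<>();
        for (String v : values) {
            set.add(v);
        }
        return set;
    }

    public static void main(String[] args) {
        Set<String> hashSet = dedup("b", "a", "a");   // "a" added twice
        System.out.println(hashSet.size());           // 2

        // TreeSet sorts its elements; String implements Comparable
        Set<String> treeSet = new TreeSet<>(hashSet);
        System.out.println(treeSet);                  // [a, b]
    }
}
```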

Note that the abstract classes AbstractCollection, AbstractList, and AbstractSet implement the Collection, List, and Set interfaces respectively. This is the adapter-style (skeletal implementation) pattern used heavily in the collections framework: the abstract classes implement some or all of the interface's methods, so concrete classes only need to extend the abstract class and implement what they actually need, instead of implementing every abstract method of the interface.

Map is a mapping interface in which every element is a key-value pair. Likewise, the abstract class AbstractMap implements most of the Map interface in the same adapter style, and implementations such as TreeMap, HashMap, and WeakHashMap extend AbstractMap. The rarely used Hashtable implements Map directly; it and Vector are collection classes that date back to JDK 1.0.

Iterator is the iterator for traversing collections (it traverses Collection only, not Map). Every Collection implementation provides an iterator() method that returns an Iterator for traversal, and ListIterator is specialized for traversing a List. Enumeration was introduced in JDK 1.0 and serves the same purpose as Iterator, but it offers less functionality and can only be used with Hashtable, Vector, and Stack.

Arrays and Collections are utility classes for operating on arrays and collections. For example, ArrayList and Vector call Arrays.copyOf() extensively, and Collections provides many static methods that return synchronized (thread-safe) wrappers of the various collection classes. That said, if you need thread-safe collections, the corresponding classes in the java.util.concurrent package are the first choice.
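As a small illustration of those utility methods (both are standard java.util APIs; the class name here is mine), Arrays.copyOf grows an array the way ArrayList does internally, and Collections.synchronizedMap wraps a map in a thread-safe view:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class UtilDemo {
    public static void main(String[] args) {
        // Arrays.copyOf: copy into a larger array, padding with zeros,
        // which is how ArrayList grows its backing array
        int[] grown = Arrays.copyOf(new int[] {1, 2, 3}, 5);
        System.out.println(Arrays.toString(grown));   // [1, 2, 3, 0, 0]

        // Collections.synchronizedMap: a synchronized wrapper around a HashMap;
        // java.util.concurrent.ConcurrentHashMap is usually preferred instead
        Map<String, Integer> syncMap =
                Collections.synchronizedMap(new HashMap<String, Integer>());
        syncMap.put("a", 1);
        System.out.println(syncMap.get("a"));         // 1
    }
}
```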


HashMap Overview

HashMap is implemented on top of a hash table: its underlying storage is an array of Entry linked lists, and the array length is always a power of two. On put, the key is hashed, and the hash value h is combined with the array length via h & (length - 1); the result is the key's index in the Entry array, which only works because the length is a power of two. When a hash collision occurs (two keys map to the same index), HashMap checks whether the bucket's linked list already contains the key being inserted: if it does, the old value is replaced; otherwise a new Entry is created at that index, with its next pointer referencing the Entry previously stored there.
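The indexing step described above can be sketched as a small standalone demo (the class name is mine): for a power-of-two length, h & (length - 1) agrees with the modulo operation for non-negative h.

```java
public class IndexDemo {
    // The same index computation HashMap uses; it is only correct
    // when length is a power of two
    static int indexFor(int h, int length) {
        return h & (length - 1);
    }

    public static void main(String[] args) {
        int length = 16;  // a power of two, like HashMap's default table length
        for (int h = 0; h < 1000; h++) {
            // Masking keeps only the low bits, which equals h % length here
            if (indexFor(h, length) != h % length) {
                throw new AssertionError("mismatch at h=" + h);
            }
        }
        System.out.println("h & (length - 1) == h % length for all tested h");
    }
}
```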

Hashtable implements all of the methods in the Map interface and permits neither null keys nor null values. HashMap is roughly equivalent to Hashtable, implementing all of Map's methods, except that HashMap is not thread-safe and does allow null keys and values. HashMap also makes no guarantee about the order of its elements, nor that the order will stay the same over time.

Capacity and Load Factor

capacity is the number of buckets in the hash table, and loadFactor is the fraction of that capacity that may be filled before resizing. When the number of entries in the map reaches capacity * loadFactor, the table is doubled: the bucket count becomes the next power of two, twice the current length. Note that capacity is a power of two both before and after resizing, because the index formula h & (length - 1) is only valid when length is a power of two.
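As a quick sketch of the numbers involved (a plain calculation in a class of my own, not HashMap's internal code): with the default capacity of 16 and load factor 0.75, the threshold is 12, and doubling the capacity doubles the threshold.

```java
public class ThresholdDemo {
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

    // threshold = capacity * loadFactor: the entry count at which
    // the table is resized to twice its length
    static int threshold(int capacity) {
        return (int) (capacity * DEFAULT_LOAD_FACTOR);
    }

    public static void main(String[] args) {
        System.out.println(threshold(16));   // 12: roughly, the 13th put triggers a resize
        System.out.println(threshold(32));   // 24: the threshold after one doubling
    }
}
```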

Source Code Walkthrough

import java.io.*;
import java.util.*;


public class HashMap<K,V>
    extends AbstractMap<K,V>
    implements Map<K,V>, Cloneable, Serializable
{

    /**
     * The default initial capacity - MUST be a power of two.
		The default initial capacity is 16; the capacity must be a power of two
     */
    static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16

    /**
     * The maximum capacity, used if a higher value is implicitly specified
     * by either of the constructors with arguments.
     * MUST be a power of two <= 1<<30.
	   When a HashMap is constructed with explicit arguments, the maximum capacity is 2^30; a larger requested capacity is replaced by this value
     */
    static final int MAXIMUM_CAPACITY = 1 << 30;

    /**
     * The load factor used when none specified in constructor.
	   The default load factor is 0.75f
     */
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

    /**
     * An empty table instance to share when the table is not inflated.
	   A shared empty table used until the real table is inflated
     */
    static final Entry<?,?>[] EMPTY_TABLE = {};

    /**
     * The table, resized as necessary. Length MUST Always be a power of two.
	   The hash table, resized when necessary; its length must always be a power of two
     */
    transient Entry<K,V>[] table = (Entry<K,V>[]) EMPTY_TABLE;

    /**
     * The number of key-value mappings contained in this map.
	    The number of key-value mappings stored in the map
     */
    transient int size;

    /**
     * The next size value at which to resize (capacity * load factor).
     * Resize threshold, used to decide when the table must grow (threshold = capacity * loadFactor)
     */
	//When the HashMap is first constructed the table is empty and threshold holds the requested capacity;
	//once inflate initializes the table, threshold is set to capacity * loadFactor
    int threshold;

	//Load factor
    final float loadFactor;

    //Number of structural modifications to the HashMap (used by fail-fast iterators)
    transient int modCount;

    /**
     * The default threshold of map capacity above which alternative hashing is
     * used for String keys. Alternative hashing reduces the incidence of
     * collisions due to weak hash code calculation for String keys.
     * <p/>
     * This value may be overridden by defining the system property
     * {@code jdk.map.althashing.threshold}. A property value of {@code 1}
     * forces alternative hashing to be used at all times whereas
     * {@code -1} value ensures that alternative hashing is never used.
     */
    static final int ALTERNATIVE_HASHING_THRESHOLD_DEFAULT = Integer.MAX_VALUE;

    /**
     * holds values which can't be initialized until after VM is booted.
     */
    private static class Holder {

        /**
         * Table capacity above which to switch to use alternative hashing.
         */
        static final int ALTERNATIVE_HASHING_THRESHOLD;

        static {
            String altThreshold = java.security.AccessController.doPrivileged(
                new sun.security.action.GetPropertyAction(
                    "jdk.map.althashing.threshold"));

            int threshold;
            try {
                threshold = (null != altThreshold)
                        ? Integer.parseInt(altThreshold)
                        : ALTERNATIVE_HASHING_THRESHOLD_DEFAULT;

                // disable alternative hashing if -1
                if (threshold == -1) {
                    threshold = Integer.MAX_VALUE;
                }

                if (threshold < 0) {
                    throw new IllegalArgumentException("value must be positive integer.");
                }
            } catch(IllegalArgumentException failed) {
                throw new Error("Illegal value for 'jdk.map.althashing.threshold'", failed);
            }

            ALTERNATIVE_HASHING_THRESHOLD = threshold;
        }
    }

    /**
     * A randomizing value associated with this instance that is applied to
     * hash code of keys to make hash collisions harder to find. If 0 then
     * alternative hashing is disabled.
     */
    transient int hashSeed = 0;

    /**
     * Constructs an empty <tt>HashMap</tt> with the specified initial
     * capacity and load factor.
		Creates an empty HashMap with the specified capacity and load factor; the table itself is inflated lazily, when put first finds it empty
     */
    public HashMap(int initialCapacity, float loadFactor) {
        if (initialCapacity < 0)
            throw new IllegalArgumentException("Illegal initial capacity: " +
                                               initialCapacity);
        if (initialCapacity > MAXIMUM_CAPACITY)
            initialCapacity = MAXIMUM_CAPACITY;
        if (loadFactor <= 0 || Float.isNaN(loadFactor))
            throw new IllegalArgumentException("Illegal load factor: " +
                                               loadFactor);

        this.loadFactor = loadFactor;
        threshold = initialCapacity;
        init();
    }

    /**
     * Constructs an empty <tt>HashMap</tt> with the specified initial
     * capacity and the default load factor (0.75).
		Creates an empty HashMap with the specified capacity and the default load factor of 0.75
     */
    public HashMap(int initialCapacity) {
        this(initialCapacity, DEFAULT_LOAD_FACTOR);
    }

    /**
     * Constructs an empty <tt>HashMap</tt> with the default initial capacity
     * (16) and the default load factor (0.75).
     */
    public HashMap() {
        this(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR);
    }

    /**
     * Constructs a new <tt>HashMap</tt> with the same mappings as the
     * specified <tt>Map</tt>.  The <tt>HashMap</tt> is created with
     * default load factor (0.75) and an initial capacity sufficient to
     * hold the mappings in the specified <tt>Map</tt>.
     *
     * @param   m the map whose mappings are to be placed in this map
     * @throws  NullPointerException if the specified map is null
     */
    public HashMap(Map<? extends K, ? extends V> m) {
        this(Math.max((int) (m.size() / DEFAULT_LOAD_FACTOR) + 1,
                      DEFAULT_INITIAL_CAPACITY), DEFAULT_LOAD_FACTOR);
        inflateTable(threshold);

        putAllForCreate(m);
    }

    private static int roundUpToPowerOf2(int number) {
        // assert number >= 0 : "number must be non-negative";
        int rounded = number >= MAXIMUM_CAPACITY
                ? MAXIMUM_CAPACITY
                : (rounded = Integer.highestOneBit(number)) != 0
                    ? (Integer.bitCount(number) > 1) ? rounded << 1 : rounded
                    : 1;

        return rounded;
    }

    /**
     * Inflates the table.
     */
    private void inflateTable(int toSize) {
        // Find a power of 2 >= toSize
		//Find the smallest power of two >= toSize
        int capacity = roundUpToPowerOf2(toSize);

        threshold = (int) Math.min(capacity * loadFactor, MAXIMUM_CAPACITY + 1);
        table = new Entry[capacity];
        initHashSeedAsNeeded(capacity);
    }

    void init() {
    }

    final boolean initHashSeedAsNeeded(int capacity) {
        boolean currentAltHashing = hashSeed != 0;
        boolean useAltHashing = sun.misc.VM.isBooted() &&
                (capacity >= Holder.ALTERNATIVE_HASHING_THRESHOLD);
        boolean switching = currentAltHashing ^ useAltHashing;
        if (switching) {
            hashSeed = useAltHashing
                ? sun.misc.Hashing.randomHashSeed(this)
                : 0;
        }
        return switching;
    }

    /**
     * Retrieve object hash code and applies a supplemental hash function to the
     * result hash, which defends against poor quality hash functions.  This is
     * critical because HashMap uses power-of-two length hash tables, that
     * otherwise encounter collisions for hashCodes that do not differ
     * in lower bits. Note: Null keys always map to hash 0, thus index 0.
     */
    final int hash(Object k) {
        int h = hashSeed;
        if (0 != h && k instanceof String) {
            return sun.misc.Hashing.stringHash32((String) k);
        }

        h ^= k.hashCode();

        // This function ensures that hashCodes that differ only by
        // constant multiples at each bit position have a bounded
        // number of collisions (approximately 8 at default load factor).
        h ^= (h >>> 20) ^ (h >>> 12);
        return h ^ (h >>> 7) ^ (h >>> 4);
    }

    /**
     * Returns the index in the table for hash value h, given the table length
     */
    static int indexFor(int h, int length) {
        // assert Integer.bitCount(length) == 1 : "length must be a non-zero power of 2";
        return h & (length-1);
    }

    /**
     * Returns the number of key-value mappings in this map
     */
    public int size() {
        return size;
    }

    public boolean isEmpty() {
        return size == 0;
    }

    /**
     * Returns the value to which the specified key is mapped, or
     * null if this map contains no mapping for the key.
     */
    public V get(Object key) {
        if (key == null)
            return getForNullKey();
        Entry<K,V> entry = getEntry(key);

        return null == entry ? null : entry.getValue();
    }

    /**
     * Offloaded version of get() to look up null keys.  Null keys map
     * to index 0.  This null case is split out into separate methods
     * for the sake of performance in the two most commonly used
     * operations (get and put), but incorporated with conditionals in
     * others.
     */
    private V getForNullKey() {
        if (size == 0) {
            return null;
        }
        for (Entry<K,V> e = table[0]; e != null; e = e.next) {
            if (e.key == null)
                return e.value;
        }
        return null;
    }

    /**
     * Returns <tt>true</tt> if this map contains a mapping for the
     * specified key.
     *
     * @param   key   The key whose presence in this map is to be tested
     * @return <tt>true</tt> if this map contains a mapping for the specified
     * key.
     */
    public boolean containsKey(Object key) {
        return getEntry(key) != null;
    }

    /**
    	Returns the Entry for the given key, or null if none exists.
    	It computes the table index from the key's hash, then walks the linked list in that bucket.
     */
    final Entry<K,V> getEntry(Object key) {
        if (size == 0) {
            return null;
        }

        int hash = (key == null) ? 0 : hash(key);
        for (Entry<K,V> e = table[indexFor(hash, table.length)];
             e != null;
             e = e.next) {
            Object k;
            if (e.hash == hash &&
                ((k = e.key) == key || (key != null && key.equals(k))))
                return e;
        }
        return null;
    }

	//Stores a key-value pair in the map
    public V put(K key, V value) {
		//If the table is still the shared empty table, inflate it first
        if (table == EMPTY_TABLE) {
            inflateTable(threshold);
        }
        if (key == null)
            return putForNullKey(value);
        int hash = hash(key);
		//Compute the key's index in the table
        int i = indexFor(hash, table.length);
		//Walk the Entry linked list at that index to check whether an Entry with this key already exists
        for (Entry<K,V> e = table[i]; e != null; e = e.next) {
            Object k;
			//If the key is already present in the list, replace its old value
            if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {
                V oldValue = e.value;
                e.value = value;
                e.recordAccess(this);
                return oldValue;
            }
        }
		//The key is not in the list, so create a new Entry
        modCount++;
        addEntry(hash, key, value, i);
        return null;
    }

    /**
     * Inserts a value under the null key
     */
    private V putForNullKey(V value) {
    	//Entries with a null key always live in the linked list at table index 0
        for (Entry<K,V> e = table[0]; e != null; e = e.next) {
            if (e.key == null) {
                V oldValue = e.value;
                e.value = value;
                e.recordAccess(this);
                return oldValue;
            }
        }
        modCount++;
        addEntry(0, null, value, 0);
        return null;
    }

    /**
     * This method is used instead of put by constructors and
     * pseudoconstructors (clone, readObject).  It does not resize the table,
     * check for comodification, etc.  It calls createEntry rather than
     * addEntry.
     */
    private void putForCreate(K key, V value) {
        int hash = null == key ? 0 : hash(key);
        int i = indexFor(hash, table.length);

        /**
         * Look for preexisting entry for key.  This will never happen for
         * clone or deserialize.  It will only happen for construction if the
         * input Map is a sorted map whose ordering is inconsistent w/ equals.
         */
        for (Entry<K,V> e = table[i]; e != null; e = e.next) {
            Object k;
            if (e.hash == hash &&
                ((k = e.key) == key || (key != null && key.equals(k)))) {
                e.value = value;
                return;
            }
        }

        createEntry(hash, key, value, i);
    }

    private void putAllForCreate(Map<? extends K, ? extends V> m) {
        for (Map.Entry<? extends K, ? extends V> e : m.entrySet())
            putForCreate(e.getKey(), e.getValue());
    }

    /**
     * Rehashes the contents of this map into a new array with a
     * larger capacity.  This method is called automatically when the
     * number of keys in this map reaches its threshold.
     *
     * If current capacity is MAXIMUM_CAPACITY, this method does not
     * resize the map, but sets threshold to Integer.MAX_VALUE.
     * This has the effect of preventing future calls.
     *
     * @param newCapacity the new capacity, MUST be a power of two;
     *        must be greater than current capacity unless current
     *        capacity is MAXIMUM_CAPACITY (in which case value
     *        is irrelevant).
     */
    void resize(int newCapacity) {
        Entry[] oldTable = table;
        int oldCapacity = oldTable.length;
        if (oldCapacity == MAXIMUM_CAPACITY) {
            threshold = Integer.MAX_VALUE;
            return;
        }

        Entry[] newTable = new Entry[newCapacity];
        transfer(newTable, initHashSeedAsNeeded(newCapacity));
        table = newTable;
        threshold = (int)Math.min(newCapacity * loadFactor, MAXIMUM_CAPACITY + 1);
    }

    /**
     * Transfers all entries from current table to newTable.
     */
    void transfer(Entry[] newTable, boolean rehash) {
        int newCapacity = newTable.length;
        for (Entry<K,V> e : table) {
            while(null != e) {
                Entry<K,V> next = e.next;
                if (rehash) {
                    e.hash = null == e.key ? 0 : hash(e.key);
                }
                int i = indexFor(e.hash, newCapacity);
                e.next = newTable[i];
                newTable[i] = e;
                e = next;
            }
        }
    }

    /**
     * Copies all of the mappings from the specified map to this map.
     * These mappings will replace any mappings that this map had for
     * any of the keys currently in the specified map.
     *
     * @param m mappings to be stored in this map
     * @throws NullPointerException if the specified map is null
     */
    public void putAll(Map<? extends K, ? extends V> m) {
        int numKeysToBeAdded = m.size();
        if (numKeysToBeAdded == 0)
            return;

        if (table == EMPTY_TABLE) {
            inflateTable((int) Math.max(numKeysToBeAdded * loadFactor, threshold));
        }

        /*
         * Expand the map if the map if the number of mappings to be added
         * is greater than or equal to threshold.  This is conservative; the
         * obvious condition is (m.size() + size) >= threshold, but this
         * condition could result in a map with twice the appropriate capacity,
         * if the keys to be added overlap with the keys already in this map.
         * By using the conservative calculation, we subject ourself
         * to at most one extra resize.
         */
        if (numKeysToBeAdded > threshold) {
            int targetCapacity = (int)(numKeysToBeAdded / loadFactor + 1);
            if (targetCapacity > MAXIMUM_CAPACITY)
                targetCapacity = MAXIMUM_CAPACITY;
            int newCapacity = table.length;
            while (newCapacity < targetCapacity)
                newCapacity <<= 1;
            if (newCapacity > table.length)
                resize(newCapacity);
        }

        for (Map.Entry<? extends K, ? extends V> e : m.entrySet())
            put(e.getKey(), e.getValue());
    }

    /**
     * Removes the mapping for the specified key from this map if present.
     *
     * @param  key key whose mapping is to be removed from the map
     * @return the previous value associated with <tt>key</tt>, or
     *         <tt>null</tt> if there was no mapping for <tt>key</tt>.
     *         (A <tt>null</tt> return can also indicate that the map
     *         previously associated <tt>null</tt> with <tt>key</tt>.)
     */
    public V remove(Object key) {
        Entry<K,V> e = removeEntryForKey(key);
        return (e == null ? null : e.value);
    }

    /**
     * Removes and returns the entry associated with the specified key
     * in the HashMap.  Returns null if the HashMap contains no mapping
     * for this key.
     */
    final Entry<K,V> removeEntryForKey(Object key) {
        if (size == 0) {
            return null;
        }
        int hash = (key == null) ? 0 : hash(key);
        int i = indexFor(hash, table.length);
        Entry<K,V> prev = table[i];
        Entry<K,V> e = prev;

        while (e != null) {
            Entry<K,V> next = e.next;
            Object k;
            if (e.hash == hash &&
                ((k = e.key) == key || (key != null && key.equals(k)))) {
                modCount++;
                size--;
                if (prev == e)
                    table[i] = next;
                else
                    prev.next = next;
                e.recordRemoval(this);
                return e;
            }
            prev = e;
            e = next;
        }

        return e;
    }

    /**
     * Special version of remove for EntrySet using {@code Map.Entry.equals()}
     * for matching.
     */
    final Entry<K,V> removeMapping(Object o) {
        if (size == 0 || !(o instanceof Map.Entry))
            return null;

        Map.Entry<K,V> entry = (Map.Entry<K,V>) o;
        Object key = entry.getKey();
        int hash = (key == null) ? 0 : hash(key);
        int i = indexFor(hash, table.length);
        Entry<K,V> prev = table[i];
        Entry<K,V> e = prev;

        while (e != null) {
            Entry<K,V> next = e.next;
            if (e.hash == hash && e.equals(entry)) {
                modCount++;
                size--;
                if (prev == e)
                    table[i] = next;
                else
                    prev.next = next;
                e.recordRemoval(this);
                return e;
            }
            prev = e;
            e = next;
        }

        return e;
    }

    /**
     * Removes all of the mappings from this map.
     * The map will be empty after this call returns.
     */
    public void clear() {
        modCount++;
        Arrays.fill(table, null);
        size = 0;
    }

    /**
     * Returns <tt>true</tt> if this map maps one or more keys to the
     * specified value.
     *
     * @param value value whose presence in this map is to be tested
     * @return <tt>true</tt> if this map maps one or more keys to the
     *         specified value
     */
    public boolean containsValue(Object value) {
        if (value == null)
            return containsNullValue();

        Entry[] tab = table;
        for (int i = 0; i < tab.length ; i++)
            for (Entry e = tab[i] ; e != null ; e = e.next)
                if (value.equals(e.value))
                    return true;
        return false;
    }

    /**
     * Special-case code for containsValue with null argument
     */
    private boolean containsNullValue() {
        Entry[] tab = table;
        for (int i = 0; i < tab.length ; i++)
            for (Entry e = tab[i] ; e != null ; e = e.next)
                if (e.value == null)
                    return true;
        return false;
    }

    /**
     * Returns a shallow copy of this <tt>HashMap</tt> instance: the keys and
     * values themselves are not cloned.
     *
     * @return a shallow copy of this map
     */
    public Object clone() {
        HashMap<K,V> result = null;
        try {
            result = (HashMap<K,V>)super.clone();
        } catch (CloneNotSupportedException e) {
            // assert false;
        }
        if (result.table != EMPTY_TABLE) {
            result.inflateTable(Math.min(
                (int) Math.min(
                    size * Math.min(1 / loadFactor, 4.0f),
                    // we have limits...
                    HashMap.MAXIMUM_CAPACITY),
               table.length));
        }
        result.entrySet = null;
        result.modCount = 0;
        result.size = 0;
        result.init();
        result.putAllForCreate(this);

        return result;
    }

    static class Entry<K,V> implements Map.Entry<K,V> {
        final K key;
        V value;
        Entry<K,V> next;
        int hash;

        /**
         * Creates new entry.
         */
        Entry(int h, K k, V v, Entry<K,V> n) {
            value = v;
            next = n;
            key = k;
            hash = h;
        }

        public final K getKey() {
            return key;
        }

        public final V getValue() {
            return value;
        }

        public final V setValue(V newValue) {
            V oldValue = value;
            value = newValue;
            return oldValue;
        }

        public final boolean equals(Object o) {
            if (!(o instanceof Map.Entry))
                return false;
            Map.Entry e = (Map.Entry)o;
            Object k1 = getKey();
            Object k2 = e.getKey();
            if (k1 == k2 || (k1 != null && k1.equals(k2))) {
                Object v1 = getValue();
                Object v2 = e.getValue();
                if (v1 == v2 || (v1 != null && v1.equals(v2)))
                    return true;
            }
            return false;
        }

        public final int hashCode() {
            return Objects.hashCode(getKey()) ^ Objects.hashCode(getValue());
        }

        public final String toString() {
            return getKey() + "=" + getValue();
        }

        /**
         * This method is invoked whenever the value in an entry is
         * overwritten by an invocation of put(k,v) for a key k that's already
         * in the HashMap.
         */
        void recordAccess(HashMap<K,V> m) {
        }

        /**
         * This method is invoked whenever the entry is
         * removed from the table.
         */
        void recordRemoval(HashMap<K,V> m) {
        }
    }

    /**
     * Adds a new entry with the specified key, value and hash code to
     * the specified bucket.  It is the responsibility of this
     * method to resize the table if appropriate.
     *
     * Subclass overrides this to alter the behavior of put method.
     */
    void addEntry(int hash, K key, V value, int bucketIndex) {
		//If size has reached the resize threshold and the target bucket is occupied, double the table length
        if ((size >= threshold) && (null != table[bucketIndex])) {
            resize(2 * table.length);
            hash = (null != key) ? hash(key) : 0;
            bucketIndex = indexFor(hash, table.length);
        }

        createEntry(hash, key, value, bucketIndex);
    }

    /**
    	Creates a new Entry
     */
    void createEntry(int hash, K key, V value, int bucketIndex) {
    	//First fetch the Entry currently stored at bucketIndex
        Entry<K,V> e = table[bucketIndex];
        //Then create a new Entry for the incoming key-value pair, point its next field at the Entry
        //previously stored there, and place the new Entry at bucketIndex (insertion at the head of the list)
        table[bucketIndex] = new Entry<>(hash, key, value, e);
        size++;
    }

    private abstract class HashIterator<E> implements Iterator<E> {
        Entry<K,V> next;        // next entry to return
        int expectedModCount;   // For fast-fail
        int index;              // current slot
        Entry<K,V> current;     // current entry

        HashIterator() {
            expectedModCount = modCount;
            if (size > 0) { // advance to first entry
                Entry[] t = table;
                while (index < t.length && (next = t[index++]) == null)
                    ;
            }
        }

        public final boolean hasNext() {
            return next != null;
        }

        final Entry<K,V> nextEntry() {
            if (modCount != expectedModCount)
                throw new ConcurrentModificationException();
            Entry<K,V> e = next;
            if (e == null)
                throw new NoSuchElementException();

            if ((next = e.next) == null) {
                Entry[] t = table;
                while (index < t.length && (next = t[index++]) == null)
                    ;
            }
            current = e;
            return e;
        }

        public void remove() {
            if (current == null)
                throw new IllegalStateException();
            if (modCount != expectedModCount)
                throw new ConcurrentModificationException();
            Object k = current.key;
            current = null;
            HashMap.this.removeEntryForKey(k);
            expectedModCount = modCount;
        }
    }

    private final class ValueIterator extends HashIterator<V> {
        public V next() {
            return nextEntry().value;
        }
    }

    private final class KeyIterator extends HashIterator<K> {
        public K next() {
            return nextEntry().getKey();
        }
    }

    private final class EntryIterator extends HashIterator<Map.Entry<K,V>> {
        public Map.Entry<K,V> next() {
            return nextEntry();
        }
    }

    // Subclass overrides these to alter behavior of views' iterator() method
    Iterator<K> newKeyIterator()   {
        return new KeyIterator();
    }
    Iterator<V> newValueIterator()   {
        return new ValueIterator();
    }
    Iterator<Map.Entry<K,V>> newEntryIterator()   {
        return new EntryIterator();
    }


    // Views

    private transient Set<Map.Entry<K,V>> entrySet = null;

    /**
     * Returns a {@link Set} view of the keys contained in this map.
     * The set is backed by the map, so changes to the map are
     * reflected in the set, and vice-versa.  If the map is modified
     * while an iteration over the set is in progress (except through
     * the iterator's own <tt>remove</tt> operation), the results of
     * the iteration are undefined.  The set supports element removal,
     * which removes the corresponding mapping from the map, via the
     * <tt>Iterator.remove</tt>, <tt>Set.remove</tt>,
     * <tt>removeAll</tt>, <tt>retainAll</tt>, and <tt>clear</tt>
     * operations.  It does not support the <tt>add</tt> or <tt>addAll</tt>
     * operations.
     */
    public Set<K> keySet() {
        Set<K> ks = keySet;
        return (ks != null ? ks : (keySet = new KeySet()));
    }

    private final class KeySet extends AbstractSet<K> {
        public Iterator<K> iterator() {
            return newKeyIterator();
        }
        public int size() {
            return size;
        }
        public boolean contains(Object o) {
            return containsKey(o);
        }
        public boolean remove(Object o) {
            return HashMap.this.removeEntryForKey(o) != null;
        }
        public void clear() {
            HashMap.this.clear();
        }
    }

    /**
     * Returns a {@link Collection} view of the values contained in this map.
     * The collection is backed by the map, so changes to the map are
     * reflected in the collection, and vice-versa.  If the map is
     * modified while an iteration over the collection is in progress
     * (except through the iterator's own <tt>remove</tt> operation),
     * the results of the iteration are undefined.  The collection
     * supports element removal, which removes the corresponding
     * mapping from the map, via the <tt>Iterator.remove</tt>,
     * <tt>Collection.remove</tt>, <tt>removeAll</tt>,
     * <tt>retainAll</tt> and <tt>clear</tt> operations.  It does not
     * support the <tt>add</tt> or <tt>addAll</tt> operations.
     */
    public Collection<V> values() {
        Collection<V> vs = values;
        return (vs != null ? vs : (values = new Values()));
    }

    private final class Values extends AbstractCollection<V> {
        public Iterator<V> iterator() {
            return newValueIterator();
        }
        public int size() {
            return size;
        }
        public boolean contains(Object o) {
            return containsValue(o);
        }
        public void clear() {
            HashMap.this.clear();
        }
    }

    /**
     * Returns a {@link Set} view of the mappings contained in this map.
     * The set is backed by the map, so changes to the map are
     * reflected in the set, and vice-versa.  If the map is modified
     * while an iteration over the set is in progress (except through
     * the iterator's own <tt>remove</tt> operation, or through the
     * <tt>setValue</tt> operation on a map entry returned by the
     * iterator) the results of the iteration are undefined.  The set
     * supports element removal, which removes the corresponding
     * mapping from the map, via the <tt>Iterator.remove</tt>,
     * <tt>Set.remove</tt>, <tt>removeAll</tt>, <tt>retainAll</tt> and
     * <tt>clear</tt> operations.  It does not support the
     * <tt>add</tt> or <tt>addAll</tt> operations.
     *
     * @return a set view of the mappings contained in this map
     */
    public Set<Map.Entry<K,V>> entrySet() {
        return entrySet0();
    }

    private Set<Map.Entry<K,V>> entrySet0() {
        Set<Map.Entry<K,V>> es = entrySet;
        return es != null ? es : (entrySet = new EntrySet());
    }

    private final class EntrySet extends AbstractSet<Map.Entry<K,V>> {
        public Iterator<Map.Entry<K,V>> iterator() {
            return newEntryIterator();
        }
        public boolean contains(Object o) {
            if (!(o instanceof Map.Entry))
                return false;
            Map.Entry<K,V> e = (Map.Entry<K,V>) o;
            Entry<K,V> candidate = getEntry(e.getKey());
            return candidate != null && candidate.equals(e);
        }
        public boolean remove(Object o) {
            return removeMapping(o) != null;
        }
        public int size() {
            return size;
        }
        public void clear() {
            HashMap.this.clear();
        }
    }

    /**
     * Save the state of the <tt>HashMap</tt> instance to a stream (i.e.,
     * serialize it).
     *
     * @serialData The <i>capacity</i> of the HashMap (the length of the
     *             bucket array) is emitted (int), followed by the
     *             <i>size</i> (an int, the number of key-value
     *             mappings), followed by the key (Object) and value (Object)
     *             for each key-value mapping.  The key-value mappings are
     *             emitted in no particular order.
     */
    private void writeObject(java.io.ObjectOutputStream s)
        throws IOException
    {
        // Write out the threshold, loadfactor, and any hidden stuff
        s.defaultWriteObject();

        // Write out number of buckets
        if (table==EMPTY_TABLE) {
            s.writeInt(roundUpToPowerOf2(threshold));
        } else {
           s.writeInt(table.length);
        }

        // Write out size (number of Mappings)
        s.writeInt(size);

        // Write out keys and values (alternating)
        if (size > 0) {
            for(Map.Entry<K,V> e : entrySet0()) {
                s.writeObject(e.getKey());
                s.writeObject(e.getValue());
            }
        }
    }

    private static final long serialVersionUID = 362498820763181265L;

    /**
     * Reconstitute the {@code HashMap} instance from a stream (i.e.,
     * deserialize it).
     */
    private void readObject(java.io.ObjectInputStream s)
         throws IOException, ClassNotFoundException
    {
        // Read in the threshold (ignored), loadfactor, and any hidden stuff
        s.defaultReadObject();
        if (loadFactor <= 0 || Float.isNaN(loadFactor)) {
            throw new InvalidObjectException("Illegal load factor: " +
                                               loadFactor);
        }

        // set other fields that need values
        table = (Entry<K,V>[]) EMPTY_TABLE;

        // Read in number of buckets
        s.readInt(); // ignored.

        // Read number of mappings
        int mappings = s.readInt();
        if (mappings < 0)
            throw new InvalidObjectException("Illegal mappings count: " +
                                               mappings);

        // capacity chosen by number of mappings and desired load (if >= 0.25)
        int capacity = (int) Math.min(
                    mappings * Math.min(1 / loadFactor, 4.0f),
                    // we have limits...
                    HashMap.MAXIMUM_CAPACITY);

        // allocate the bucket array;
        if (mappings > 0) {
            inflateTable(capacity);
        } else {
            threshold = capacity;
        }

        init();  // Give subclass a chance to do its thing.

        // Read the keys and values, and put the mappings in the HashMap
        for (int i = 0; i < mappings; i++) {
            K key = (K) s.readObject();
            V value = (V) s.readObject();
            putForCreate(key, value);
        }
    }

    // These methods are used when serializing HashSets
    int   capacity()     { return table.length; }
    float loadFactor()   { return loadFactor;   }
}


That is a lot of code and a bit messy, so let's summarize the key points below.

The following content is adapted from this excellent article: http://blog.csdn.net/ns_code/article/details/36034955.

1. First, be clear about HashMap's storage structure, shown in the figure below:


    In the figure, the purple part represents the hash table, also called the hash array. Each element of the array is the head node of a singly linked list; the linked lists resolve collisions: when different keys map to the same array slot, the entries are placed into that slot's list.

2. Let's look at the data structure of a node in the linked list:

    // Entry is a node in a singly linked list.
    // It is the linked list used by HashMap's separate-chaining scheme.
    // It implements the Map.Entry interface, i.e. getKey(), getValue(),
    // setValue(V value), equals(Object o) and hashCode().
    static class Entry<K,V> implements Map.Entry<K,V> {
        final K key;
        V value;
        // Reference to the next node
        Entry<K,V> next;
        final int hash;

        // Constructor.
        // Parameters: hash value (h), key (k), value (v), next node (n)
        Entry(int h, K k, V v, Entry<K,V> n) {
            value = v;
            next = n;
            key = k;
            hash = h;
        }

        public final K getKey() {
            return key;
        }

        public final V getValue() {
            return value;
        }

        public final V setValue(V newValue) {
            V oldValue = value;
            value = newValue;
            return oldValue;
        }

        // Two Entry objects are equal only if both their keys and their
        // values are equal; otherwise equals returns false
        public final boolean equals(Object o) {
            if (!(o instanceof Map.Entry))
                return false;
            Map.Entry e = (Map.Entry)o;
            Object k1 = getKey();
            Object k2 = e.getKey();
            if (k1 == k2 || (k1 != null && k1.equals(k2))) {
                Object v1 = getValue();
                Object v2 = e.getValue();
                if (v1 == v2 || (v1 != null && v1.equals(v2)))
                    return true;
            }
            return false;
        }

        // hashCode combines the hash codes of key and value
        public final int hashCode() {
            return (key==null   ? 0 : key.hashCode()) ^
                   (value==null ? 0 : value.hashCode());
        }

        public final String toString() {
            return getKey() + "=" + getValue();
        }

        // Called when an element already in the HashMap has its value
        // overwritten by put(); does nothing here
        void recordAccess(HashMap<K,V> m) {
        }

        // Called when an element is removed from the HashMap;
        // does nothing here
        void recordRemoval(HashMap<K,V> m) {
        }
    }
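Since HashMap.Entry is package-private we cannot instantiate it directly, but java.util.AbstractMap.SimpleEntry follows the same Map.Entry equals/hashCode contract, so the semantics above can be checked with it. A small illustrative sketch (not part of HashMap itself):

```java
import java.util.AbstractMap;
import java.util.Map;

public class EntryContractDemo {
    // Build a simple Map.Entry; AbstractMap.SimpleEntry implements the same
    // equals/hashCode contract as the HashMap.Entry class shown above.
    static Map.Entry<String, Integer> entry(String k, Integer v) {
        return new AbstractMap.SimpleEntry<>(k, v);
    }

    public static void main(String[] args) {
        // Entries are equal only when both key and value match
        System.out.println(entry("k", 1).equals(entry("k", 1))); // true
        System.out.println(entry("k", 1).equals(entry("k", 2))); // false
        // hashCode is key.hashCode() XOR value.hashCode()
        System.out.println(
            entry("k", 1).hashCode() == ("k".hashCode() ^ Integer.hashCode(1))); // true
    }
}
```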
    Besides key, value and hash, each node also has a next field pointing to the next node in the chain. Note that equals and hashCode are overridden here to guarantee the uniqueness of key-value pairs.

    3. HashMap has four constructors. Two parameters they involve are very important for HashMap's performance: the initial capacity and the load factor. The capacity is the number of slots in the hash table (i.e. the length of the hash array), and the initial capacity is the capacity at the time the table is created (as the constructors show, it defaults to 16 if not specified). The load factor is a measure of how full the table may get before its capacity is automatically increased: when the number of entries exceeds the product of the load factor and the current capacity, the table is resized (i.e. expanded).

    A few words on the load factor. A larger load factor makes fuller use of the space but lowers lookup efficiency (the chains grow longer and longer); a load factor that is too small leaves the table too sparse (expansion starts while much of the space is still unused), wasting space badly. If we don't specify one in the constructor, the default is 0.75, which is a good trade-off and rarely needs changing.
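To make the arithmetic concrete, here is a sketch of the threshold formula (our own helper, not HashMap's internal code), showing that with the defaults a 16-slot table resizes once the 13th entry arrives:

```java
public class ThresholdDemo {
    // The resize threshold as HashMap computes it: capacity * loadFactor
    static int threshold(int capacity, float loadFactor) {
        return (int) (capacity * loadFactor);
    }

    public static void main(String[] args) {
        // Default capacity 16, default load factor 0.75:
        // the table is resized when the size would exceed 12
        System.out.println(threshold(16, 0.75f)); // 12
        // After doubling to 32 slots the new threshold is 24
        System.out.println(threshold(32, 0.75f)); // 24
    }
}
```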

    Also, whatever capacity we specify, the constructor sets the actual capacity to the smallest power of two that is not less than the specified capacity, up to a maximum of 2 to the 30th power.
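This rounding can be sketched with Integer.highestOneBit, along the lines of JDK 7's roundUpToPowerOf2 (the real method also caps the result at MAXIMUM_CAPACITY, 2^30, which is omitted here):

```java
public class PowerOfTwoDemo {
    // Smallest power of two >= number, a sketch of JDK 7's roundUpToPowerOf2
    // (the MAXIMUM_CAPACITY cap of the real implementation is omitted)
    static int roundUpToPowerOf2(int number) {
        return (number > 1) ? Integer.highestOneBit((number - 1) << 1) : 1;
    }

    public static void main(String[] args) {
        System.out.println(roundUpToPowerOf2(10)); // 16
        System.out.println(roundUpToPowerOf2(16)); // 16 (already a power of two)
        System.out.println(roundUpToPowerOf2(17)); // 32
    }
}
```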

4. HashMap allows both keys and values to be null.
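A quick demonstration of this point (Hashtable, by contrast, throws NullPointerException for a null key or value):

```java
import java.util.HashMap;
import java.util.Map;

public class NullKeyDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put(null, "null-key value"); // the null key lives in bucket table[0]
        map.put("k", null);              // a null value is also allowed
        System.out.println(map.get(null));        // null-key value
        System.out.println(map.containsKey("k")); // true

        // A second put with the null key replaces the old value
        map.put(null, "replaced");
        System.out.println(map.get(null)); // replaced
    }
}
```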

    5. The two most heavily used methods, put and get, deserve close analysis. Start with the simpler get method; its source is as follows:

    // Get the value for a key
    public V get(Object key) {
        if (key == null)
            return getForNullKey();
        Entry<K,V> entry = getEntry(key);
        // There may be no entry for this key, so handle entry == null
        return null == entry ? null : entry.getValue();
    }

    // Get the value mapped to the null key
    private V getForNullKey() {
        if (size == 0) {
            return null;
        }
        // HashMap keeps the null key in the chain at table[0],
        // though not necessarily at the head of that chain
        for (Entry<K,V> e = table[0]; e != null; e = e.next) {
            if (e.key == null)
                return e.value;
        }
        return null;
    }

    // Get the Entry object for a key
    final Entry<K,V> getEntry(Object key) {
        // If the map is empty, return null immediately
        if (size == 0) {
            return null;
        }
        // Otherwise locate the chain for the key's hash and walk it,
        // looking for an Entry with a matching hash and key
        int hash = (key == null) ? 0 : hash(key);
        for (Entry<K,V> e = table[indexFor(hash, table.length)];
             e != null;
             e = e.next) {
            Object k;
            if (e.hash == hash &&
                ((k = e.key) == key || (key != null && key.equals(k))))
                return e;
        }
        return null;
    }

    First, if the key is null, the search goes directly to the chain at the first slot, table[0]. Remember: entries with a null key always live in the chain headed at table[0], though not necessarily in the head node itself.

    If the key is not null, the key's hash value is computed first and used to find the index into table; the chain at that index is then searched for an entry whose key equals the target key. If one is found its value is returned, otherwise null.
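One consequence worth noting: because null values are allowed, a null result from get() cannot tell a missing key apart from a key mapped to null; containsKey() walks the same chain but checks key presence instead. A small demonstration:

```java
import java.util.HashMap;
import java.util.Map;

public class GetDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("present", null);

        // get() returns null both for a missing key and for a key
        // mapped to null, so null alone is ambiguous
        System.out.println(map.get("present")); // null
        System.out.println(map.get("absent"));  // null

        // containsKey() distinguishes the two cases
        System.out.println(map.containsKey("present")); // true
        System.out.println(map.containsKey("absent"));  // false
    }
}
```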

    The put method is a bit more involved; its code is as follows:

    // Add the key-value pair to the HashMap
    public V put(K key, V value) {
        // If table is EMPTY_TABLE, the hash table has not been initialized yet
        if (table == EMPTY_TABLE) {
            inflateTable(threshold);
        }
        // If the key is null, add the pair to the chain at table[0]
        if (key == null)
            return putForNullKey(value);
        // Otherwise compute the key's hash value and add the pair
        // to the chain that hash maps to
        int hash = hash(key);
        int i = indexFor(hash, table.length);
        for (Entry<K,V> e = table[i]; e != null; e = e.next) {
            Object k;
            // If a pair with this key already exists, replace the old
            // value with the new one, then return the old value
            if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {
                V oldValue = e.value;
                e.value = value;
                e.recordAccess(this);
                return oldValue;
            }
        }

        // No pair with this key exists yet, so add key-value to the table
        modCount++;
        // Add key-value at table[i]
        addEntry(hash, key, value, i);
        return null;
    }
    If table is EMPTY_TABLE, the table is initialized first; the source of inflateTable is:
   private void inflateTable(int toSize) {
        // Use the smallest power of two >= toSize as the initial capacity
        int capacity = roundUpToPowerOf2(toSize);

        threshold = (int) Math.min(capacity * loadFactor, MAXIMUM_CAPACITY + 1);
        table = new Entry[capacity];
        initHashSeedAsNeeded(capacity);
    }


    If the key is null, the pair is added to the chain at table[0]; the source of putForNullKey is:

    // putForNullKey() adds a pair whose key is null at position table[0]
    private V putForNullKey(V value) {
        for (Entry<K,V> e = table[0]; e != null; e = e.next) {
            if (e.key == null) {
                V oldValue = e.value;
                e.value = value;
                e.recordAccess(this);
                return oldValue;
            }
        }
        // If no pair with a null key exists yet, add one directly at table[0]
        modCount++;
        addEntry(0, null, value, 0);
        return null;
    }
    If the key is not null, the key's hash value is likewise computed first and the index into table derived from it, and the corresponding chain is walked. If the chain contains a pair whose key equals the target key, the new value overwrites the old one and the old value is returned. If no matching pair is found, or the chain is empty, the pair is inserted at the head of that chain (every newly inserted node goes to the head). That insertion is done by the addEntry method, whose source is:
    // Add a new Entry. Insert key-value at the given position;
    // bucketIndex is the index of the chain in the table
    void addEntry(int hash, K key, V value, int bucketIndex) {
        // If the HashMap's size has reached the resize threshold and the
        // chain at this index is non-empty, expand the table
        if ((size >= threshold) && (null != table[bucketIndex])) {
            // Expand: the array length doubles
            resize(2 * table.length);
            hash = (null != key) ? hash(key) : 0;
            // Recompute the key's index in the newly built table
            bucketIndex = indexFor(hash, table.length);
        }

        createEntry(hash, key, value, bucketIndex);
    }

    void createEntry(int hash, K key, V value, int bucketIndex) {
        // Save the current element at bucketIndex into e
        Entry<K,V> e = table[bucketIndex];
        // Make the new Entry the element at bucketIndex,
        // with e as the new Entry's next node
        table[bucketIndex] = new Entry<>(hash, key, value, e);
        size++;
    }

   Note the constructor call on the second-to-last line: it assigns the new key-value pair to table[bucketIndex] and points its next at the element e. This puts the new pair into the head node and links the previous head behind it. In other words, every time a pair is put, the new pair is placed at table[bucketIndex], i.e. at the head of the chain.

    Also note the first lines of addEntry: each time a pair is added, it checks whether the number of entries in use has reached the threshold (capacity * load factor); if so, the table is expanded to twice its previous capacity.

    6. About resizing. We saw the expansion call above; the source of the resize method is:

    // Resize the HashMap; newCapacity is the new capacity
    void resize(int newCapacity) {
        Entry[] oldTable = table;
        int oldCapacity = oldTable.length;
        if (oldCapacity == MAXIMUM_CAPACITY) {
            threshold = Integer.MAX_VALUE;
            return;
        }

        // Create a new table, move all elements of the old table into it,
        // then install the new table in place of the old one
        Entry[] newTable = new Entry[newCapacity];
        transfer(newTable);
        table = newTable;
        threshold = (int)(newCapacity * loadFactor);
    }
    Clearly, it allocates a new underlying array and then calls transfer to move all elements of the old HashMap into the new one (each element's index in the new array must be recomputed). The source of transfer is:

    // Move all elements of the HashMap into newTable
    void transfer(Entry[] newTable) {
        Entry[] src = table;
        int newCapacity = newTable.length;
        for (int j = 0; j < src.length; j++) {
            Entry<K,V> e = src[j];
            if (e != null) {
                src[j] = null;
                do {
                    Entry<K,V> next = e.next;
                    int i = indexFor(e.hash, newCapacity);
                    e.next = newTable[i];
                    newTable[i] = e;
                    e = next;
                } while (e != null);
            }
        }
    }
    Clearly, resizing is quite an expensive operation: every element's position in the new array must be recomputed and the element moved over. So when using a HashMap, it's best to estimate the number of elements in advance; this helps its performance.
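A common pre-sizing idiom that follows from the threshold formula (this helper is our own, not a JDK method): pick an initial capacity large enough that the expected number of entries never crosses capacity * 0.75, so no rehash happens while filling the map.

```java
import java.util.HashMap;
import java.util.Map;

public class PresizeDemo {
    // If we expect 'expected' entries, sizing the map so that count stays
    // below the threshold (capacity * 0.75) avoids any resize while filling.
    static <K, V> Map<K, V> newMapForSize(int expected) {
        // divide by the default load factor; +1 keeps us under the threshold
        return new HashMap<>((int) (expected / 0.75f) + 1);
    }

    public static void main(String[] args) {
        Map<Integer, Integer> map = newMapForSize(1000);
        for (int i = 0; i < 1000; i++) map.put(i, i * i);
        System.out.println(map.size()); // 1000
    }
}
```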

    7. Note the containsKey and containsValue methods. The former can use the key's hash to narrow the search to the chain at a single index; the latter has to search every chain in the hash array.

    8. Let's focus on the methods that compute the hash value and the index. These two are the very core of HashMap's design; together they ensure that elements are spread over the hash table as uniformly as possible.

    The method that computes the hash value is:

    static int hash(int h) {
        h ^= (h >>> 20) ^ (h >>> 12);
        return h ^ (h >>> 7) ^ (h >>> 4);
    }
    It is just a formula; the JDK's designers had their reasons for computing the hash this way, which we won't chase down here. The key point is that using bit operations makes the hash computation very fast.
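To see what the mixing buys us, here is a small experiment (reimplementing hash and indexFor as standalone statics) with two hash codes that differ only in their high bits. Without mixing, only the low bits select the bucket, so the two codes collide; after mixing, the high bits influence the bucket choice:

```java
public class HashSpreadDemo {
    // The supplemental hash from the article: folds high bits into low bits
    static int hash(int h) {
        h ^= (h >>> 20) ^ (h >>> 12);
        return h ^ (h >>> 7) ^ (h >>> 4);
    }

    static int indexFor(int h, int length) {
        return h & (length - 1);
    }

    public static void main(String[] args) {
        int length = 16;
        // Two hash codes that differ only in their high bits...
        int a = 0x10000000, b = 0x20000000;
        // ...collide without the mixing step: only the low 4 bits are used
        System.out.println(indexFor(a, length) == indexFor(b, length)); // true
        // ...but land in different buckets once the bits are mixed
        System.out.println(indexFor(hash(a), length) == indexFor(hash(b), length)); // false
    }
}
```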

    The method that maps a hash value to an index is:

    static int indexFor(int h, int length) {
        return h & (length - 1);
    }
    This one deserves emphasis. The natural way to scatter keys across a hash table is to take the hash value modulo length (division hashing), which is exactly what Hashtable does. That spreads elements fairly uniformly, but the modulo needs a division instruction, which is slow. HashMap replaces it with h & (length - 1), which achieves the same uniform spread at much higher speed; this is one of HashMap's improvements over Hashtable.
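The equivalence (and one extra benefit of the mask form) can be verified directly:

```java
public class IndexForDemo {
    static int indexFor(int h, int length) {
        return h & (length - 1);
    }

    public static void main(String[] args) {
        int length = 16; // a power of two
        for (int h : new int[]{0, 1, 15, 16, 17, 12345}) {
            // For power-of-two lengths, h & (length-1) equals h % length
            System.out.println(indexFor(h, length) == h % length); // true each time
        }
        // For a negative h, % can go negative but & never does --
        // one more reason the mask form is safe to use on raw hash bits
        System.out.println(indexFor(-1, length)); // 15
        System.out.println(-1 % length);          // -1
    }
}
```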

    Next, why must the table's capacity be a power of two? First, when length is a power of two, h & (length - 1) is equivalent to h modulo length, which keeps the spread uniform while improving efficiency.

    Second, when length is a power of two it is even, so length - 1 is odd and its lowest bit is 1. The lowest bit of h & (length - 1) can then be either 0 or 1 (depending on h), i.e. the result can be even or odd, which keeps the spread uniform. If length were odd, length - 1 would be even with lowest bit 0, so the lowest bit of h & (length - 1) would always be 0: every hash would land on an even index, wasting nearly half the array. Taking length as a power of two therefore keeps the probability of collisions between different hash values low and spreads elements uniformly over the table.
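The waste from a non-power-of-two length is easy to measure: just count how many distinct bucket indices h & (length - 1) can ever produce.

```java
import java.util.HashSet;
import java.util.Set;

public class TableLengthDemo {
    // Count how many distinct bucket indices h & (length-1) can produce
    static int reachableBuckets(int length) {
        Set<Integer> used = new HashSet<>();
        for (int h = 0; h < 10_000; h++) {
            used.add(h & (length - 1));
        }
        return used.size();
    }

    public static void main(String[] args) {
        // length 15: the mask is 14 (binary 1110), lowest bit always 0,
        // so only the 8 even slots out of 15 are ever used
        System.out.println(reachableBuckets(15)); // 8
        // length 16: the mask is 15 (binary 1111), all 16 slots reachable
        System.out.println(reachableBuckets(16)); // 16
    }
}
```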

