<html>
 <head>
  <meta charset="UTF-8">
 </head>
 <body>
  <p data-lake-id="u00c1491d" id="u00c1491d"><span data-lake-id="u49f9d0aa" id="u49f9d0aa">This is a classic interview question, but it only applies to JDK 1.7 and earlier; the problem was fixed in JDK 1.8.</span></p>
  <h1 data-lake-id="NfCBM" id="NfCBM"><span data-lake-id="u4ab18086" id="u4ab18086">Typical Answer</span></h1>
  <h2 data-lake-id="qsiSY" id="qsiSY"><span data-lake-id="u21fe8257" id="u21fe8257">The Resize Process</span></h2>
  <p data-lake-id="uec977e4b" id="uec977e4b"><span data-lake-id="uda5a7653" id="uda5a7653">When a JDK 1.7 HashMap resizes, it inserts each element at the head of its new bucket's linked list ("head insertion"). As a result, a list that was originally A-&gt;B-&gt;C becomes C-&gt;B-&gt;A after the resize.</span></p>
  <p data-lake-id="u3be9b3b9" id="u3be9b3b9"><span data-lake-id="uc09676cc" id="uc09676cc">As shown in the diagram below:</span></p>
  <p data-lake-id="u57b3cc76" id="u57b3cc76"><img src="https://cdn.nlark.com/yuque/0/2022/png/719664/1668913906521-7dbb1c3c-ed05-4d16-a8ae-e85866115acb.png?x-oss-process=image%2Fwatermark%2Ctype_d3F5LW1pY3JvaGVp%2Csize_21%2Ctext_SmF2YSA4IEd1IFA%3D%2Ccolor_FFFFFF%2Cshadow_50%2Ct_80%2Cg_se%2Cx_10%2Cy_10"></p>
  <p data-lake-id="ua188f5f8" id="ua188f5f8"><br></p>
  <p data-lake-id="uddc901d8" id="uddc901d8"><span data-lake-id="u409cad8d" id="u409cad8d">The JDK developers chose head insertion because they assumed that recently inserted entries are the most likely to be accessed again, i.e. to become hot data; placing them at the head of the list makes lookups for them faster.</span></p>
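  <p>The head-insert reversal described above can be sketched with a toy linked list. This is a hypothetical illustration, not JDK code: the <code>Node</code> class stands in for JDK 1.7's <code>HashMap.Entry</code>.</p>

```java
public class HeadInsertDemo {
    // hypothetical minimal node, standing in for JDK 1.7's HashMap.Entry
    static class Node {
        final String key;
        Node next;
        Node(String key) { this.key = key; }
    }

    // rebuild a list by head-inserting each node, as transfer() does per bucket
    static Node rehashByHeadInsert(Node head) {
        Node newHead = null;
        for (Node e = head; e != null; ) {
            Node next = e.next;
            e.next = newHead; // the current node becomes the new head
            newHead = e;
            e = next;
        }
        return newHead;
    }

    public static void main(String[] args) {
        Node a = new Node("A"), b = new Node("B"), c = new Node("C");
        a.next = b;
        b.next = c; // A -> B -> C
        StringBuilder order = new StringBuilder();
        for (Node n = rehashByHeadInsert(a); n != null; n = n.next) {
            order.append(n.key);
        }
        System.out.println(order); // prints "CBA"
    }
}
```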
  <p data-lake-id="ub08caa25" id="ub08caa25"><br></p>
  <p data-lake-id="u73aa3f0c" id="u73aa3f0c"><span data-lake-id="u194cc4e1" id="u194cc4e1">The JDK 1.7 source code is as follows:</span></p>
  <pre lang="java"><code>
void transfer(Entry[] newTable) {
    Entry[] src = table;
    int newCapacity = newTable.length;
    for (int j = 0; j &lt; src.length; j++) {
        Entry&lt;K,V&gt; e = src[j];
        if (e != null) {
            src[j] = null;
            do {
                Entry&lt;K,V&gt; next = e.next;
                int i = indexFor(e.hash, newCapacity);
                // the node is inserted directly as the head of the new bucket's list
                e.next = newTable[i];
                newTable[i] = e;
                e = next;
            } while (e != null);
        }
    }
}
</code></pre>
  <h2 data-lake-id="QHW2D" id="QHW2D"><span data-lake-id="u17ba59d8" id="u17ba59d8">The Concurrency Problem</span></h2>
  <p data-lake-id="u4bd28537" id="u4bd28537"><span data-lake-id="ue6e74612" id="ue6e74612">However, precisely because each rehashed node is made the new head of its bucket's list, resizing concurrently from multiple threads can create a circular reference.</span></p>
  <p data-lake-id="u68dffcf3" id="u68dffcf3"><span data-lake-id="u8b3b17ee" id="u8b3b17ee">Suppose two threads resize at the same time, and thread-1 is suspended just after executing </span><code data-lake-id="u9b33b02c" id="u9b33b02c"><span data-lake-id="ud6113597" id="ud6113597">Entry&lt;K,V&gt; next = e.next;</span></code><span data-lake-id="ue846699d" id="ue846699d">, as shown below:</span></p>
  <p data-lake-id="u066999b6" id="u066999b6"><img src="https://cdn.nlark.com/yuque/0/2022/png/719664/1668916452747-a9fda85d-73ce-4f68-a000-e61983cf04bd.png?x-oss-process=image%2Fwatermark%2Ctype_d3F5LW1pY3JvaGVp%2Csize_23%2Ctext_SmF2YSA4IEd1IFA%3D%2Ccolor_FFFFFF%2Cshadow_50%2Ct_80%2Cg_se%2Cx_10%2Cy_10"></p>
  <p data-lake-id="u76006c8c" id="u76006c8c"><span data-lake-id="u9bad5f1f" id="u9bad5f1f">Now thread-2 runs. When thread-2 finishes its resize, the table looks like this:</span></p>
  <p data-lake-id="ubf28f773" id="ubf28f773"><img src="https://cdn.nlark.com/yuque/0/2022/png/719664/1668916616017-f57c993c-352d-4da3-8668-25801c91fc58.png?x-oss-process=image%2Fwatermark%2Ctype_d3F5LW1pY3JvaGVp%2Csize_26%2Ctext_SmF2YSA4IEd1IFA%3D%2Ccolor_FFFFFF%2Cshadow_50%2Ct_80%2Cg_se%2Cx_10%2Cy_10"></p>
  <p data-lake-id="u86e29881" id="u86e29881"><span data-lake-id="u407d82be" id="u407d82be">Then thread-1 regains the CPU and executes </span><code data-lake-id="u2d45efc7" id="u2d45efc7"><span data-lake-id="u952d89f5" id="u952d89f5">e.next = newTable[i]; newTable[i] = e; e = next;</span></code><span data-lake-id="u75d6061f" id="u75d6061f">, which produces the following state:</span></p>
  <p data-lake-id="uedd00528" id="uedd00528"><img src="https://cdn.nlark.com/yuque/0/2022/png/719664/1668916882051-03cebaa0-7f9a-446d-8ed0-b089dcdf58cc.png?x-oss-process=image%2Fwatermark%2Ctype_d3F5LW1pY3JvaGVp%2Csize_27%2Ctext_SmF2YSA4IEd1IFA%3D%2Ccolor_FFFFFF%2Cshadow_50%2Ct_80%2Cg_se%2Cx_10%2Cy_10"></p>
  <p data-lake-id="u93659f60" id="u93659f60"><span data-lake-id="ua151eaf1" id="ua151eaf1">Next, the loop runs another iteration and executes </span><code data-lake-id="u4cf72b04" id="u4cf72b04"><span data-lake-id="ubc14f783" id="ubc14f783">e.next = newTable[i]; newTable[i] = e; e = next;</span></code><span data-lake-id="u8a3b0aee" id="u8a3b0aee"> again, as shown below:</span></p>
  <p data-lake-id="ub734a5ef" id="ub734a5ef"><img src="https://cdn.nlark.com/yuque/0/2022/png/719664/1668917031363-c9e2a3a8-8528-402c-94e2-25c5f34f2038.png?x-oss-process=image%2Fwatermark%2Ctype_d3F5LW1pY3JvaGVp%2Csize_26%2Ctext_SmF2YSA4IEd1IFA%3D%2Ccolor_FFFFFF%2Cshadow_50%2Ct_80%2Cg_se%2Cx_10%2Cy_10"></p>
  <p data-lake-id="u40b36bdd" id="u40b36bdd"><span data-lake-id="u1ae2ebb1" id="u1ae2ebb1">Since e != null but e.next == null at this point, the loop body runs one final time, with this result:</span></p>
  <p data-lake-id="u58946441" id="u58946441"><img src="https://cdn.nlark.com/yuque/0/2022/png/719664/1668917228601-d50e0cff-5b7c-48b3-9c4d-74eabbb69ff8.png?x-oss-process=image%2Fwatermark%2Ctype_d3F5LW1pY3JvaGVp%2Csize_26%2Ctext_SmF2YSA4IEd1IFA%3D%2Ccolor_FFFFFF%2Cshadow_50%2Ct_80%2Cg_se%2Cx_10%2Cy_10"></p>
  <p data-lake-id="u1fd1d482" id="u1fd1d482"><span data-lake-id="u2182ad61" id="u2182ad61">As you can see, a and b now form a cycle. The next time a get() hashes to this bucket and misses, it will traverse back and forth between a and b forever, driving CPU usage up to 100%.</span></p>
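  <p>To make the corrupted state concrete, the sketch below (hypothetical, not JDK code) builds the same two-node cycle by hand and detects it with Floyd's two-pointer check; a real <code>get()</code> miss on such a bucket would walk <code>a</code> and <code>b</code> without ever terminating.</p>

```java
public class CycleDemo {
    // hypothetical minimal node, standing in for JDK 1.7's HashMap.Entry
    static class Node {
        final String key;
        Node next;
        Node(String key) { this.key = key; }
    }

    // Floyd's tortoise-and-hare: true if the list contains a cycle
    static boolean hasCycle(Node head) {
        Node slow = head, fast = head;
        while (fast != null && fast.next != null) {
            slow = slow.next;
            fast = fast.next.next;
            if (slow == fast) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        Node a = new Node("a"), b = new Node("b");
        a.next = b;
        b.next = a; // the corrupted bucket: a -> b -> a -> ...
        System.out.println(hasCycle(a)); // prints "true"
    }
}
```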
  <h1 data-lake-id="dCJB6" id="dCJB6"><span data-lake-id="uefba8ebd" id="uefba8ebd">Further Discussion</span></h1>
  <h2 data-lake-id="nRnaA" id="nRnaA"><span data-lake-id="u7cd5f3d4" id="u7cd5f3d4">Why does JDK 1.7 make the rehashed node the head of the new list?</span></h2>
  <p data-lake-id="ubd0cc273" id="ubd0cc273"><span data-lake-id="u5f72549a" id="u5f72549a">During remapping, suppose the rehashed node were not made the head of the new list, and the naive approach were used instead: walk the new bucket's list to its end and append the rehashed node at the tail. The pseudocode would look like this:</span></p>
  <pre lang="java"><code>
void transfer(Entry[] newTable) {
    Entry[] src = table;
    int newCapacity = newTable.length;
    for (int j = 0; j &lt; src.length; j++) {
        Entry&lt;K,V&gt; e = src[j];
        if (e != null) {
            src[j] = null;
            do {
                Entry&lt;K,V&gt; next = e.next;
                int i = indexFor(e.hash, newCapacity);
                e.next = null;
                // if the new bucket is empty, place the node directly
                if (newTable[i] == null) {
                    newTable[i] = e;
                }
                // otherwise walk the new bucket's list to its tail
                else {
                    Entry&lt;K,V&gt; tail = newTable[i];
                    while (tail.next != null) {
                        tail = tail.next;
                    }
                    // append the rehashed node at the tail
                    tail.next = e;
                }
                e = next;
            } while (e != null);
        }
    }
}
</code></pre>
  <p data-lake-id="u1208b116" id="u1208b116"><span data-lake-id="u4761b736" id="u4761b736">As the code above shows, this approach has to traverse not only the old bucket's list but also the new bucket's list, for O(n^2) time in the worst case, which is clearly unacceptable. Making the rehashed node the head of the new list removes the second traversal and brings the cost down to O(n).</span></p>
  <h2 data-lake-id="XPNhN" id="XPNhN"><span data-lake-id="u4302c1da" id="u4302c1da">How does JDK 1.8 fix the problem?</span></h2>
  <p data-lake-id="u44279479" id="u44279479"><span data-lake-id="u0c8cac33" id="u0c8cac33">As mentioned above, the infinite loop happens because versions before JDK 1.8 resized with head insertion. JDK 1.8 fixes the problem by switching to tail insertion. The JDK 1.8 resize code is as follows:</span></p>
  <p data-lake-id="u2ff819ce" id="u2ff819ce"><br></p>
  <pre lang="java"><code>
final Node&lt;K,V&gt;[] resize() {
    Node&lt;K,V&gt;[] oldTab = table;
    int oldCap = (oldTab == null) ? 0 : oldTab.length;
    int oldThr = threshold;
    int newCap, newThr = 0;
    if (oldCap &gt; 0) {
        if (oldCap &gt;= MAXIMUM_CAPACITY) {
            threshold = Integer.MAX_VALUE;
            return oldTab;
        }
        else if ((newCap = oldCap &lt;&lt; 1) &lt; MAXIMUM_CAPACITY &amp;&amp;
                 oldCap &gt;= DEFAULT_INITIAL_CAPACITY)
            newThr = oldThr &lt;&lt; 1; // double threshold
    }
    else if (oldThr &gt; 0) // initial capacity was placed in threshold
        newCap = oldThr;
    else {               // zero initial threshold signifies using defaults
        newCap = DEFAULT_INITIAL_CAPACITY;
        newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
    }
    if (newThr == 0) {
        float ft = (float)newCap * loadFactor;
        newThr = (newCap &lt; MAXIMUM_CAPACITY &amp;&amp; ft &lt; (float)MAXIMUM_CAPACITY ?
                  (int)ft : Integer.MAX_VALUE);
    }
    threshold = newThr;
    @SuppressWarnings({"rawtypes","unchecked"})
        Node&lt;K,V&gt;[] newTab = (Node&lt;K,V&gt;[])new Node[newCap];
    table = newTab;
    if (oldTab != null) {
        for (int j = 0; j &lt; oldCap; ++j) {
            Node&lt;K,V&gt; e;
            if ((e = oldTab[j]) != null) {
                oldTab[j] = null;
                if (e.next == null)
                    newTab[e.hash &amp; (newCap - 1)] = e;
                else if (e instanceof TreeNode)
                    ((TreeNode&lt;K,V&gt;)e).split(this, newTab, j, oldCap);
                else { // preserve order
                    Node&lt;K,V&gt; loHead = null, loTail = null;
                    Node&lt;K,V&gt; hiHead = null, hiTail = null;
                    Node&lt;K,V&gt; next;
                    do {
                        next = e.next;
                        if ((e.hash &amp; oldCap) == 0) {
                            if (loTail == null)
                                loHead = e;
                            else
                                loTail.next = e;
                            loTail = e;
                        }
                        else {
                            if (hiTail == null)
                                hiHead = e;
                            else
                                hiTail.next = e;
                            hiTail = e;
                        }
                    } while ((e = next) != null);
                    if (loTail != null) {
                        loTail.next = null;
                        newTab[j] = loHead;
                    }
                    if (hiTail != null) {
                        hiTail.next = null;
                        newTab[j + oldCap] = hiHead;
                    }
                }
            }
        }
    }
    return newTab;
}

</code></pre>
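  <p>Note the <code>(e.hash &amp; oldCap) == 0</code> test in the loop above: because the capacity doubles and stays a power of two, each node either keeps its old index <code>j</code> (the "lo" list) or moves to <code>j + oldCap</code> (the "hi" list), decided by a single hash bit. A small sketch of that index rule (the <code>newIndex</code> helper is hypothetical, not part of the JDK):</p>

```java
public class ResizeSplitDemo {
    // for a power-of-two resize from oldCap to 2 * oldCap, a node either
    // stays at its old index j or moves to j + oldCap (hypothetical helper)
    static int newIndex(int hash, int oldCap) {
        int j = hash & (oldCap - 1);                  // old bucket index
        return (hash & oldCap) == 0 ? j : j + oldCap; // lo list vs. hi list
    }

    public static void main(String[] args) {
        int oldCap = 16;
        // hash 5: bit 16 is 0, so the node stays in bucket 5
        System.out.println(newIndex(5, oldCap));  // prints 5
        // hash 21: bit 16 is 1, so the node moves to bucket 5 + 16 = 21
        System.out.println(newIndex(21, oldCap)); // prints 21
        // both agree with recomputing hash & (newCap - 1) directly
        System.out.println((5 & 31) + " " + (21 & 31)); // prints "5 21"
    }
}
```

  <p>Because every node in a bucket lands in exactly one of the two lists, and both lists are built by appending at the tail, the relative order of nodes is preserved and no node's <code>next</code> pointer can ever point backwards, which is what rules out the 1.7 cycle.</p>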
  <h2 data-lake-id="cYr0N" id="cYr0N"><span data-lake-id="u2e5b985a" id="u2e5b985a">Besides the concurrent infinite loop, what other problems does HashMap have under concurrency?</span></h2>
  <ol list="u50d266e2">
   <li fid="u123051dc" data-lake-id="u26a17f3b" id="u26a17f3b"><span data-lake-id="u6d422b2f" id="u6d422b2f">With concurrent put calls, size can end up different from the actual number of entries.</span></li>
   <li fid="u123051dc" data-lake-id="u0fea1801" id="u0fea1801"><span data-lake-id="u894e17ab" id="u894e17ab">With concurrent put calls, one thread's value can silently overwrite another's.</span></li>
   <li fid="u123051dc" data-lake-id="u2e580f57" id="u2e580f57"><span data-lake-id="u23dfffa0" id="u23dfffa0">Like other non-thread-safe collections, HashMap is fail-fast: if one thread structurally modifies the map while another is iterating over it, the iterator throws a ConcurrentModificationException.</span></li>
   <li fid="u123051dc" data-lake-id="u65c779b8" id="u65c779b8"><span data-lake-id="udee54803" id="udee54803">When a get races with a resize, the entry may have just been moved to a different bucket, so the get can miss data that is actually in the map.</span></li>
  </ol>
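  <p>The fail-fast behavior in point 3 is easy to reproduce even in a single thread, because the check fires whenever the map is structurally modified during iteration:</p>

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

public class FailFastDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        try {
            for (String key : map.keySet()) {
                map.put("c", 3); // structural modification during iteration
            }
        } catch (ConcurrentModificationException ex) {
            System.out.println("fail-fast triggered"); // prints this
        }
    }
}
```

  <p>In multi-threaded code the same exception appears nondeterministically. The usual remedies are ConcurrentHashMap or Collections.synchronizedMap(...), which are designed for concurrent access.</p>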
  <p data-lake-id="uc3410221" id="uc3410221"><br></p>
 </body>
</html>