package java.util.stream.learn;

import java.io.IOException;
import java.io.InvalidObjectException;
import java.io.Serializable;
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.*;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;

import sun.misc.SharedSecrets;

/**
 * Hash table based implementation of the <tt>Map</tt> interface.
 * 基于Hash表的Map接口的实现
 * This implementation provides all of the optional map operations,
 * 此实现提供了所有可选的映射操作，
 * and permits <tt>null</tt> values and the <tt>null</tt> key.
 * 并允许 null值和 null键。
 * (The <tt>HashMap</tt> class is roughly equivalent to <tt>Hashtable</tt>,
 * except that it is unsynchronized and permits nulls.)
 * HashMap除了不是同步的，并且允许空值之外，大致相当于HashTable。
 * This class makes no guarantees as to the order of the map;
 * in particular, it does not guarantee that the order  will remain constant over time.
 * 不能保证map中元素的数据.特别是，它不保证元素的顺序会随着时间的推移保持不变。
 *
 * <p>This implementation provides constant-time performance for the basic operations (<tt>get</tt> and <tt>put</tt>),
 * assuming the hash function disperses the elements properly among the buckets.
 * 在散列函数(hash方法)正确地分散元素于桶之间(整个hash表)的情况下, 这个实现为get/set方法提供了恒定的时间性能。
 * Iteration over collection views requires time proportional to the "capacity" of the
 * <tt>HashMap</tt> instance (the number of buckets) plus its size (the number of key-value mappings).
 * 对集合视图的迭代需要与HashMap实例的“容量”（桶数）加上其大小（键值映射的数量）成比例的时间。
 * Thus, it's very important not to set the initial capacity too high (or the load factor too low)
 * if iteration performance is important.
 * 因此，如果迭代性能很重要，则不要将初始容量设置得太高（或负载因子太低）非常重要。
 *
 * <p>An instance of <tt>HashMap</tt> has two parameters that affect its
 * performance: <i>initial capacity</i> and <i>load factor</i>.
 * 一个HashMap实例有两个决定其性能的参数, 初始大小和负载因子。
 * The <i>capacity</i> is the number of buckets in the hash table, and the initial
 * capacity is simply the capacity at the time the hash table is created.
 * capacity 是哈希表中的桶数，初始容量只是创建哈希表时的容量。
 * <p>
 * The <i>load factor</i> is a measure of how full the hash table is allowed to
 * get before its capacity is automatically increased.
 * 加载因子 衡量哈希表在其容量自动增加之前可以获得多长。
 * When the number of entries in the hash table exceeds the product of the load factor and the
 * current capacity, the hash table is <i>rehashed</i> (that is, internal data
 * structures are rebuilt) so that the hash table has approximately twice the
 * number of buckets.
 * 当哈希表中的桶数超过 加载因子和当前容量的乘积 时，哈希表进行重新哈希(即重建内部数据结构),
 * 以便新的哈希表大约有之前桶数量的两倍。
 *
 * <p>As a general rule, the default load factor (.75) offers a good
 * tradeoff between time and space costs.
 * 作为一般的规格, 默认的负载因子(0.75)在时间和空间成本之间提供了良好的权衡。
 * Higher values decrease the space overhead but increase the lookup cost (reflected in most of
 * the operations of the <tt>HashMap</tt> class, including <tt>get</tt> and <tt>put</tt>).
 * 较高的值会减少空间开销，但会增加查找成本（反映在<tt> HashMap </ tt>类的大多数操作中，包括<tt> get </ tt>和<tt> put </ tt>）。
 * The expected number of entries in  the map and its load factor should be taken into account when setting its
 * initial capacity,
 * so as to minimize the number of rehash operations.
 * 在设置其初始容量时，应考虑映射中的预期条目数及其加载因子，以便最小化重新散列操作的数量。
 * If the initial capacity is greater than the maximum number of entries divided by the load factor,
 * no rehash operations will ever occur.
 * 如果初始容量大于最大条目数除以加载因子，则不会发生重新加载操作。
 *
 * <p>If many mappings are to be stored in a <tt>HashMap</tt>  instance,
 * creating it with a sufficiently large capacity will allow
 * the mappings to be stored more efficiently than letting it perform
 * automatic rehashing as needed to grow the table.
 * 如果要将多个映射存储在HashMap实例中，那么使用足够大的容量创建 Map 将会使映射更有效地存储，
 * 而不是根据需要执行自动重新散列来扩展 HashTable.
 * Note that using many keys with the same {@code hashCode()} is a sure way to slow down performance of any hash table.
 * 请注意，使用具有相同{@code hashCode（）}的许多键是降低任何哈希表性能的可靠方法。
 * To ameliorate impact, when keys are {@link Comparable}, this class may use comparison order among keys to help
 * break ties.
 * 为了改善影响，当 key为 Comparable 时，此类可以使用key之间的比较顺序来帮助打破关系。
 *
 * <p><strong>Note that this implementation is not synchronized.</strong>
 * 请注意，此实现是不同步的。
 * If multiple threads access a hash map concurrently, and at least one of
 * the threads modifies the map structurally, it <i>must</i> be synchronized externally.
 * 如果多个线程同时访问哈希映射，并且至少有一个线程在结构上修改了映射，则<i>必须</ i>在外部进行同步。
 * (A structural modification is any operation that adds or deletes one or more mappings;
 * merely changing the value associated with a key that an instance already contains is not a structural modification.)
 * 结构修改是指添加或删除一个或多个映射的任何操作; 仅更改与实例已包含的键关联的值不是结构修改。
 * This is typically accomplished by synchronizing on some object that naturally encapsulates the map.
 * 这通常通过在自然封装Map的某个对象上进行同步来实现。
 * <p>
 * If no such object exists, the map should be "wrapped" using the
 * {@link Collections#synchronizedMap Collections.synchronizedMap}
 * method.  This is best done at creation time, to prevent accidental
 * unsynchronized access to the map:<pre>
 *   Map m = Collections.synchronizedMap(new HashMap(...));</pre>
 * 如果不存在此类对象, 则应该使用Collections#synchronizedMap Collections.synchronizedMap 方法来包装map。
 * 最好是在创建的时候使用。以防止意外的不同步访问Map。
 * Map m = Collections.synchronizedMap(new HashMap(...))
 *
 * <p>The iterators returned by all of this class's "collection view methods"
 * are <i>fail-fast</i>: if the map is structurally modified at any time after
 * the iterator is created, in any way except through the iterator's own
 * <tt>remove</tt> method, the iterator will throw a
 * {@link ConcurrentModificationException}.  Thus, in the face of concurrent
 * modification, the iterator fails quickly and cleanly, rather than risking
 * arbitrary, non-deterministic behavior at an undetermined time in the
 * future.
 *
 * <p>Note that the fail-fast behavior of an iterator cannot be guaranteed
 * as it is, generally speaking, impossible to make any hard guarantees in the
 * presence of unsynchronized concurrent modification.  Fail-fast iterators
 * throw <tt>ConcurrentModificationException</tt> on a best-effort basis.
 * Therefore, it would be wrong to write a program that depended on this
 * exception for its correctness: <i>the fail-fast behavior of iterators
 * should be used only to detect bugs.</i>
 * 所有这个类的“集合视图方法”返回的迭代器都是fail-fast：
 * 如果在创建迭代器之后的任何时候对映射进行结构修改，除了通过迭代器自己的之外的任何方式remove方法，迭代器将抛出 ConcurrentModificationException。
 * 因此，在并发修改的情况下，迭代器快速而干净地失败，而不是在未来的未确定时间冒着任意的，非确定性行为的风险。
 *
 * <p>This class is a member of the
 * <a href="{@docRoot}/../technotes/guides/collections/index.html">
 * Java Collections Framework</a>.
 *
 * @param <K> the type of keys maintained by this map
 * @param <V> the type of mapped values
 * @author Doug Lea
 * @author Josh Bloch
 * @author Arthur van Hoff
 * @author Neal Gafter
 * @see Object#hashCode()
 * @see Collection
 * @see Map
 * @see TreeMap
 * @see Hashtable
 * @since 1.2
 */
public class HashMap<K, V> extends AbstractMap<K, V>
        implements Map<K, V>, Cloneable, Serializable {

    private static final long serialVersionUID = 362498820763181265L;

    /*
     * Implementation notes.
     * 实现要点.
     *
     * This map usually acts as a binned (bucketed) hash table, but
     * 这个map通常用于分区(分箱)的哈希表,但是
     * when bins get too large, they are transformed into bins of TreeNodes
     * 当桶的数量太大的时候,他就会转换成TreeNode的桶。
     * , each structured similarly to those in java.util.TreeMap.
     * 和TreeMap的节点类似。
     *  Most methods try to use normal bins,
     * 大多数的方法都尝试使用普通的桶。(链表形式的桶)
     * but relay to TreeNode methods when applicable (simply by checking instanceof a node).
     * 但是在适用的时候，使用的是TreeNode的方法。简单地通过检查节点的实例
     * Bins of TreeNodes may be traversed and used like any others,
     * TreeNodes的桶可以向其他容器一样遍历和使用。
     * but additionally support faster lookup when overpopulated.
     * 当节点数量过大的时候，还支持更快的查找
     * However, since the vast majority of bins in normal use are not overpopulated,
     * 然而, 由于使用的绝大部分情况是节点数量很大的时候,
     *  checking for existence of tree bins may be delayed in the course of table methods.
     *  所以，在表格方法中检查树桶的存在可能会延迟。
     * Tree bins (i.e., bins whose elements are all TreeNodes) are  ordered primarily by hashCode,
     * 树桶主要是按照hash值进行排序
     * but in the case of ties,
     * 但是在某些某些关系情况下。
     * if two elements are of the same "class C implements Comparable<C>",
     * 如果两个元素是相同的class C的实例
     * type then their compareTo method is used for ordering.
     * 类型就会使用他们compareTo方法进行排序。
     * (We conservatively check generic types via reflection to validate this -- see method comparableClassFor).
     * 我们保守的通过检查泛型类型来验证这一点. 请参考comparableClassFor方法。
     * The added complexity of tree bins is worthwhile in providing worst-case O(log n) operations
     * when keys either have distinct hashes or are  orderable,
     * 当key 具有不同的hashCode或者是可以排序的时候, 为了提供最坏情况下，O(log n)的时间复杂度，增加的树节点的时间复杂度也是合算的。
     * Thus, performance degrades gracefully under accidental or malicious usages in which hashCode() methods
     * return values that are poorly distributed, as well as those in which many keys share a hashCode,
     * so long as they are also Comparable.
     * 因此，在hashCode() 方法返回分布不均的值的偶然或恶意用法中，以及许多key共享hashCode的情况下，
     * 只要它们也是可比较的，性能就会极大的降低.
     * (If neither of these apply, we may waste about a factor of two in time and space compared to taking no
     * precautions.
     * But the only known cases stem from poor user programming practices that are already so slow that this makes
     * little difference.)
     * （如果这些都不适用，与不采取任何预防措施相比，我们可能在时间和空间上浪费大约两倍。
     * 但是，唯一已知的案例源于糟糕的用户编程实践，这些实践已经非常缓慢，这几乎没有什么区别。）
     * Because TreeNodes are about twice the size of regular nodes,
     *  因为TreeNodes大约是普通节点的两倍。
     * we use them only when bins contain enough nodes to warrant use (see TREEIFY_THRESHOLD).
     * 我们仅当桶中包含足够多的节点时候，才会使用到它。 参见TREEIFY_THRESHOLD.
     * And when they become too small (due to removal or resizing) they are converted back to plain bins.
     * 并且当他们变得足够小的时候(由于移除元素或者调整大小)，就会重新转换回普通的桶(链表)
     * In usages with well-distributed user hashCodes,
     * 在具有良好的hashCode分布的用户使用中,
     * tree bins are rarely used.
     * 树桶很少被使用.
     * Ideally, under random hashCodes, the frequency of nodes in bins follows a Poisson distribution
     * (http://en.wikipedia.org/wiki/Poisson_distribution) with a parameter of about 0.5 on average for the default
     * resizing threshold of 0.75,
     * although with a large variance because of resizing granularity.
     * Ignoring variance, the expected occurrences of list size k are (exp(-0.5) * pow(0.5, k) / factorial(k)).
     * 理想情况下，在随机hashCode的情况下，桶中节点的频率遵循Poisson分布（http://en.wikipedia.org/wiki/Poisson_distribution），
     * 参数平均约为0.5，默认调整阈值为0.75，虽然由于调整粒度而具有很大的差异，忽略方差，
     * 列表大小k的预期出现次数是(exp(-0.5) * pow(0.5, k) / factorial(k))
     *
     * The first values are:
     * 第一个值是:
     *
     * 0:    0.60653066
     * 1:    0.30326533
     * 2:    0.07581633
     * 3:    0.01263606
     * 4:    0.00157952
     * 5:    0.00015795
     * 6:    0.00001316
     * 7:    0.00000094
     * 8:    0.00000006
     * more: less than 1 in ten million
     * 更多：不到千万分之一
     *
     * The root of a tree bin is normally its first node.
     *  树桶中的根节点通常是它的第一个节点。
     * However, sometimes (currently only upon Iterator.remove), the root might be elsewhere,
     * but can be recovered following parent links (method TreeNode.root()).
     * 但是，有时(目前仅在Iterator.remove方法调用的时候)，根可能在其他地方，
     * 但可以在父链接之后恢复(调用方法TreeNode.root())。
     *
     * All applicable internal methods accept a hash code as an
     * argument (as normally supplied from a public method), allowing
     * them to call each other without recomputing user hashCodes.
     * Most internal methods also accept a "tab" argument, that is
     * normally the current table, but may be a new or old one when
     * resizing or converting.
     * 所有适用的内部方法都接受hashCode作为参数(通常从公共方法提供)，
     * 允许它们相互调用而无需重新计算用户的hashCode。
     * 大多数内部方法也接受“tab”参数，通常是当前表，但在调整大小或转换时可能是新的或旧的。
     *
     * When bin lists are treeified, split, or untreeified, we keep
     * them in the same relative access/traversal order (i.e., field
     * Node.next) to better preserve locality,
     * 当桶列表是树化的,拆分的或未解析的,我们将它们保持在相同的相对访问/遍历顺序（即字段Node.next）中以更好地保留局部性，
     * and to slightly simplify handling of splits and traversals that invoke iterator.remove.
     * 并略微简化调用iterator.remove的拆分和遍历的处理。
     * When using comparators on insertion, to keep a  total ordering (or as close as is required here) across
     * rebalancings,
     * 当在插入时使用比较器时，为了保持整个重新排序的总排序（或者在这里需要尽可能接近）
     * we compare classes and identityHashCodes as tie-breakers.
     * 我们将类和identityHashCodes作为绑定器进行比较。
     *
     * The use and transitions among plain vs tree modes is complicated by the existence of subclass LinkedHashMap.
     * 普通vs树模式之间的使用和转换由于子类LinkedHashMap的存在而变得复杂。
     * See below for hook methods defined to be invoked
     * upon insertion, removal and access that allow LinkedHashMap internals to otherwise remain independent of these
     *  mechanics.
     * (This also requires that a map instance be passed to some utility methods that may create new nodes.)
     * 在插入，删除和访问时允许LinkedHashMap内部以其他方式保持独立于这些机制。
     * (这还要求将map实例传递给可能创建新节点的一些实用方法。)
     * The concurrent-programming-like SSA-based coding style helps avoid aliasing errors amid all of the twisty
     * pointer operations.
     * 类似于并发编程的基于SSA的编码风格有助于避免在所有扭曲指针操作中出现混叠错误。
     */

    /**
     * The default initial capacity - MUST be a power of two.
     */
    static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16

    /**
     * The maximum capacity, used if a higher value is implicitly specified
     * by either of the constructors with arguments.
     * MUST be a power of two <= 1<<30.
     */
    static final int MAXIMUM_CAPACITY = 1 << 30;

    /**
     * The load factor used when none specified in constructor.
     */
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

    /**
     * The bin count threshold for using a tree rather than list for a
     * bin.  Bins are converted to trees when adding an element to a
     * bin with at least this many nodes. The value must be greater
     * than 2 and should be at least 8 to mesh with assumptions in
     * tree removal about conversion back to plain bins upon
     * shrinkage.
     * 使用树而不是链表的计数阈值。
     * 将元素添加到具有至少这么多节点的桶时，桶被转换为树。
     * 该值必须大于2且应至少为8才能与树木移除中的假设相关联，以便在收缩时转换回普通桶。
     */
    static final int TREEIFY_THRESHOLD = 8;

    /**
     * The bin count threshold for untreeifying a (split) bin during a resize operation.
     * 用于在调整大小操作期间解除（拆分）bin的bin计数阈值。(untreeifying不是一个英语单词,这里的以是非树化,即转换成普通列表的过程)
     * Should be less than TREEIFY_THRESHOLD, and at
     * most 6 to mesh with shrinkage detection under removal.
     * 应该小于TREEIFY_THRESHOLD，并且最多6个与去除时的收缩检测网格。
     */
    static final int UNTREEIFY_THRESHOLD = 6;

    /**
     * The smallest table capacity for which bins may be treeified.
     * 容器可以树化的最小容量
     * (Otherwise the table is resized if too many nodes in a bin.)
     * (否则，如果bin中的节点太多，则会调整表的大小.)
     * Should be at least 4 * TREEIFY_THRESHOLD to avoid conflicts
     * between resizing and treeification thresholds.
     * 应该至少为 4 * TREEIFY_THRESHOLD，以避免调整大小和树化阈值之间的冲突。
     */
    static final int MIN_TREEIFY_CAPACITY = 64;

    /**
     * Basic hash bin node, used for most entries.  (See below for
     * TreeNode subclass, and in LinkedHashMap for its Entry subclass.)
     */
    static class Node<K, V> implements Map.Entry<K, V> {
        final int hash; // 避免重复计算key的hash值
        final K key;
        V value;
        HashMap.Node<K, V> next;

        Node(int hash, K key, V value, HashMap.Node<K, V> next) {
            this.hash = hash;
            this.key = key;
            this.value = value;
            this.next = next;
        }

        @Override
        public final K getKey() {
            return key;
        }

        @Override
        public final V getValue() {
            return value;
        }

        @Override
        public final String toString() {
            return key + "=" + value;
        }

        @Override
        public final int hashCode() {
            return Objects.hashCode(key) ^ Objects.hashCode(value);
        }

        @Override
        public final V setValue(V newValue) {
            V oldValue = value;
            value = newValue;
            return oldValue;
        }

        public final boolean equals(Object o) {
            if (o == this)
                return true;
            if (o instanceof Map.Entry) {
                Map.Entry<?, ?> e = (Map.Entry<?, ?>) o;
                if (Objects.equals(key, e.getKey()) &&
                        Objects.equals(value, e.getValue()))
                    return true;
            }
            return false;
        }
    }

    /* ---------------- Static utilities -------------- */

    /**
     * Computes key.hashCode() and spreads (XORs) higher bits of hash to lower.
     * 计算key的hashCode并且和hashCode值高16位进行异或运算。(异或: 相同为0，不同为1)
     * Because the table uses power-of-two masking,
     * hash表使用的是2次幂做掩码。
     * sets of
     * hashes that vary only in bits above the current mask will
     * always collide. (Among known examples are sets of Float keys
     * holding consecutive whole numbers in small tables.)
     * <p>
     * So we apply a transform that spreads the impact of higher bits downward.
     * There is a tradeoff between speed, utility, and  quality of bit-spreading.
     * Because many common sets of hashes are already reasonably distributed
     * (so don't benefit from spreading), and because we use trees to handle large sets of collisions in bins,
     * we just XOR some shifted bits in the  cheapest possible way to reduce systematic lossage,
     * as well as to incorporate impact of the highest bits
     * that would otherwise never be used in index calculations because of table bounds.
     */
    static final int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    /**
     * Returns x's Class if it is of the form "class C implements
     * Comparable<C>", else null.
     * 如果实现了Comparable,返回它的类，否则返回null。
     * 判断一个类是否实现了Comparable。
     */
    static Class<?> comparableClassFor(Object x) {
        if (x instanceof Comparable) {
            Class<?> c;
            Type[] ts, as;
            Type t;
            ParameterizedType p;
            if ((c = x.getClass()) == String.class) // bypass checks
                return c;
            if ((ts = c.getGenericInterfaces()) != null) {
                for (int i = 0; i < ts.length; ++i) {
                    if (((t = ts[i]) instanceof ParameterizedType) &&
                            ((p = (ParameterizedType) t).getRawType() ==
                                    Comparable.class) &&
                            (as = p.getActualTypeArguments()) != null &&
                            as.length == 1 && as[0] == c) // type arg is c
                        return c;
                }
            }
        }
        return null;
    }

    /**
     * Returns k.compareTo(x) if x matches kc (k's screened comparable
     * class), else 0.
     * // 确定key是不是同一个。
     */
    @SuppressWarnings({"rawtypes", "unchecked"}) // for cast to Comparable
    static int compareComparables(Class<?> kc, Object k, Object x) {
        return (x == null || x.getClass() != kc ? 0 :
                ((Comparable) k).compareTo(x));
    }

    /**
     * Returns a power of two size for the given target capacity.
     */
    static final int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    /* ---------------- Fields -------------- */

    /**
     * The table, initialized on first use, and resized as
     * necessary. When allocated, length is always a power of two.
     * 该表在首次使用时初始化，并根据需要调整大小。分配时，长度始终是2的幂
     * (We also tolerate length zero in some operations to allow
     * bootstrapping mechanics that are currently not needed.)
     * (我们还在一些操作中容忍长度为零，以允许当前不需要的自举机制。）
     */
    transient HashMap.Node<K, V>[] table;

    /**
     * Holds cached entrySet().
     * 保存缓存的 entrySet().
     * Note that AbstractMap fields are used for keySet() and values().
     * 注意: 这个AbstractMap 的字段会被 keySet() 和 values()使用。
     */
    transient Set<Map.Entry<K, V>> entrySet;

    /**
     * The number of key-value mappings contained in this map.
     */
    transient int size;

    /**
     * The number of times this HashMap has been structurally modified.
     * HashMap经过结构修改的次数
     * Structural modifications are those that change the number of mappings in
     * the HashMap or otherwise modify its internal structure (e.g.,rehash).
     * 结构修改是那些改变HashMap中映射数量或以其他方式修改其内部结构（例如，rehash）的修改。
     * This field is used to make iterators on Collection-views of
     * the HashMap fail-fast.  (See ConcurrentModificationException).
     * 此字段用于在HashMap的Collection-views上快速生成迭代器(见ConcurrentModificationException)
     */
    transient int modCount;

    /**
     * The next size value at which to resize (capacity * load factor).
     * 下一次调整容器大小的阈值. threshold=capacity * load factor
     *
     * @serial
     */
    // (The javadoc description is true upon serialization.
    // Additionally, if the table array has not been allocated, this
    // field holds the initial array capacity, or zero signifying
    // DEFAULT_INITIAL_CAPACITY.)
    int threshold;

    /**
     * The load factor for the hash table.
     * hash表的负载因子，在实例化hashTable的时候指定,该对象内不能变更(final);
     *
     * @serial
     */
    final float loadFactor;

    /* ---------------- Public operations -------------- */

    /**
     * Constructs an empty <tt>HashMap</tt> with the specified initial
     * capacity and load factor.
     *
     * @param initialCapacity the initial capacity
     * @param loadFactor      the load factor
     * @throws IllegalArgumentException if the initial capacity is negative
     *                                  or the load factor is nonpositive
     */
    public HashMap(int initialCapacity, float loadFactor) {
        if (initialCapacity < 0)
            throw new IllegalArgumentException("Illegal initial capacity: " +
                    initialCapacity);
        if (initialCapacity > MAXIMUM_CAPACITY)
            initialCapacity = MAXIMUM_CAPACITY;
        if (loadFactor <= 0 || Float.isNaN(loadFactor))
            throw new IllegalArgumentException("Illegal load factor: " +
                    loadFactor);
        this.loadFactor = loadFactor;
        this.threshold = tableSizeFor(initialCapacity);
    }

    /**
     * Constructs an empty <tt>HashMap</tt> with the specified initial
     * capacity and the default load factor (0.75).
     *
     * @param initialCapacity the initial capacity.
     * @throws IllegalArgumentException if the initial capacity is negative.
     */
    public HashMap(int initialCapacity) {
        this(initialCapacity, DEFAULT_LOAD_FACTOR);
    }

    /**
     * Constructs an empty <tt>HashMap</tt> with the default initial capacity
     * (16) and the default load factor (0.75).
     */
    public HashMap() {
        this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
    }

    /**
     * Constructs a new <tt>HashMap</tt> with the same mappings as the
     * specified <tt>Map</tt>.  The <tt>HashMap</tt> is created with
     * default load factor (0.75) and an initial capacity sufficient to
     * hold the mappings in the specified <tt>Map</tt>.
     *
     * @param m the map whose mappings are to be placed in this map
     * @throws NullPointerException if the specified map is null
     */
    public HashMap(Map<? extends K, ? extends V> m) {
        this.loadFactor = DEFAULT_LOAD_FACTOR;
        putMapEntries(m, false);
    }

    /**
     * Implements Map.putAll and Map constructor.
     *
     * @param m     the map
     * @param evict false when initially constructing this map, else
     *              true (relayed to method afterNodeInsertion).
     */
    final void putMapEntries(Map<? extends K, ? extends V> m, boolean evict) {
        int s = m.size();
        if (s > 0) {
            if (table == null) { // pre-size
                float ft = ((float) s / loadFactor) + 1.0F;
                int t = ((ft < (float) MAXIMUM_CAPACITY) ?
                        (int) ft : MAXIMUM_CAPACITY);
                if (t > threshold)
                    threshold = tableSizeFor(t);
            } else if (s > threshold)
                resize();
            for (Map.Entry<? extends K, ? extends V> e : m.entrySet()) {
                K key = e.getKey();
                V value = e.getValue();
                putVal(hash(key), key, value, false, evict);
            }
        }
    }

    /**
     * Returns the number of key-value mappings in this map.
     *
     * @return the number of key-value mappings in this map
     */
    @Override
    public int size() {
        return size;
    }

    /**
     * Returns <tt>true</tt> if this map contains no key-value mappings.
     *
     * @return <tt>true</tt> if this map contains no key-value mappings
     */
    @Override
    public boolean isEmpty() {
        return size == 0;
    }

    /**
     * Returns the value to which the specified key is mapped,
     * or {@code null} if this map contains no mapping for the key.
     * 根据指定的key返回映射的Value，当没有包含key的映射时，会返回 null
     * <p>More formally, if this map contains a mapping from a key
     * {@code k} to a value {@code v} such that {@code (key==null ? k==null :
     * key.equals(k))}, then this method returns {@code v}; otherwise
     * it returns {@code null}.  (There can be at most one such mapping.)
     * 更正式的情况下： 如果存在一个key(K)的映射 value (V),使得 key==null?K==null:key.equals(K),
     * 如果等式的值为 true, 那么返回V
     * 否则返回 null
     * <p>A return value of {@code null} does not <i>necessarily</i>
     * indicate that the map contains no mapping for the key; it's also
     * possible that the map explicitly maps the key to {@code null}.
     * The {@link #containsKey containsKey} operation may be used to
     * distinguish these two cases.
     * 不能通过 返回值为null 来判断是否含有<K,V> 映射，因为HashMap允许value为null。
     *
     * @see #put(Object, Object)
     */
    public V get(Object key) {
        HashMap.Node<K, V> e;
        // 如果对应的节点(Node/TreeNode)存在则返回value,=如果不存在的返回null
        return (e = getNode(hash(key), key)) == null ? null : e.value;
    }

    /**
     * Implements Map.get and related methods.
     *
     * @param hash hash for key
     * @param key  the key
     * @return the node, or null if none
     */
    final HashMap.Node<K, V> getNode(int hash, Object key) {
        HashMap.Node<K, V>[] tab;
        HashMap.Node<K, V> first, e;
        int n;
        K k;
        // 数组不为空，并且对应的桶(bin)不为null
        if ((tab = table) != null && (n = tab.length) > 0 &&
                (first = tab[(n - 1) & hash]) != null) {
            // 检查是否为桶内的第一个节点是否满足
            // 为什么要检测第一个节点，直接进入循环或者树节点的检测不行吗?
            // 假设第一个节点是目的节点,可以直接返回，少执行一次if判断，来判断其是树节点还是链表节点。
            // 如果不是第一个节点，循环也会少执行一次,树节点的遍历，也会少遍历一次。
            if (first.hash == hash && // always check first node
                    ((k = first.key) == key || (key != null && key.equals(k))))
                return first;
            //如果第一个节点不是要查找的节点
            if ((e = first.next) != null) {
                // 如果是树节点
                if (first instanceof HashMap.TreeNode) {
                    return ((HashMap.TreeNode<K, V>) first).getTreeNode(hash, key);
                }
                // 如果是链表,遍历查找。
                do {
                    if (e.hash == hash &&
                            ((k = e.key) == key || (key != null && key.equals(k))))
                        return e;
                } while ((e = e.next) != null);
            }
        }
        return null;
    }

    /**
     * Returns <tt>true</tt> if this map contains a mapping for the
     * specified key.
     * 如果Map中包含一个映射关系,则返回true,注意是包含映射关系就会返回ture.
     * hashMap.put("1",null),也会返回true
     *
     * @param key The key whose presence in this map is to be tested
     * @return <tt>true</tt> if this map contains a mapping for the specified
     * key.
     */
    public boolean containsKey(Object key) {
        return getNode(hash(key), key) != null;
    }

    /**
     * Associates the specified value with the specified key in this map.
     * If the map previously contained a mapping for the key, the old
     * value is replaced.
     * 将指定的value和key关联在map中。
     * 如果map中已经存在了key,那么将会替换掉老的value。
     *
     * @param key   key with which the specified value is to be associated
     * @param value value to be associated with the specified key
     * @return the previous value associated with <tt>key</tt>, or
     * <tt>null</tt> if there was no mapping for <tt>key</tt>.
     * (A <tt>null</tt> return can also indicate that the map
     * previously associated <tt>null</tt> with <tt>key</tt>.)
     * 如果返回了value，就说明map中原来和key关联是有值的。如果返回null就说明没有value。
     */
    public V put(K key, V value) {
        return putVal(hash(key), key, value, false, true);
    }

    /**
     * Implements Map.put and related methods.
     * 实现Map.put相关的方法。
     *
     * @param hash         hash for key
     * @param key          the key
     * @param value        the value to put
     * @param onlyIfAbsent if true, don't change existing value
     *                     如果是true的,不会修改存在的值。返回老的值。
     * @param evict        if false, the table is in creation mode.
     *                     如果为false的时候,表属于创建模式,第一次新增元素的时候。
     * @return previous value, or null if none
     */
    final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
                   boolean evict) {

        HashMap.Node<K, V>[] tab;
        HashMap.Node<K, V> p;
        int n, i;
        if ((tab = table) == null || (n = tab.length) == 0)
            // 如果数组为null,或者数组长度为0的时候，数组需要调整大小。
            n = (tab = resize()).length;
        if ((p = tab[i = (n - 1) & hash]) == null)
            // 定位到数组的桶为null的时候,创建桶内的第一个元素。next=null;
            tab[i] = newNode(hash, key, value, null);
        else {
            // 如果桶不为null，则创建链表
            HashMap.Node<K, V> e;
            K k;
            // p表示当前桶的第一个元素。
            // 如果新增的元素和第一个元素相等的话(出现hash冲突),暂存已经存在的元素到变量e中。
            if (p.hash == hash &&
                    ((k = p.key) == key || (key != null && key.equals(k))))
                e = p;
            else if (p instanceof HashMap.TreeNode)
                // 如果是树节点。
                e = ((HashMap.TreeNode<K, V>) p).putTreeVal(this, tab, hash, key, value);
            else {
                // 链表元素新增的过程了。
                for (int binCount = 0; ; ++binCount) {
                    if ((e = p.next) == null) {
                        p.next = newNode(hash, key, value, null);
                        if (binCount >= TREEIFY_THRESHOLD - 1)
                            // 如果桶内的元素数量达到树化的阈值,将链表转换成树。
                            treeifyBin(tab, hash);
                        break;
                    }
                    if (e.hash == hash &&
                            ((k = e.key) == key || (key != null && key.equals(k))))
                        // 如果第一个元素和要新增的元素hash,key都相等的话,直接进行新增操作。
                        break;
                    p = e;
                }
            }

            if (e != null) { // existing mapping for key
                // 如果原来的元素不为空,保留原来的值。
                V oldValue = e.value;
                if (!onlyIfAbsent || oldValue == null)
                    // 覆盖掉原来的value;
                    e.value = value;
                // 留一个无方法体的方法，供子类扩展
                afterNodeAccess(e);
                return oldValue;
            }
        }
        ++modCount;
        if (++size > threshold)
            // 如果table中的桶的数量超过了阈值。扩容。
            resize();
        // 供子类扩展的方法。
        afterNodeInsertion(evict);
        return null;
    }

    /**
     * Initializes or doubles table size.
     * 初始化，或者加倍表格的大小
     * If null, allocates in accord with initial capacity target held in field threshold.
     * 如果为null时候，根据字段threshold的初始容量进行分配
     * Otherwise, because we are using power-of-two expansion, the
     * elements from each bin must either stay at same index, or move
     * with a power of two offset in the new table.
     * 否则，因为我们正在使用二次幂扩展，所以每个bin中的元素必须保持相同的索引，或者在新表中以两个偏移的幂移动
     *
     * @return the table
     */
    final HashMap.Node<K, V>[] resize() {
        HashMap.Node<K, V>[] oldTab = table;
        int oldCap = (oldTab == null) ? 0 : oldTab.length;
        int oldThr = threshold;
        int newCap, newThr = 0;
        // 如果旧表的大小大于0
        if (oldCap > 0) {

            if (oldCap >= MAXIMUM_CAPACITY) {
                // hash表达到最大容量
                threshold = Integer.MAX_VALUE;
                return oldTab;
            } else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                    oldCap >= DEFAULT_INITIAL_CAPACITY) {
                // 如果翻倍后旧表大小<最大表长度，并且旧表长度>默认初始化长度。
                // 扩容的阈值也翻倍。 还是等级 table.length*loadFactor
                newThr = oldThr << 1; // double threshold
            }
        } else if (oldThr > 0) { // initial capacity was placed in threshold
            // 旧表长度<=0,旧的threshold>0,
            // 就把threshold设置为表长度。
            newCap = oldThr;
        } else {               // zero initial threshold signifies using defaults
            // 设置为默认值。
            newCap = DEFAULT_INITIAL_CAPACITY;
            newThr = (int) (DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
        }

        if (newThr == 0) {
            // 如果新的扩缩容阈值等于0,设置新的扩缩容阈值为新的容量*负载因子.
            float ft = (float) newCap * loadFactor;
            newThr = (newCap < MAXIMUM_CAPACITY && ft < (float) MAXIMUM_CAPACITY ?
                    (int) ft : Integer.MAX_VALUE);
        }
        threshold = newThr;

        // 重新创建新的hash表
        @SuppressWarnings({"rawtypes", "unchecked"})
        HashMap.Node<K, V>[] newTab = (HashMap.Node<K, V>[]) new HashMap.Node[newCap];
        table = newTab;
        // 如果旧表不为空,进行扩容.
        // 否则(旧表为空)就进行初始化过程.
        if (oldTab != null) {
            for (int j = 0; j < oldCap; ++j) {
                HashMap.Node<K, V> e;
                if ((e = oldTab[j]) != null) {
                    oldTab[j] = null;
                    if (e.next == null) {
                        // 如果当前桶只有一个节点。
                        newTab[e.hash & (newCap - 1)] = e;
                    } else if (e instanceof HashMap.TreeNode) {
                        // 如果当前桶是棵红黑树
                        ((HashMap.TreeNode<K, V>) e).split(this, newTab, j, oldCap);
                    } else { // preserve order
                        // 桶是链表,将该桶内的元素重新分配到表中。

                        HashMap.Node<K, V> loHead = null, loTail = null;
                        HashMap.Node<K, V> hiHead = null, hiTail = null;
                        HashMap.Node<K, V> next;

                        // 遍历桶内的元素，将元素重新分配到hash表内的各个桶中。
                        // 具体的实现过程是: 将当前的元素的hash值和容量取&,如果>0,那就说明该元素应该分配到新的桶内。
                        // 桶的位置就是: oldCap+j.即桶原来容器+该元素所在的桶的下标。(hiHead所标识的位置)
                        // 反之如果hash值是==0的,那么该元素就应该还在当前桶内。(loHead所标识的位置)
                        // 这里所说的位置都是指桶的下标,整个表都是新的了,位置肯定都变了。
                        // 为什么可以这么实现呢？
                        // 因为扩容的时候，使用的是原来容量的2倍进行扩容的。所以就可以使用(oldCap+j)的方式来确定元素的新位置了。
                        do {
                            next = e.next;
                            if ((e.hash & oldCap) == 0) {
                                // 还在原桶中
                                if (loTail == null)
                                    loHead = e;
                                else {
                                    // 位置最后一个节点为空,使用e=next的时候，next为null的情况。
                                    // 在桶内元素遍历完成后,会把桶的最后一个元素的next置为null。
                                    loTail.next = e;
                                }
                                loTail = e;
                            } else {
                                // 放置到新的桶内。
                                if (hiTail == null)
                                    hiHead = e;
                                else {
                                    // 位置最后一个节点为空,使用e=next的时候，next为null的情况。
                                    // 在桶内元素遍历完成后,会把桶的最后一个元素的next置为null。
                                    hiTail.next = e;
                                }
                                hiTail = e;
                            }
                        } while ((e = next) != null);

                        if (loTail != null) {
                            loTail.next = null;
                            // 原来桶的位置。
                            newTab[j] = loHead;
                        }

                        if (hiTail != null) {
                            hiTail.next = null;
                            // 确定新桶的位置
                            newTab[j + oldCap] = hiHead;
                        }
                    }
                }
            }
        }
        return newTab;
    }

    /**
     * 将链表转换成树。
     * Replaces all linked nodes in bin at index for given hash unless table is too small, in which case resizes
     * instead.
     * 替换给定hash值的索引处的桶的所有节点，如果表太小(table.length小于64),就调整大小.
     * 这里其实是对hash表的一种优化,防止因为表长度太小而转换成树,造成性能浪费
     *
     * @param hash 用于确定桶的位置。
     */
    final void treeifyBin(HashMap.Node<K, V>[] tab, int hash) {
        int n, index;
        // 链表的节点
        HashMap.Node<K, V> e;
        // 如果hash表为空或者hash表的长度小于最小化的树化容量(64)，这时会重调整大小。
        // 将容量扩大为原来的两倍。
        if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY) {
            resize();
        } else if ((e = tab[index = (n - 1) & hash]) != null) {
            HashMap.TreeNode<K, V> hd = null, tl = null;
            do {
                // 构建一个树的节点。
                HashMap.TreeNode<K, V> p = replacementTreeNode(e, null);
                // 如果尾为null,说明这个节点是该桶中的第一个元素，
                // 所以要将其赋于头节点。
                if (tl == null) {
                    hd = p;
                } else {
                    // 将该节点放在尾节点后。
                    p.prev = tl;
                    tl.next = p;
                }
                // 当前节点作为尾节点。
                tl = p;
            } while ((e = e.next) != null);

            // 如果该桶中有元素，则进行树化。
            if ((tab[index] = hd) != null) {
                hd.treeify(tab);
            }
        }
    }

    /**
     * Copies all of the mappings from the specified map to this map.
     * These mappings will replace any mappings that this map had for
     * any of the keys currently in the specified map.
     *
     * @param m mappings to be stored in this map
     * @throws NullPointerException if the specified map is null
     */
    public void putAll(Map<? extends K, ? extends V> m) {
        putMapEntries(m, true);
    }

    /**
     * Removes the mapping for the specified key from this map if present.
     * 从map中删除指定的key,如果key存在的话
     *
     * @param key key whose mapping is to be removed from the map
     * @return value 如果key存在，返回key对应的Value，如果不存在返回null
     */
    public V remove(Object key) {
        HashMap.Node<K, V> e;
        return (e = removeNode(hash(key), key, null, false, true)) == null ?
                null : e.value;
    }

    /**
     * Implements Map.remove and related methods.
     * 实现Map.remove相关的方法
     *
     * @param hash       hash for key
     * @param key        the key
     * @param value      the value to match if matchValue, else ignored
     * @param matchValue if true only remove if value is equal 如果是true，仅在value相等的时候删除。
     * @param movable    if false do not move other nodes while removing // 如果为false，则在删除节点的时候不移动其他节点。
     * @return the node, or null if none 返回删除的节点
     */
    final HashMap.Node<K, V> removeNode(int hash, Object key, Object value,
                                        boolean matchValue, boolean movable) {
        HashMap.Node<K, V>[] tab;
        HashMap.Node<K, V> p;
        int n, index;
        if ((tab = table) != null && (n = tab.length) > 0 &&
                (p = tab[index = (n - 1) & hash]) != null) {
            HashMap.Node<K, V> node = null, e;
            K k;
            V v;
            if (p.hash == hash &&
                    ((k = p.key) == key || (key != null && key.equals(k))))
                node = p;
            else if ((e = p.next) != null) {
                if (p instanceof HashMap.TreeNode) {
                    // 找到红黑树中的节点
                    node = ((HashMap.TreeNode<K, V>) p).getTreeNode(hash, key);
                } else {
                    // 删除链表中的节点1: 查找到节点的位置。
                    do {
                        if (e.hash == hash &&
                                ((k = e.key) == key ||
                                        (key != null && key.equals(k)))) {
                            node = e;
                            break;
                        }
                        p = e;
                    } while ((e = e.next) != null);
                }
            }
            // 真正的去删除的过程。
            if (node != null && (!matchValue || (v = node.value) == value ||
                    (value != null && value.equals(v)))) {
                if (node instanceof HashMap.TreeNode) {
                    // 删除红黑树的节点
                    ((HashMap.TreeNode<K, V>) node).removeTreeNode(this, tab, movable);
                } else if (node == p) {
                    // 桶中只有当前的节点。
                    tab[index] = node.next;
                } else {
                    // 链表中节点的删除
                    p.next = node.next;
                }
                // 修改次数+1
                ++modCount;
                --size;
                afterNodeRemoval(node);
                return node;
            }
        }
        return null;
    }

    /**
     * Removes all of the mappings from this map.
     * The map will be empty after this call returns.
     */
    public void clear() {
        HashMap.Node<K, V>[] tab;
        modCount++;
        if ((tab = table) != null && size > 0) {
            size = 0;
            for (int i = 0; i < tab.length; ++i)
                tab[i] = null;
        }
    }

    /**
     * Returns <tt>true</tt> if this map maps one or more keys to the
     * specified value.
     * 如果存在指定的Value,就会返回true
     *
     * @param value value whose presence in this map is to be tested
     * @return <tt>true</tt> if this map maps one or more keys to the
     * specified value
     */
    public boolean containsValue(Object value) {
        HashMap.Node<K, V>[] tab;
        V v;
        // 如果table的长度不为空
        if ((tab = table) != null && size > 0) {
            // 遍历每个bin
            for (int i = 0; i < tab.length; ++i) {
                // 遍历bin下的每个Node
                for (HashMap.Node<K, V> e = tab[i]; e != null; e = e.next) {
                    if ((v = e.value) == value ||
                            (value != null && value.equals(v)))
                        return true;
                }
            }
        }
        return false;
    }

    /**
     * Returns a {@link Set} view of the keys contained in this map.
     * The set is backed by the map, so changes to the map are
     * reflected in the set, and vice-versa.  If the map is modified
     * while an iteration over the set is in progress (except through
     * the iterator's own <tt>remove</tt> operation), the results of
     * the iteration are undefined.  The set supports element removal,
     * which removes the corresponding mapping from the map, via the
     * <tt>Iterator.remove</tt>, <tt>Set.remove</tt>,
     * <tt>removeAll</tt>, <tt>retainAll</tt>, and <tt>clear</tt>
     * operations.  It does not support the <tt>add</tt> or <tt>addAll</tt>
     * operations.
     *
     * @return a set view of the keys contained in this map
     */
    public Set<K> keySet() {
        Set<K> ks = keySet;
        if (ks == null) {
            ks = new HashMap.KeySet();
            keySet = ks;
        }
        return ks;
    }

    final class KeySet extends AbstractSet<K> {
        public final int size() {
            return size;
        }

        public final void clear() {
            HashMap.this.clear();
        }

        public final Iterator<K> iterator() {
            return new HashMap.KeyIterator();
        }

        public final boolean contains(Object o) {
            return containsKey(o);
        }

        public final boolean remove(Object key) {
            return removeNode(hash(key), key, null, false, true) != null;
        }

        public final Spliterator<K> spliterator() {
            return new HashMap.KeySpliterator<>(HashMap.this, 0, -1, 0, 0);
        }

        public final void forEach(Consumer<? super K> action) {
            HashMap.Node<K, V>[] tab;
            if (action == null)
                throw new NullPointerException();
            if (size > 0 && (tab = table) != null) {
                int mc = modCount;
                for (int i = 0; i < tab.length; ++i) {
                    for (HashMap.Node<K, V> e = tab[i]; e != null; e = e.next)
                        action.accept(e.key);
                }
                if (modCount != mc)
                    throw new ConcurrentModificationException();
            }
        }
    }

    /**
     * Returns a {@link Collection} view of the values contained in this map.
     * The collection is backed by the map, so changes to the map are
     * reflected in the collection, and vice-versa.  If the map is
     * modified while an iteration over the collection is in progress
     * (except through the iterator's own <tt>remove</tt> operation),
     * the results of the iteration are undefined.  The collection
     * supports element removal, which removes the corresponding
     * mapping from the map, via the <tt>Iterator.remove</tt>,
     * <tt>Collection.remove</tt>, <tt>removeAll</tt>,
     * <tt>retainAll</tt> and <tt>clear</tt> operations.  It does not
     * support the <tt>add</tt> or <tt>addAll</tt> operations.
     *
     * @return a view of the values contained in this map
     */
    public Collection<V> values() {
        Collection<V> vs = values;
        if (vs == null) {
            vs = new HashMap.Values();
            values = vs;
        }
        return vs;
    }

    final class Values extends AbstractCollection<V> {
        public final int size() {
            return size;
        }

        public final void clear() {
            HashMap.this.clear();
        }

        public final Iterator<V> iterator() {
            return new HashMap.ValueIterator();
        }

        public final boolean contains(Object o) {
            return containsValue(o);
        }

        public final Spliterator<V> spliterator() {
            return new HashMap.ValueSpliterator<>(HashMap.this, 0, -1, 0, 0);
        }

        public final void forEach(Consumer<? super V> action) {
            HashMap.Node<K, V>[] tab;
            if (action == null)
                throw new NullPointerException();
            if (size > 0 && (tab = table) != null) {
                int mc = modCount;
                for (int i = 0; i < tab.length; ++i) {
                    for (HashMap.Node<K, V> e = tab[i]; e != null; e = e.next)
                        action.accept(e.value);
                }
                if (modCount != mc)
                    throw new ConcurrentModificationException();
            }
        }
    }

    /**
     * Returns a {@link Set} view of the mappings contained in this map.
     * 返回这个Map里映射关系的一个 Set 视图.
     * The set is backed by the map, so changes to the map are
     * reflected in the set, and vice-versa.
     * 修改HashMap会影响这个 Set 视图, 同样的，在这个视图里修改, 也会影响HashMap
     * If the map is modified while an iteration over the set is in progress (except through the iterator's own
     * <tt>remove</tt> operation, or through the <tt>setValue</tt> operation on a map entry returned by the
     * iterator) the results of the iteration are undefined.
     * 如果通过对视图的迭代过程来修改HashMap(除了迭代器自身的remove方法,或者对迭代器返回的Entry的setValue操作),
     * 修改的结果是不确定的。
     * The set supports element removal, which removes the corresponding mapping from the map
     * 这个 set 视图, 支持元素的删除, 也会从 HashMap 中删除对应的元素.
     * , via the <tt>Iterator.remove</tt>, <tt>Set.remove</tt>, <tt>removeAll</tt>, <tt>retainAll</tt> and
     * <tt>clear</tt> operations.
     * 支持 Iterator.remove, Set.remove,  Set.removeAll, Set.retainAll, Set.clear 等操作
     * It does not support the <tt>add</tt> or <tt>addAll</tt> operations.
     * 但是它不支持 add, addAll 操作
     *
     * @return a set view of the mappings contained in this map
     */
    public Set<Map.Entry<K, V>> entrySet() {
        Set<Map.Entry<K, V>> es;
        return (es = entrySet) == null ? (entrySet = new HashMap.EntrySet()) : es;
    }

    /**
     * EntrySet 继承的是 AbstractSet,
     * 泛型传入
     */
    final class EntrySet extends AbstractSet<Map.Entry<K, V>> {

        public final int size() {
            return size;
        }

        public final void clear() {
            HashMap.this.clear();
        }

        /**
         * 直接调用 HashMap 迭代器
         *
         * @return
         */
        public final Iterator<Map.Entry<K, V>> iterator() {
            return new HashMap.EntryIterator();
        }

        /**
         * 判断是否存在,
         * 直接调用的是hashMap的getNode方法
         */
        public final boolean contains(Object o) {
            if (!(o instanceof Map.Entry))
                return false;
            Map.Entry<?, ?> e = (Map.Entry<?, ?>) o;
            Object key = e.getKey();
            HashMap.Node<K, V> candidate = getNode(hash(key), key);
            return candidate != null && candidate.equals(e);
        }

        /**
         * 直接调用的 HashMap的删除方法.
         * 从这个方法中也可以看出来, 我们在 set试图中删除元素是会直接影响Hashmap的。
         */
        public final boolean remove(Object o) {
            if (o instanceof Map.Entry) {
                Map.Entry<?, ?> e = (Map.Entry<?, ?>) o;
                Object key = e.getKey();
                Object value = e.getValue();
                return removeNode(hash(key), key, value, true, true) != null;
            }
            return false;
        }

        /**
         * entrySet 的分隔器
         */
        public final Spliterator<Map.Entry<K, V>> spliterator() {
            return new HashMap.EntrySpliterator<>(HashMap.this, 0, -1, 0, 0);
        }

        /**
         * entrySet 的迭代器
         */
        public final void forEach(Consumer<? super Map.Entry<K, V>> action) {
            HashMap.Node<K, V>[] tab;
            if (action == null)
                throw new NullPointerException();
            if (size > 0 && (tab = table) != null) {
                int mc = modCount;
                for (int i = 0; i < tab.length; ++i) {
                    for (HashMap.Node<K, V> e = tab[i]; e != null; e = e.next)
                        action.accept(e);
                }
                if (modCount != mc)
                    throw new ConcurrentModificationException();
            }
        }
    }

    // Overrides of JDK8 Map extension methods

    @Override
    public V getOrDefault(Object key, V defaultValue) {
        HashMap.Node<K, V> e;
        return (e = getNode(hash(key), key)) == null ? defaultValue : e.value;
    }

    @Override
    public V putIfAbsent(K key, V value) {
        return putVal(hash(key), key, value, true, true);
    }

    @Override
    public boolean remove(Object key, Object value) {
        return removeNode(hash(key), key, value, true, true) != null;
    }

    @Override
    public boolean replace(K key, V oldValue, V newValue) {
        HashMap.Node<K, V> e;
        V v;
        if ((e = getNode(hash(key), key)) != null &&
                ((v = e.value) == oldValue || (v != null && v.equals(oldValue)))) {
            e.value = newValue;
            afterNodeAccess(e);
            return true;
        }
        return false;
    }

    @Override
    public V replace(K key, V value) {
        HashMap.Node<K, V> e;
        if ((e = getNode(hash(key), key)) != null) {
            V oldValue = e.value;
            e.value = value;
            afterNodeAccess(e);
            return oldValue;
        }
        return null;
    }

    @Override
    public V computeIfAbsent(K key,
                             Function<? super K, ? extends V> mappingFunction) {
        if (mappingFunction == null)
            throw new NullPointerException();
        int hash = hash(key);
        HashMap.Node<K, V>[] tab;
        HashMap.Node<K, V> first;
        int n, i;
        int binCount = 0;
        HashMap.TreeNode<K, V> t = null;
        HashMap.Node<K, V> old = null;
        if (size > threshold || (tab = table) == null ||
                (n = tab.length) == 0)
            n = (tab = resize()).length;
        if ((first = tab[i = (n - 1) & hash]) != null) {
            if (first instanceof HashMap.TreeNode)
                old = (t = (HashMap.TreeNode<K, V>) first).getTreeNode(hash, key);
            else {
                HashMap.Node<K, V> e = first;
                K k;
                do {
                    if (e.hash == hash &&
                            ((k = e.key) == key || (key != null && key.equals(k)))) {
                        old = e;
                        break;
                    }
                    ++binCount;
                } while ((e = e.next) != null);
            }
            V oldValue;
            if (old != null && (oldValue = old.value) != null) {
                afterNodeAccess(old);
                return oldValue;
            }
        }
        V v = mappingFunction.apply(key);
        if (v == null) {
            return null;
        } else if (old != null) {
            old.value = v;
            afterNodeAccess(old);
            return v;
        } else if (t != null)
            t.putTreeVal(this, tab, hash, key, v);
        else {
            tab[i] = newNode(hash, key, v, first);
            if (binCount >= TREEIFY_THRESHOLD - 1)
                treeifyBin(tab, hash);
        }
        ++modCount;
        ++size;
        afterNodeInsertion(true);
        return v;
    }

    public V computeIfPresent(K key,
                              BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
        if (remappingFunction == null)
            throw new NullPointerException();
        HashMap.Node<K, V> e;
        V oldValue;
        int hash = hash(key);
        if ((e = getNode(hash, key)) != null &&
                (oldValue = e.value) != null) {
            V v = remappingFunction.apply(key, oldValue);
            if (v != null) {
                e.value = v;
                afterNodeAccess(e);
                return v;
            } else
                removeNode(hash, key, null, false, true);
        }
        return null;
    }

    @Override
    public V compute(K key,
                     BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
        if (remappingFunction == null)
            throw new NullPointerException();
        int hash = hash(key);
        HashMap.Node<K, V>[] tab;
        HashMap.Node<K, V> first;
        int n, i;
        int binCount = 0;
        HashMap.TreeNode<K, V> t = null;
        HashMap.Node<K, V> old = null;
        if (size > threshold || (tab = table) == null ||
                (n = tab.length) == 0)
            n = (tab = resize()).length;
        if ((first = tab[i = (n - 1) & hash]) != null) {
            if (first instanceof HashMap.TreeNode)
                old = (t = (HashMap.TreeNode<K, V>) first).getTreeNode(hash, key);
            else {
                HashMap.Node<K, V> e = first;
                K k;
                do {
                    if (e.hash == hash &&
                            ((k = e.key) == key || (key != null && key.equals(k)))) {
                        old = e;
                        break;
                    }
                    ++binCount;
                } while ((e = e.next) != null);
            }
        }
        V oldValue = (old == null) ? null : old.value;
        V v = remappingFunction.apply(key, oldValue);
        if (old != null) {
            if (v != null) {
                old.value = v;
                afterNodeAccess(old);
            } else
                removeNode(hash, key, null, false, true);
        } else if (v != null) {
            if (t != null)
                t.putTreeVal(this, tab, hash, key, v);
            else {
                tab[i] = newNode(hash, key, v, first);
                if (binCount >= TREEIFY_THRESHOLD - 1)
                    treeifyBin(tab, hash);
            }
            ++modCount;
            ++size;
            afterNodeInsertion(true);
        }
        return v;
    }

    @Override
    public V merge(K key, V value,
                   BiFunction<? super V, ? super V, ? extends V> remappingFunction) {
        if (value == null)
            throw new NullPointerException();
        if (remappingFunction == null)
            throw new NullPointerException();
        int hash = hash(key);
        HashMap.Node<K, V>[] tab;
        HashMap.Node<K, V> first;
        int n, i;
        int binCount = 0;
        HashMap.TreeNode<K, V> t = null;
        HashMap.Node<K, V> old = null;
        if (size > threshold || (tab = table) == null ||
                (n = tab.length) == 0)
            n = (tab = resize()).length;
        if ((first = tab[i = (n - 1) & hash]) != null) {
            if (first instanceof HashMap.TreeNode)
                old = (t = (HashMap.TreeNode<K, V>) first).getTreeNode(hash, key);
            else {
                HashMap.Node<K, V> e = first;
                K k;
                do {
                    if (e.hash == hash &&
                            ((k = e.key) == key || (key != null && key.equals(k)))) {
                        old = e;
                        break;
                    }
                    ++binCount;
                } while ((e = e.next) != null);
            }
        }
        if (old != null) {
            V v;
            if (old.value != null)
                v = remappingFunction.apply(old.value, value);
            else
                v = value;
            if (v != null) {
                old.value = v;
                afterNodeAccess(old);
            } else
                removeNode(hash, key, null, false, true);
            return v;
        }
        if (value != null) {
            if (t != null)
                t.putTreeVal(this, tab, hash, key, value);
            else {
                tab[i] = newNode(hash, key, value, first);
                if (binCount >= TREEIFY_THRESHOLD - 1)
                    treeifyBin(tab, hash);
            }
            ++modCount;
            ++size;
            afterNodeInsertion(true);
        }
        return value;
    }

    @Override
    public void forEach(BiConsumer<? super K, ? super V> action) {
        HashMap.Node<K, V>[] tab;
        if (action == null)
            throw new NullPointerException();
        if (size > 0 && (tab = table) != null) {
            int mc = modCount;
            for (int i = 0; i < tab.length; ++i) {
                for (HashMap.Node<K, V> e = tab[i]; e != null; e = e.next)
                    action.accept(e.key, e.value);
                action.accept(e.key, e.value);
            }
            if (modCount != mc)
                throw new ConcurrentModificationException();
        }
    }

    @Override
    public void replaceAll(BiFunction<? super K, ? super V, ? extends V> function) {
        HashMap.Node<K, V>[] tab;
        if (function == null)
            throw new NullPointerException();
        if (size > 0 && (tab = table) != null) {
            int mc = modCount;
            for (int i = 0; i < tab.length; ++i) {
                for (HashMap.Node<K, V> e = tab[i]; e != null; e = e.next) {
                    e.value = function.apply(e.key, e.value);
                }
            }
            if (modCount != mc)
                throw new ConcurrentModificationException();
        }
    }

    /* ------------------------------------------------------------ */
    // Cloning and serialization

    /**
     * Returns a shallow copy of this <tt>HashMap</tt> instance: the keys and
     * values themselves are not cloned.
     *
     * @return a shallow copy of this map
     */
    @SuppressWarnings("unchecked")
    @Override
    public Object clone() {
        HashMap<K, V> result;
        try {
            result = (HashMap<K, V>) super.clone();
        } catch (CloneNotSupportedException e) {
            // this shouldn't happen, since we are Cloneable
            throw new InternalError(e);
        }
        result.reinitialize();
        result.putMapEntries(this, false);
        return result;
    }

    // These methods are also used when serializing HashSets
    final float loadFactor() {
        return loadFactor;
    }

    final int capacity() {
        return (table != null) ? table.length :
                (threshold > 0) ? threshold :
                        DEFAULT_INITIAL_CAPACITY;
    }

    /**
     * Save the state of the <tt>HashMap</tt> instance to a stream (i.e.,
     * serialize it).
     *
     * @serialData The <i>capacity</i> of the HashMap (the length of the
     * bucket array) is emitted (int), followed by the
     * <i>size</i> (an int, the number of key-value
     * mappings), followed by the key (Object) and value (Object)
     * for each key-value mapping.  The key-value mappings are
     * emitted in no particular order.
     */
    private void writeObject(java.io.ObjectOutputStream s)
            throws IOException {
        int buckets = capacity();
        // Write out the threshold, loadfactor, and any hidden stuff
        s.defaultWriteObject();
        s.writeInt(buckets);
        s.writeInt(size);
        internalWriteEntries(s);
    }

    /**
     * Reconstitutes this map from a stream (that is, deserializes it).
     *
     * @param s the stream
     * @throws ClassNotFoundException if the class of a serialized object
     *                                could not be found
     * @throws IOException            if an I/O error occurs
     */
    private void readObject(java.io.ObjectInputStream s)
            throws IOException, ClassNotFoundException {
        // Read in the threshold (ignored), loadfactor, and any hidden stuff
        s.defaultReadObject();
        reinitialize();
        if (loadFactor <= 0 || Float.isNaN(loadFactor))
            throw new InvalidObjectException("Illegal load factor: " +
                    loadFactor);
        s.readInt();                // Read and ignore number of buckets
        int mappings = s.readInt(); // Read number of mappings (size)
        if (mappings < 0)
            throw new InvalidObjectException("Illegal mappings count: " +
                    mappings);
        else if (mappings > 0) { // (if zero, use defaults)
            // Size the table using given load factor only if within
            // range of 0.25...4.0
            float lf = Math.min(Math.max(0.25f, loadFactor), 4.0f);
            float fc = (float) mappings / lf + 1.0f;
            int cap = ((fc < DEFAULT_INITIAL_CAPACITY) ?
                    DEFAULT_INITIAL_CAPACITY :
                    (fc >= MAXIMUM_CAPACITY) ?
                            MAXIMUM_CAPACITY :
                            tableSizeFor((int) fc));
            float ft = (float) cap * lf;
            threshold = ((cap < MAXIMUM_CAPACITY && ft < MAXIMUM_CAPACITY) ?
                    (int) ft : Integer.MAX_VALUE);

            // Check Map.Entry[].class since it's the nearest public type to
            // what we're actually creating.
            SharedSecrets.getJavaOISAccess().checkArray(s, Map.Entry[].class, cap);
            @SuppressWarnings({"rawtypes", "unchecked"})
            HashMap.Node<K, V>[] tab = (HashMap.Node<K, V>[]) new HashMap.Node[cap];
            table = tab;

            // Read the keys and values, and put the mappings in the HashMap
            for (int i = 0; i < mappings; i++) {
                @SuppressWarnings("unchecked")
                K key = (K) s.readObject();
                @SuppressWarnings("unchecked")
                V value = (V) s.readObject();
                putVal(hash(key), key, value, false, false);
            }
        }
    }

    /* ------------------------------------------------------------ */
    // iterators

    abstract class HashIterator {
        HashMap.Node<K, V> next;        // next entry to return
        HashMap.Node<K, V> current;     // current entry
        int expectedModCount;  // for fast-fail
        int index;             // current slot

        HashIterator() {
            expectedModCount = modCount;
            HashMap.Node<K, V>[] t = table;
            current = next = null;
            index = 0;
            if (t != null && size > 0) { // advance to first entry
                do {
                } while (index < t.length && (next = t[index++]) == null);
            }
        }

        public final boolean hasNext() {
            return next != null;
        }

        final HashMap.Node<K, V> nextNode() {
            HashMap.Node<K, V>[] t;
            HashMap.Node<K, V> e = next;
            if (modCount != expectedModCount)
                throw new ConcurrentModificationException();
            if (e == null)
                throw new NoSuchElementException();
            if ((next = (current = e).next) == null && (t = table) != null) {
                do {
                } while (index < t.length && (next = t[index++]) == null);
            }
            return e;
        }

        public final void remove() {
            HashMap.Node<K, V> p = current;
            if (p == null)
                throw new IllegalStateException();
            if (modCount != expectedModCount)
                throw new ConcurrentModificationException();
            current = null;
            K key = p.key;
            removeNode(hash(key), key, null, false, false);
            expectedModCount = modCount;
        }
    }

    final class KeyIterator extends HashMap.HashIterator
            implements Iterator<K> {
        public final K next() {
            return nextNode().key;
        }
    }

    final class ValueIterator extends HashMap.HashIterator
            implements Iterator<V> {
        public final V next() {
            return nextNode().value;
        }
    }

    final class EntryIterator extends HashMap.HashIterator
            implements Iterator<Map.Entry<K, V>> {
        public final Map.Entry<K, V> next() {
            return nextNode();
        }
    }

    /* ------------------------------------------------------------ */
    // spliterators

    static class HashMapSpliterator<K, V> {
        final HashMap<K, V> map;
        HashMap.Node<K, V> current;          // current node
        int index;                  // current index, modified on advance/split
        int fence;                  // one past last index
        int est;                    // size estimate
        int expectedModCount;       // for comodification checks

        HashMapSpliterator(HashMap<K, V> m, int origin,
                           int fence, int est,
                           int expectedModCount) {
            this.map = m;
            this.index = origin;
            this.fence = fence;
            this.est = est;
            this.expectedModCount = expectedModCount;
        }

        final int getFence() { // initialize fence and size on first use
            int hi;
            if ((hi = fence) < 0) {
                HashMap<K, V> m = map;
                est = m.size;
                expectedModCount = m.modCount;
                HashMap.Node<K, V>[] tab = m.table;
                hi = fence = (tab == null) ? 0 : tab.length;
            }
            return hi;
        }

        public final long estimateSize() {
            getFence(); // force init
            return (long) est;
        }
    }

    static final class KeySpliterator<K, V>
            extends HashMap.HashMapSpliterator<K, V>
            implements Spliterator<K> {
        KeySpliterator(HashMap<K, V> m, int origin, int fence, int est,
                       int expectedModCount) {
            super(m, origin, fence, est, expectedModCount);
        }

        public HashMap.KeySpliterator<K, V> trySplit() {
            int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;
            return (lo >= mid || current != null) ? null :
                    new HashMap.KeySpliterator<>(map, lo, index = mid, est >>>= 1,
                            expectedModCount);
        }

        public void forEachRemaining(Consumer<? super K> action) {
            int i, hi, mc;
            if (action == null)
                throw new NullPointerException();
            HashMap<K, V> m = map;
            HashMap.Node<K, V>[] tab = m.table;
            if ((hi = fence) < 0) {
                mc = expectedModCount = m.modCount;
                hi = fence = (tab == null) ? 0 : tab.length;
            } else
                mc = expectedModCount;
            if (tab != null && tab.length >= hi &&
                    (i = index) >= 0 && (i < (index = hi) || current != null)) {
                HashMap.Node<K, V> p = current;
                current = null;
                do {
                    if (p == null)
                        p = tab[i++];
                    else {
                        action.accept(p.key);
                        p = p.next;
                    }
                } while (p != null || i < hi);
                if (m.modCount != mc)
                    throw new ConcurrentModificationException();
            }
        }

        public boolean tryAdvance(Consumer<? super K> action) {
            int hi;
            if (action == null)
                throw new NullPointerException();
            HashMap.Node<K, V>[] tab = map.table;
            if (tab != null && tab.length >= (hi = getFence()) && index >= 0) {
                while (current != null || index < hi) {
                    if (current == null)
                        current = tab[index++];
                    else {
                        K k = current.key;
                        current = current.next;
                        action.accept(k);
                        if (map.modCount != expectedModCount)
                            throw new ConcurrentModificationException();
                        return true;
                    }
                }
            }
            return false;
        }

        public int characteristics() {
            return (fence < 0 || est == map.size ? Spliterator.SIZED : 0) |
                    Spliterator.DISTINCT;
        }
    }

    static final class ValueSpliterator<K, V>
            extends HashMap.HashMapSpliterator<K, V>
            implements Spliterator<V> {
        ValueSpliterator(HashMap<K, V> m, int origin, int fence, int est,
                         int expectedModCount) {
            super(m, origin, fence, est, expectedModCount);
        }

        public HashMap.ValueSpliterator<K, V> trySplit() {
            int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;
            return (lo >= mid || current != null) ? null :
                    new HashMap.ValueSpliterator<>(map, lo, index = mid, est >>>= 1,
                            expectedModCount);
        }

        public void forEachRemaining(Consumer<? super V> action) {
            int i, hi, mc;
            if (action == null)
                throw new NullPointerException();
            HashMap<K, V> m = map;
            HashMap.Node<K, V>[] tab = m.table;
            if ((hi = fence) < 0) {
                mc = expectedModCount = m.modCount;
                hi = fence = (tab == null) ? 0 : tab.length;
            } else
                mc = expectedModCount;
            if (tab != null && tab.length >= hi &&
                    (i = index) >= 0 && (i < (index = hi) || current != null)) {
                HashMap.Node<K, V> p = current;
                current = null;
                do {
                    if (p == null)
                        p = tab[i++];
                    else {
                        action.accept(p.value);
                        p = p.next;
                    }
                } while (p != null || i < hi);
                if (m.modCount != mc)
                    throw new ConcurrentModificationException();
            }
        }

        public boolean tryAdvance(Consumer<? super V> action) {
            int hi;
            if (action == null)
                throw new NullPointerException();
            HashMap.Node<K, V>[] tab = map.table;
            if (tab != null && tab.length >= (hi = getFence()) && index >= 0) {
                while (current != null || index < hi) {
                    if (current == null)
                        current = tab[index++];
                    else {
                        V v = current.value;
                        current = current.next;
                        action.accept(v);
                        if (map.modCount != expectedModCount)
                            throw new ConcurrentModificationException();
                        return true;
                    }
                }
            }
            return false;
        }

        public int characteristics() {
            return (fence < 0 || est == map.size ? Spliterator.SIZED : 0);
        }
    }

    static final class EntrySpliterator<K, V>
            extends HashMap.HashMapSpliterator<K, V>
            implements Spliterator<Map.Entry<K, V>> {

        EntrySpliterator(HashMap<K, V> m, int origin, int fence, int est,
                         int expectedModCount) {
            super(m, origin, fence, est, expectedModCount);
        }

        public HashMap.EntrySpliterator<K, V> trySplit() {
            int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;
            return (lo >= mid || current != null) ? null :
                    new HashMap.EntrySpliterator<>(map, lo, index = mid, est >>>= 1,
                            expectedModCount);
        }

        public void forEachRemaining(Consumer<? super Map.Entry<K, V>> action) {
            int i, hi, mc;
            if (action == null)
                throw new NullPointerException();
            HashMap<K, V> m = map;
            HashMap.Node<K, V>[] tab = m.table;
            if ((hi = fence) < 0) {
                mc = expectedModCount = m.modCount;
                hi = fence = (tab == null) ? 0 : tab.length;
            } else
                mc = expectedModCount;
            if (tab != null && tab.length >= hi &&
                    (i = index) >= 0 && (i < (index = hi) || current != null)) {
                HashMap.Node<K, V> p = current;
                current = null;
                do {
                    if (p == null)
                        p = tab[i++];
                    else {
                        action.accept(p);
                        p = p.next;
                    }
                } while (p != null || i < hi);
                if (m.modCount != mc)
                    throw new ConcurrentModificationException();
            }
        }

        public boolean tryAdvance(Consumer<? super Map.Entry<K, V>> action) {
            int hi;
            if (action == null)
                throw new NullPointerException();
            HashMap.Node<K, V>[] tab = map.table;
            if (tab != null && tab.length >= (hi = getFence()) && index >= 0) {
                while (current != null || index < hi) {
                    if (current == null)
                        current = tab[index++];
                    else {
                        HashMap.Node<K, V> e = current;
                        current = current.next;
                        action.accept(e);
                        if (map.modCount != expectedModCount)
                            throw new ConcurrentModificationException();
                        return true;
                    }
                }
            }
            return false;
        }

        public int characteristics() {
            return (fence < 0 || est == map.size ? Spliterator.SIZED : 0) |
                    Spliterator.DISTINCT;
        }
    }

    /* ------------------------------------------------------------ */
    // LinkedHashMap support


    /*
     * The following package-protected methods are designed to be
     * overridden by LinkedHashMap, but not by any other subclass.
     * Nearly all other internal methods are also package-protected
     * but are declared final, so can be used by LinkedHashMap, view
     * classes, and HashSet.
     */

    // Create a regular (non-tree) node
    HashMap.Node<K, V> newNode(int hash, K key, V value, HashMap.Node<K, V> next) {
        return new HashMap.Node<>(hash, key, value, next);
    }

    // For conversion from TreeNodes to plain nodes
    HashMap.Node<K, V> replacementNode(HashMap.Node<K, V> p, HashMap.Node<K, V> next) {
        return new HashMap.Node<>(p.hash, p.key, p.value, next);
    }

    // Create a tree bin node
    HashMap.TreeNode<K, V> newTreeNode(int hash, K key, V value, HashMap.Node<K, V> next) {
        return new HashMap.TreeNode<>(hash, key, value, next);
    }

    // For treeifyBin
    HashMap.TreeNode<K, V> replacementTreeNode(HashMap.Node<K, V> p, HashMap.Node<K, V> next) {
        return new HashMap.TreeNode<>(p.hash, p.key, p.value, next);
    }

    /**
     * Reset to initial default state.  Called by clone and readObject.
     */
    void reinitialize() {
        table = null;
        entrySet = null;
        keySet = null;
        values = null;
        modCount = 0;
        threshold = 0;
        size = 0;
    }

    // Callbacks to allow LinkedHashMap post-actions
    void afterNodeAccess(HashMap.Node<K, V> p) {
    }

    void afterNodeInsertion(boolean evict) {
    }

    void afterNodeRemoval(HashMap.Node<K, V> p) {
    }

    // Called only from writeObject, to ensure compatible ordering.
    void internalWriteEntries(java.io.ObjectOutputStream s) throws IOException {
        HashMap.Node<K, V>[] tab;
        if (size > 0 && (tab = table) != null) {
            for (int i = 0; i < tab.length; ++i) {
                for (HashMap.Node<K, V> e = tab[i]; e != null; e = e.next) {
                    s.writeObject(e.key);
                    s.writeObject(e.value);
                }
            }
        }
    }

    /* ------------------------------------------------------------ */
    // Tree bins

    /**
     * Entry for Tree bins. Extends LinkedHashMap.Entry (which in turn
     * extends Node) so can be used as extension of either regular or
     * linked node.
     */
    static final class TreeNode<K, V> extends LinkedHashMap.Entry<K, V> {
        HashMap.TreeNode<K, V> parent;  // red-black tree links
        HashMap.TreeNode<K, V> left;
        HashMap.TreeNode<K, V> right;
        HashMap.TreeNode<K, V> prev;    // needed to unlink next upon deletion
        boolean red;

        TreeNode(int hash, K key, V val, HashMap.Node<K, V> next) {
            super(hash, key, val, next);
        }

        /**
         * Returns root of tree containing this node.
         */
        final HashMap.TreeNode<K, V> root() {
            for (HashMap.TreeNode<K, V> r = this, p; ; ) {
                if ((p = r.parent) == null)
                    return r;
                r = p;
            }
        }

        /**
         * Ensures that the given root is the first node of its bin.
         * // 确保红黑树的根节点是桶的第一个节点。
         * 为什么不直接将tab[index]==root? 是为了树重新转换成链表的时候使用的。
         */
        static <K, V> void moveRootToFront(HashMap.Node<K, V>[] tab, HashMap.TreeNode<K, V> root) {
            int n;
            if (root != null && tab != null && (n = tab.length) > 0) {
                int index = (n - 1) & root.hash;
                HashMap.TreeNode<K, V> first = (HashMap.TreeNode<K, V>) tab[index];
                // 判断第一个节点和root是不是相等的,判断的是地址。
                if (root != first) {
                    HashMap.Node<K, V> rn;
                    tab[index] = root;
                    HashMap.TreeNode<K, V> rp = root.prev;

                    if ((rn = root.next) != null) {
                        // root的后一个节点的指向前的指针指向root的前一个节点。
                        ((HashMap.TreeNode<K, V>) rn).prev = rp;
                    }

                    if (rp != null) {
                        // root的前一个节点的指向后的指针指向root的后一个节点。
                        rp.next = rn;
                    }

                    if (first != null) {
                        // 第一个元素的前指针指向root
                        first.prev = root;
                    }
                    // root的后向指针指向first
                    root.next = first;
                    // root的前向指针置为null
                    root.prev = null;
                }
                // 递归不变检查
                assert checkInvariants(root);
            }
        }

        /**
         * Finds the node starting at root p with the given hash and key.
         * The kc argument caches comparableClassFor(key) upon first use
         * comparing keys.
         * 红黑树的查找，
         * 从根节点开始, 直接判断hash值即可。
         * 如果hash值，小于当前节点hash值，对其左子树进行遍历。
         * 反之，对右子树进行遍历。
         * key值相等直接返回.
         * 注意: 这里有左右子树为null的情况。
         * 对接k和k的类进行比较,判断其要遍历的树为左子树或者右子树。
         */
        final HashMap.TreeNode<K, V> find(int h, Object k, Class<?> kc) {
            HashMap.TreeNode<K, V> p = this;
            do {
                int ph, dir;
                K pk;
                HashMap.TreeNode<K, V> pl = p.left, pr = p.right, q;
                if ((ph = p.hash) > h)
                    p = pl;
                else if (ph < h)
                    p = pr;
                else if ((pk = p.key) == k || (k != null && k.equals(pk)))
                    return p;
                else if (pl == null)
                    p = pr;
                else if (pr == null)
                    p = pl;
                else if ((kc != null ||
                        (kc = comparableClassFor(k)) != null) &&
                        (dir = compareComparables(kc, k, pk)) != 0) {
                    p = (dir < 0) ? pl : pr;
                } else if ((q = pr.find(h, k, kc)) != null)
                    return q;
                else
                    p = pl;
            } while (p != null);
            return null;
        }

        /**
         * Calls find for root node.
         */
        final HashMap.TreeNode<K, V> getTreeNode(int h, Object k) {
            // 如果当前节点不是根节点,找到根节点，调用find进行查找,
            // 如果是根节点，调用find进行查找。
            return ((parent != null) ? root() : this).find(h, k, null);
        }

        /**
         * Tie-breaking utility for ordering insertions when equal
         * hashCodes and non-comparable. We don't require a total
         * order, just a consistent insertion rule to maintain
         * equivalence across rebalancings. Tie-breaking further than
         * necessary simplifies testing a bit.
         */
        static int tieBreakOrder(Object a, Object b) {
            int d;
            if (a == null || b == null ||
                    (d = a.getClass().getName().compareTo(b.getClass().getName())) == 0) {
                d = (System.identityHashCode(a) <= System.identityHashCode(b) ?
                        -1 : 1);
            }
            return d;
        }

        /**
         * Forms tree of the nodes linked from this node.
         * 把该节点连接的所有节点组成一棵树。(树化的过程)
         */
        final void treeify(HashMap.Node<K, V>[] tab) {
            // 该棵树的根节点。
            HashMap.TreeNode<K, V> root = null;
            // x是遍历的每个节点。
            for (HashMap.TreeNode<K, V> x = this, next; x != null; x = next) {
                // 存下下一个节点。(指向下一个节点的指针)
                next = (HashMap.TreeNode<K, V>) x.next;
                x.left = x.right = null;
                // 对根节点就行赋值(无父节点,黑色)
                if (root == null) {
                    x.parent = null;
                    x.red = false;
                    root = x;
                } else {
                    K k = x.key;
                    int h = x.hash;
                    Class<?> kc = null;

                    for (HashMap.TreeNode<K, V> p = root; ; ) {
                        // dir,负值和0为左子树，正值为右子树。
                        int dir, ph;
                        K pk = p.key;

                        /*************判断节点在左子树还是右子树 -start***************/
                        // h为当前节点的hash值。
                        // p是父节点, ph是父节点的hash值。
                        if ((ph = p.hash) > h) {
                            // 放在左子树
                            dir = -1;
                        } else if (ph < h) {
                            // 放在又子树
                            dir = 1;
                        }
                        //如果当前节点和父节点的hash值相等:
                        //如果节点的key实现了Comparable, 或者 父节点和当前节点的key为一个。
                        else if ((kc == null && (kc = comparableClassFor(k)) == null) ||
                                (dir = compareComparables(kc, k, pk)) == 0) {
                            // k是当前节点的key，pk是父节点的key
                            // 根据hashMap定义的规则,判断当前节点应该位于左子树还是右子树。
                            dir = tieBreakOrder(k, pk);
                        }
                        /*************判断节点在左子树还是右子树 -end***************/

                        HashMap.TreeNode<K, V> xp = p;
                        // p==null,代表着遍历到了叶子节点。
                        if ((p = (dir <= 0) ? p.left : p.right) == null) {
                            // xp是当前节点的父节点。
                            x.parent = xp;
                            if (dir <= 0) {
                                xp.left = x;
                            } else {
                                xp.right = x;
                            }
                            // 平衡插入的红黑树(完成插入后，红黑树的性质可能被破坏,这里进行重新平衡)
                            root = balanceInsertion(root, x);
                            break;
                        }
                    }

                }
            }
            //确保红黑树的根节点是桶的第一个节点。
            moveRootToFront(tab, root);
        }

        /**
         * Returns a list of non-TreeNodes replacing those linked from
         * this node.
         */
        final HashMap.Node<K, V> untreeify(HashMap<K, V> map) {
            HashMap.Node<K, V> hd = null, tl = null;
            for (HashMap.Node<K, V> q = this; q != null; q = q.next) {
                HashMap.Node<K, V> p = map.replacementNode(q, null);
                if (tl == null)
                    hd = p;
                else
                    tl.next = p;
                tl = p;
            }
            return hd;
        }

        /**
         * Tree version of putVal.
         */
        final HashMap.TreeNode<K, V> putTreeVal(HashMap<K, V> map, HashMap.Node<K, V>[] tab,
                                                int h, K k, V v) {
            Class<?> kc = null;
            boolean searched = false;
            HashMap.TreeNode<K, V> root = (parent != null) ? root() : this;
            for (HashMap.TreeNode<K, V> p = root; ; ) {
                int dir, ph;
                K pk;
                /***************判断 左右子树 ******************/
                if ((ph = p.hash) > h) {
                    dir = -1;
                } else if (ph < h) {
                    dir = 1;
                } else if ((pk = p.key) == k || (k != null && k.equals(pk))) {
                    return p;
                } else if ((kc == null &&
                        (kc = comparableClassFor(k)) == null) ||
                        (dir = compareComparables(kc, k, pk)) == 0) {
                    if (!searched) {
                        HashMap.TreeNode<K, V> q, ch;
                        searched = true;
                        if (((ch = p.left) != null &&
                                (q = ch.find(h, k, kc)) != null) ||
                                ((ch = p.right) != null &&
                                        (q = ch.find(h, k, kc)) != null))
                            return q;
                    }
                    dir = tieBreakOrder(k, pk);
                }

                /***************判断 左右子树 end******************/

                HashMap.TreeNode<K, V> xp = p;
                if ((p = (dir <= 0) ? p.left : p.right) == null) {
                    HashMap.Node<K, V> xpn = xp.next;
                    HashMap.TreeNode<K, V> x = map.newTreeNode(h, k, v, xpn);
                    if (dir <= 0)
                        xp.left = x;
                    else
                        xp.right = x;
                    xp.next = x;
                    x.parent = x.prev = xp;
                    if (xpn != null)
                        ((HashMap.TreeNode<K, V>) xpn).prev = x;
                    // 这里比较重要了，不过我们在treeify中已经说过了。
                    moveRootToFront(tab, balanceInsertion(root, x));
                    return null;
                }
            }
        }

        /**
         * Removes the given node, that must be present before this call.
         * This is messier than typical red-black deletion code because we
         * cannot swap the contents of an interior node with a leaf
         * successor that is pinned by "next" pointers that are accessible
         * independently during traversal. So instead we swap the tree
         * linkages. If the current tree appears to have too few nodes,
         * the bin is converted back to a plain bin. (The test triggers
         * somewhere between 2 and 6 nodes, depending on tree structure).
         */
        final void removeTreeNode(HashMap<K, V> map, HashMap.Node<K, V>[] tab,
                                  boolean movable) {
            // 注意了： 这个时候被删除的节点是谁??
            // 是this.
            int n;
            if (tab == null || (n = tab.length) == 0)
                return;

            // 找到对应的索引(确定对应桶的位置), n 是当前表的长度
            int index = (n - 1) & hash;
            // first: 第一个树节点(当前为父节点),root，父节点。rl:
            HashMap.TreeNode<K, V> first = (HashMap.TreeNode<K, V>) tab[index], root = first, rl;
            // succ:下一个节点(链表的指向)。pred, 前一个节点。
            HashMap.TreeNode<K, V> succ = (HashMap.TreeNode<K, V>) next, pred = prev;

            if (pred == null) {
                // 前一个为空时，即当前接是父节点:(被删除的节点是根节点)
                tab[index] = first = succ;
            } else {
                // 否测,前一个节点的下一个执行当前节点的下一个。(意会)
                pred.next = succ;
            }

            if (succ != null) {
                // 当前节点的后节点不为null,后一个节点的前节点指向当前节点的前节点(意会)
                succ.prev = pred;
            }

            if (first == null) {
                // 如果删除当前节点，该桶变成了null的。就直接返回
                return;
            }

            if (root.parent != null) {
                // 重置table[index]处为树的根节点。
                root = root.root();
            }

            // PS: 说点没用， JDK除了部分ifelse不加括号之外，
            // 其实换行，还是用的挺多的，看起来也挺舒服的。
            // 值得借鉴
            if (root == null
                    || (movable && (root.right == null
                    || (rl = root.left) == null
                    || rl.left == null))) {
                // 树太小了，将树转换成链表
                tab[index] = first.untreeify(map);  // too small
                return;
            }
            /*****注意！！！ 此时已经从双向链表中删除了, 第一步走完。******/

            // p是待删除的节点，pl当前节点的左孩子节点,pr当前节点的右孩子节点,replacement,用来交换的节点。
            HashMap.TreeNode<K, V> p = this, pl = left, pr = right, replacement;
            if (pl != null && pr != null) {

                // s为右子树的最小的节点,sl为左子树(一下五行和源码略有不同)
                HashMap.TreeNode<K, V> s = pr, sl = s.left;
                while (sl != null) { // find successor
                    s = sl;
                    sl = s.left;
                }

                // 交换颜色
                boolean c = s.red;
                s.red = p.red;
                p.red = c; // swap colors

                // 交换节点连接
                HashMap.TreeNode<K, V> sr = s.right;
                HashMap.TreeNode<K, V> pp = p.parent;
                // pr是当前节点的右孩子节点
                // s是当前节点的右子树的最小的节点
                // p的右子树,只有s这一个节点
                if (s == pr) { // p was s's direct parent
                    p.parent = s;
                    s.right = p;
                } else { //
                    // sp： 最小节点的父节点
                    HashMap.TreeNode<K, V> sp = s.parent;
                    if ((p.parent = sp) != null) {
                        if (s == sp.left)
                            sp.left = p;
                        else
                            sp.right = p;
                    }
                    if ((s.right = pr) != null)
                        pr.parent = s;
                }
                // 置null孩子。
                p.left = null;
                if ((p.right = sr) != null) {
                    sr.parent = p;
                }
                if ((s.left = pl) != null) {
                    pl.parent = s;
                }
                if ((s.parent = pp) == null) {
                    root = s;
                } else if (p == pp.left) {
                    pp.left = s;
                } else {
                    pp.right = s;
                }

                // 确定要交换的节点完毕，交换节点
                if (sr != null) {
                    replacement = sr;
                } else {
                    replacement = p;
                }
            } else if (pl != null) {
                // 当前树只含有左子树
                replacement = pl;
            } else if (pr != null) {
                // 当前树，只有又子树
                replacement = pr;
            } else {
                // 无孩子
                replacement = p;
            }

            if (replacement != p) {
                HashMap.TreeNode<K, V> pp = replacement.parent = p.parent;
                if (pp == null)
                    root = replacement;
                else if (p == pp.left)
                    pp.left = replacement;
                else
                    pp.right = replacement;
                p.left = p.right = p.parent = null;
            }

            // 是否要进行重平衡树?
            HashMap.TreeNode<K, V> r = p.red ? root : balanceDeletion(root, replacement);

            // 在平衡后删除该节点
            if (replacement == p) {  // detach
                HashMap.TreeNode<K, V> pp = p.parent;
                p.parent = null;
                if (pp != null) {
                    if (p == pp.left)
                        pp.left = null;
                    else if (p == pp.right)
                        pp.right = null;
                }
            }
            // 参数moveable控制是否删除节点后确保树的根节点为链表的头节点
            if (movable) {
                // 将树根节点，移动到tab[index]指针处
                moveRootToFront(tab, r);
            }
        }

        /**
         * Splits nodes in a tree bin into lower and upper tree bins, or untreeifies if now too small.
         * 将原来树桶中的节点拆分为更低或更高的树桶,如果太小的话就转化成链表
         * Called only from resize;
         * 只被resize方法调用
         * see above discussion about split bits and indices.
         *
         * @param map   hash表
         * @param tab   表中的指定的桶的头结点(桶是一个棵树)
         * @param index 要拆分的hash表的节点
         * @param bit   the bit of hash to split on 要分裂的hash位
         */
        final void split(HashMap<K, V> map, HashMap.Node<K, V>[] tab, int index, int bit) {
            HashMap.TreeNode<K, V> b = this;
            // Relink into lo and hi lists, preserving order
            HashMap.TreeNode<K, V> loHead = null, loTail = null;
            HashMap.TreeNode<K, V> hiHead = null, hiTail = null;
            // lc代表的是原来的桶的元素的数量
            // hc代表新的桶中的元素的数量, 用来和UNTREEIFY_THRESHOLD比较决定是否要转换结构.
            int lc = 0, hc = 0;
            // 这里还是当做链表去处理，把桶内的元素重新散列。
            for (HashMap.TreeNode<K, V> e = b, next; e != null; e = next) {
                next = (HashMap.TreeNode<K, V>) e.next;
                e.next = null;
                if ((e.hash & bit) == 0) {
                    if ((e.prev = loTail) == null)
                        loHead = e;
                    else
                        loTail.next = e;
                    loTail = e;
                    ++lc;
                } else {
                    if ((e.prev = hiTail) == null)
                        hiHead = e;
                    else
                        hiTail.next = e;
                    hiTail = e;
                    ++hc;
                }
            }
            //  散列完后，判断原来的桶(lo)和新的桶中的元素个数
            //  然后决定转换为树还是链表
            if (loHead != null) {
                if (lc <= UNTREEIFY_THRESHOLD)
                    tab[index] = loHead.untreeify(map);
                else {
                    tab[index] = loHead;
                    if (hiHead != null) // (else is already treeified)
                        loHead.treeify(tab);
                }
            }
            if (hiHead != null) {
                if (hc <= UNTREEIFY_THRESHOLD)
                    tab[index + bit] = hiHead.untreeify(map);
                else {
                    tab[index + bit] = hiHead;
                    if (loHead != null)
                        hiHead.treeify(tab);
                }
            }
        }

        /* ------------------------------------------------------------ */
        // Red-black tree methods, all adapted from CLR

        static <K, V> HashMap.TreeNode<K, V> rotateLeft(HashMap.TreeNode<K, V> root,
                                                        HashMap.TreeNode<K, V> p) {
            HashMap.TreeNode<K, V> r, pp, rl;
            if (p != null && (r = p.right) != null) {
                if ((rl = p.right = r.left) != null) {
                    rl.parent = p;
                }

                if ((pp = r.parent = p.parent) == null) {
                    (root = r).red = false;
                } else if (pp.left == p) {
                    pp.left = r;
                } else {
                    pp.right = r;
                }
                r.left = p;
                p.parent = r;
            }
            return root;
        }

        static <K, V> HashMap.TreeNode<K, V> rotateLeft2(HashMap.TreeNode<K, V> root,
                                                         HashMap.TreeNode<K, V> p) {
            HashMap.TreeNode<K, V> r, pp, rl;
            // p是父节点
            if (p != null && p.right != null) {
                // 右孩子
                r = p.right;
                // 右孩子有左孩子的话.
                if (r.left != null) {
                    // 右孩子变成右孩子的左孩子。即rl变成了p的右孩子。
                    p.right = r.left;
                    rl = r.left;
                    rl.parent = p;
                    // 注意此时r没有关联。
                }
                pp = p.parent;
                // 如果p没有有父节点的话。
                if (p.parent == null) {
                    // 将r的父节点置为null
                    r.parent = p.parent;
                    // 颜色涂成黑色，并且r就是根节点。
                    (root = r).red = false;
                }
                //  如果p节点有父节点，并且p是左子树的话
                else if (pp.left == p) {
                    // 将祖父节点的左子树置为r,
                    pp.left = r;
                } else {
                    // 将祖父节点的右子树置为r,
                    pp.right = r;
                }
                // 将r和p连接起来。
                r.left = p;
                p.parent = r;
            }
            return root;
        }

        static <K, V> HashMap.TreeNode<K, V> rotateRight(HashMap.TreeNode<K, V> root,
                                                         HashMap.TreeNode<K, V> p) {
            HashMap.TreeNode<K, V> l, pp, lr;
            if (p != null && (l = p.left) != null) {
                if ((lr = p.left = l.right) != null)
                    lr.parent = p;
                if ((pp = l.parent = p.parent) == null)
                    (root = l).red = false;
                else if (pp.right == p)
                    pp.right = l;
                else
                    pp.left = l;
                l.right = p;
                p.parent = l;
            }
            return root;
        }

        /**
         * 调整红黑树
         *
         * @param root 根节点
         * @param x    当前节点
         */
        static <K, V> HashMap.TreeNode<K, V> balanceInsertion(HashMap.TreeNode<K, V> root,
                                                              HashMap.TreeNode<K, V> x) {
            x.red = true;
            // xp: 当前节点的父节点(父节点)
            // xpp: 当前节点的父节点的父节点(祖父节点)
            // xppl: 当前节点的父节点的父节点的左子树(叔叔节点)
            // xppr: 当前节点的父节点的父节点的右子树(叔叔节点)
            for (HashMap.TreeNode<K, V> xp, xpp, xppl, xppr; ; ) {
                // 规则1
                if ((xp = x.parent) == null) {
                    x.red = false;
                    return x;
                }
                // 父节点为黑色 或者祖父节点为空==>规则2
                else if (!xp.red || (xpp = xp.parent) == null) {
                    return root;
                }

                // 父节点是左子树
                if (xp == (xppl = xpp.left)) {
                    // 父节点是左子树,且祖父节点存在右子树(叔叔节点为右子树)，并且叔叔为红色。 ==> 父节点是右子树时的性质1.
                    if ((xppr = xpp.right) != null && xppr.red) {
                        // 叔叔节点涂黑
                        xppr.red = false;
                        // 父节点涂黑
                        xp.red = false;
                        // 祖父节点涂红
                        xpp.red = true;
                        // 以祖父节点为新的当前节点
                        x = xpp;
                    }
                    // 祖父节点没有右子树或者有右子树,颜色为黑色。
                    else {
                        // 当前节点是父节点的右子树==> 规则4
                        if (x == xp.right) {
                            // 左旋
                            root = rotateLeft(root, x = xp);
                            // 设置祖父节点要么为空要么是父节点。
                            xpp = (xp = x.parent) == null ? null : xp.parent;
                        }
                        // 规则5
                        if (xp != null) {
                            // 父节点涂成黑色
                            // 此时xp可能为root.
                            xp.red = false;
                            // 如果xp不是root的时候。
                            if (xpp != null) {
                                // 祖父节点涂成红色,右旋。
                                xpp.red = true;
                                root = rotateRight(root, xpp);
                            }
                        }
                    }
                }

                // 父节点不是左子树==> 父节点是右子树。
                else {
                    // 叔叔节点(祖父节点的左子树),叔叔为红色 ==> 规则3
                    if (xppl != null && xppl.red) {
                        // 叔叔涂黑
                        xppl.red = false;
                        // 父节点涂黑
                        xp.red = false;
                        // 祖父节点涂红
                        xpp.red = true;
                        // 以祖父节点为新的当前节点
                        x = xpp;
                    }
                    // 祖父节点没有右子树或者有右子树,颜色为黑色。 ==> 规则4
                    else {
                        // 当前节点是左子树
                        if (x == xp.left) {
                            // 右旋
                            root = rotateRight(root, x = xp);
                            // 设置祖父节点要么为空要么是父节点。
                            xpp = (xp = x.parent) == null ? null : xp.parent;
                        }
                        // ==> 规则5
                        if (xp != null) {
                            xp.red = false;
                            // 如果有祖父
                            if (xpp != null) {
                                // 祖父节点涂成红色,右旋。
                                xpp.red = true;
                                root = rotateLeft(root, xpp);
                            }
                        }
                    }
                }
            }
        }

        static <K, V> HashMap.TreeNode<K, V> balanceDeletion(HashMap.TreeNode<K, V> root,
                                                             HashMap.TreeNode<K, V> x) {
            for (HashMap.TreeNode<K, V> xp, xpl, xpr; ; ) {
                if (x == null || x == root)
                    return root;
                else if ((xp = x.parent) == null) {
                    x.red = false;
                    return x;
                } else if (x.red) {
                    x.red = false;
                    return root;
                } else if ((xpl = xp.left) == x) {
                    if ((xpr = xp.right) != null && xpr.red) {
                        xpr.red = false;
                        xp.red = true;
                        root = rotateLeft(root, xp);
                        xpr = (xp = x.parent) == null ? null : xp.right;
                    }
                    if (xpr == null)
                        x = xp;
                    else {
                        HashMap.TreeNode<K, V> sl = xpr.left, sr = xpr.right;
                        if ((sr == null || !sr.red) &&
                                (sl == null || !sl.red)) {
                            xpr.red = true;
                            x = xp;
                        } else {
                            if (sr == null || !sr.red) {
                                if (sl != null)
                                    sl.red = false;
                                xpr.red = true;
                                root = rotateRight(root, xpr);
                                xpr = (xp = x.parent) == null ?
                                        null : xp.right;
                            }
                            if (xpr != null) {
                                xpr.red = (xp == null) ? false : xp.red;
                                if ((sr = xpr.right) != null)
                                    sr.red = false;
                            }
                            if (xp != null) {
                                xp.red = false;
                                root = rotateLeft(root, xp);
                            }
                            x = root;
                        }
                    }
                } else { // symmetric
                    if (xpl != null && xpl.red) {
                        xpl.red = false;
                        xp.red = true;
                        root = rotateRight(root, xp);
                        xpl = (xp = x.parent) == null ? null : xp.left;
                    }
                    if (xpl == null)
                        x = xp;
                    else {
                        HashMap.TreeNode<K, V> sl = xpl.left, sr = xpl.right;
                        if ((sl == null || !sl.red) &&
                                (sr == null || !sr.red)) {
                            xpl.red = true;
                            x = xp;
                        } else {
                            if (sl == null || !sl.red) {
                                if (sr != null)
                                    sr.red = false;
                                xpl.red = true;
                                root = rotateLeft(root, xpl);
                                xpl = (xp = x.parent) == null ?
                                        null : xp.left;
                            }
                            if (xpl != null) {
                                xpl.red = (xp == null) ? false : xp.red;
                                if ((sl = xpl.left) != null)
                                    sl.red = false;
                            }
                            if (xp != null) {
                                xp.red = false;
                                root = rotateRight(root, xp);
                            }
                            x = root;
                        }
                    }
                }
            }
        }

        /**
         * Recursive invariant check
         */
        static <K, V> boolean checkInvariants(HashMap.TreeNode<K, V> t) {
            HashMap.TreeNode<K, V> tp = t.parent, tl = t.left, tr = t.right,
                    tb = t.prev, tn = (HashMap.TreeNode<K, V>) t.next;
            if (tb != null && tb.next != t)
                return false;
            if (tn != null && tn.prev != t)
                return false;
            if (tp != null && t != tp.left && t != tp.right)
                return false;
            if (tl != null && (tl.parent != t || tl.hash > t.hash))
                return false;
            if (tr != null && (tr.parent != t || tr.hash < t.hash))
                return false;
            if (t.red && tl != null && tl.red && tr != null && tr.red)
                return false;
            if (tl != null && !checkInvariants(tl))
                return false;
            if (tr != null && !checkInvariants(tr))
                return false;
            return true;
        }
    }

}
