package com.hanxiaozhang.sourcecode.map;

import java.io.IOException;
import java.io.InvalidObjectException;
import java.io.Serializable;
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.*;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;


/**
 * Hash table based implementation of the <tt>Map</tt> interface.  This
 * implementation provides all of the optional map operations, and permits
 * <tt>null</tt> values and the <tt>null</tt> key.  (The <tt>HashMap</tt>
 * class is roughly equivalent to <tt>Hashtable</tt>, except that it is
 * unsynchronized and permits nulls.)  This class makes no guarantees as to
 * the order of the map; in particular, it does not guarantee that the order
 * will remain constant over time.
 * <p>
 * HashMap是基于哈希表的Map接口的实现，它实现提供了所有可选的map操作，并允许null的值和null的键
 * （HashMap类大致相当于Hashtable，但它是不同步的[unsynchronized]，并且允许空值)；
 * 这个类不保证映射的顺序，特别是，它不保证顺序随时间保持不变。
 *
 * <p>This implementation provides constant-time performance for the basic
 * operations (<tt>get</tt> and <tt>put</tt>), assuming the hash function
 * disperses the elements properly among the buckets.  Iteration over
 * collection views requires time proportional to the "capacity" of the
 * <tt>HashMap</tt> instance (the number of buckets) plus its size (the number
 * of key-value mappings).  Thus, it's very important not to set the initial
 * capacity too high (or the load factor too low) if iteration performance is
 * important.
 * <p>
 * 它实现了为基本操作（例如，get和put）提供恒定的时间性能，假设，散列函数可以将元素正确地分散在桶（bucket）中。
 * 集合(collection)迭代所需的时间与HashMap实例的容量（bucket的数量）和大小（键值映射对的个数）成比例。
 * 因此，如果迭代性能很重要，那么不要将初始容量设置得太高（或者负载因子太低），这一点非常重要。
 *
 *
 * <p>An instance of <tt>HashMap</tt> has two parameters that affect its
 * performance: <i>initial capacity</i> and <i>load factor</i>.  The
 * <i>capacity</i> is the number of buckets in the hash table, and the initial
 * capacity is simply the capacity at the time the hash table is created.  The
 * <i>load factor</i> is a measure of how full the hash table is allowed to
 * get before its capacity is automatically increased.  When the number of
 * entries in the hash table exceeds the product of the load factor and the
 * current capacity, the hash table is <i>rehashed</i> (that is, internal data
 * structures are rebuilt) so that the hash table has approximately twice the
 * number of buckets.
 * <p>
 * HashMap的实例，影响性能的有两个参数：initial capacity（初始化容量）和load factor（装载因子）。
 * capacity（容量）：是哈希表中的bucket数，而initial capacity只是哈希表创建时的容量。
 * load factor（装载因子）：是一个度量哈希表在其容量自动增加之前，允许获得的完整程度。
 * 当哈希表中的键值对数量超过加载因子和当前容量的乘积时，哈希表将重新哈希（即，内部数据结构被重建），
 * 因此哈希表的bucket数大约是的两倍。
 *
 *
 * <p>As a general rule, the default load factor (.75) offers a good
 * tradeoff between time and space costs.  Higher values decrease the
 * space overhead but increase the lookup cost (reflected in most of
 * the operations of the <tt>HashMap</tt> class, including
 * <tt>get</tt> and <tt>put</tt>).  The expected number of entries in
 * the map and its load factor should be taken into account when
 * setting its initial capacity, so as to minimize the number of
 * rehash operations.  If the initial capacity is greater than the
 * maximum number of entries divided by the load factor, no rehash
 * operations will ever occur.
 * <p>
 * 一般来说，默认的装载因子（0.75）在时间和空间成本之间提供了一个很好的折衷方案。
 * 较高的值会减少空间开销，但会增加查找成本，它反映在HashMap类的大多数操作中，包括get和put操作
 * 在设置初始容量时，应考虑输入项数量及其负载系数，以尽量减少重新计算操作的次数。如果，
 * 初始容量大于最大条键值对数量除以负载系数，则不会发生任何重新计算操作。
 *
 *
 * <p>If many mappings are to be stored in a <tt>HashMap</tt>
 * instance, creating it with a sufficiently large capacity will allow
 * the mappings to be stored more efficiently than letting it perform
 * automatic rehashing as needed to grow the table.  Note that using
 * many keys with the same {@code hashCode()} is a sure way to slow
 * down performance of any hash table. To ameliorate impact, when keys
 * are {@link Comparable}, this class may use comparison order among
 * keys to help break ties.
 * <p>
 * 如果许多映射要存储在一个HashMap实例中，创建足够大的容量，将使映射的存储更加高效，
 * 而不是让HashMap根据需要执行自动扩容以增长表。
 * 注意，多个键使用相同的hashCode()是肯定会降低任何哈希表性能。
 * 为了改善影响，当键的对象实现Comparable类时，这个类可以使用键之间的比较顺序，来打破联系。
 *
 * <p><strong>Note that this implementation is not synchronized.</strong>
 * If multiple threads access a hash map concurrently, and at least one of
 * the threads modifies the map structurally, it <i>must</i> be
 * synchronized externally.  (A structural modification is any operation
 * that adds or deletes one or more mappings; merely changing the value
 * associated with a key that an instance already contains is not a
 * structural modification.)  This is typically accomplished by
 * synchronizing on some object that naturally encapsulates the map.
 * <p>
 * 请注意，此实现不是同步的，如果多个线程同时访问一个哈希映射，并且至少有一个线程在结构上修改了该映射，
 * 则必须在外部同步（结构修改是指添加或删除一个或多个映射的任何操作；
 * 仅仅更改与实例已经包含的键关联的值并不是结构修改）。
 * 这通常是通过对自然封装map的某个对象进行同步来实现的。
 * <p>
 * If no such object exists, the map should be "wrapped" using the
 * {@link Collections#synchronizedMap Collections.synchronizedMap}
 * method.  This is best done at creation time, to prevent accidental
 * unsynchronized access to the map:<pre>
 *   Map m = Collections.synchronizedMap(new HashMap(...));</pre>
 * <p>
 * 如果不存在这样的对象，则应该使用Collections.synchronizedMap()
 *
 *
 * <p>The iterators returned by all of this class's "collection view methods"
 * are <i>fail-fast</i>: if the map is structurally modified at any time after
 * the iterator is created, in any way except through the iterator's own
 * <tt>remove</tt> method, the iterator will throw a
 * {@link ConcurrentModificationException}.  Thus, in the face of concurrent
 * modification, the iterator fails quickly and cleanly, rather than risking
 * arbitrary, non-deterministic behavior at an undetermined time in the
 * future.
 * <p>
 * 所有此类返回的迭代器的集合视图方法使用快速失败机制
 *
 * <p>Note that the fail-fast behavior of an iterator cannot be guaranteed
 * as it is, generally speaking, impossible to make any hard guarantees in the
 * presence of unsynchronized concurrent modification.  Fail-fast iterators
 * throw <tt>ConcurrentModificationException</tt> on a best-effort basis.
 * Therefore, it would be wrong to write a program that depended on this
 * exception for its correctness: <i>the fail-fast behavior of iterators
 * should be used only to detect bugs.</i>
 *
 * <p>This class is a member of the
 * <a href="{@docRoot}/../technotes/guides/collections/index.html">
 * Java Collections Framework</a>.
 *
 * @param <K> the type of keys maintained by this map
 * @param <V> the type of mapped values
 * @author Doug Lea
 * @author Josh Bloch
 * @author Arthur van Hoff
 * @author Neal Gafter
 * @see Object#hashCode()
 * @see Collection
 * @see Map
 * @see TreeMap
 * @see Hashtable
 * @since 1.2
 */



/*
名称解释：
bucket(桶):在散列表的数组结构中，可以存储元素的位置（每一个格子）被称为桶
capacity(容量):HashMap中桶的数量。默认值为16
size(大小)：HashMap中存放键值对的数量（即，链表和红黑树中键值对的总和）
bin：桶后面存放的每一个数据称为bin
threshold=capacity*loadFactor

装逼知识点：
一般来说，默认的装载因子（0.75）在时间和空间成本之间提供了一个很好的折衷方案，
较高的值会减少空间开销，但会增加查找成本。


 */

/**
 * 〈一句话功能简述〉<br>
 * 〈〉
 * <p>
 * 2020-06-02 1.5h  处理源码中错误e
 * 2020-09-09 2.5h
 * 2020-10-30
 * 2020-11-02
 * 2020-11-03
 * 2020-11-04
 * 2020-11-05
 * 2023-04-21
 *
 * @author hanxinghua
 * @create 2020/6/2
 * @since 1.0.0
 */
public class MyHashMap<K, V> extends MyAbstractMap<K, V> implements Map<K, V>, Cloneable, Serializable {

    private static final long serialVersionUID = 362498820763181265L;

    /*
     * Implementation notes.
     * 实施说明
     *
     * This map usually acts as a binned (bucketed) hash table, but
     * when bins get too large, they are transformed into bins of
     * TreeNodes, each structured similarly to those in
     * java.util.TreeMap. Most methods try to use normal bins, but
     * relay to TreeNode methods when applicable (simply by checking
     * instanceof a node).  Bins of TreeNodes may be traversed and
     * used like any others, but additionally support faster lookup
     * when overpopulated. However, since the vast majority of bins in
     * normal use are not overpopulated, checking for existence of
     * tree bins may be delayed in the course of table methods.
     *
     * 这个Map通常使用binned哈希表，但是bin的数量太大时，它将转换成树结构。
     * 它们的结构与java.util.TreeMap类似。
     * 大多数方法尝试使用普通的bin，但是在适用的情况下中继到TreeNode方法
     * （只需检查节点的 instanceof a node）。
     * bin是树节点时，可以被遍历，像使用任何其他的一样，但是在bin过多时，查找会更快。
     * 然而，在正常使用中，绝大多数bin并不是过多，所以，在table methods的过程中，
     * 检查是否存在树型容器可能会延迟。
     *
     *
     * Tree bins (i.e., bins whose elements are all TreeNodes) are
     * ordered primarily by hashCode, but in the case of ties, if two
     * elements are of the same "class C implements Comparable<C>",
     * type then their compareTo method is used for ordering. (We
     * conservatively check generic types via reflection to validate
     * this -- see method comparableClassFor).  The added complexity
     * of tree bins is worthwhile in providing worst-case O(log n)
     * operations when keys either have distinct hashes or are
     * orderable, Thus, performance degrades gracefully under
     * accidental or malicious usages in which hashCode() methods
     * return values that are poorly distributed, as well as those in
     * which many keys share a hashCode, so long as they are also
     * Comparable. (If neither of these apply, we may waste about a
     * factor of two in time and space compared to taking no
     * precautions. But the only known cases stem from poor user
     * programming practices that are already so slow that this makes
     * little difference.)
     *
     * 树型的bin主要由哈希码排序，但是有一种"关系”情况，如果两个元素的类都现实了C比较器，
     * 它们将使用的compareTo方法进行排序（我们通过反射保守地检查泛型类型以进行验证这--
     * 请参见方法comparableClassFor）。
     * 在提供最坏情况下的O（logn）操作时，当键具有不同的哈希值或是可排序的时，
     * 树容器所增加的复杂性是值得的。
     * 因此，当hashCode()方法返回的值分布不好时，性能会在意外或恶意使用下正常下降，
     * 以及许多键共享一个哈希码的那些，只要它们也是可比较的。
     * （如果这两个条件都不适用，我们可能会在时间和空间上浪费大约两倍的时间和空间，但是，
     * 已知的唯一案例来自于拙劣的用户编程实践，这些实践已经非常缓慢，几乎没有什么不同）
     *
     * Because TreeNodes are about twice the size of regular nodes, we
     * use them only when bins contain enough nodes to warrant use
     * (see TREEIFY_THRESHOLD). And when they become too small (due to
     * removal or resizing) they are converted back to plain bins.  In
     * usages with well-distributed user hashCodes, tree bins are
     * rarely used.  Ideally, under random hashCodes, the frequency of
     * nodes in bins follows a Poisson distribution
     * (http://en.wikipedia.org/wiki/Poisson_distribution) with a
     * parameter of about 0.5 on average for the default resizing
     * threshold of 0.75, although with a large variance because of
     * resizing granularity. Ignoring variance, the expected
     * occurrences of list size k are (exp(-0.5) * pow(0.5, k) /
     * factorial(k)). The first values are:
     *
     * 因为TreeNode的大小大约是常规节点的两倍，所以只有当bin包含足够的节点以保证使用时，我们才会使用它们（参见TREEIFY_THRESHOLD）。
     * 当它们变得太小时（由于移除或调整大小），它们会被转换回普通的bins。在与分布良好的用户hashCodes一起使用时，很少使用tree bins。
     * 理想情况下，在随机hashCodes下，bin中节点的频率遵循泊松分布(http://en.wikipedia.org/wiki/Poisson_distribution)
     * 对于0.75的默认调整大小阈值，参数平均约为0.5，尽管由于调整大小的粒度而具有较大的方差。
     * 忽略方差，列表大小k的预期出现次数为（exp（-0.5）*pow（0.5，k）/factorial（k））。第一个值是：
     *
     * 0:    0.60653066
     * 1:    0.30326533
     * 2:    0.07581633
     * 3:    0.01263606
     * 4:    0.00157952
     * 5:    0.00015795
     * 6:    0.00001316
     * 7:    0.00000094
     * 8:    0.00000006
     * more: less than 1 in ten million
     *
     *
     * The root of a tree bin is normally its first node.  However,
     * sometimes (currently only upon Iterator.remove), the root might
     * be elsewhere, but can be recovered following parent links
     * (method TreeNode.root()).
     *
     * tree bin的根通常是它的第一个节点。然而，有时（目前仅在Iterator.remove上），根可能在其他地方，
     * 但可以在父链接之后恢复（方法TreeNode.root（））
     *
     *
     * All applicable internal methods accept a hash code as an
     * argument (as normally supplied from a public method), allowing
     * them to call each other without recomputing user hashCodes.
     * Most internal methods also accept a "tab" argument, that is
     * normally the current table, but may be a new or old one when
     * resizing or converting.
     *
     * 所有适用的内部方法都接受一个散列码作为参数（通常由公共方法提供），允许它们互相调用而无需重新计算用户哈希码。
     * 大多数内部方法也接受“tab”参数，即通常是当前表，但在调整大小或转换时可能是新的或旧的。
     *
     * When bin lists are treeified, split, or untreeified, we keep
     * them in the same relative access/traversal order (i.e., field
     * Node.next) to better preserve locality, and to slightly
     * simplify handling of splits and traversals that invoke
     * iterator.remove. When using comparators on insertion, to keep a
     * total ordering (or as close as is required here) across
     * rebalancings, we compare classes and identityHashCodes as
     * tie-breakers.
     *
     * 当bin列表被树化时，拆分，或未查询，我们保持它们在相同的相对访问/遍历顺序（i.e., field Node.next)
     * 为了更好地保留局部性，并稍微简化对调用iterator.remove。当在插入时使用比较器时，为了在平衡中保持总排序
     * （或尽可能接近这里的要求），我们比较类并将hashcode标识为tie-breakers。
     *
     * The use and transitions among plain vs tree modes is
     * complicated by the existence of subclass LinkedHashMap. See
     * below for hook methods defined to be invoked upon insertion,
     * removal and access that allow LinkedHashMap internals to
     * otherwise remain independent of these mechanics. (This also
     * requires that a map instance be passed to some utility methods
     * that may create new nodes.)
     *
     * 普通 与 树模式之间的使用和转换，由于子类LinkedHashMap的存在而变得复杂。请参阅下面的以了解定义为在插入、
     * 移除和访问时调用的钩子方法，这些方法允许LinkedHashMap内部构件否则将保持独立于这些机制。
     * （这还要求将一个映射实例传递给某些可能创建新节点的实用程序方法。）
     *
     * The concurrent-programming-like SSA-based coding style helps
     * avoid aliasing errors amid all of the twisty pointer operations.
     *
     * 基于SSA的并行编程风格有助于避免在所有扭曲指针操作中出现混叠错误
     */

    /**
     * 默认初始容量-必须是2的幂
     */
    static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16

    /**
     * 最大容量，如果某个具有参数的构造函数隐式指定了更高的值，则使用该值。
     * 必须是2的幂<=1<<30。
     */
    static final int MAXIMUM_CAPACITY = 1 << 30;

    /**
     * 构造函数中未指定时，使用的负载因子
     */
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

    /**
     * The bin count threshold for using a tree rather than list for a
     * bin.  Bins are converted to trees when adding an element to a
     * bin with at least this many nodes. The value must be greater
     * than 2 and should be at least 8 to mesh with assumptions in
     * tree removal about conversion back to plain bins upon
     * shrinkage.
     * <p>
     * 使用树结构的计数阈值，而不是列表的容器计数阈值。Bins中至少有这么多节点添加元素（bin）时，
     * 才能转换成树。该值必须大于2，且至少应为8，以符合树移除中，关于在收缩时转换回普通bin的假设。
     */
    static final int TREEIFY_THRESHOLD = 8;

    /**
     * The bin count threshold for untreeifying a (split) bin during a
     * resize operation. Should be less than TREEIFY_THRESHOLD, and at
     * most 6 to mesh with shrinkage detection under removal.
     * <p>
     * 在调整大小操作期间，取消树结构的阈值。
     * 应小于TREEIFY_THRESHOLD，且最多6，以便在移除时进行收缩检测。
     */
    static final int UNTREEIFY_THRESHOLD = 6;

    /**
     * The smallest table capacity for which bins may be treeified.
     * (Otherwise the table is resized if too many nodes in a bin.)
     * Should be at least 4 * TREEIFY_THRESHOLD to avoid conflicts
     * between resizing and treeification thresholds.
     * <p>
     * 可以对Bins进行树型化的最小表容量（否则，如果一个bin（桶）中的节点太多，则会调整表的大小），
     * 应至少为4*TREEIFY_THRESHOLD，以避免调整大小和树化阈值之间的冲突。
     */
    static final int MIN_TREEIFY_CAPACITY = 64;


    /**
     * Basic hash bin node, used for most entries.  (See below for
     * TreeNode subclass, and in LinkedHashMap for its Entry subclass.)
     * <p>
     * 基本哈希bin节点，用于大多数Entry
     * （请参见下面的TreeNode子类，以及LinkedHashMap中的Entry子类）
     */
    static class Node<K, V> implements Entry<K, V> {

        // hash值
        final int hash;
        // key
        final K key;
        // value
        V value;
        // 下一个节点
        Node<K, V> next;

        /**
         * 构造函数
         *
         * @param hash
         * @param key
         * @param value
         * @param next
         */
        Node(int hash, K key, V value, Node<K, V> next) {
            this.hash = hash;
            this.key = key;
            this.value = value;
            this.next = next;
        }

        @Override
        public final K getKey() {
            return key;
        }

        @Override
        public final V getValue() {
            return value;
        }

        @Override
        public final String toString() {
            return key + "=" + value;
        }

        @Override
        public final int hashCode() {
            return Objects.hashCode(key) ^ Objects.hashCode(value);
        }

        @Override
        public final V setValue(V newValue) {
            V oldValue = value;
            value = newValue;
            return oldValue;
        }

        @Override
        public final boolean equals(Object o) {
            if (o == this) {
                return true;
            }
            if (o instanceof Map.Entry) {
                Entry<?, ?> e = (Entry<?, ?>) o;
                if (Objects.equals(key, e.getKey()) &&
                        Objects.equals(value, e.getValue())) {
                    return true;
                }
            }
            return false;
        }
    }

    /* ---------------- 静态实用程序 -------------- */

    /**
     * Computes key.hashCode() and spreads (XORs) higher bits of hash
     * to lower.  Because the table uses power-of-two masking, sets of
     * hashes that vary only in bits above the current mask will
     * always collide. (Among known examples are sets of Float keys
     * holding consecutive whole numbers in small tables.)  So we
     * apply a transform that spreads the impact of higher bits
     * downward. There is a tradeoff between speed, utility, and
     * quality of bit-spreading. Because many common sets of hashes
     * are already reasonably distributed (so don't benefit from
     * spreading), and because we use trees to handle large sets of
     * collisions in bins, we just XOR some shifted bits in the
     * cheapest possible way to reduce systematic lossage, as well as
     * to incorporate impact of the highest bits that would otherwise
     * never be used in index calculations because of table bounds.
     * <p>
     * 计算key的hash值，并且将哈希的高位扩展到低位，因为table使用了两个掩蔽的能力，
     * 仅在当前掩码上方的位中变化的哈希将总是碰撞。（在已知的示例中，有一组浮点键把
     * 连续的整数放在小表中）。所以，我们应用了一个变换，来传播更高比特的影响向下，
     * 在速度、实用性和bit传播质量之间权衡，因为许多常见的散列集已经被合理地分布了
     * （所以不要从传播中获益）因为我们用树结构来处理bin在容器中的大量碰撞，我们只是用
     * 最便宜的方法异或一些移位位来减少系统损失，以及包含最高比特的影响，否则由于表边界
     * 的原因，永远不要在索引计算中使用。
     */
    static final int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    /**
     * Returns x's Class if it is of the form "class C implements
     * Comparable<C>", else null.
     * <p>
     * 如果x的类型为"Class C implements*Comparable<C>"，则返回该类，否则返回null
     */
    static Class<?> comparableClassFor(Object x) {
        if (x instanceof Comparable) {
            Class<?> c;
            Type[] ts, as;
            Type t;
            ParameterizedType p;

            // bypass checks 旁路检查
            // 1.获取x的类并赋值给c；2.判断c是否为String类
            if ((c = x.getClass()) == String.class) {
                return c;
            }

            // 1.获取c表示的类或接口直接实现的数组并赋值给ts；2.判断ts不等于空
            if ((ts = c.getGenericInterfaces()) != null) {

                // 循环ts数组
                for (int i = 0; i < ts.length; ++i) {

                    // 判断条件：1.t对象是ParameterizedType的实例 &&
                    //         2.p对象声明此类型的类或接口是Comparable类  &&
                    //         3.p对象如果支持泛型，实际类型参数的Type对象数组不为空，并且长度为1，并且是c类型的对象
                    if (((t = ts[i]) instanceof ParameterizedType) &&
                            ((p = (ParameterizedType) t).getRawType() == Comparable.class) &&
                            (as = p.getActualTypeArguments()) != null && as.length == 1 && as[0] == c) {
                        return c;
                    }
                }
            }
        }
        return null;
    }

    /**
     * Returns k.compareTo(x) if x matches kc (k's screened comparable
     * class), else 0.
     * <p>
     * 使用k.compareTo(x)，如果x的类型与kc的类型匹配（k的筛选可比较的类比较），否则返回0
     */
    // for cast to Comparable
    @SuppressWarnings({"rawtypes", "unchecked"})
    static int compareComparables(Class<?> kc, Object k, Object x) {
        // x为空，x的类不是kc，返回0，否则返回(Comparable) k).compareTo(x)
        return (x == null || x.getClass() != kc ? 0 : ((Comparable) k).compareTo(x));
    }

    /**
     * Returns a power of two size for the given target capacity.
     * 返回一个大于等于且最接近cap的2的幂次方整数
     * 详情见 Test.tableSizeFor(int cap)
     */
    static final int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    /* ---------------- 领域 -------------- */

    /**
     * The table, initialized on first use, and resized as
     * necessary. When allocated, length is always a power of two.
     * (We also tolerate length zero in some operations to allow
     * bootstrapping mechanics that are currently not needed.)
     * 表，在第一次使用时初始化，并根据需要调整大小。分配时，长度总是2的幂
     * （在某些操作中，我们也允许长度为零，以允许当前不需要的引导机制）
     */
    transient Node<K, V>[] table;

    /**
     * Holds cached entrySet(). Note that AbstractMap fields are used
     * for keySet() and values().
     * 保存缓存的entrySet()。请注意，AbstractMap字段用于keySet()和values()。
     */
    transient Set<Entry<K, V>> entrySet;

    /**
     * The number of key-value mappings contained in this map.
     * 此映射中包含的键值映射数
     */
    transient int size;

    /**
     * The number of times this HashMap has been structurally modified
     * Structural modifications are those that change the number of mappings in
     * the HashMap or otherwise modify its internal structure (e.g.,
     * rehash).  This field is used to make iterators on Collection-views of
     * the HashMap fail-fast.  (See ConcurrentModificationException).
     * <p>
     * 这个HashMap在结构上被修改的次数，是改变HashMap中映射的数量或者修改它的内部结构的次数
     * （例如，rehash）。此字段用于在的集合视图上的迭代器快速失败(看 ConcurrentModificationException)
     */
    transient int modCount;

    /**
     * 要调整大小的下一个大小值（容量*负载系数）（阈值）
     * 另外，如果尚未分配表数组，则字段保留初始阵列容量，或者为零，表示默认初始容量。
     *
     * @serial
     */
    // (The javadoc description is true upon serialization.
    // Additionally, if the table array has not been allocated, this
    // field holds the initial array capacity, or zero signifying
    // DEFAULT_INITIAL_CAPACITY.)
    int threshold;

    /**
     * 哈希表的加载因子。
     *
     * @serial
     */
    final float loadFactor;

    /* ---------------- 公共业务 -------------- */

    /**
     * 使用指定的初始容量和负载因子构造一个空的HashMap
     *
     * @param initialCapacity the initial capacity
     * @param loadFactor      the load factor
     * @throws IllegalArgumentException if the initial capacity is negative
     *                                  or the load factor is nonpositive
     */
    public MyHashMap(int initialCapacity, float loadFactor) {

        // 初始化容量小于0，抛异常
        if (initialCapacity < 0) {
            throw new IllegalArgumentException("Illegal initial capacity: " +
                    initialCapacity);
        }
        // 初始化容量大于最大容量时，设置容量为最大容量值
        if (initialCapacity > MAXIMUM_CAPACITY) {
            initialCapacity = MAXIMUM_CAPACITY;
        }
        // 负荷系数小于等于0，或者非数字式，抛异常
        if (loadFactor <= 0 || Float.isNaN(loadFactor)) {
            throw new IllegalArgumentException("Illegal load factor: " +
                    loadFactor);
        }
        this.loadFactor = loadFactor;
        this.threshold = tableSizeFor(initialCapacity);
    }

    /**
     * 使用指定的初始容量和默认的加载因子（0.75）构造一个空的HashMap
     *
     * @param initialCapacity the initial capacity.
     * @throws IllegalArgumentException if the initial capacity is negative.
     */
    public MyHashMap(int initialCapacity) {
        this(initialCapacity, DEFAULT_LOAD_FACTOR);
    }

    /**
     * 使用默认的初始容量（16）和默认的加载因子（0.75）构造一个空的HashMap
     */
    public MyHashMap() {
        this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
    }

    /**
     * 构造一个新的HashMap，其映射与指定的Map相同。HashMap是使用默认负载因子（0.75）
     * 和足以保存指定Map中的映射的初始容量创建的
     *
     * @param m the map whose mappings are to be placed in this map
     *          要将其Map放置在此Map中的映射
     * @throws NullPointerException if the specified map is null
     */
    public MyHashMap(Map<? extends K, ? extends V> m) {
        this.loadFactor = DEFAULT_LOAD_FACTOR;
        putMapEntries(m, false);
    }

    /**
     * 补充Map.putAll和Map构造函数
     *
     * @param m     the map
     * @param evict false when initially constructing this map, else
     *              true (relayed to method afterNodeInsertion).
     */
    final void putMapEntries(Map<? extends K, ? extends V> m, boolean evict) {
        // Map的大小
        int s = m.size();
        // Map的大小大于0
        if (s > 0) {
            // table 为空
            if (table == null) {
                // 计算合理容量：大小/装载因子+1
                float ft = ((float) s / loadFactor) + 1.0F;
                // 计算的合理容量与最大容量比较
                int t = ((ft < (float) MAXIMUM_CAPACITY) ? (int) ft : MAXIMUM_CAPACITY);
                // 计算的合理容量与threshold比较
                if (t > threshold) {
                    threshold = tableSizeFor(t);
                }
                // table 不为空 ，并且大小大于threshold
            } else if (s > threshold) {
                resize();
            }
            // 循环添加键值对
            for (Entry<? extends K, ? extends V> e : m.entrySet()) {
                K key = e.getKey();
                V value = e.getValue();
                putVal(hash(key), key, value, false, evict);
            }
        }
    }

    /**
     * 返回此映射中键值映射的数目
     *
     * @return the number of key-value mappings in this map
     */
    @Override
    public int size() {
        return size;
    }

    /**
     * 如果此映射不包含键值映射，则返回true。
     *
     * @return <tt>true</tt> if this map contains no key-value mappings
     */
    @Override
    public boolean isEmpty() {
        return size == 0;
    }

    /**
     * Returns the value to which the specified key is mapped,
     * or {@code null} if this map contains no mapping for the key.
     * <p>
     * 返回指定键映射到的值，或null（如果此映射不包含该键的映射）
     *
     * <p>More formally, if this map contains a mapping from a key
     * {@code k} to a value {@code v} such that {@code (key==null ? k==null :
     * key.equals(k))}, then this method returns {@code v}; otherwise
     * it returns {@code null}.  (There can be at most one such mapping.)
     * <p>
     * 更正式地说，如果这个映射包含来自一个键k的映射到一个值v，如果key==null ? k==null:key.equals(k)
     * ，则此方法返回 v；否则它返回null。（最多可以有一个这样的映射。）
     *
     * <p>A return value of {@code null} does not <i>necessarily</i>
     * indicate that the map contains no mapping for the key; it's also
     * possible that the map explicitly maps the key to {@code null}.
     * The {@link #containsKey containsKey} operation may be used to
     * distinguish these two cases.
     * <p>
     * null的返回值并不意味着映射不包含键的映射；映射也可能显式地将键映射到null
     * containsKey操作可用于区分这两种情况。
     *
     * @see #put(Object, Object)
     */
    @Override
    public V get(Object key) {
        // 声明一个节点e
        Node<K, V> e;
        // 获取节点
        return (e = getNode(hash(key), key)) == null ? null : e.value;
    }

    /**
     * 获取节点，是Map.get()方法具体的实现
     *
     * @param hash hash for key
     * @param key  the key
     * @return the node, or null if none
     */
    final Node<K, V> getNode(int hash, Object key) {
        // 声明节点数组
        Node<K, V>[] tab;
        // 声明节点 first e
        Node<K, V> first, e;
        int n;
        K k;
        // table赋值给tab，并且判断tab不为空； &&
        if ((tab = table) != null &&
                // tab长度赋值给n，并且判断它大于0；&&
                (n = tab.length) > 0 &&
                // (n - 1) & hash获取桶的位置，然后获取数组中节点赋值给first，并且判断first不等于空
                (first = tab[(n - 1) & hash]) != null) {
            // 始终检查第一个节点
            // 第一个节点hash等于hash； &&
            if (first.hash == hash &&
                    // (first.key == key || key.equals(first.key))
                    ((k = first.key) == key || (key != null && key.equals(k)))) {
                // 返回first节点
                return first;
            }
            // first节点的下一节点赋值给e，并判断e不等于null
            if ((e = first.next) != null) {
                // first为树节点类型
                if (first instanceof TreeNode) {
                    // 调用获取树型节点方法
                    return ((TreeNode<K, V>) first).getTreeNode(hash, key);
                }
                // 循环找出目标节点
                do {
                    if (e.hash == hash && ((k = e.key) == key || (key != null && key.equals(k)))) {
                        return e;
                    }
                } while ((e = e.next) != null);
            }
        }
        return null;
    }

    /**
     * 如果此映射包含指定键的映射，则返回true
     *
     * @param key The key whose presence in this map is to be tested
     * @return <tt>true</tt> if this map contains a mapping for the specified
     * key.
     */
    @Override
    public boolean containsKey(Object key) {
        return getNode(hash(key), key) != null;
    }

    /**
     * 将指定值Value与此映射中的指定键Key相关联。如果映射先前包含键的映射，则替换旧的值。
     *
     * @param key   key with which the specified value is to be associated
     * @param value value to be associated with the specified key
     * @return the previous value associated with <tt>key</tt>, or
     * <tt>null</tt> if there was no mapping for <tt>key</tt>.
     * (A <tt>null</tt> return can also indicate that the map
     * previously associated <tt>null</tt> with <tt>key</tt>.)
     */
    @Override
    public V put(K key, V value) {
        // 调用putVal
        return putVal(hash(key), key, value, false, true);
    }

    /**
     * 具体实现Map.put以及相关的方法
     *
     * @param hash         key的hash
     * @param key          key
     * @param value        value
     * @param onlyIfAbsent 如果为true，则不更改现有值
     * @param evict        如果为false，则表处于创建模式。
     * @return 当前一个vault，如果没有，则为null
     */
    final V putVal(int hash, K key, V value, boolean onlyIfAbsent, boolean evict) {
        // tab数组
        Node<K, V>[] tab;
        Node<K, V> p;
        // n是tab数组的长度
        int n, i;
        // 判断table是否为空，为空先进行初始化扩容
        if ((tab = table) == null || // 将table赋值给tab，并判断tab是否空
                (n = tab.length) == 0)   // 将tab.length赋值给n，并判断n是否为0
        {
            // 初始化扩容操作
            n = (tab = resize()).length;
        }
        // 通过(n - 1) & hash]计算数组位置，并赋值给i；tab[i]获取数组元素赋值给p；p如果等于null插入当前元素，即证明一个节点为空
        if ((p = tab[i = (n - 1) & hash]) == null) {
            tab[i] = newNode(hash, key, value, null);
        } else { // 第一个节点不为空
            // 声明目标节点
            Node<K, V> e;
            K k;
            // 第一个节点的就是key就是k，获取目标节点
            if (p.hash == hash && ((k = p.key) == key || (key != null && key.equals(k)))) {
                e = p;
                // 否则（第一个节点的key不是k），p是树结构，调用putTreeVal
            } else if (p instanceof TreeNode) {
                e = ((TreeNode<K, V>) p).putTreeVal(this, tab, hash, key, value);
                // 否则（第一个节点的key不是k，p不是树结构），即散列结构
            } else {
                // 循环散列，统计bin的数量
                for (int binCount = 0; ; ++binCount) {
                    // p的下一个节点赋值给e，并且判断是否为空
                    if ((e = p.next) == null) {
                        // 设置p的下一个节点
                        p.next = newNode(hash, key, value, null);
                        // 如果到第7个节点后（即第八个节点时），散列变成红黑树
                        if (binCount >= TREEIFY_THRESHOLD - 1) {
                            treeifyBin(tab, hash);
                        }
                        break;
                    }
                    // e节点的元素key是k
                    if (e.hash == hash && ((k = e.key) == key || (key != null && key.equals(k)))) {
                        break;
                    }
                    // 当前节点不是目标节点，将当前节点赋值给p，继续循环
                    p = e;
                }
            }
            // key存在现有映射
            if (e != null) {
                // 获取老的value
                V oldValue = e.value;
                // onlyIfAbsent是false，oldValue等于空，改变目标值
                if (!onlyIfAbsent || oldValue == null) {
                    e.value = value;
                }
                // 用于LinkedHashMap
                afterNodeAccess(e);
                return oldValue;
            }
        }
        // fast-fail机制
        ++modCount;
        // 现有键值对映射数量大于阈值扩容
        if (++size > threshold) {
            resize();
        }
        // 用于LinkedHashMap
        afterNodeInsertion(evict);
        return null;
    }

    /**
     * 初始化或加倍表大小。
     * 如果为空，则根据字段阈值中保留的初始容量目标分配。否则，因为我们使用的是二次展开的幂，
     * 每个bin中的元素必须保持在相同的索引中，或者在新表中以2的幂次偏移量移动。
     *
     * @return the table
     */
    final Node<K, V>[] resize() {
        // 把table赋值给oldTab
        Node<K, V>[] oldTab = table;
        // oldTab的容量
        int oldCap = (oldTab == null) ? 0 : oldTab.length;
        // oldTab的要调整大小的下一个大小值
        int oldThr = threshold;
        // 声明新数组的容量和新数组要调整大小的下一个大小值
        int newCap, newThr = 0;
        // oldCap大于0
        if (oldCap > 0) {
            // oldCap大于MAXIMUM_CAPACITY，返回oldTab，即不能再扩容了
            if (oldCap >= MAXIMUM_CAPACITY) {
                // threshold赋值为Integer.MAX_VALUE
                threshold = Integer.MAX_VALUE;
                // 返回oldTab
                return oldTab;
                // oldCap剩2赋值给newCap，并且newCap小于MAXIMUM_CAPACITY
            } else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                    oldCap >= DEFAULT_INITIAL_CAPACITY) // oldCap大于等于DEFAULT_INITIAL_CAPACITY
            {
                // threshold剩2
                newThr = oldThr << 1;
            }
        } else if (oldThr > 0) { // oldCap = 0，oldThr > 0时，初始容量处于阈值
            // oldThr赋值给newCap
            newCap = oldThr;
        } else {               // oldCap = 0，oldThr = 0时，零初始阈值表示使用默认值
            newCap = DEFAULT_INITIAL_CAPACITY;
            newThr = (int) (DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
        }
        // newThr为0时
        if (newThr == 0) {
            // 计算 newThr值 ft
            float ft = (float) newCap * loadFactor;
            // 判断newCap小于MAXIMUM_CAPACITY并且ft小于MAXIMUM_CAPACITY时，将ft赋值给newThr，否则将Integer.MAX_VALUE赋值给newThr
            newThr = (newCap < MAXIMUM_CAPACITY && ft < (float) MAXIMUM_CAPACITY ? (int) ft : Integer.MAX_VALUE);
        }
        // 将newThr赋值给threshold
        threshold = newThr;
        // 创建一个新的节点数组，数组长度为newCap
        @SuppressWarnings({"rawtypes", "unchecked"})
        Node<K, V>[] newTab = (Node<K, V>[]) new Node[newCap];
        // newTab赋值给table
        table = newTab;
        // oldTab不为空
        if (oldTab != null) {
            // 循环数组，向新数组赋值
            for (int j = 0; j < oldCap; ++j) {
                Node<K, V> e;
                // oldTab[j]赋值给e，并且判断不为空
                if ((e = oldTab[j]) != null) {
                    // oldTab[j]制空
                    oldTab[j] = null;
                    // e节点的下一个节点为空,即老数组该位置只有一个节点
                    if (e.next == null) {
                        // 重新计算位置存储
                        newTab[e.hash & (newCap - 1)] = e;
                    } else if (e instanceof TreeNode) { // e.next != null && e instanceof TreeNode时
                        // 调用split方法
                        ((TreeNode<K, V>) e).split(this, newTab, j, oldCap);
                    } else { // e.next != null && !e instanceof TreeNode时
                        // 存储索引位置为:"原索引位置"的节点
                        Node<K, V> loHead = null, loTail = null;
                        // 存储索引位置为:"原索引位置+oldCap"的节点
                        Node<K, V> hiHead = null, hiTail = null;
                        Node<K, V> next;
                        do {
                            // e下一个节点赋值给next
                            next = e.next;
                            // e的hash值与老表的容量与运算为0，则扩容后的索引位置跟老表的索引位置一样
                            if ((e.hash & oldCap) == 0) {
                                // 如果loTail为空, 代表该节点为第一个节点
                                if (loTail == null) {
                                    // 将loHead赋值为第一个节点
                                    loHead = e;
                                } else {
                                    // 在loTail节点后边添加e节点
                                    loTail.next = e;
                                }
                                // 将e赋值给loTail，用于下次循环
                                loTail = e;
                            } else {// e的hash值与老表的容量与运算为0，则扩容后的索引位置跟老表的索引位置+oldCap一样
                                if (hiTail == null) {
                                    hiHead = e;
                                } else {
                                    hiTail.next = e;
                                }
                                hiTail = e;
                            }
                        } while ((e = next) != null); //next节点不为空，继续循环
                        // loTail不为空
                        if (loTail != null) {
                            // loTail下一个节点设置为空
                            loTail.next = null;
                            // 把链表放到数组中
                            newTab[j] = loHead;
                        }
                        // 同上
                        if (hiTail != null) {
                            hiTail.next = null;
                            newTab[j + oldCap] = hiHead;
                        }
                    }
                }
            }
        }
        return newTab;
    }

    /**
     * 替换给定哈希的索引处，bin中的所有散列节点为树节点。
     * 如果table太小，在这种情况下先扩容
     */
    final void treeifyBin(Node<K, V>[] tab, int hash) {
        int n, index;
        // 目标节点
        Node<K, V> e;
        // tab等于空，tab长度赋值给n，并且n小于64时，首先扩容
        if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY) {
            resize();
            // 哈希索引处的节点赋值给e，并且判断不为空
        } else if ((e = tab[index = (n - 1) & hash]) != null) {
            TreeNode<K, V> hd = null, tl = null;
            // 循环构建红黑树节点
            do {
                // 将散列中节点转换为树结构（红黑树）的节点
                TreeNode<K, V> p = replacementTreeNode(e, null);
                // tl为空时（即第一次循环），p赋值hd
                if (tl == null) {
                    hd = p;
                } else { // 头结点不为空，设置p的前继节点为tl，设置tl的后继节点为p
                    p.prev = tl;
                    tl.next = p;
                }
                // 将p节点赋值给tl，用于在下一次循环中作为上一个节点进行一些链表的关联操作
                tl = p;
            } while ((e = e.next) != null); // 循环条件：e的下一个节点不为空
            //  将hd（红黑树的节点）赋值到数组，如果不为空，构建红黑树
            if ((tab[index] = hd) != null) {
                hd.treeify(tab);
            }

        }
    }

    /**
     * 将指定映射中的所有映射复制到此映射。
     * 这些映射将替换此映射为指定映射中当前任何键的任何映射
     *
     * @param m mappings to be stored in this map
     * @throws NullPointerException if the specified map is null
     */
    @Override
    public void putAll(Map<? extends K, ? extends V> m) {
        putMapEntries(m, true);
    }

    /**
     * 从该映射中删除指定键的映射（如果存在）
     *
     * @param key 要从映射中删除其映射的key
     * @return 返回与key关联的先前值，如果键不存在映射，返回null（返回null也
     * 有可能是先前key与null关联）
     */
    @Override
    public V remove(Object key) {
        Node<K, V> e;
        // 调用移除节点
        return (e = removeNode(hash(key), key, null, false, true)) == null ? null : e.value;
    }

    /**
     * 具体实现Map.remove以及相关方法
     *
     * @param hash       hash for key
     * @param key        the key
     * @param value      如果匹配值，则忽略匹配值
     * @param matchValue 如果为true，则仅在值相等时移除
     * @param movable    如果为false，则在移除时不要移动其他节点
     * @return the node, or null if none
     */
    final Node<K, V> removeNode(int hash, Object key, Object value,
                                boolean matchValue, boolean movable) {
        Node<K, V>[] tab;
        Node<K, V> p;
        int n, index;
        // table赋值给tab并且不为空 &&
        if ((tab = table) != null &&
                // tab长度赋值给n，并且大于0 &&
                (n = tab.length) > 0 &&
                // 根据hash计算数组索引位置，并赋值给index；获取tab[index]节点赋值给p，并且判断p不等于空
                (p = tab[index = (n - 1) & hash]) != null) {
            Node<K, V> node = null, e;
            K k;
            V v;
            // p节点就是要删除的节点
            if (p.hash == hash && ((k = p.key) == key || (key != null && key.equals(k)))) {
                node = p;
                // 把p节点的后继节点赋值给e，并且e节点不为空
            } else if ((e = p.next) != null) {
                // p 是树结构节点
                if (p instanceof TreeNode) {
                    node = ((TreeNode<K, V>) p).getTreeNode(hash, key);
                    // p是散列节点
                } else {
                    do {
                        if (e.hash == hash && ((k = e.key) == key || (key != null && key.equals(k)))) {
                            node = e;
                            break;
                        }
                        // p节点赋值为本次结束的e，在下一次循环中，e为p的next节点
                        p = e;
                    } while ((e = e.next) != null);
                }
            }
            // node 不为空 &&
            if (node != null &&
                    // matchValue是false || node节点的值赋值给v，并且v==value || value不等于null，并且value.equals(v)
                    (!matchValue || (v = node.value) == value || (value != null && value.equals(v)))) {
                // node是树节点
                if (node instanceof TreeNode) {
                    // 调用removeTreeNode移除
                    ((TreeNode<K, V>) node).removeTreeNode(this, tab, movable);
                    // node是第一个节点
                } else if (node == p) {
                    // node后继节点赋值给tab[index]
                    tab[index] = node.next;
                    // node不是第一个节点
                } else {
                    // 将p后继节点设置为node的后继节点，即删除node元素
                    p.next = node.next;
                }
                // fast-fail机制
                ++modCount;
                // 数量减一
                --size;
                // LinkedHashMap使用
                afterNodeRemoval(node);
                return node;
            }
        }
        return null;
    }

    /**
     * 删除此映射中的所有映射
     * 此调用返回后，映射将为空
     */
    @Override
    public void clear() {
        Node<K, V>[] tab;
        modCount++;
        if ((tab = table) != null && size > 0) {
            size = 0;
            for (int i = 0; i < tab.length; ++i) {
                tab[i] = null;
            }
        }
    }

    /**
     *
     * 如果此映射将一个或多个键映射到指定的值，则返回true
     *
     * @param value 要测试其在该Map中的存在性的值
     * @return <tt>true</tt> 如果此映射将一个或多个键映射到指定的值，则返回为true
     */
    @Override
    public boolean containsValue(Object value) {
        Node<K, V>[] tab;
        V v;
        // 判断Map不为空，并且Map元素个数大于0
        if ((tab = table) != null && size > 0) {
            // 循环数组每一个散列
            for (int i = 0; i < tab.length; ++i) {
                // 循环散列中的每一个元素
                for (Node<K, V> e = tab[i]; e != null; e = e.next) {
                    if ((v = e.value) == value || (value != null && value.equals(v))) {
                        return true;
                    }
                }
            }
        }
        return false;
    }

    /**
     *
     * 返回此映射中包含的键的Set视图。集合由映射支持，因此对映射的更改将
     * 反映在集合中，反之亦然。如果在对集合进行迭代时修改了映射（除了通过
     * 迭代器自己的remove操作），则迭代的结果是未定义的。集合支持元素删除，
     * 它通过Iterator.remove、Set.remove、removeAll、retainAll、clear操作。
     * 它不支持add或addAll操作。
     *
     * @return a set view of the keys contained in this map
     */
    @Override
    public Set<K> keySet() {
        Set<K> ks = keySet;
        if (ks == null) {
            ks = new KeySet();
            keySet = ks;
        }
        return ks;
    }

    /**
     * KeySet内部类
     *
     */
    final class KeySet extends AbstractSet<K> {

        @Override
        public final int size() {
            return size;
        }

        @Override
        public final void clear() {
            this.clear();
        }

        @Override
        public final Iterator<K> iterator() {
            return new KeyIterator();
        }

        @Override
        public final boolean contains(Object o) {
            return containsKey(o);
        }

        @Override
        public final boolean remove(Object key) {
            return removeNode(hash(key), key, null, false, true) != null;
        }

        @Override
        public final Spliterator<K> spliterator() {
            return new KeySpliterator<>(MyHashMap.this, 0, -1, 0, 0);
        }

        @Override
        public final void forEach(Consumer<? super K> action) {
            Node<K, V>[] tab;
            if (action == null) {
                throw new NullPointerException();
            }
            if (size > 0 && (tab = table) != null) {
                int mc = modCount;
                for (int i = 0; i < tab.length; ++i) {
                    for (Node<K, V> e = tab[i]; e != null; e = e.next) {
                        action.accept(e.key);
                    }
                }
                if (modCount != mc) {
                    throw new ConcurrentModificationException();
                }
            }
        }
    }

    /**
     * 返回此映射中包含的值的Collection视图。集合由映射支持，因此对映射的更改将
     * 反映在集合中，反之亦然。如果在对集合进行迭代时修改了映射（除了通过迭代器自己的
     * remove操作），则迭代的结果是未定义的。集合支持元素删除，它通过Iterator.remove、
     * Set.remove、removeAll、retainAll、clear操作。它不支持add或addAll操作。
     *
     * @return a view of the values contained in this map
     */
    @Override
    public Collection<V> values() {
        Collection<V> vs = values;
        if (vs == null) {
            vs = new Values();
            values = vs;
        }
        return vs;
    }

    final class Values extends AbstractCollection<V> {
        @Override
        public final int size() {
            return size;
        }

        @Override
        public final void clear() {
            this.clear();
        }

        @Override
        public final Iterator<V> iterator() {
            return new ValueIterator();
        }

        @Override
        public final boolean contains(Object o) {
            return containsValue(o);
        }

        @Override
        public final Spliterator<V> spliterator() {
            return new ValueSpliterator<>(MyHashMap.this, 0, -1, 0, 0);
        }

        @Override
        public final void forEach(Consumer<? super V> action) {
            Node<K, V>[] tab;
            if (action == null) {
                throw new NullPointerException();
            }
            if (size > 0 && (tab = table) != null) {
                int mc = modCount;
                for (int i = 0; i < tab.length; ++i) {
                    for (Node<K, V> e = tab[i]; e != null; e = e.next) {
                        action.accept(e.value);
                    }
                }
                if (modCount != mc) {
                    throw new ConcurrentModificationException();
                }
            }
        }
    }

    /**
     *
     * 返回此映射中包含的映射的Set视图。集合由映射支持，因此对映射的更改将
     * 反映在集合中，反之亦然。如果在对集合进行迭代时修改了映射（除了通过
     * 迭代器自己的remove操作，或通过迭代器返回的映射项上的setValue操作），
     * 则迭代的结果是未定义的。set支持元素删除，通过Iterator.remove、set.remove、
     * removeAll、retainAll/tt>和clear操作。它不支持add或addAll操作。
     *
     *
     * @return a set view of the mappings contained in this map
     */
    @Override
    public Set<Entry<K, V>> entrySet() {
        Set<Entry<K, V>> es;
        return (es = entrySet) == null ? (entrySet = new EntrySet()) : es;
    }

    final class EntrySet extends AbstractSet<Entry<K, V>> {
        @Override
        public final int size() {
            return size;
        }

        @Override
        public final void clear() {
            this.clear();
        }

        @Override
        public final Iterator<Entry<K, V>> iterator() {
            return new EntryIterator();
        }

        @Override
        public final boolean contains(Object o) {
            if (!(o instanceof Map.Entry)) {
                return false;
            }
            Entry<?, ?> e = (Entry<?, ?>) o;
            Object key = e.getKey();
            Node<K, V> candidate = getNode(hash(key), key);
            return candidate != null && candidate.equals(e);
        }

        @Override
        public final boolean remove(Object o) {
            if (o instanceof Map.Entry) {
                Entry<?, ?> e = (Entry<?, ?>) o;
                Object key = e.getKey();
                Object value = e.getValue();
                return removeNode(hash(key), key, value, true, true) != null;
            }
            return false;
        }

        @Override
        public final Spliterator<Entry<K, V>> spliterator() {
            return new EntrySpliterator<>(MyHashMap.this, 0, -1, 0, 0);
        }

        @Override
        public final void forEach(Consumer<? super Entry<K, V>> action) {
            Node<K, V>[] tab;
            if (action == null) {
                throw new NullPointerException();
            }
            if (size > 0 && (tab = table) != null) {
                int mc = modCount;
                for (int i = 0; i < tab.length; ++i) {
                    for (Node<K, V> e = tab[i]; e != null; e = e.next) {
                        action.accept(e);
                    }
                }
                if (modCount != mc) {
                    throw new ConcurrentModificationException();
                }
            }
        }
    }

    // Overrides of JDK8 Map extension methods

    /**
     * 获取key的值，如果没有返回默认值
     *
     * @param key
     * @param defaultValue
     * @return
     */
    @Override
    public V getOrDefault(Object key, V defaultValue) {
        Node<K, V> e;
        return (e = getNode(hash(key), key)) == null ? defaultValue : e.value;
    }

    /**
     * 如果该链表中保存的有相同key的值，那么就不会对我们当前的value进行保存
     *
     * @param key
     * @param value
     * @return
     */
    @Override
    public V putIfAbsent(K key, V value) {
        return putVal(hash(key), key, value, true, true);
    }


    /**
     * 删除
     *
     * @param key
     * @param value
     * @return
     */
    @Override
    public boolean remove(Object key, Object value) {
        return removeNode(hash(key), key, value, true, true) != null;
    }

    /**
     * 替换
     *
     * @param key
     * @param oldValue
     * @param newValue
     * @return
     */
    @Override
    public boolean replace(K key, V oldValue, V newValue) {
        Node<K, V> e;
        V v;
        if ((e = getNode(hash(key), key)) != null &&
                ((v = e.value) == oldValue || (v != null && v.equals(oldValue)))) {
            e.value = newValue;
            afterNodeAccess(e);
            return true;
        }
        return false;
    }

    /**
     * 替换
     *
     * @param key
     * @param value
     * @return
     */
    @Override
    public V replace(K key, V value) {
        Node<K, V> e;
        if ((e = getNode(hash(key), key)) != null) {
            V oldValue = e.value;
            e.value = value;
            afterNodeAccess(e);
            return oldValue;
        }
        return null;
    }

    @Override
    public V computeIfAbsent(K key, Function<? super K, ? extends V> mappingFunction) {
        if (mappingFunction == null) {
            throw new NullPointerException();
        }
        int hash = hash(key);
        Node<K, V>[] tab;
        Node<K, V> first;
        int n, i;
        int binCount = 0;
        TreeNode<K, V> t = null;
        Node<K, V> old = null;
        if (size > threshold || (tab = table) == null ||
                (n = tab.length) == 0) {
            n = (tab = resize()).length;
        }
        if ((first = tab[i = (n - 1) & hash]) != null) {
            if (first instanceof TreeNode) {
                old = (t = (TreeNode<K, V>) first).getTreeNode(hash, key);
            } else {
                Node<K, V> e = first;
                K k;
                do {
                    if (e.hash == hash &&
                            ((k = e.key) == key || (key != null && key.equals(k)))) {
                        old = e;
                        break;
                    }
                    ++binCount;
                } while ((e = e.next) != null);
            }
            V oldValue;
            if (old != null && (oldValue = old.value) != null) {
                afterNodeAccess(old);
                return oldValue;
            }
        }
        V v = mappingFunction.apply(key);
        if (v == null) {
            return null;
        } else if (old != null) {
            old.value = v;
            afterNodeAccess(old);
            return v;
        } else if (t != null) {
            t.putTreeVal(this, tab, hash, key, v);
        } else {
            tab[i] = newNode(hash, key, v, first);
            if (binCount >= TREEIFY_THRESHOLD - 1) {
                treeifyBin(tab, hash);
            }
        }
        ++modCount;
        ++size;
        afterNodeInsertion(true);
        return v;
    }

    @Override
    public V computeIfPresent(K key, BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
        if (remappingFunction == null) {
            throw new NullPointerException();
        }
        Node<K, V> e;
        V oldValue;
        int hash = hash(key);
        if ((e = getNode(hash, key)) != null &&
                (oldValue = e.value) != null) {
            V v = remappingFunction.apply(key, oldValue);
            if (v != null) {
                e.value = v;
                afterNodeAccess(e);
                return v;
            } else {
                removeNode(hash, key, null, false, true);
            }
        }
        return null;
    }

    @Override
    public V compute(K key, BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
        if (remappingFunction == null) {
            throw new NullPointerException();
        }
        int hash = hash(key);
        Node<K, V>[] tab;
        Node<K, V> first;
        int n, i;
        int binCount = 0;
        TreeNode<K, V> t = null;
        Node<K, V> old = null;
        if (size > threshold || (tab = table) == null ||
                (n = tab.length) == 0) {
            n = (tab = resize()).length;
        }
        if ((first = tab[i = (n - 1) & hash]) != null) {
            if (first instanceof TreeNode) {
                old = (t = (TreeNode<K, V>) first).getTreeNode(hash, key);
            } else {
                Node<K, V> e = first;
                K k;
                do {
                    if (e.hash == hash &&
                            ((k = e.key) == key || (key != null && key.equals(k)))) {
                        old = e;
                        break;
                    }
                    ++binCount;
                } while ((e = e.next) != null);
            }
        }
        V oldValue = (old == null) ? null : old.value;
        V v = remappingFunction.apply(key, oldValue);
        if (old != null) {
            if (v != null) {
                old.value = v;
                afterNodeAccess(old);
            } else {
                removeNode(hash, key, null, false, true);
            }
        } else if (v != null) {
            if (t != null) {
                t.putTreeVal(this, tab, hash, key, v);
            } else {
                tab[i] = newNode(hash, key, v, first);
                if (binCount >= TREEIFY_THRESHOLD - 1) {
                    treeifyBin(tab, hash);
                }
            }
            ++modCount;
            ++size;
            afterNodeInsertion(true);
        }
        return v;
    }

    @Override
    public V merge(K key, V value, BiFunction<? super V, ? super V, ? extends V> remappingFunction) {
        if (value == null) {
            throw new NullPointerException();
        }
        if (remappingFunction == null) {
            throw new NullPointerException();
        }
        int hash = hash(key);
        Node<K, V>[] tab;
        Node<K, V> first;
        int n, i;
        int binCount = 0;
        TreeNode<K, V> t = null;
        Node<K, V> old = null;
        if (size > threshold || (tab = table) == null ||
                (n = tab.length) == 0) {
            n = (tab = resize()).length;
        }
        if ((first = tab[i = (n - 1) & hash]) != null) {
            if (first instanceof TreeNode) {
                old = (t = (TreeNode<K, V>) first).getTreeNode(hash, key);
            } else {
                Node<K, V> e = first;
                K k;
                do {
                    if (e.hash == hash &&
                            ((k = e.key) == key || (key != null && key.equals(k)))) {
                        old = e;
                        break;
                    }
                    ++binCount;
                } while ((e = e.next) != null);
            }
        }
        if (old != null) {
            V v;
            if (old.value != null) {
                v = remappingFunction.apply(old.value, value);
            } else {
                v = value;
            }
            if (v != null) {
                old.value = v;
                afterNodeAccess(old);
            } else {
                removeNode(hash, key, null, false, true);
            }
            return v;
        }
        if (value != null) {
            if (t != null) {
                t.putTreeVal(this, tab, hash, key, value);
            } else {
                tab[i] = newNode(hash, key, value, first);
                if (binCount >= TREEIFY_THRESHOLD - 1) {
                    treeifyBin(tab, hash);
                }
            }
            ++modCount;
            ++size;
            afterNodeInsertion(true);
        }
        return value;
    }

    @Override
    public void forEach(BiConsumer<? super K, ? super V> action) {
        Node<K, V>[] tab;
        if (action == null) {
            throw new NullPointerException();
        }
        if (size > 0 && (tab = table) != null) {
            int mc = modCount;
            for (int i = 0; i < tab.length; ++i) {
                for (Node<K, V> e = tab[i]; e != null; e = e.next) {
                    action.accept(e.key, e.value);
                }
            }
            if (modCount != mc) {
                throw new ConcurrentModificationException();
            }
        }
    }

    @Override
    public void replaceAll(BiFunction<? super K, ? super V, ? extends V> function) {
        Node<K, V>[] tab;
        if (function == null) {
            throw new NullPointerException();
        }
        if (size > 0 && (tab = table) != null) {
            int mc = modCount;
            for (int i = 0; i < tab.length; ++i) {
                for (Node<K, V> e = tab[i]; e != null; e = e.next) {
                    e.value = function.apply(e.key, e.value);
                }
            }
            if (modCount != mc) {
                throw new ConcurrentModificationException();
            }
        }
    }

    /* ------------------------------------------------------------ */
    // Cloning and serialization

    /**
     * Returns a shallow copy of this <tt>HashMap</tt> instance: the keys and
     * values themselves are not cloned.
     *
     * @return a shallow copy of this map
     */
    @SuppressWarnings("unchecked")
    @Override
    public Object clone() {
        MyHashMap<K, V> result;
        try {
            result = (MyHashMap<K, V>) super.clone();
        } catch (CloneNotSupportedException e) {
            // this shouldn't happen, since we are Cloneable
            throw new InternalError(e);
        }
        result.reinitialize();
        result.putMapEntries(this, false);
        return result;
    }

    // These methods are also used when serializing HashSets
    final float loadFactor() {
        return loadFactor;
    }

    final int capacity() {
        return (table != null) ? table.length :
                (threshold > 0) ? threshold :
                        DEFAULT_INITIAL_CAPACITY;
    }

    /**
     * Save the state of the <tt>HashMap</tt> instance to a stream (i.e.,
     * serialize it).
     *
     * @serialData The <i>capacity</i> of the HashMap (the length of the
     * bucket array) is emitted (int), followed by the
     * <i>size</i> (an int, the number of key-value
     * mappings), followed by the key (Object) and value (Object)
     * for each key-value mapping.  The key-value mappings are
     * emitted in no particular order.
     */
    private void writeObject(java.io.ObjectOutputStream s)
            throws IOException {
        int buckets = capacity();
        // Write out the threshold, loadfactor, and any hidden stuff
        s.defaultWriteObject();
        s.writeInt(buckets);
        s.writeInt(size);
        internalWriteEntries(s);
    }

    /**
     * Reconstitute the {@code HashMap} instance from a stream (i.e.,
     * deserialize it).
     */
    private void readObject(java.io.ObjectInputStream s)
            throws IOException, ClassNotFoundException {
        // Read in the threshold (ignored), loadfactor, and any hidden stuff
        s.defaultReadObject();
        reinitialize();
        if (loadFactor <= 0 || Float.isNaN(loadFactor)) {
            throw new InvalidObjectException("Illegal load factor: " +
                    loadFactor);
        }
        s.readInt();                // Read and ignore number of buckets
        int mappings = s.readInt(); // Read number of mappings (size)
        if (mappings < 0) {
            throw new InvalidObjectException("Illegal mappings count: " +
                    mappings);
        } else if (mappings > 0) { // (if zero, use defaults)
            // Size the table using given load factor only if within
            // range of 0.25...4.0
            float lf = Math.min(Math.max(0.25f, loadFactor), 4.0f);
            float fc = (float) mappings / lf + 1.0f;
            int cap = ((fc < DEFAULT_INITIAL_CAPACITY) ?
                    DEFAULT_INITIAL_CAPACITY :
                    (fc >= MAXIMUM_CAPACITY) ?
                            MAXIMUM_CAPACITY :
                            tableSizeFor((int) fc));
            float ft = (float) cap * lf;
            threshold = ((cap < MAXIMUM_CAPACITY && ft < MAXIMUM_CAPACITY) ?
                    (int) ft : Integer.MAX_VALUE);
            @SuppressWarnings({"rawtypes", "unchecked"})
            Node<K, V>[] tab = (Node<K, V>[]) new Node[cap];
            table = tab;

            // Read the keys and values, and put the mappings in the HashMap
            for (int i = 0; i < mappings; i++) {
                @SuppressWarnings("unchecked")
                K key = (K) s.readObject();
                @SuppressWarnings("unchecked")
                V value = (V) s.readObject();
                putVal(hash(key), key, value, false, false);
            }
        }
    }

    /* ------------------------------------------------------------ */
    // iterators

    abstract class HashIterator {
        Node<K, V> next;        // next entry to return
        Node<K, V> current;     // current entry
        int expectedModCount;  // for fast-fail
        int index;             // current slot

        HashIterator() {
            expectedModCount = modCount;
            Node<K, V>[] t = table;
            current = next = null;
            index = 0;
            if (t != null && size > 0) { // advance to first entry
                do {
                } while (index < t.length && (next = t[index++]) == null);
            }
        }

        public final boolean hasNext() {
            return next != null;
        }

        final Node<K, V> nextNode() {
            Node<K, V>[] t;
            Node<K, V> e = next;
            if (modCount != expectedModCount) {
                throw new ConcurrentModificationException();
            }
            if (e == null) {
                throw new NoSuchElementException();
            }
            if ((next = (current = e).next) == null && (t = table) != null) {
                do {
                } while (index < t.length && (next = t[index++]) == null);
            }
            return e;
        }

        public final void remove() {
            Node<K, V> p = current;
            if (p == null) {
                throw new IllegalStateException();
            }
            if (modCount != expectedModCount) {
                throw new ConcurrentModificationException();
            }
            current = null;
            K key = p.key;
            removeNode(hash(key), key, null, false, false);
            expectedModCount = modCount;
        }
    }

    final class KeyIterator extends HashIterator
            implements Iterator<K> {
        @Override
        public final K next() {
            return nextNode().key;
        }
    }

    final class ValueIterator extends HashIterator
            implements Iterator<V> {
        @Override
        public final V next() {
            return nextNode().value;
        }
    }

    final class EntryIterator extends HashIterator
            implements Iterator<Entry<K, V>> {
        @Override
        public final Entry<K, V> next() {
            return nextNode();
        }
    }

    /* ------------------------------------------------------------ */
    // spliterators

    static class HashMapSpliterator<K, V> {
        final MyHashMap<K, V> map;
        Node<K, V> current;          // current node
        int index;                  // current index, modified on advance/split
        int fence;                  // one past last index
        int est;                    // size estimate
        int expectedModCount;       // for comodification checks

        HashMapSpliterator(MyHashMap<K, V> m, int origin, int fence, int est, int expectedModCount) {
            this.map = m;
            this.index = origin;
            this.fence = fence;
            this.est = est;
            this.expectedModCount = expectedModCount;
        }

        final int getFence() { // initialize fence and size on first use
            int hi;
            if ((hi = fence) < 0) {
                MyHashMap<K, V> m = map;
                est = m.size;
                expectedModCount = m.modCount;
                Node<K, V>[] tab = m.table;
                hi = fence = (tab == null) ? 0 : tab.length;
            }
            return hi;
        }

        public final long estimateSize() {
            getFence(); // force init
            return (long) est;
        }
    }

    static final class KeySpliterator<K, V> extends HashMapSpliterator<K, V> implements Spliterator<K> {
        KeySpliterator(MyHashMap<K, V> m, int origin, int fence, int est,
                       int expectedModCount) {
            super(m, origin, fence, est, expectedModCount);
        }

        @Override
        public KeySpliterator<K, V> trySplit() {
            int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;
            return (lo >= mid || current != null) ? null :
                    new KeySpliterator<>(map, lo, index = mid, est >>>= 1,
                            expectedModCount);
        }

        @Override
        public void forEachRemaining(Consumer<? super K> action) {
            int i, hi, mc;
            if (action == null) {
                throw new NullPointerException();
            }
            MyHashMap<K, V> m = map;
            Node<K, V>[] tab = m.table;
            if ((hi = fence) < 0) {
                mc = expectedModCount = m.modCount;
                hi = fence = (tab == null) ? 0 : tab.length;
            } else {
                mc = expectedModCount;
            }
            if (tab != null && tab.length >= hi &&
                    (i = index) >= 0 && (i < (index = hi) || current != null)) {
                Node<K, V> p = current;
                current = null;
                do {
                    if (p == null) {
                        p = tab[i++];
                    } else {
                        action.accept(p.key);
                        p = p.next;
                    }
                } while (p != null || i < hi);
                if (m.modCount != mc) {
                    throw new ConcurrentModificationException();
                }
            }
        }

        @Override
        public boolean tryAdvance(Consumer<? super K> action) {
            int hi;
            if (action == null) {
                throw new NullPointerException();
            }
            Node<K, V>[] tab = map.table;
            if (tab != null && tab.length >= (hi = getFence()) && index >= 0) {
                while (current != null || index < hi) {
                    if (current == null) {
                        current = tab[index++];
                    } else {
                        K k = current.key;
                        current = current.next;
                        action.accept(k);
                        if (map.modCount != expectedModCount) {
                            throw new ConcurrentModificationException();
                        }
                        return true;
                    }
                }
            }
            return false;
        }

        @Override
        public int characteristics() {
            return (fence < 0 || est == map.size ? Spliterator.SIZED : 0) |
                    Spliterator.DISTINCT;
        }
    }

    static final class ValueSpliterator<K, V> extends HashMapSpliterator<K, V> implements Spliterator<V> {
        ValueSpliterator(MyHashMap<K, V> m, int origin, int fence, int est,
                         int expectedModCount) {
            super(m, origin, fence, est, expectedModCount);
        }

        @Override
        public ValueSpliterator<K, V> trySplit() {
            int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;
            return (lo >= mid || current != null) ? null :
                    new ValueSpliterator<>(map, lo, index = mid, est >>>= 1,
                            expectedModCount);
        }

        @Override
        public void forEachRemaining(Consumer<? super V> action) {
            int i, hi, mc;
            if (action == null) {
                throw new NullPointerException();
            }
            MyHashMap<K, V> m = map;
            Node<K, V>[] tab = m.table;
            if ((hi = fence) < 0) {
                mc = expectedModCount = m.modCount;
                hi = fence = (tab == null) ? 0 : tab.length;
            } else {
                mc = expectedModCount;
            }
            if (tab != null && tab.length >= hi &&
                    (i = index) >= 0 && (i < (index = hi) || current != null)) {
                Node<K, V> p = current;
                current = null;
                do {
                    if (p == null) {
                        p = tab[i++];
                    } else {
                        action.accept(p.value);
                        p = p.next;
                    }
                } while (p != null || i < hi);
                if (m.modCount != mc) {
                    throw new ConcurrentModificationException();
                }
            }
        }

        @Override
        public boolean tryAdvance(Consumer<? super V> action) {
            int hi;
            if (action == null) {
                throw new NullPointerException();
            }
            Node<K, V>[] tab = map.table;
            if (tab != null && tab.length >= (hi = getFence()) && index >= 0) {
                while (current != null || index < hi) {
                    if (current == null) {
                        current = tab[index++];
                    } else {
                        V v = current.value;
                        current = current.next;
                        action.accept(v);
                        if (map.modCount != expectedModCount) {
                            throw new ConcurrentModificationException();
                        }
                        return true;
                    }
                }
            }
            return false;
        }

        @Override
        public int characteristics() {
            return (fence < 0 || est == map.size ? Spliterator.SIZED : 0);
        }
    }

    static final class EntrySpliterator<K, V> extends HashMapSpliterator<K, V> implements Spliterator<Entry<K, V>> {
        EntrySpliterator(MyHashMap<K, V> m, int origin, int fence, int est, int expectedModCount) {
            super(m, origin, fence, est, expectedModCount);
        }

        @Override
        public EntrySpliterator<K, V> trySplit() {
            int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;
            return (lo >= mid || current != null) ? null :
                    new EntrySpliterator<>(map, lo, index = mid, est >>>= 1,
                            expectedModCount);
        }

        @Override
        public void forEachRemaining(Consumer<? super Entry<K, V>> action) {
            int i, hi, mc;
            if (action == null) {
                throw new NullPointerException();
            }
            MyHashMap<K, V> m = map;
            Node<K, V>[] tab = m.table;
            if ((hi = fence) < 0) {
                mc = expectedModCount = m.modCount;
                hi = fence = (tab == null) ? 0 : tab.length;
            } else {
                mc = expectedModCount;
            }
            if (tab != null && tab.length >= hi &&
                    (i = index) >= 0 && (i < (index = hi) || current != null)) {
                Node<K, V> p = current;
                current = null;
                do {
                    if (p == null) {
                        p = tab[i++];
                    } else {
                        action.accept(p);
                        p = p.next;
                    }
                } while (p != null || i < hi);
                if (m.modCount != mc) {
                    throw new ConcurrentModificationException();
                }
            }
        }

        @Override
        public boolean tryAdvance(Consumer<? super Entry<K, V>> action) {
            int hi;
            if (action == null) {
                throw new NullPointerException();
            }
            Node<K, V>[] tab = map.table;
            if (tab != null && tab.length >= (hi = getFence()) && index >= 0) {
                while (current != null || index < hi) {
                    if (current == null) {
                        current = tab[index++];
                    } else {
                        Node<K, V> e = current;
                        current = current.next;
                        action.accept(e);
                        if (map.modCount != expectedModCount) {
                            throw new ConcurrentModificationException();
                        }
                        return true;
                    }
                }
            }
            return false;
        }

        @Override
        public int characteristics() {
            return (fence < 0 || est == map.size ? Spliterator.SIZED : 0) |
                    Spliterator.DISTINCT;
        }
    }

    /* ------------------------------------------------------------ */
    // LinkedHashMap support


    /*
     * The following package-protected methods are designed to be
     * overridden by LinkedHashMap, but not by any other subclass.
     * Nearly all other internal methods are also package-protected
     * but are declared final, so can be used by LinkedHashMap, view
     * classes, and HashSet.
     */

    // Create a regular (non-tree) node
    Node<K, V> newNode(int hash, K key, V value, Node<K, V> next) {
        return new Node<>(hash, key, value, next);
    }

    // For conversion from TreeNodes to plain nodes
    Node<K, V> replacementNode(Node<K, V> p, Node<K, V> next) {
        return new Node<>(p.hash, p.key, p.value, next);
    }

    // Create a tree bin node
    TreeNode<K, V> newTreeNode(int hash, K key, V value, Node<K, V> next) {
        return new TreeNode<>(hash, key, value, next);
    }

    // For treeifyBin
    TreeNode<K, V> replacementTreeNode(Node<K, V> p, Node<K, V> next) {
        return new TreeNode<>(p.hash, p.key, p.value, next);
    }

    /**
     * Reset to initial default state.  Called by clone and readObject.
     */
    void reinitialize() {
        table = null;
        entrySet = null;
        keySet = null;
        values = null;
        modCount = 0;
        threshold = 0;
        size = 0;
    }

    // Callbacks to allow LinkedHashMap post-actions
    void afterNodeAccess(Node<K, V> p) {
    }

    void afterNodeInsertion(boolean evict) {
    }

    void afterNodeRemoval(Node<K, V> p) {
    }

    // Called only from writeObject, to ensure compatible ordering.
    void internalWriteEntries(java.io.ObjectOutputStream s) throws IOException {
        Node<K, V>[] tab;
        if (size > 0 && (tab = table) != null) {
            for (int i = 0; i < tab.length; ++i) {
                for (Node<K, V> e = tab[i]; e != null; e = e.next) {
                    s.writeObject(e.key);
                    s.writeObject(e.value);
                }
            }
        }
    }

    /* ------------------------------------------------------------ */
    // 树结构 bins

    /**
     * Entry for Tree bins. Extends LinkedHashMap.Entry (which in turn
     * extends Node) so can be used as extension of either regular or
     * linked node.
     * 树结构bins的Entry 。延伸LinkedHashMap.Entry（进而扩展节点）
     * 所以可以用作常规节点或链接节点的扩展
     */
    static final class TreeNode<K, V> extends MyLinkedHashMap.Entry<K, V> {
        // 红黑树链
        // 父节点
        TreeNode<K, V> parent;
        // 左节点
        TreeNode<K, V> left;
        // 右节点
        TreeNode<K, V> right;
        // 删除后需要取消链接
        TreeNode<K, V> prev;
        // 是否是红节点
        boolean red;

        TreeNode(int hash, K key, V val, Node<K, V> next) {
            super(hash, key, val, next);
        }

        /**
         * 返回包含此节点的树的根
         */
        final TreeNode<K, V> root() {
            // 循环获得根节点
            for (TreeNode<K, V> r = this, p; ; ) {
                if ((p = r.parent) == null) {
                    return r;
                }
                r = p;
            }
        }

        /**
         * 确保给定的根是其bin的第一个节点
         * （把红黑树的根节点设为其所在的数组槽的第一个元素的方法）
         */
        static <K, V> void moveRootToFront(Node<K, V>[] tab, TreeNode<K, V> root) {
            int n;
            // 根节点不为空，tab数组不为空，并且有节点
            if (root != null && tab != null && (n = tab.length) > 0) {
                // 获取根节点在数组中的位置
                int index = (n - 1) & root.hash;
                // 获取数组
                TreeNode<K, V> first = (TreeNode<K, V>) tab[index];
                if (root != first) {
                    Node<K, V> rn;
                    tab[index] = root;
                    TreeNode<K, V> rp = root.prev;
                    if ((rn = root.next) != null) {
                        ((TreeNode<K, V>) rn).prev = rp;
                    }
                    if (rp != null) {
                        rp.next = rn;
                    }
                    if (first != null) {
                        first.prev = root;
                    }
                    root.next = first;
                    root.prev = null;
                }
                assert checkInvariants(root);
            }
        }


        /**
         * 使用给定的hash和key查找从根 p 开始的节点。
         * kc参数在第一次使用比较键时缓存comparableClassFor（key）。
         *
         * @param h  hash值
         * @param k  key
         * @param kc
         * @return
         */
        final TreeNode<K, V> find(int h, Object k, Class<?> kc) {
            // 将p节点赋值为调用此方法的节点，即为红黑树根节点
            TreeNode<K, V> p = this;
            // 从p节点开始向下遍历
            do {
                // ph-->p的hash，dir-->k.compareTo(pk)
                int ph, dir;
                // pk-->p的key
                K pk;
                // 获取p的左节点声明为pl，获取p的右节点声明为pr
                TreeNode<K, V> pl = p.left, pr = p.right, q;
                //  p节点的hash赋值给ph，并且判断ph是否大于h
                if ((ph = p.hash) > h) {
                    // 将pl重新赋值为p
                    p = pl;
                    // 判断ph是小于h
                } else if (ph < h) {
                    // 将pr重新赋值为p
                    p = pr;
                    // 获取p节点的key赋值给pk，并且判断k是否等于pk
                } else if ((pk = p.key) == k || (k != null && k.equals(pk))) {
                    // 返回p
                    return p;
                    // pl为空，即p没有左节点
                } else if (pl == null) {
                    // 将pr赋值给p
                    p = pr;
                    // pr为空，即p没有右节点
                } else if (pr == null) {
                    // 将pl赋值给p
                    p = pl;
                    // 将p节点与k进行比较，判断红黑树的下一个节点寻找方向
                } else if ((kc != null || (kc = comparableClassFor(k)) != null) &&  // kc不为空，或者k实现了Comparable接口；
                        // 并且k与pk比较不等于0，即p与pk不相等
                        (dir = compareComparables(kc, k, pk)) != 0) {
                    // dir < 0时，pl赋值p，反之pr赋值p
                    p = (dir < 0) ? pl : pr;
                    // 此处代码代表key所属类没有实现Comparable, 直接指定向p的右边遍历，并且把右遍历的值给q
                } else if ((q = pr.find(h, k, kc)) != null) {
                    // 返回q
                    return q;
                } else {
                    // 上述都不成立，将pl赋值给p
                    p = pl;
                }
                // p不为空继续循环
            } while (p != null);

            // 寻找不到返回空
            return null;
        }


        /**
         * 为根节点（树结构）的查找
         *
         * @param h hash值
         * @param k key
         * @return
         */
        final TreeNode<K, V> getTreeNode(int h, Object k) {
            // 父节点不为空先调用root()，获取根节点，再调用find()；父节点为空调用find()
            return ((parent != null) ? root() : this).find(h, k, null);
        }

        /**
         * 当哈希码相等且不可比较时，用于排序插入的Tie breaking实用程序。我们不需要总的
         * 顺序，只需要一个一致的插入规则来保持在重新平衡中的等价性。
         * 比必要的更进一步的断开连接简化了测试。
         */
        static int tieBreakOrder(Object a, Object b) {
            int d;
            if (a == null || // a等于null
                    b == null || // 或者 a等于null
                    (d = a.getClass().getName().compareTo(b.getClass().getName())) == 0)  // 或者类名称相等
            {
                // 规则如下
                d = (System.identityHashCode(a) <= System.identityHashCode(b) ? -1 : 1);
            }
            return d;
        }

        /**
         * 将散列构建成红黑树
         *
         * @return root of tree
         */
        final void treeify(Node<K, V>[] tab) {
            // 声明一个根节点
            TreeNode<K, V> root = null;
            // 循环散列转换成红黑树
            for (TreeNode<K, V> x = this, next; x != null; x = next) { //声明x，并赋值当前节点，声明next；x不为空；x赋值成next
                // next赋值为x下一个节点
                next = (TreeNode<K, V>) x.next;
                // x的左右节点设置为空
                x.left = x.right = null;
                // 根节点为空，将x设置为根节点
                if (root == null) {
                    x.parent = null;
                    x.red = false;
                    root = x;
                } else { // 根节点不为空
                    // k赋值为x的key
                    K k = x.key;
                    // h赋值为x的hash值
                    int h = x.hash;
                    Class<?> kc = null;
                    // 从根节点开始查找属于该节点的位置
                    for (TreeNode<K, V> p = root; ; ) {
                        // dir->ph与h比较值,ph-> p.hash
                        int dir, ph;
                        // pk-> p.key
                        K pk = p.key;
                        // p.hash赋值给ph，并且判断是否大于h
                        if ((ph = p.hash) > h) {
                            dir = -1;
                        } else if (ph < h) { // 小于h
                            dir = 1;
                            // 哈希值相等，k没有实现comparable接口 或者 compareComparables()等于0
                        } else if ((kc == null &&
                                (kc = comparableClassFor(k)) == null) ||
                                (dir = compareComparables(kc, k, pk)) == 0) {
                            // 通过tieBreakOrder规则确定dir
                            dir = tieBreakOrder(k, pk);
                        }
                        // xp赋值为x的父节点
                        TreeNode<K, V> xp = p;
                        // dir<=0获取p的左节点，获取p的右节点，并且把节点赋值给p，如果p为空，则代表该位置即为x的目标位置
                        if ((p = (dir <= 0) ? p.left : p.right) == null) {
                            // x的父节点设置为xp
                            x.parent = xp;
                            // 如果时dir <= 0, 则代表x节点为父节点的左节点
                            if (dir <= 0) {
                                xp.left = x;
                            } else {  // 如果时dir > 0, 则代表x节点为父节点的右节点
                                xp.right = x;
                            }
                            // 进行红黑树的插入平衡(通过左旋、右旋和改变节点颜色来保证当前树符合红黑树的要求)
                            root = balanceInsertion(root, x);
                            break;
                        }
                    }
                }
            }
            // 如果root节点不在table索引位置的头节点, 则将其调整为头节点
            moveRootToFront(tab, root);
        }

        /**
         * 将树结构转换成散列结构
         * @param map
         * @return
         */
        final Node<K, V> untreeify(MyHashMap<K, V> map) {
            Node<K, V> hd = null, tl = null;
            // 循环将每一个TreeNode转换成Node
            for (Node<K, V> q = this; q != null; q = q.next) {
                Node<K, V> p = map.replacementNode(q, null);
                if (tl == null) {
                    hd = p;
                } else {
                    tl.next = p;
                }
                tl = p;
            }
            return hd;
        }

        /**
         * putVal的树结构板
         *
         * @param map
         * @param tab
         * @param h
         * @param k
         * @param v
         * @return
         */
        final TreeNode<K, V> putTreeVal(MyHashMap<K, V> map, Node<K, V>[] tab,
                                        int h, K k, V v) {
            Class<?> kc = null;
            boolean searched = false;
            // 获取红黑树根节点
            TreeNode<K, V> root = (parent != null) ? root() : this;
            // 将p节点设置为root节点，循环
            for (TreeNode<K, V> p = root; ; ) {
                int dir, ph;
                K pk;
                // 将p的哈希赋值给ph，并判断是否大于h
                if ((ph = p.hash) > h) {
                    dir = -1;
                    // ph小于h
                } else if (ph < h) {
                    dir = 1;
                    // ph等于h，并且pk等于k
                } else if ((pk = p.key) == k || (k != null && k.equals(pk))) {
                    return p;
                    // ph等于h，k没有实现Comparable接口 或者 k和p节点的key相等
                } else if ((kc == null && (kc = comparableClassFor(k)) == null) ||
                        (dir = compareComparables(kc, k, pk)) == 0) {
                    // searched等于false
                    if (!searched) {
                        TreeNode<K, V> q, ch;
                        // 将searched设置为true
                        searched = true;
                        // 将p左节点设置为ch，并且判断ch不为空；从ch节点开始查找k元素赋值给q，并且判断q不为空
                        if (((ch = p.left) != null && (q = ch.find(h, k, kc)) != null) ||
                                // 将p右节点设置为ch，并且判断ch不为空；从ch节点开始查找k元素赋值给q，并且判断q不为空
                                ((ch = p.right) != null && (q = ch.find(h, k, kc)) != null)) {
                            // 返回q
                            return q;
                        }
                    }
                    // 按一定规则返回遍历方向
                    dir = tieBreakOrder(k, pk);
                }
                // 将p赋值给xp
                TreeNode<K, V> xp = p;
                // dir判断左右节点赋值给p，判断p是否为空
                if ((p = (dir <= 0) ? p.left : p.right) == null) {
                    // xp的下一个节点赋值给xpn
                    Node<K, V> xpn = xp.next;
                    // 新建一个TreeNode节点
                    TreeNode<K, V> x = map.newTreeNode(h, k, v, xpn);
                    // dir小于0时，将xp的左节点设置为x
                    if (dir <= 0) {
                        xp.left = x;
                    } else {
                        // 否则将xp的右节点设置为x
                        xp.right = x;
                    }
                    // xp的下一个节点设置为x
                    xp.next = x;
                    // 将xp赋值给x的前继节点，又赋值给x的父节点
                    x.parent = x.prev = xp;
                    // xpn不为空
                    if (xpn != null) {
                        // 将xpn的前继节点设置为x
                        ((TreeNode<K, V>) xpn).prev = x;
                    }
                    // 进行红黑树的插入平衡调整
                    moveRootToFront(tab, balanceInsertion(root, x));
                    return null;
                }
            }
        }

        /**
         *  todo 有点懵逼 2020-11-05
         *
         * 移除此调用之前必须存在的给定节点。这比典型的红黑删除代码更混乱，
         * 因为我们无法将内部节点的内容与叶后续节点的内容交换，该叶后续节点由可在遍历期间
         * 独立访问的"next"指针固定，所以我们交换树状连接。如果当前树的节点太少，
         * 则将bin转换回普通bin。（测试触发在2到6个节点之间，具体取决于树结构）。
         *
         * @param map
         * @param tab
         * @param movable
         */
        final void removeTreeNode(MyHashMap<K, V> map, Node<K, V>[] tab, boolean movable) {
            // 散列的处理
            int n;
            // tab等于null || tab长度赋值给n，并且等于0
            if (tab == null || (n = tab.length) == 0) {
                return;
            }
            // 获取数组中索引位置
            int index = (n - 1) & hash;
            // tab[index]赋值给first，first赋值给root，声明rl
            TreeNode<K, V> first = (TreeNode<K, V>) tab[index], root = first, rl;
            // 将node（要移除节点）的next节点赋值给succ，prev节点赋值给pred
            TreeNode<K, V> succ = (TreeNode<K, V>) next, pred = prev;
            // pred节点为空，说明要被移除的节点是头节点
            if (pred == null) {
                // 将succ节点赋值给first，first再赋值给tab[index]
                tab[index] = first = succ;
            } else {
                // 将pred节点的next属性设置为succ节点，即剔除node节点与pred节点的关联关系
                pred.next = succ;
            }
            // 如果succ节点不为空
            if (succ != null) {
                // succ的前继属性设置为pred，即剔除node节点与pred节点的关联关系
                succ.prev = pred;
            }
            // first 为空返回
            if (first == null) {
                return;
            }
            // root的父节点不为空，获取根节点
            if (root.parent != null) {
                root = root.root();
            }
            // root等于null ||
            if (root == null ||
                    // root的右子节点为空 ||
                    root.right == null ||
                    // root的左子节点为空，并赋值给rl ||
                    (rl = root.left) == null ||
                    // rl 左子节点为空
                    rl.left == null) {
                // 红黑树太小转为链表
                tab[index] = first.untreeify(map);  // too small
                return;
            }
            // 红黑树结构的处理
            TreeNode<K, V> p = this, pl = left, pr = right, replacement;
            // 左节点和右节点都不为空
            if (pl != null && pr != null) {
                // 将s节点赋值为p的右节点，声明sl
                TreeNode<K, V> s = pr, sl;
                // 向左一直寻找，找到最低端
                while ((sl = s.left) != null) { // find successor
                    s = sl;
                }
                // 交换p节点和s节点的颜色
                boolean c = s.red;
                s.red = p.red;
                p.red = c;
                // 获取s的右节点
                TreeNode<K, V> sr = s.right;
                // 获取p的父节点
                TreeNode<K, V> pp = p.parent;
                // s是p节点的右节点，p是s的直系父母，则s与p互换位置
                if (s == pr) {
                    // 将p的父节点设置s
                    p.parent = s;
                    // 将s的右节点设置为p
                    s.right = p;
                } else {
                    // 将s的父节点设置为sp
                    TreeNode<K, V> sp = s.parent;
                    // 将sp赋值给p的父节点，并且判断不为空
                    if ((p.parent = sp) != null) {
                        // 如果s为sp的左节点
                        if (s == sp.left) {
                            // 将sp的左节点设置为p
                            sp.left = p;
                        } else {
                            // 将sp的右节点设置为p
                            sp.right = p;
                        }
                    }
                    // 将pr赋值给s的右节点，并判断不为空
                    if ((s.right = pr) != null) {
                        // 将pr的父节点赋值为s
                        pr.parent = s;
                    }
                }
                // 将p的左节点设为null
                p.left = null;
                // 将p的右节点设置为sr，并且判断是否为空
                if ((p.right = sr) != null) {
                    // sr的父节点设置为p
                    sr.parent = p;
                }
                // 将s的左节点设置为pl，并且判断是否为空
                if ((s.left = pl) != null) {
                    // 将pl的父节点设置为s
                    pl.parent = s;
                }
                // 将s父节点设置为pp，并且判断是否为空
                if ((s.parent = pp) == null) {
                    // 将root设置为s
                    root = s;
                    // pp不为空，并且pp的左节点是p
                } else if (p == pp.left) {
                    // 将pp的左节点设置为s
                    pp.left = s;
                } else {
                    // 否则将pp的右节点设置为s
                    pp.right = s;
                }
                // sr不为空
                if (sr != null) {
                    // 将replacement设置为sr
                    replacement = sr;
                } else {
                    // 否则将replacement设置为p
                    replacement = p;
                }
                //    pl不为空
            } else if (pl != null) {
                // 将replacement设置为pl
                replacement = pl;
                //    pr不为空
            } else if (pr != null) {
                // 将replacement设置为pr
                replacement = pr;
            } else {
                // 将replacement设置为p
                replacement = p;
            }
            // replacement不等于p
            if (replacement != p) {
                // 将p的父节点赋值给replacement的父节点，然后在赋值给pp
                TreeNode<K, V> pp = replacement.parent = p.parent;
                // pp等于空
                if (pp == null) {
                    // 将replacement赋值给root
                    root = replacement;
                    // pp不等于空，pp的左节点是p
                } else if (p == pp.left) {
                    // 将pp的左节点设为replacement
                    pp.left = replacement;
                } else {
                    // 将pp的右节点设为replacement
                    pp.right = replacement;
                }
                // 将p的父节点，右节点，左节点设置为null
                p.left = p.right = p.parent = null;
            }
            // 如果p节点不为红色，需要进行红黑树删除平衡调整
            TreeNode<K, V> r = p.red ? root : balanceDeletion(root, replacement);

            // replacement等于p
            if (replacement == p) {
                // 将p的父节点设置为pp
                TreeNode<K, V> pp = p.parent;
                // 将pp设置为空
                p.parent = null;
                // pp 不为空
                if (pp != null) {
                    // p是pp的左节点
                    if (p == pp.left) {
                        // 将pp左节点设为空
                        pp.left = null;
                        // p是pp的右节点
                    } else if (p == pp.right) {
                        // 将pp右节点设为空
                        pp.right = null;
                    }
                }
            }
            // 确保给定的根是其bin的第一个节点
            if (movable) {
                moveRootToFront(tab, r);
            }
        }

        /**
         * 将树结构bin中的节点拆分为较低和较高的树结构bin，
         * 如果现在太小，则取消拆分。只调用resize；
         * 请参阅上面关于拆分位和索引的讨论
         *
         * @param map   the map
         * @param tab   the table for recording bin heads
         * @param index the index of the table being split
         * @param bit   要分割的散列的容量（oldCap）
         */
        final void split(MyHashMap<K, V> map, Node<K, V>[] tab, int index, int bit) {
            // 获取当前节点
            TreeNode<K, V> b = this;
            // 重新链接到lo和hi列表，保持顺序
            // 存储索引位置为:"原索引位置"的节点
            TreeNode<K, V> loHead = null, loTail = null;
            // 存储索引位置为:"原索引位置+oldCap"的节点
            TreeNode<K, V> hiHead = null, hiTail = null;
            // 原索引位置节点个数，"原索引位置+oldCap"位置的节点个数
            int lc = 0, hc = 0;
            // 循环遍历整个红黑树节点
            for (TreeNode<K, V> e = b, next; e != null; e = next) { // 声明树节点e，并赋值b，声明节点next；e不为空循环；e重新赋值为next
                // next赋值为e的下一个节点
                next = (TreeNode<K, V>) e.next;
                // 把e的下一个节点设置为空
                e.next = null;
                // 与运算为0，扩容后的索引位置跟老表的索引位置一样
                if ((e.hash & bit) == 0) {
                    // loTail设置为e节点前继节点，如果loTail为空，代表该节点为第一个节点
                    if ((e.prev = loTail) == null) {
                        // 将e设置为第一个节点
                        loHead = e;
                    } else { // loTail不为空
                        // 将loTail后继设为e
                        loTail.next = e;
                    }
                    // 将e设置为loTail
                    loTail = e;
                    // 个数加1
                    ++lc;
                } else {
                    // 同上类似
                    if ((e.prev = hiTail) == null) {
                        hiHead = e;
                    } else {
                        hiTail.next = e;
                    }
                    hiTail = e;
                    ++hc;
                }
            }
            // 原索引位置不为空
            if (loHead != null) {
                // 节点个数<=6个则将红黑树结构转为链表结构
                if (lc <= UNTREEIFY_THRESHOLD) {
                    tab[index] = loHead.untreeify(map);
                } else { // 大于6个
                    // loHead放到tab[index]位置，即原位置
                    tab[index] = loHead;
                    // 如果hiHead不为空，则代表原来的红黑树已经被改变, 需要重新构建新的红黑树
                    if (hiHead != null) {
                        // 以loHead为根节点, 构建新的红黑树
                        loHead.treeify(tab);
                    }// (else is already treeified)
                }
            }
            // 原索引位置+oldCap不为空
            if (hiHead != null) {
                // 同上
                if (hc <= UNTREEIFY_THRESHOLD) {
                    tab[index + bit] = hiHead.untreeify(map);
                } else {
                    tab[index + bit] = hiHead;
                    if (loHead != null) {
                        hiHead.treeify(tab);
                    }
                }
            }
        }

        /* ------------------------------------------------------------ */
        // Red-black tree methods, all adapted from CLR

        static <K, V> TreeNode<K, V> rotateLeft(TreeNode<K, V> root,
                                                TreeNode<K, V> p) {
            TreeNode<K, V> r, pp, rl;
            if (p != null && (r = p.right) != null) {
                if ((rl = p.right = r.left) != null) {
                    rl.parent = p;
                }
                if ((pp = r.parent = p.parent) == null) {
                    (root = r).red = false;
                } else if (pp.left == p) {
                    pp.left = r;
                } else {
                    pp.right = r;
                }
                r.left = p;
                p.parent = r;
            }
            return root;
        }

        static <K, V> TreeNode<K, V> rotateRight(TreeNode<K, V> root,
                                                 TreeNode<K, V> p) {
            TreeNode<K, V> l, pp, lr;
            if (p != null && (l = p.left) != null) {
                if ((lr = p.left = l.right) != null) {
                    lr.parent = p;
                }
                if ((pp = l.parent = p.parent) == null) {
                    (root = l).red = false;
                } else if (pp.right == p) {
                    pp.right = l;
                } else {
                    pp.left = l;
                }
                l.right = p;
                p.parent = l;
            }
            return root;
        }

        static <K, V> TreeNode<K, V> balanceInsertion(TreeNode<K, V> root,
                                                      TreeNode<K, V> x) {
            x.red = true;
            for (TreeNode<K, V> xp, xpp, xppl, xppr; ; ) {
                if ((xp = x.parent) == null) {
                    x.red = false;
                    return x;
                } else if (!xp.red || (xpp = xp.parent) == null) {
                    return root;
                }
                if (xp == (xppl = xpp.left)) {
                    if ((xppr = xpp.right) != null && xppr.red) {
                        xppr.red = false;
                        xp.red = false;
                        xpp.red = true;
                        x = xpp;
                    } else {
                        if (x == xp.right) {
                            root = rotateLeft(root, x = xp);
                            xpp = (xp = x.parent) == null ? null : xp.parent;
                        }
                        if (xp != null) {
                            xp.red = false;
                            if (xpp != null) {
                                xpp.red = true;
                                root = rotateRight(root, xpp);
                            }
                        }
                    }
                } else {
                    if (xppl != null && xppl.red) {
                        xppl.red = false;
                        xp.red = false;
                        xpp.red = true;
                        x = xpp;
                    } else {
                        if (x == xp.left) {
                            root = rotateRight(root, x = xp);
                            xpp = (xp = x.parent) == null ? null : xp.parent;
                        }
                        if (xp != null) {
                            xp.red = false;
                            if (xpp != null) {
                                xpp.red = true;
                                root = rotateLeft(root, xpp);
                            }
                        }
                    }
                }
            }
        }

        static <K, V> TreeNode<K, V> balanceDeletion(TreeNode<K, V> root,
                                                     TreeNode<K, V> x) {
            for (TreeNode<K, V> xp, xpl, xpr; ; ) {
                if (x == null || x == root) {
                    return root;
                } else if ((xp = x.parent) == null) {
                    x.red = false;
                    return x;
                } else if (x.red) {
                    x.red = false;
                    return root;
                } else if ((xpl = xp.left) == x) {
                    if ((xpr = xp.right) != null && xpr.red) {
                        xpr.red = false;
                        xp.red = true;
                        root = rotateLeft(root, xp);
                        xpr = (xp = x.parent) == null ? null : xp.right;
                    }
                    if (xpr == null) {
                        x = xp;
                    } else {
                        TreeNode<K, V> sl = xpr.left, sr = xpr.right;
                        if ((sr == null || !sr.red) &&
                                (sl == null || !sl.red)) {
                            xpr.red = true;
                            x = xp;
                        } else {
                            if (sr == null || !sr.red) {
                                if (sl != null) {
                                    sl.red = false;
                                }
                                xpr.red = true;
                                root = rotateRight(root, xpr);
                                xpr = (xp = x.parent) == null ?
                                        null : xp.right;
                            }
                            if (xpr != null) {
                                xpr.red = (xp == null) ? false : xp.red;
                                if ((sr = xpr.right) != null) {
                                    sr.red = false;
                                }
                            }
                            if (xp != null) {
                                xp.red = false;
                                root = rotateLeft(root, xp);
                            }
                            x = root;
                        }
                    }
                } else { // symmetric
                    if (xpl != null && xpl.red) {
                        xpl.red = false;
                        xp.red = true;
                        root = rotateRight(root, xp);
                        xpl = (xp = x.parent) == null ? null : xp.left;
                    }
                    if (xpl == null) {
                        x = xp;
                    } else {
                        TreeNode<K, V> sl = xpl.left, sr = xpl.right;
                        if ((sl == null || !sl.red) &&
                                (sr == null || !sr.red)) {
                            xpl.red = true;
                            x = xp;
                        } else {
                            if (sl == null || !sl.red) {
                                if (sr != null) {
                                    sr.red = false;
                                }
                                xpl.red = true;
                                root = rotateLeft(root, xpl);
                                xpl = (xp = x.parent) == null ?
                                        null : xp.left;
                            }
                            if (xpl != null) {
                                xpl.red = (xp == null) ? false : xp.red;
                                if ((sl = xpl.left) != null) {
                                    sl.red = false;
                                }
                            }
                            if (xp != null) {
                                xp.red = false;
                                root = rotateRight(root, xp);
                            }
                            x = root;
                        }
                    }
                }
            }
        }

        /**
         * Recursive invariant check
         */
        static <K, V> boolean checkInvariants(TreeNode<K, V> t) {
            TreeNode<K, V> tp = t.parent, tl = t.left, tr = t.right,
                    tb = t.prev, tn = (TreeNode<K, V>) t.next;
            if (tb != null && tb.next != t) {
                return false;
            }
            if (tn != null && tn.prev != t) {
                return false;
            }
            if (tp != null && t != tp.left && t != tp.right) {
                return false;
            }
            if (tl != null && (tl.parent != t || tl.hash > t.hash)) {
                return false;
            }
            if (tr != null && (tr.parent != t || tr.hash < t.hash)) {
                return false;
            }
            if (t.red && tl != null && tl.red && tr != null && tr.red) {
                return false;
            }
            if (tl != null && !checkInvariants(tl)) {
                return false;
            }
            if (tr != null && !checkInvariants(tr)) {
                return false;
            }
            return true;
        }
    }

}
