\section{Related Work}
\label{section:relatedwork}
In this section we review representative solutions for network measurement, covering both sampling-based methods and sketches.

The methods widely adopted in commodity switches/routers are sampling-based. The basic method is PS (Packet Sampling), in which the measurement process records one packet out of every $M$ packets, or records each packet with probability $\frac{1}{M}$. During post-processing, the total traffic size can be obtained by multiplying the number of recorded packets by $M$. However, packet sampling suffers from flow size bias: elephant flows with far more than $M$ packets are very likely to be recorded, while mice flows with only a few packets are missed with high probability. Moreover, packet sampling cannot reveal the exact size of a flow. For example, a flow with a single packet and a flow with more than $M$ packets look identical if exactly one packet is recorded for each. 
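The count-and-scale estimate above can be sketched as follows; this is a minimal Python illustration of the deterministic 1-in-$M$ variant (function and variable names are ours), and the probabilistic variant would replace the slicing with a per-packet coin flip:

```python
def packet_sample(packets, M):
    """Packet Sampling (PS): record one packet out of every M, then
    estimate the total traffic by scaling the sample count by M."""
    recorded = packets[::M]                 # deterministic 1-in-M selection
    estimated_total = len(recorded) * M
    return recorded, estimated_total
```

Note that the estimate recovers the total packet count well, while per-flow sizes remain unknown, which is exactly the bias discussed above.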

PS+SYN\cite{duffield_estimating_2005} observes that since the majority of network traffic is TCP and each TCP flow usually starts with a SYN packet, we can keep only the packets belonging to flows whose SYN packet has been sampled and discard the packets of all other flows. Since the SYN packets are sampled randomly, the flow size distribution of the sampled flows matches that of the original traffic. However, this method wastes the many sampled packets that are not associated with a sampled SYN packet, and the exact sizes of the flows with sampled SYN packets are still unknown. 

Flow Sampling (FS)\cite{hohn_inverting_2003} proposes to select a number of flows randomly; once a flow is selected, all of its packets are recorded, so we have complete information about the selected flows. Moreover, since the flows are selected randomly, the flow size distribution of the selected flows matches that of the original traffic. However, FS has to maintain the IDs of the selected flows and query the memory for every incoming packet, so FS is not as efficient as PS. 
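The FS procedure can be sketched as follows; this is a hypothetical Python illustration in which the per-flow selection probability `q` and the dictionary-based flow table are our assumptions for clarity:

```python
import random

def flow_sample(packets, q, seed=0):
    """Flow Sampling (FS): when the first packet of a new flow arrives,
    select the flow with probability q; record every packet of the
    selected flows.  `packets` is a sequence of flow IDs, one per packet."""
    rng = random.Random(seed)
    selected, rejected = set(), set()
    records = {}                          # flow ID -> exact packet count
    for fid in packets:
        if fid in rejected:
            continue
        if fid not in selected:           # first packet of an unseen flow
            if rng.random() < q:
                selected.add(fid)
            else:
                rejected.add(fid)
                continue
        records[fid] = records.get(fid, 0) + 1
    return records
```

The per-packet set lookup in this loop is precisely the memory access that makes FS less efficient than PS.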

Dual Sampling (DS)\cite{tune_towards_2008} merges PS and FS by running two components in parallel: one samples SYN packets with probability $p_s$, and the other samples non-SYN packets with probability $p_n$. Thus DS reduces to PS+SYN when $p_s = p_n$, and approximates FS when $p_n = 1$. 
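Under the assumption, consistent with the description above, that non-SYN packets are kept only for flows whose SYN packet was sampled, DS can be sketched as follows (names and the input encoding are ours):

```python
import random

def dual_sample(packets, p_s, p_n, seed=0):
    """Dual Sampling (DS): sample SYN packets with probability p_s and
    non-SYN packets of SYN-sampled flows with probability p_n.
    `packets` is a sequence of (flow_id, is_syn) pairs."""
    rng = random.Random(seed)
    syn_flows = set()                     # flows whose SYN packet was sampled
    records = {}
    for fid, is_syn in packets:
        if is_syn:
            if rng.random() < p_s:
                syn_flows.add(fid)
                records[fid] = records.get(fid, 0) + 1
        elif fid in syn_flows and rng.random() < p_n:
            records[fid] = records.get(fid, 0) + 1
    return records
```

With $p_n = 1$ every subsequent packet of a SYN-sampled flow is recorded, which is why DS then approximates FS.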

Since loss of information is inevitable with sampling, sketches have been proposed to record packets using a compact data structure and a fixed amount of computation per packet. 

One of the most widely used sketches is the count-min sketch\cite{cormode_countmin_2005}. It consists of $d$ rows of $r$ counters each, where each row has an independent hash function that maps a flow ID to one counter in the row. When a packet arrives, it is mapped to a counter in each row, and the corresponding $d$ counters are incremented by 1. To query the size of a particular flow, we read the counter corresponding to the flow ID in each row and take the minimum. However, this method requires $2\times d$ memory accesses ($d$ reads and $d$ writes) per packet, and hash collisions introduce a positive bias into the flow size estimates.
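A minimal count-min sketch might look as follows; the SHA-1-based row hashing is an illustrative choice of ours, not the construction of the cited paper:

```python
import hashlib

class CountMinSketch:
    """Count-min sketch: d rows of r counters, one hash function per row."""
    def __init__(self, d, r):
        self.d, self.r = d, r
        self.rows = [[0] * r for _ in range(d)]

    def _index(self, i, flow_id):
        # Salt a stdlib hash with the row number to emulate d independent hashes.
        h = hashlib.sha1(f"{i}:{flow_id}".encode()).hexdigest()
        return int(h, 16) % self.r

    def update(self, flow_id, count=1):
        for i in range(self.d):
            self.rows[i][self._index(i, flow_id)] += count

    def query(self, flow_id):
        # Collisions can only inflate counters, hence the positive bias;
        # taking the minimum over rows limits but does not remove it.
        return min(self.rows[i][self._index(i, flow_id)]
                   for i in range(self.d))
```

The query can overestimate but never underestimate, which is the positive bias noted above.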

SCBF\cite{kumar_space-code_2004} is a bit array with $l$ groups of hash functions, where each group consists of $k$ independent hash functions that map a flow ID into the bit array. When a packet arrives, SCBF randomly chooses one group and sets the bits the packet maps to under that group. To query the size of a flow, we count the number of groups whose corresponding bits are all 1, and use this count to estimate the flow size. MRSCBF extends SCBF to count very large flows: it consists of a set of SCBFs, each associated with a probability $p$, and when a packet arrives each SCBF is updated with its associated probability; the SCBFs with small probabilities make it possible to estimate the sizes of large flows. Although the algorithm uses memory efficiently, it involves many estimation operations and its flow size estimates have low precision.
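An SCBF update and query can be sketched as follows, under simplifying assumptions: the SHA-1 hashing is illustrative, and the coupon-collector-style inversion in `estimate` is one plausible estimator, not necessarily the maximum-likelihood procedure of the cited paper:

```python
import hashlib
import math
import random

class SCBF:
    """Space-Code Bloom Filter sketch: m bits and l groups of k hash
    functions; each packet sets the bits of one randomly chosen group."""
    def __init__(self, m, l, k, seed=0):
        self.m, self.l, self.k = m, l, k
        self.bits = [0] * m
        self.rng = random.Random(seed)

    def _bits_of(self, group, fid):
        return [int(hashlib.sha1(f"{group}:{j}:{fid}".encode()).hexdigest(),
                    16) % self.m
                for j in range(self.k)]

    def insert(self, fid):
        g = self.rng.randrange(self.l)        # pick one group at random
        for b in self._bits_of(g, fid):
            self.bits[b] = 1

    def estimate(self, fid):
        # theta = number of groups whose bits are all set; invert the
        # expectation theta = l * (1 - (1 - 1/l)**n) to recover n.
        theta = sum(all(self.bits[b] for b in self._bits_of(g, fid))
                    for g in range(self.l))
        if theta >= self.l:
            return float('inf')               # all groups hit: out of range
        return math.log(1 - theta / self.l) / math.log(1 - 1 / self.l)
```

Because a single SCBF saturates once all $l$ groups are set, MRSCBF stacks several of these filters with decreasing update probabilities to extend the counting range.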

Counter Tree\cite{chen_counter_2017} consists of an array of counters organized as a tree of $d$ layers. A counter in layer 0 together with all its ancestors in the higher layers constitutes a virtual counter, in which the counters in the higher layers hold the more significant bits. When a packet arrives, it is hashed to a counter in layer 0, which is incremented by 1. When a counter at one layer overflows, its parent at the next higher layer is incremented by 1. Finally, Counter Tree employs statistical tools to remove the noise introduced by the space sharing among different virtual counters.  
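The virtual-counter idea can be illustrated with a simplified two-layer version; the layout parameters are our choices, and the statistical noise removal of the paper is omitted:

```python
import hashlib

class CounterTree:
    """Simplified two-layer Counter Tree: leaf counters of `bits` bits
    each; every `degree` leaves share one parent holding the high-order
    bits of their virtual counters."""
    def __init__(self, leaves, bits=4, degree=2):
        self.bits, self.degree = bits, degree
        self.limit = 1 << bits                 # leaf overflow threshold
        self.leaf = [0] * leaves
        self.parent = [0] * ((leaves + degree - 1) // degree)

    def _leaf_index(self, fid):
        return int(hashlib.sha1(str(fid).encode()).hexdigest(),
                   16) % len(self.leaf)

    def update(self, fid):
        i = self._leaf_index(fid)
        self.leaf[i] += 1
        if self.leaf[i] == self.limit:         # overflow: carry to parent
            self.leaf[i] = 0
            self.parent[i // self.degree] += 1

    def query(self, fid):
        i = self._leaf_index(fid)
        # Virtual counter: the parent supplies the significant bits.
        return (self.parent[i // self.degree] << self.bits) + self.leaf[i]
```

Since several leaves share one parent, a raw query mixes the carries of all sharing flows; this is the noise that the paper's statistical tools are designed to remove.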

SketchVisor\cite{huang_sketchvisor:_2017} sidesteps the problem of designing more efficient sketches. Instead, it adds a fast path that processes packets more efficiently but maintains per-flow information less accurately. When the traffic volume is moderate, packets are processed by a normal sketch, which is more computation-intensive but more accurate; when the traffic volume exceeds a given threshold, the overflowing traffic is diverted to the fast path.

The drawback of sketches is that the accuracy of the maintained information keeps decreasing as more information is poured into them. In some scenarios it is therefore necessary to maintain only the most important flows and give up the less important ones. ElasticSketch\cite{yang_elastic_2018} and HashPipe\cite{sivaraman_heavy-hitter_2017}, introduced in Section~\ref{section:backgroundandmotivation}, as well as the HashFlow proposed in this paper, fall into this category. Another algorithm in this category is HeavyKeeper. 

HeavyKeeper\cite{gong_heavykeeper:_2018} consists of a counting table, with an associated hash function, in which each cell contains a flow ID field and a counter field. When a packet arrives, the hash function maps it to a cell by taking the flow ID as input. If a collision occurs at the cell, the counter field is decremented by 1 with probability $b^{-C}$, where $b > 1$, $b \approx 1$, and $C$ is the current value of the counter field. If the counter reaches 0 after the decrement, the packet's flow ID is written into the cell and the counter field is set to 1. HeavyKeeper tends to underestimate the sizes of elephant flows. To reduce this bias, it is suggested to maintain multiple counting tables and take the maximum of the corresponding cells' counters as the estimated flow size, which is inefficient in memory utilization.
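A single-table HeavyKeeper can be sketched as follows; the hash construction and the parameter values are illustrative (the cited paper analyzes the choice of $b$):

```python
import hashlib
import random

class HeavyKeeper:
    """Single-table HeavyKeeper: each cell holds (flow ID, counter), and
    on a collision the counter decays with probability b**-C."""
    def __init__(self, size, b=1.08, seed=0):
        self.size, self.b = size, b
        self.cells = [(None, 0)] * size
        self.rng = random.Random(seed)

    def _index(self, fid):
        return int(hashlib.sha1(str(fid).encode()).hexdigest(),
                   16) % self.size

    def update(self, fid):
        i = self._index(fid)
        cur_fid, c = self.cells[i]
        if cur_fid == fid or c == 0:
            self.cells[i] = (fid, c + 1)       # own cell or empty cell
        elif self.rng.random() < self.b ** (-c):
            c -= 1                             # exponential decay on collision
            self.cells[i] = (fid, 1) if c == 0 else (cur_fid, c)

    def query(self, fid):
        cur_fid, c = self.cells[self._index(fid)]
        return c if cur_fid == fid else 0
```

The decay probability $b^{-C}$ shrinks as the counter grows, so established elephant flows are hard to evict, while the occasional successful decrements are what produce the negative bias discussed above.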
