Early work in this area focused mainly on streaming workloads. Several scheduling strategies have been proposed, such as First-Come-First-Served (FCFS), Credit-Fair, First-Ready-Round-Robin (FRRR), and Close-page.

\begin{itemize}
\item \textbf{First-Come-First-Served}: FCFS is the most primitive of these policies: it issues requests strictly in arrival order, which yields poor bank-level parallelism and poor bandwidth utilization.
\item \textbf{Credit-Fair}: Credit-Fair introduced the concept of \textit{credit}, assigned to each thread: a thread's credits increase when its requests hit an open row and decrease when a column-read command is issued on its behalf. The scheduler then prioritizes the thread with the most credits. The concept of \textit{credit} proved influential and was adopted by later, more sophisticated solutions such as TCM\cite{kim2010thread}.
\item \textbf{First-Ready-Round-Robin}: The FRRR scheduler always prioritizes requests that hit an open row. If no such request is found, it falls back to round-robin arbitration, combining the bandwidth benefit of row-buffer hits with the fairness of a round-robin scheduler.
\item \textbf{Close-page}: Close-page, on the other hand, issues a precharge command immediately after each column read/write, so that every bank is kept precharged and the next access avoids paying the precharge latency. This favors workloads with little row-buffer locality.
\end{itemize}
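The First-Ready-Round-Robin arbitration described above can be sketched in a few lines of Python; the request and bank representations here are illustrative assumptions, not taken from the cited works:

```python
# Minimal sketch of First-Ready-Round-Robin arbitration (illustrative only;
# the (bank, row) request tuples and data structures are assumptions).

def frrr_select(queues, open_rows, rr_pointer):
    """Pick the next request among per-thread queues.

    queues    : list of per-thread FIFO lists of (bank, row) requests
    open_rows : dict mapping bank -> currently open row in that bank
    rr_pointer: index of the thread whose turn is next in round-robin order
    Returns (thread id, request, new round-robin pointer).
    """
    n = len(queues)
    # First-ready: any request that hits an already-open row wins immediately.
    for t, q in enumerate(queues):
        for req in q:
            bank, row = req
            if open_rows.get(bank) == row:
                q.remove(req)
                return t, req, rr_pointer
    # Otherwise fall back to round-robin over threads with pending requests.
    for i in range(n):
        t = (rr_pointer + i) % n
        if queues[t]:
            req = queues[t].pop(0)
            return t, req, (t + 1) % n  # advance the pointer past this thread
    return None, None, rr_pointer
```

For example, with an open row 7 in bank 1, a pending request `(1, 7)` is served before an older `(0, 5)` request; with no open-row hits, arbitration proceeds in round-robin order.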

In addition to these relatively simple strategies, researchers have proposed more sophisticated solutions. TCM\cite{kim2010thread} addresses system throughput and fairness separately, with the goal of achieving the best of both. It partitions threads into two clusters governed by different policies: \textit{latency-sensitive} and \textit{bandwidth-sensitive}. The latency-sensitive cluster is scheduled with strict priority, the least memory-intensive thread receiving the highest priority; in the bandwidth-sensitive cluster, a composite metric, \textit{niceness}, is used to account for inter-thread interference.

Other scheduling strategies\cite{mutlu2008parallelism}\cite{mutlu2007stall} also address fairness, using different techniques. Stall-Time Fair Memory (STFM) Access Scheduling\cite{mutlu2007stall} provides QoS to the threads sharing the DRAM memory system. To achieve this goal, the scheduler estimates both the stall time $T_\textit{shared}$ a thread experiences while running alongside others and the stall time $T_\textit{alone}$ it would experience running alone. The \textit{memory slowdown} of a thread is then computed as $S = T_\textit{shared} / T_\textit{alone}$. If the \textit{unfairness} $S_{\textit{max}} / S_{\textit{min}}$ exceeds a threshold, the Fairness-Rule is applied; otherwise the scheduler falls back to the FR-FCFS-Rule.
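STFM's fairness decision can be sketched as follows; the threshold value is an illustrative assumption, and the hard part of the real scheduler, estimating $T_\textit{alone}$ online, is omitted:

```python
# Sketch of STFM's fairness decision (illustrative). The real scheduler's
# difficulty lies in *estimating* T_alone online, which is not shown here.

def stfm_rule(stall_shared, stall_alone, threshold=1.1):
    """stall_shared / stall_alone: dicts mapping thread id -> stall time.
    Returns which rule the scheduler should apply."""
    slowdowns = {t: stall_shared[t] / stall_alone[t] for t in stall_shared}
    unfairness = max(slowdowns.values()) / min(slowdowns.values())
    return "Fairness-Rule" if unfairness > threshold else "FR-FCFS-Rule"
```

With two threads slowed down by factors of 1 and 4, unfairness is 4 and the Fairness-Rule kicks in; with equal slowdowns the scheduler stays with FR-FCFS.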

Later, Mutlu et al. proposed Parallelism-Aware Batch Scheduling\cite{mutlu2008parallelism}, which processes DRAM requests in batches; fairness follows from the batching itself, since requests belonging to older batches are always served first. Within a batch, the scheduler aims to service each thread's requests in parallel across DRAM banks, reducing memory-related stall time: threads with a smaller maximum bank load are ranked higher (a shortest-job-first heuristic), and row-hit requests are prioritized.
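The within-batch thread ranking of Parallelism-Aware Batch Scheduling, the Max-Total rule, ranks threads with a smaller maximum bank load higher and breaks ties by total request count. A minimal sketch, with the batch representation an illustrative assumption:

```python
# Sketch of the Max-Total thread ranking used within a batch (illustrative).
# A thread's bank load is its number of outstanding requests per bank; a
# smaller maximum bank load means a "shorter job" and hence a higher rank.

from collections import Counter

def max_total_ranking(batch):
    """batch: dict mapping thread id -> list of target bank ids.
    Returns thread ids ordered from highest to lowest rank."""
    def key(t):
        loads = Counter(batch[t])        # requests per bank for thread t
        max_load = max(loads.values())   # load on the thread's busiest bank
        total = len(batch[t])            # total requests, used as tie-breaker
        return (max_load, total)
    return sorted(batch, key=key)
```

For instance, a thread with three requests to one bank (maximum load 3) is ranked below a thread whose three requests spread over three banks (maximum load 1), since the latter finishes sooner when its requests proceed in parallel.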

Recently, reinforcement learning approaches\cite{ipek2008self}\cite{mukundan2012morse} have been introduced that formulate memory access scheduling as a reinforcement learning (RL) problem: the memory controller is viewed as an RL agent whose goal is to learn an optimal scheduling policy automatically through interaction with the rest of the system. Ipek et al.\cite{ipek2008self} proposed a self-optimizing memory controller based on Q-learning with the update rule $Q(s_{t-1},a_{t-1}) \leftarrow (1-\alpha)\,Q(s_{t-1},a_{t-1}) + \alpha\,[r_t + \gamma\, Q(s_t, a_t)]$, where $s$ denotes states, $a$ actions, $t$ time, and $r_t$ the immediate reward at time $t$. As the Q-values are updated every quantum, the Q-value table gradually converges toward the optimal policy. While Ipek et al. adopt $r_{\textit{write}} = r_{\textit{read}} = 1$ and $r_{\textit{others}} = 0$ as immediate rewards, Mukundan et al.\cite{mukundan2012morse} argue that this setting is likely suboptimal and does not generalize. They therefore use a genetic algorithm to search for the best immediate reward for each DRAM action. Note that because the fitness function can be any metric besides performance, the optimization can also target other objectives such as energy saving. However, our experiments show that the RL-based scheduler does not perform as well as expected, even though in theory it should come closer to the optimal solution than the strategies discussed above.
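The tabular update above (in the SARSA form, since the target uses the action actually taken at time $t$) can be sketched in a few lines; the hyperparameter values are illustrative assumptions:

```python
# Sketch of the tabular value update (SARSA form, matching the rule in the
# text). The learning rate alpha and discount gamma are illustrative values.

def q_update(Q, s_prev, a_prev, r_t, s_t, a_t, alpha=0.1, gamma=0.95):
    """Q: dict mapping (state, action) -> value. Updates Q in place and
    returns the new value of the (s_prev, a_prev) entry."""
    target = r_t + gamma * Q.get((s_t, a_t), 0.0)
    Q[(s_prev, a_prev)] = ((1 - alpha) * Q.get((s_prev, a_prev), 0.0)
                           + alpha * target)
    return Q[(s_prev, a_prev)]
```

Starting from an empty table, a transition with immediate reward 1 (e.g. a read or write command under Ipek et al.'s reward setting) moves the corresponding entry from 0 toward the target by a factor of $\alpha$.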

