This section is organized as follows. First, we define the \textit{niceness} value, which combines the two metrics we devised: \textit{bank-level parallelism} and \textit{row-buffer locality}. Second, we present the \textit{priority shuffling} technique that minimizes thread slowdown. Finally, we describe the scheduling decision based on the context we have just derived.\\

\subsection{Bank-level Parallelism (BLP)}
Each memory controller counts the number of distinct banks to which the current thread has outstanding requests, as an estimate of the thread's instantaneous BLP had it been running alone. Throughout a quantum, each controller samples a thread's instantaneous BLP and computes the average BLP for that thread, which is sent to the meta-controller at the end of the quantum. The meta-controller then computes each thread's average BLP across all memory controllers.
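The per-controller BLP estimation described above can be sketched as follows. This is an illustrative software model, not the authors' hardware design; the class and method names, and the representation of an outstanding request as a (bank, row) pair, are assumptions for the sketch.

```python
from collections import defaultdict

class BlpEstimator:
    """Per-controller estimate of a thread's bank-level parallelism (BLP).

    Instantaneous BLP is the number of distinct banks the thread currently
    has outstanding requests to; samples taken over a quantum are averaged
    and reported to the meta-controller at the end of the quantum.
    """
    def __init__(self):
        self.samples = defaultdict(list)  # thread_id -> BLP samples this quantum

    def sample(self, thread_id, outstanding_requests):
        # outstanding_requests: iterable of (bank_id, row_id) for this thread
        blp = len({bank for bank, _ in outstanding_requests})
        self.samples[thread_id].append(blp)

    def end_of_quantum(self):
        # Average BLP per thread for this controller; reset state for the
        # next quantum.  The meta-controller would further average these
        # values across all controllers.
        avg = {t: sum(s) / len(s) for t, s in self.samples.items() if s}
        self.samples.clear()
        return avg
```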

\subsection{Row-buffer Locality (RBL)}
Each memory controller also estimates the row-buffer locality of a thread. To do so, it maintains a \textit{shadow row-buffer} per bank that tracks the row that would be open if the thread were running alone on the system. RBL is then computed as the number of shadow row-buffer hits divided by the number of accesses during a quantum.
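The shadow row-buffer mechanism can be sketched as below. Again this is a software model under assumptions: one shadow row register per (thread, bank) pair, updated as if the thread ran alone, with per-quantum hit and access counters.

```python
class RblEstimator:
    """Per-controller shadow row-buffers for estimating row-buffer
    locality (RBL).  RBL = shadow hits / accesses during a quantum."""

    def __init__(self):
        self.shadow_row = {}  # (thread_id, bank_id) -> row that would be open
        self.hits = {}        # thread_id -> shadow row-buffer hits this quantum
        self.accesses = {}    # thread_id -> total accesses this quantum

    def access(self, thread_id, bank_id, row_id):
        key = (thread_id, bank_id)
        self.accesses[thread_id] = self.accesses.get(thread_id, 0) + 1
        if self.shadow_row.get(key) == row_id:
            # The access would have hit the open row had the thread run alone.
            self.hits[thread_id] = self.hits.get(thread_id, 0) + 1
        # The access leaves this row open in the shadow row-buffer.
        self.shadow_row[key] = row_id

    def end_of_quantum(self):
        rbl = {t: self.hits.get(t, 0) / n for t, n in self.accesses.items()}
        self.hits.clear()
        self.accesses.clear()
        return rbl
```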

\subsection{Priority Shuffling}
In our controller design, threads share the memory bandwidth so that no thread is disproportionately slowed down or, even worse, starved. We accomplish this by periodically shuffling the priority ordering of the threads. To minimize thread slowdown, we use the insertion shuffle~\cite{kim2010thread}, which reduces inter-thread interference while preserving row-buffer locality and bank-level parallelism. \\

Every quantum, threads are sorted by their \textit{niceness} value to yield a ranking, where the nicest thread receives the highest rank. Subsequently, every shuffle-interval cycles, the insertion shuffle algorithm perturbs this ranking so as to reduce the time during which the least nice threads are prioritized over the nicest ones, ultimately resulting in less interference.\\
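The ranking and shuffling steps can be sketched as follows. The ranking function follows the text directly; the shuffle shown is one possible realization of an insertion-style shuffle, in which the least nice thread is cycled through the rank positions while the relative order of the remaining threads is preserved, so that nice threads stay highly ranked most of the time. The exact permutation sequence of the insertion shuffle in the cited work may differ.

```python
def rank_by_niceness(niceness):
    """Sort threads so the nicest thread gets the highest rank (index 0).

    niceness: dict mapping thread_id -> niceness value for this quantum.
    """
    return sorted(niceness, key=lambda t: niceness[t], reverse=True)

def insertion_shuffle(base_ranking, step):
    """Illustrative insertion-style shuffle (an assumption, see lead-in).

    Each shuffle interval, the least nice thread (last in the base ranking)
    is re-inserted at a different rank position; all other threads keep
    their relative order, limiting the time low-niceness threads spend
    prioritized over high-niceness ones.
    """
    ranking = base_ranking[:-1]            # all threads but the least nice
    pos = step % len(base_ranking)
    ranking.insert(pos, base_ranking[-1])  # cycle the least nice thread
    return ranking
```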

\subsection{Scheduling}
When scheduling, we first try to issue any open-row hit from the ``correct'' thread (selected by the algorithm mentioned above), then any other row hit, then a row miss from the ``correct'' thread, and finally a random request. We also use a close-page policy to improve the performance of our scheduler: in every idle cycle, the scheduler issues precharge operations to banks whose last serviced command was a column read/write.
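The four-level request-selection order above can be sketched as a small priority cascade. The representation of a request as a (thread, bank, row) tuple is an assumption for the sketch, and the final fallback deterministically takes the first queued request rather than a random one.

```python
def pick_request(queue, preferred_thread, open_rows):
    """Select the next request to issue.

    queue: list of (thread_id, bank_id, row_id) requests.
    preferred_thread: the "correct" thread chosen by the shuffled ranking.
    open_rows: dict mapping bank_id -> currently open row (if any).

    Priority order: (1) row hit from the preferred thread, (2) any row hit,
    (3) row miss from the preferred thread, (4) any request (stand-in for
    the random pick in the text).
    """
    def is_hit(req):
        _, bank, row = req
        return open_rows.get(bank) == row

    for wanted in (
        lambda r: is_hit(r) and r[0] == preferred_thread,
        is_hit,
        lambda r: r[0] == preferred_thread,
        lambda r: True,
    ):
        for req in queue:
            if wanted(req):
                return req
    return None  # empty queue
```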
