Memory bandwidth is an increasingly important aspect of overall system performance. Early work in this area has focused on streaming workloads, which place great stress on the memory system.\\

Recently, the need for increased memory bandwidth has begun to extend beyond streaming workloads. Faster processor clock rates and chip multiprocessors increase the demand for memory bandwidth. Furthermore, to cope with relatively slower memory latency, modern systems increasingly use techniques that reduce or hide memory latency at the cost of increased memory traffic. For example, simultaneous multithreading hides latency by interleaving multiple threads, and hardware-controlled prefetching speculatively moves data up the memory hierarchy so that it is closer to the processor.\\

In the face of these trends, state-of-the-art controller designs have a major limitation: it is no longer sufficient to focus on streaming access patterns; richer patterns of data access must be considered. A thread with high row-buffer locality tends to issue consecutive accesses to a small subset of banks, congesting those banks. Meanwhile, a thread with high bank-level parallelism, whose requests are spread across many banks, is vulnerable to the memory interference caused by the thread with high row-buffer locality. Hence, a thread with high bank-level parallelism is more likely to suffer interference, whereas one with high row-buffer locality is more likely to cause it.\\

This paper proposes a memory scheduling mechanism that takes inter-thread access conflicts into account. The scheduler ranks threads and uses this ranking in its shuffling decisions, so that nice threads, i.e., those less likely to interfere with others, are more likely to receive higher priority.\\