%Introduction

With continuous advances in technology and in the architectural techniques used in processor design, computational speed is no longer the limiting factor in processor performance.
Most modern applications, in addition to having significant compute requirements, are constrained by the massive demands they place on system memory.
As a result, the application speedup delivered by each new technology generation has been shrinking, contributing to what is known as the \emph{memory wall} (cite).
The memory wall arises for two reasons:
\begin{itemize}
\item The memory available to the user is limited, while application working-set sizes keep growing.
\item Bandwidth limitations of the processor--memory interconnect impose a large execution-time penalty.
\end{itemize}

To mitigate these limitations, it becomes necessary to provide a large pool of memory on or close to the processor, with high access bandwidth. In this direction, several works propose three-dimensional (3D) stacked memories (cite Loh) as a viable solution.
Unlike on-chip caches, stacked memory does not occupy chip area that could instead be used to increase the core count.
Further, 3D stacking allows heterogeneous technologies, whose fabrication processes are normally incompatible, to be integrated on a single chip.
The Hybrid Memory Cube (cite) demonstrates this approach: several layers of memory are stacked on top of a logic layer.

To further reduce the access latency of 3D DRAM, DRAM-based caches have been proposed (cite).
Conventional dynamic RAM is the best candidate to provide the high memory density demanded by the applications mentioned above. SRAM-based memories, while faster, cannot provide the required density of several hundreds of megabytes, and the leakage energy dissipated by SRAM caches is much higher than that of an equivalent DRAM.
Emerging memory technologies such as STT-RAM and PC-RAM (cite, cite) are far more efficient in terms of leakage energy on account of their inherent non-volatility; however, writes on these technologies are very expensive from both a performance and a power perspective.

One of the main drawbacks of DRAM is the need to continuously refresh each cell to prevent the storage capacitor from leaking its charge completely, which would cause the stored data to be lost.
A refresh operation effectively consists of periodically reading each cell and writing its data back.
The memory controller issues dedicated refresh commands that instruct the DRAM to refresh several rows, one after the other. Any read or write access to the bank being refreshed is stalled until the refresh operation completes.
Refresh operations in a DRAM cache are similar. However, because the cache is stacked on the processor, its temperature is higher than that of conventional main memory.
The capacitors therefore leak faster, decreasing the data retention time: DRAM caches generally employ a refresh window ($t_{REFW}$) of 8--32~ms, as opposed to 32--64~ms for conventional off-chip memory (cite).
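The cost of a shorter refresh window can be illustrated with a back-of-the-envelope calculation. The sketch below (the parameter values are assumptions for illustration, not taken from any particular device: 8192 refresh commands per window, a JEDEC-style value, and 350~ns of bank blocking per command) estimates the fraction of time a bank is unavailable due to refresh:

```python
# Illustrative sketch of refresh overhead; parameter values are assumed,
# not drawn from a specific datasheet.

def refresh_overhead(t_refw_ms, n_ref_cmds=8192, t_rfc_ns=350):
    """Fraction of time a DRAM bank is blocked by refresh.

    t_refw_ms  : refresh window -- every row must be refreshed once
                 within this interval.
    n_ref_cmds : refresh commands issued per window (assumed).
    t_rfc_ns   : time the bank is blocked per refresh command (assumed).
    """
    t_refi_ns = (t_refw_ms * 1e6) / n_ref_cmds  # gap between commands
    return t_rfc_ns / t_refi_ns

# Off-chip DRAM with a 64 ms window vs. a hot stacked cache at 16 ms:
print(f"off-chip (64 ms): {refresh_overhead(64):.1%}")   # ~4.5%
print(f"stacked  (16 ms): {refresh_overhead(16):.1%}")   # ~17.9%
```

Under these assumptions, quartering the refresh window quadruples the fraction of time lost to refresh, which is why the reduced $t_{REFW}$ of a stacked cache makes refresh a first-order concern.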

This smaller refresh window can adversely impact system performance and power: on average, refreshing a DRAM cache causes an X\% reduction in performance and a Y\% increase in power. It is therefore necessary to devise techniques that minimize this overhead.
In this paper, we propose several techniques that couple the refresh operations of the off-chip main memory and the stacked DRAM cache in order to eliminate redundant refresh operations.
We also propose techniques by which entire banks can be shut down under low memory utilization.

(Section 2-8...).

