% ----------------------------------------------------------------
% Article Class (This is a LaTeX2e document)  ********************
% ----------------------------------------------------------------
\documentclass[12pt]{article}
\usepackage[english]{babel}
\usepackage{amsmath,amsthm}
\usepackage{amsfonts}
\usepackage{url}
\usepackage{graphicx}

% THEOREMS -------------------------------------------------------
\newtheorem{thm}{Theorem}[section]
\newtheorem{cor}[thm]{Corollary}
\newtheorem{lem}[thm]{Lemma}
\newtheorem{prop}[thm]{Proposition}
\theoremstyle{definition}
\newtheorem{defn}[thm]{Definition}
\theoremstyle{remark}
\newtheorem{rem}[thm]{Remark}
\numberwithin{equation}{section}
% ----------------------------------------------------------------
\begin{document}

\title{Thesis Etai}
\author{Etai Hazan
\thanks{Etai Hazan, Department of Computer Science, Ben-Gurion University, Beer-Sheva 84105, Israel.
{\tt etai@cs.bgu.ac.il}}}
% ----------------------------------------------------------------
\begin{abstract}
  TODO
\end{abstract}
\maketitle
% ----------------------------------------------------------------

\section{Introduction}
Most of the currently available enterprise storage arrays involve a mixture of SSD, SCSI and SATA drives. Accordingly, many vendors offer capacity planning and dynamic data reconfiguration tools for such systems; see
\cite{EMC,IBM,compellent,3par,netap} among others. Details about the methods of operation of these tools are not publicly available. In addition, the tools generally do not support user-defined QoS requirements. Since trace data is not available for large enterprise systems, a realistic algorithm must rely on coarse, counter-based statistics which provide only a rough description of the user access pattern.

In this paper we present a dynamic data reconfiguration tool for enterprise-level storage arrays. The tool takes user QoS requirements into account and tries to dynamically minimize a damage function which measures the difference between the desired performance of the user applications and the actual performance that the applications experience. We tested the tool on traces from large real production systems. The traces were used to generate the coarse statistics which served as the input to the configuration engine.
The configuration engine uses a priority vector which assigns a priority value to each logical unit of data. The priority value and the user access pattern together determine a data placement. The full trace, coupled with a rather detailed simulation of an enterprise storage system that we developed, is used to determine application response times. The application response times are then compared to the desired response times, resulting in a damage function value. A gradient descent algorithm is then applied to the damage function domain to yield a new priority vector, and the process repeats.

Our basic approach, which uses widely available coarse data and conservative models,
is based on an approach developed for the SymOptimizer, an external
dynamic reconfiguration tool for enterprise arrays from EMC. The tool was commercially available from 1998 until recently, when it was superseded by EMC FAST \cite{EMC}, which operates in a similar mode. A description of the tool is provided in \cite{ABLM}, with the theoretical background provided in \cite{BLM}.

Our experiments show that our process yields good results on traces from real production systems and substantially improves upon placement algorithms which do not take QoS requirements into account. This is especially significant given the coarseness of the input data. It is unlikely that any external optimization tool will get more refined information, so this study shows that such optimization tools can still be useful. While examining the traces, we observed interesting and complex phenomena that showed the importance of handling spikes of activity cleverly, and in particular of adding device utilization considerations to the placement process. It seems that previous studies have not taken device queueing sufficiently into consideration.

\section{Storage System Characteristics}

We provide a brief general description of the architecture of the type
of storage systems which concern us in this paper.
%The main physical
%system components include directors, cache memory and secondary
%storage devices (disk drives and flash drives).

\subsection{Components}

The system comprises two main types of components: directors and
storage components. The storage components are further divided into
primary storage (cache) and secondary storage (disk drives and flash drives).

The computational heart of the storage system is a set of CPUs called
directors, which manage incoming host requests to read and write data
and direct these requests to the appropriate storage components,
either cache or secondary storage, which actually keep the data.
%\subsubsection{Cache Memory}

Cache memory (DRAM) is a fast and expensive storage area.  The cache
(DRAM) is managed as a shared resource by the directors. The content
of the cache is typically managed by a replacement algorithm which is
similar to FIFO or the Least Recently Used (LRU) algorithm. In
addition, data can be prefetched in advance of user requests if there
is a good probability that it will be requested in the near
future. Additionally, some data may be placed permanently in cache if
it is very important, regardless of how often it is used. Whatever
data is not stored in cache, resides solely on secondary
storage. Typically, the cache comprises a very small portion of the
total storage capacity of the system, in the range of $0.1$
percent.

Four basic types of operations occur in a Storage system: Read Hits,
Read Misses, Write Hits, and Write Misses.  A \emph{Read Hit} occurs
on a read operation when all data necessary to satisfy the host I/O
request is in cache. The requested data is transferred from cache to
the host.

A \emph{Read Miss} occurs when not all data necessary to satisfy the
host I/O request is in cache. A director stages the block(s)
containing the missing data from secondary storage and
places the block(s) in a cache page. The read request is then satisfied from the cache.

The cache is also used for handling write requests. When a new write
request arrives at a director, the director writes the data into one
or more pages in cache. The storage system provides reliable battery
backup for the cache (and usually also employs cache mirroring to write the
change into two different cache boards in case a DRAM fails), so write
acknowledgments can be safely sent to hosts before the page has been
written (destaged) to secondary storage. This allows writes to be
written to secondary storage during periods of relative read
inactivity, making writing an asynchronous event, typically of low
interference. This sequence of operations is also called a \emph{write
hit}.

In some cases the cache fills up with data which has not been written
to secondary storage yet. The number of pages in cache occupied by
such data is known as the \emph{write pending count}. If the write
pending count passes a certain threshold, the data will be written
directly to secondary storage, so that cache does not fill up further
with pending writes. In that case we say that the write operation was
a \emph{write miss}. Write misses do not occur frequently, as the
cache is fairly large on most systems. A write miss leads to a
considerable delay in the acknowledgment (completion) of the write
request.

Not every write hit corresponds to a write I/O to secondary
storage. There can be multiple write operations to the same page
before it is destaged to secondary storage, resulting in
write \emph{folding}. Multiple logical updates to the same page are
folded into a single destaging I/O to secondary storage. We will call a write operation to a secondary storage device a \emph{destage write}.
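As an illustration of write folding, the following minimal sketch (our own, not part of any real system) counts the destage writes generated by a sequence of page writes, under the simplifying assumption that every dirty page is destaged exactly once at the end:

```python
def destage_count(write_pages):
    """Destage writes for a sequence of page writes: repeated writes to a
    page that is still dirty in cache fold into a single destage I/O."""
    dirty = set()
    for page in write_pages:
        dirty.add(page)   # overwrite the cached page; no extra destage needed
    return len(dirty)     # each distinct dirty page is destaged once

# Three writes to page 5 and one to page 7 fold into two destage writes:
assert destage_count([5, 5, 5, 7]) == 2
```

In a real system destaging is interleaved with incoming writes, so the folding factor depends on the destage schedule as well as on the access pattern.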

%\subsection{Disks}
\label{disks}
\begin{table*}[t]
\caption {Estimated storage characteristics} % title of Table
  \centering
\begin{tabular}{|p{1 cm} |p{1.5 cm} |p{1.5 cm} |p{1.8 cm} |p{1.6 cm} |p{1.5 cm}| p{1.1 cm}|} % centered columns (6 columns)
        \hline %inserts double horizontal lines
        Type & Capacity (GB) & Cost (Dollars) & Overhead per sequential I/O (ms)  & Overhead per random I/O (ms) & R/W rate (MB/second) & Energy (watts) \\ [0.5ex] % inserts table
        %heading
        \hline % inserts single horizontal line
        SSD & 300 & 1000 & 0.025 & 0.1 & 160/120 & 6 \\\hline
        FC & 300 & 100 & 1 & 4 & 60/60 & 12 \\\hline
        SATA & 2000 & 100 & 2.5 & 10 & 50/50 & 12\\\hline %inserts single line
  \end{tabular}
  \label{table:storage} % is used to refer this table in the text
\end{table*}

\indent
Three types of secondary storage devices are typically used in enterprise storage systems: SSDs (solid state devices, based on flash), FC devices, which are typically 10K or 15K RPM spindles, and SATA devices, which typically rotate at 5400 or 7200 RPM.

Table~\ref{table:storage} shows the estimated device characteristics that we use to calculate the performance of a particular configuration. The table is based on information from commercial system vendors, but is obviously subject to change due to technological advances.


The I/O overhead is conservatively estimated from server-grade drive data sheets. The overhead is the average latency to complete an I/O, and depends on the drive firmware, interconnect, and rotational latency (for FC and SATA). Sequential I/O is less costly than random I/O in rotating disks, but still incurs some seek and latency overhead especially at the beginning and end of the sequence.
%Nonetheless, in line with our conservative approach
We assess no seek or latency penalty for sequential I/O accesses.
%In addition to not penalizing SCSI and SATA drives for latency on sequential activity, we have also been very generous to them, assessing an average overhead of 4ms on a random I/O to a SCSI device and 10ms to a SATA device. this is consistent with our strategy of making them "look good". The numbers also make SCSI $2.5$ times faster than SATA, a comparison which favors SCSI over SATA.
%so for sequential read misses we decrease the
%estimated overhead by half.
SSDs do not have a seek penalty, so the latency remains unchanged regardless of whether an I/O is sequential or random.

The read/write rate is the speed at which bytes can be read from or written to the device. For rotating disks, this speed is relatively stable and relatively close to the speed of flash drives (as compared to the difference in overhead between SSDs and rotating disks). Flash drives have the highest throughput, although it is lower for writes than for reads. This is because SSDs do not support random rewrite operations; instead, they must perform an \emph{erasure} on a large segment of memory (a slow process), and can only \emph{program} (or set) an erased block.
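The overhead and rate columns of Table~\ref{table:storage} combine into a simple per-I/O service-time model. The following sketch (the function name and parameter layout are our own) assumes the service time of one I/O is the per-I/O overhead, sequential or random, plus the transfer time at the drive's read or write rate:

```python
# Device parameters follow Table 1: (seq overhead ms, random overhead ms,
# read rate MB/s, write rate MB/s).
DEVICES = {
    "SSD":  (0.025,  0.1, 160, 120),
    "FC":   (1.0,    4.0,  60,  60),
    "SATA": (2.5,   10.0,  50,  50),
}

def service_time_ms(device, size_mb, is_write=False, sequential=False):
    """Per-I/O overhead plus transfer time, in milliseconds."""
    seq_ms, rnd_ms, r_rate, w_rate = DEVICES[device]
    overhead = seq_ms if sequential else rnd_ms
    rate = w_rate if is_write else r_rate          # MB per second
    return overhead + 1000.0 * size_mb / rate      # transfer time in ms

# A random 1 MB read from SATA pays the head-movement overhead once:
# 10 ms + 1000/50 ms = 30 ms.
assert service_time_ms("SATA", 1.0) == 30.0
```

Under this model a large sequential read amortizes the overhead over many bytes, which is exactly why fetching 1MB from a rotating disk is not much more expensive than fetching a small chunk.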
%This tradeoff underscores the importance of analyzing the
%statistics of a storage system before provisioning.

%The exact energy consumption of a drive depends on its capacity, the number of I/Os it receives, and its idle power consumption. In terms of a single device our table ranks the energy consumption of devices as SSD $<$ SATA $<$ FC. Flash drives use the least energy of any of the drive types, but
%because they have a significantly lower capacity, the energy consumption of a system comprised of SSDs may be higher than, say, the same system comprised of SATA drives (because it would need many more SSDs than SATA drives to store the same amount of data). This is the case according to the very conservative numbers we have employed in our table.

In terms of performance, we can think of the drives as being tiered, with the SSD drives forming the top tier, the FC drives forming the middle tier,
and the SATA drives forming the bottom tier. Our goal is to map the user data to these different tiers in a way that takes into account the user-defined performance goals.
%Again, it makes SSD drives "look bad" in comparison with disk drives.
%This is
%another tradeoff that makes provisioning heterogenous storage systems
%complex.

\section{Logical Units and Extents}
\label{sec:LUextent}
%The data in the system is divided into units, which are called logical units or volumes (LUN). A logical unit will typically span several GB, and is divided into \emph{Extents}.
%An Extent contains 7680 blocks whom are of a fixed size of 512 Bytes.
%In an I/O request, an Extent, identified by the offset in blocks in a specific logical unit, is referenced.

The data in the system is divided into user defined units, which are called logical units (LU) or
volumes. An LU will typically span several GB. The LU
is a unit of data which is referenced in the I/O communication between
the host and the storage system. From the point of view of the storage system, the LU is divided into smaller units called \emph{Extents}, which can be viewed as atomic units for storage management.

Typically, an I/O request will consist of an operation (read or write), a logical volume number, a block offset within the logical volume (the latter two together identify an extent), and a size in blocks. Using this information, statistics regarding user activity at the extent level are produced by the storage system and can be used for managing the system.
In the system that we have studied, a Symmetrix VMAX system,
an extent contains 7680 blocks, each of a fixed size of 512 bytes.
Extents are used for tiering the LU on separate types of drives. Each extent may be stored on a different tier and move between tiers. Our algorithms will make tiering decisions at the extent level, using the extent activity statistics. Specifically, we use the following
%Extents of 7680 blocks are use in EMC VAMX system FAST (fully automated storage tiering) as the basic blocks for tiering and thus
%
%\subsection{Extent statistics}
%\label{sec:statistics}
%The coarse statistics were provided from real customer storage systems.
%The statistics are usually provided at an extent base.
counters
which record statistics for each extent in each LU:
%From such counters, our provisioning algorithm makes use of the following
%statistics:

\begin{itemize}
\item Read misses
\item Read hits
\item Writes de-staged
\item Number of Bytes read
\item Number of Bytes written
\item Sequential read requests
\end{itemize}
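For concreteness, the per-extent counters can be modeled as a simple record; the field names below are our own illustration, not the system's actual counter names:

```python
from dataclasses import dataclass

@dataclass
class ExtentCounters:
    """Per-extent counters for one reporting interval (illustrative names)."""
    read_misses: int = 0
    read_hits: int = 0
    destaged_writes: int = 0
    bytes_read: int = 0
    bytes_written: int = 0
    sequential_reads: int = 0

# Read misses and destaged writes are the I/Os that actually reach the drives:
c = ExtentCounters(read_misses=120, read_hits=880, destaged_writes=40)
backend_ios = c.read_misses + c.destaged_writes
assert backend_ios == 160
```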
%\begin{enumerate}
%\item random read miss requests.
%\item random read hit requests.
%\item sequential read requests.
%\item write requests.
%\item KB read from random read requests.
%\item KB read from sequential requests.
%\item KB read from write requests.
%\end{enumerate}
We are particularly interested in read misses and de-staged writes, since
these actually incur a cost from the storage system drives. The other reads and writes are absorbed by the DRAM cache. The system is also able to identify requests which are sequential and these are reported separately.

The counters summarize the above statistics over a given time period, which could be a few minutes or an hour. The systems that we have studied in this paper report extent activity at 1 hour intervals.
%\section{Data}

%\begin{figure}[t!]
%  \centering
%  \includegraphics[height=23em,angle=270]{../summaries/num_vols}
%  \caption{Histogram of volumes per machine}
%  \label{fig:hist-num-vols}
%\end{figure}

%\begin{figure}[t!]
%  \centering
%  \includegraphics[height=23em,angle=270]{../summaries/tot_gbytes}
%  \caption{Histogram of activity per machine}
%  \label{fig:hist-tot-gbytes}
%\end{figure}

%Our data set was provided by EMC, and consists of the counter
%values described in section~\ref{LUextent} (along with many
%other counters we did not use in this study), captured at intervals of
%60 minutes.
%Some counters represent averages; for example, the counter for read misses
%reports the average number of read misses per second during the entire
%period. Others are percentages, such as the percentage of read misses
%which required random I/O.
%The counters represent summations of the event. For example, the counter for read misses reports the total number of host read requests issues during the current time unit for a specific volume that were a read miss, and the bytes read and written are summations
%of the sizes of each read and write I/O during the period
%(respectively).
%In this paper we examine the activity patterns of 15 active
%production systems comprising a total of around 7700 volumes.

%The total size of the system in terms of the number of logical volumes in each
%machine varies,    but is in general quite large. The smallest machine in
%our data set holds 34, while the largest holds 1450
%volumes.
%Figure~\ref{fig:hist-num-vols} shows a histogram of the
%number of volumes in each machine; most machines have between 350 and
%5000 volumes.
%The number of bytes read and written on a single machine
%varies in our data set from very small (order of megabytes) to very
%large (order of terabytes).
%Figure~\ref{fig:hist-tot-gbytes} shows a
%histogram of the number of gigabytes read / written.
%84\% of the
%machines in our data set read or write more than 100 GB total, with
%some machines transferring significantly more.


\section{Flash Cache}
\subsection{The idea}
We want to add a second tier of cache to the system, and we use an SSD device for this tier for three reasons. First, we add more capacity to the cache, so the chance of getting more read and write hits increases. Second, Table~\ref{table:storage} shows that the SSD device has very good characteristics and can allow the customer to make do with a relatively small regular cache, which is expensive. The last reason for another cache tier is the difference in block granularity between the regular cache and the SSD device: in the regular cache the block granularity is 8KB or 64KB, while in the SSD device the granularity is 1MB per block. We will show in the results that this fact improves the hit rate in the flash cache, due to the locality principle. In addition, the overhead of bringing 1MB instead of 8/64KB from the devices is not significant: even if the data resides on the worst device (SATA), reading data sequentially improves the device characteristics significantly. (The problem with SATA devices is their overhead when moving the head from one place to another; when we read 1MB sequentially, we need to move the head only once, and hence suffer this overhead only once.)
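The 1MB granularity of the flash cache means that every block offset maps to a 1MB-aligned chunk. A minimal sketch of that mapping (function and constant names are ours), using the 512-byte block size from Section~\ref{sec:LUextent}:

```python
BLOCK_BYTES = 512
FLASH_CHUNK_BYTES = 1 << 20          # 1 MB flash-cache chunk

def flash_chunk_id(lun, block_offset):
    """1MB-aligned chunk holding a given 512-byte block offset in a LUN."""
    blocks_per_chunk = FLASH_CHUNK_BYTES // BLOCK_BYTES   # 2048 blocks
    return (lun, block_offset // blocks_per_chunk)

# Blocks 0..2047 of a LUN fall in chunk 0; block 2048 starts chunk 1:
assert flash_chunk_id(41, 2047) == (41, 0)
assert flash_chunk_id(41, 2048) == (41, 1)
```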

\subsection{Second Tier Policies} 
\label {Second Tier Policies}
Another issue in this model is how to use the second tier cache. We compared two major policies, \emph{Layered} and \emph{Dual}.
For every I/O we simulate the director, which checks whether the requested section of the extent exists in the cache. If it does, the I/O is considered a \emph{cache hit}. If it does not, the director checks whether the section resides in the flash cache; if it does, the I/O is considered a \emph{flash cache hit}. If it does not exist in the flash cache either, the I/O is considered a miss, and the section is brought into the cache from the storage component on which it resides, according to the policy.
\begin {itemize}
\item Layered policy - When an I/O misses in both caches, we fetch the data from the device on which it resides and place it in the first tier.
    When the regular cache is full and we want to insert a new I/O, we remove the last I/O from the first tier cache according to the cache removal policy (LRU or FIFO), insert it into the second tier cache, and bring from storage the full 1MB-aligned data, so that a complete 1MB block is inserted into the second tier cache. When the second tier cache is full and we want to insert a new I/O, we remove the last I/O according to the second tier cache removal policy and discard it completely.
\item Dual policy - When an I/O misses in both caches, we bring the 1MB-aligned data from storage and insert it into both the regular cache and the flash cache (into the regular cache we insert only the part of this data corresponding to its cache chunk size). If the first level cache is full, the system removes the last I/O from the cache according to the cache policy and discards it without transferring it to the second tier cache. If the flash cache is full, the removal is identical to the removal of the \emph{Layered} policy.
\end {itemize}
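The two policies can be sketched as a toy chunk-granular two-tier LRU cache. This is our own simplification of the above: sub-chunk sections, I/O sizes, the read/write cache split, and flash-hit promotion are all omitted (a flash hit is reduced to an LRU touch):

```python
from collections import OrderedDict

class TwoTierCache:
    """Toy two-tier cache at chunk granularity (illustrative only)."""

    def __init__(self, l1_size, l2_size, policy):
        assert policy in ("layered", "dual")
        self.l1 = OrderedDict()   # DRAM tier, LRU order (oldest first)
        self.l2 = OrderedDict()   # flash tier, LRU order (oldest first)
        self.l1_size, self.l2_size, self.policy = l1_size, l2_size, policy

    def _insert_l2(self, chunk):
        if chunk in self.l2:
            self.l2.move_to_end(chunk)
            return
        if len(self.l2) >= self.l2_size:
            self.l2.popitem(last=False)   # flash eviction discards the chunk
        self.l2[chunk] = True

    def _evict_l1(self):
        victim, _ = self.l1.popitem(last=False)
        if self.policy == "layered":      # Layered: demote L1 victim to flash
            self._insert_l2(victim)
        # Dual: victim is simply dropped; flash already holds the chunk

    def access(self, chunk):
        if chunk in self.l1:
            self.l1.move_to_end(chunk)
            return "hit"
        if chunk in self.l2:
            self.l2.move_to_end(chunk)
            return "flash hit"
        if len(self.l1) >= self.l1_size:  # miss: fetch from secondary storage
            self._evict_l1()
        self.l1[chunk] = True
        if self.policy == "dual":         # Dual populates flash on every miss
            self._insert_l2(chunk)
        return "miss"
```

Under the Layered policy the flash tier fills only with first-tier evictions, while under the Dual policy it is populated up front on every miss.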

\section{Verification}
\subsection{Comparison between the policies}
In order to compare the two policies, we developed a simulator that simulates the flow of the system under each policy.
The simulator is composed of several components.

\begin {itemize}
\item Lun - This component simulates a logical unit in the system; each Lun creates its own I/Os and sends them to the cache.
\item Cache - This component simulates the regular cache in the system; it handles all the I/Os and manages its data according to its policy.
\item Flash cache - This component simulates the flash cache in the system; it handles every I/O that is not found in the regular cache.
\item Statistics Handler - This component collects all the statistics relevant to the comparison between the policies and is responsible for the creation of the output files.
\end{itemize}


\subsection{Trace data}
This simulation uses real trace data collected from several Symmetrix machines belonging to different customers. Such data is hard to collect and requires special permission from the customer. We managed to obtain real trace data from 5 different customers and base our results on this data.

The traces that we use in the simulations are composed of items which describe each and every I/O request during an extended time period. The items corresponding to each I/O request consist of:
\begin{enumerate}
\item time stamp: indicating when the request was received.
\item I/O operation: read or write.
\item logical unit ID: the volume targeted by the I/O request.
\item block offset: the offset of the start of the request within the logical unit.
\item size: size of the I/O request, in blocks.
\end{enumerate}
For example, a request might be to read 32 blocks from logical unit number 41, starting with block 15360 within the logical unit.
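A trace item of this form can be represented as a simple record; the field names below are our own illustration:

```python
from collections import namedtuple

TraceRecord = namedtuple("TraceRecord", "timestamp op lun offset size")

# The example request above: read 32 blocks of LUN 41 starting at block 15360.
req = TraceRecord(timestamp=0.0, op="read", lun=41, offset=15360, size=32)
assert req.op == "read" and req.lun == 41 and req.size == 32
```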

\subsection{Cache and Flash Cache}
Each cache is divided into cache chunks, and is composed of two parts, a read cache and a write cache. For each I/O request, the aligned cache chunk section (or sections, when the size of the I/O is larger than the cache chunk size) of the requested extent is inserted into the relevant cache section: a read I/O is inserted into the read cache, and a write I/O into the write cache. The sizes of the read and write caches vary dynamically according to the number of outstanding writes not yet written to secondary devices.
The size of the cache chunks was one of our parameters: for the regular cache we tried cache chunks of both 8KB and 64KB, while for the flash cache we fixed the cache chunk size at 1MB.
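The aligned sections touched by a request follow directly from the offset, the size, and the chunk size; a small sketch (names ours), assuming 512-byte blocks:

```python
def chunks_touched(offset_blocks, size_blocks, chunk_bytes, block_bytes=512):
    """Indices of the aligned cache chunks a request covers; an I/O larger
    than the chunk size (or straddling a boundary) touches several chunks."""
    per_chunk = chunk_bytes // block_bytes
    first = offset_blocks // per_chunk
    last = (offset_blocks + size_blocks - 1) // per_chunk
    return list(range(first, last + 1))

# A 32-block (16 KB) read at block 15360 spans two 8 KB chunks
# but only one 64 KB chunk:
assert chunks_touched(15360, 32, 8 * 1024) == [960, 961]
assert chunks_touched(15360, 32, 64 * 1024) == [120]
```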


\subsection{Algorithm Flow}

The simulation runs in one-hour time units. During each time unit the simulation receives all the trace records and simulates the I/O requests in the system. The simulation handles all the I/Os according to the cache management policy, as described in Section~\ref{Second Tier Policies}.
% ETAI: check whether to remove the part of the response time.
While handling an I/O, its response time is calculated. The response time takes into account the amount of time it takes the director to search the cache or the flash cache for the required section. When the I/O is a miss, the time it takes to fetch the requested section from the relevant device is added to the response time. This time is composed of several parts. Each device has a queue of requests waiting to be serviced. The new I/O enters the queue, and the time it has to wait in the queue until it is serviced is added to its response time. Then, the time it takes to receive the data from the device is included. This time is influenced by the type of device on which the data resides, since the type affects the read and write rates and the overhead time, as described in Section~\ref{disks}.
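The response-time accounting described above can be summarized in a small sketch; all names and the parameter breakdown are our own simplification (in particular, the queue wait is passed in as a value rather than derived from a queueing model):

```python
def response_time_ms(queue_wait_ms, lookup_ms, device_overhead_ms,
                     size_mb, rate_mb_s, miss=True):
    """Response time of one I/O: directory lookup, plus, on a miss, the
    queueing delay at the device, the per-I/O overhead, and the transfer
    time at the device's read/write rate."""
    t = lookup_ms                           # searching cache / flash cache
    if miss:
        t += queue_wait_ms                  # wait behind queued requests
        t += device_overhead_ms             # seek / rotation / firmware
        t += 1000.0 * size_mb / rate_mb_s   # data transfer
    return t

# A hit costs only the lookup; a 0.5 MB random SATA miss behind a 5 ms
# queue costs 0.1 + 5 + 10 + 10 = 25.1 ms.
assert response_time_ms(5, 0.1, 10, 0.5, 50, miss=False) == 0.1
assert abs(response_time_ms(5, 0.1, 10, 0.5, 50) - 25.1) < 1e-9
```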

%\\\indent During the simulation aggregated statistics which will be used by the Placement System are collected. These statistics are collected per extent and contain how many of the requests for each extent were reads and writes, hits and misses, sequential or random, and the total size of the requests in KB.
%\\\indent At the end of each time unit, the simulation runs the Placement System with the aggregated statistics and receives the recommended placements of all the extents in the different storage components for the next time unit. In order to correctly simulate the exchanges between the different devices, we perform the exchange by reading the data from the device it currently resides in, and writing it to the recommended device. These exchanges add heavy workload to the devices. For instance, in a simulation that allows for 10,000 extent exchanges (each extent is of size 3.75MB, as described in \ref{sec:LUextent}), we need to move around 37GB of data between the devices.
%In the case that most of this data needs to be written to a Flash device, this will take over 300 seconds just to write,  and thus in this case we will distribute this workload over at least 400 seconds while taking into consideration the current workload of the system.
%Thus, we distribute the exchanges according to the current system workload and allow up to $50\%$ utilization of the devices, over a time period relative to the number of exchanges to be made. The simulation then continues on to the next time unit as described above.





\section{Experimental Evaluation}
In this section we describe the experiments conducted to evaluate which of the policies is better. We compared only the \emph{Dual policy} and the \emph{Layered policy}, because these two policies use the same resources; the regular product, in contrast, has only the regular cache and will thus have worse performance.



\subsection {Analysis}
As mentioned before, we tested our policies on trace data from 5 real customers. For each customer we ran a simulation of each policy with 8KB cache chunks and with 64KB cache chunks, for a total of 4 simulations per customer.
Overall, when we used 64KB cache chunks the improvement was less significant than with the 8KB cache chunks. The reason is that the whole idea of using two caches, one with a small cache chunk granularity and the second with a large cache chunk granularity, is based on the locality principle. Looking at the difference between the policies, if the data obeys the locality principle we predict that the \emph{Dual policy} will have an advantage over the \emph{Layered policy}, because we bring the whole 1MB-aligned data at the beginning, and not only when the I/O is about to leave the regular cache. On the other hand, if the data does not obey the locality principle, bringing 1MB is unnecessary, and costs us both in the response time from the devices and in space in the cache.

The conclusion that the cache chunk size affects the improvement of the \emph{Dual Policy} relative to the \emph{Layered Policy} is confirmed in Figure~\ref{SummaryResults}. The figure is divided into simulations with 8KB cache chunks and with 64KB cache chunks; for every simulation we calculated the total miss ratio of the system according to the formula $\frac{\#\text{miss I/Os}}{\#\text{I/Os}}$. We can see that for all the customers the 64KB simulations improve less than the 8KB simulations, and in some cases the miss ratio of the \emph{Dual Policy} was even worse than that of the \emph{Layered Policy}, as can be seen for customer c3.
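The miss ratio is computed exactly as the formula states; for instance, with hypothetical numbers (not taken from our experiments):

```python
def miss_ratio(miss_ios, total_ios):
    """Total miss ratio of a simulation: missed I/Os over all I/Os,
    where a miss is an I/O found in neither the cache nor the flash cache."""
    return miss_ios / total_ios

# e.g. 1200 misses out of 10000 I/Os in a simulation:
assert miss_ratio(1200, 10000) == 0.12
```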

%\section{Conclusions}
%system learns and will improve and balance over time

%\section{Future Work}
\begin{figure}[t!]
\centering
\setlength\fboxsep{1pt}
\setlength\fboxrule{0.1pt}
\fbox{\includegraphics[width=120mm]{Figures/ResultsSummaryTable.pdf}}
\caption{Miss ratio summary table for all customers c1-c5.}
\label{SummaryResults}
\end{figure}

\section{Related Work}
%There is a great deal of recent literature on flash drives.
%Writing is a more complicated operation than reading in flash drives,
%since it involves erasing the previous data as a preliminary step.
%In addition, writing causes serious media wear,
%therefore, writes have to be balanced across all device addresses. The issues involved
%in writing to flash drives are considered in \cite{APWDMP,BITW, GT,KA} among others.
A comparison of SCSI and SATA drives appears in \cite{ADR};
it should be noted, though, that the comparison predates the
use of SATA drives in enterprise storage.
Various applications which could profit
from the enhanced performance of flash drives and the use of flash in enterprise storage systems have been considered in
\cite{GF,He,KV,LM,LMPKK,Le,MBL,MW,Na1,NK}.
%but, as pointed out in \cite{Na1},
%they do not take price into consideration.

The configuration of storage systems with disk drives only has been considered in
\cite{ABGRBGMSVW}.
This work uses traces as input data and is mostly
concerned with the configuration of LUs to disks. The analysis is based on a mixture of modeling,
extrapolation of device
performance tables and bin packing heuristics. Unfortunately, real traces are rarely available. In contrast,
we use the common LU and extent statistics, avoid the
bin packing issues by assuming that data is striped across devices of the same
type and use a thin queueing model which does not require traces.

A detailed analysis of performance and power in flash drives is given in \cite{CKZ}.
It is shown that, depending on the specific workload (reads/writes, sequential/random),
both the performance and the power consumption of the flash drive may vary.
A similar study of power consumption in disk drives, \cite{AAFKMN}
shows that the workload characteristics can also affect the power
consumption of disk drives in ways which are similar to the way it affects power consumption in flash drives.
The paper \cite{GT}, provides an analysis of management software and algorithms to avoid write issues in flash drives.

Configurations of enterprise arrays involving a mix of SSD, SCSI and SATA drives are commercially common these days, and accordingly many vendors offer capacity planning and dynamic data reconfiguration tools for such systems; see
\cite{EMC,IBM,compellent,3par,netap} among others. Details of the operation of these tools are not publicly available.
In addition, to work at such a level of detail the tools must be fully integrated with the storage system. At this level of granularity the flash is managed more efficiently, with methods which resemble caching.

A recent study, \cite{GPGBR}, provides a description of a tool which relies on I/O traces to produce the required extent based statistics, which are then used to provide capabilities similar to those of the commercial tools to a prototype of a disk array. The tool is then successfully tested on a single production trace, showing the advantages of mixed configurations as well as dynamic reconfiguration.
The modeling in \cite{GPGBR} is similar in many ways to our modeling, but lacks the queueing-theoretic load component; also, given the detailed data, being conservative is not a concern.


\begin{thebibliography}{99}
\end{thebibliography}

\end{document}
% ----------------------------------------------------------------
