%\section{Synthesize and Replay Workloads}
\section{Workload Synthesis and Execution}
\label{sec:synthesis}

We would like to synthesize a representative workload for a particular use case and execute it to evaluate MapReduce performance for a specific configuration. For MapReduce cluster operators, this approach offers more relevant insights than those gathered from one-size-fits-all benchmarks. We describe here a mechanism to synthesize representative workloads from MapReduce traces (Section \ref{subsec:synthesis}), and a mechanism to execute the synthetic workload on a target system (Section \ref{subsec:replay}).

\subsection{Design Goals}

\noindent We identify two design goals:

\emph{1. The workload synthesis and execution framework should be agnostic to hardware/software/configuration choices, cluster size, specific MapReduce implementation, and the underlying file system}. We may intentionally vary any of these factors to quantify the performance impact of hardware choices, software (e.g., task scheduler) optimizations, configuration differences, cluster capacity increases, MapReduce implementation choices (open source vs. proprietary), or file system choices (distributed vs. local vs. in-memory).

\emph{2. The framework should synthesize representative workloads that execute in a short duration}. Such workloads lead to rapid performance evaluations, i.e., rapid design loops. It is challenging to achieve both representativeness and short duration. For example, a trace spanning several months forms a workload that is representative by definition, but practically impossible to replay in full. 

%\emph{3. The framework should anonymize the data and mask the computation done by the jobs}. This requirement seemingly conflicts with the goal of capturing representative workloads, but it is necessary to protect proprietary information and enable sharing the synthetic workloads with the research community. 

%Leaving out the compute portion of production jobs would allow us to compare workloads with different computations, thus creating a far more generic and widely applicable workload generation tool. Despite ignoring the compute part, we can still capture the essential characteristics of the jobs that facilitate configuration and trade-off design analysis. In the long term, we believe MapReduce type systems would become more IO bound, since CPU performance improves faster than IO performance. Thus, in the future, the cost ignoring compute would become progressively less. We later measure the cost of ignoring the compute component for present technology. 



\subsection{Workload Synthesis}
\label{subsec:synthesis}

The workload synthesizer takes as input a MapReduce trace over a time period of length $L$, and the desired synthetic workload duration $W$, with $W < L$. The MapReduce trace is a list of jobs, with each item containing the job submit time, input data size, shuffle/input data ratio, and output/shuffle data ratio. Traces with this information allow us to synthesize a workload with representative job submission rates and patterns, as well as representative data sizes and data ratios. The data ratios also serve as an approximation of the actual computation being done; the approximation is a good one for a large class of IO-bound MapReduce computations. 
%We measure the quality of the approximation in Section \ref{sec:evaluate}. 

The workload synthesizer divides the synthetic workload into $N$ non-overlapping segments, each of length $W/N$. It fills each segment with a randomly sampled window of length $W/N$ taken from the input trace. Each sample contains a list of jobs and, for each job, the submit time, input data size, shuffle/input data ratio, and output/shuffle data ratio. Concatenating $N$ such samples yields a synthetic workload of length $W$. The synthetic workload thus samples the trace at a number of randomly chosen time segments; if $W \ll L$, the samples have a low probability of overlapping.
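This sampling procedure can be sketched as follows (a minimal Python sketch, not our actual implementation; we assume the trace is a list of (submit time, input size, shuffle/input ratio, output/shuffle ratio) tuples, with submit times measured from the start of the trace):

```python
import random

def synthesize(trace, L, W, N):
    """Sample N random windows of length W/N from a trace of length L
    and concatenate them into a synthetic workload of length W."""
    seg = W / N
    workload = []
    for i in range(N):
        # pick a random window [start, start + seg) within the trace
        start = random.uniform(0, L - seg)
        for (t, size, si_ratio, os_ratio) in trace:
            if start <= t < start + seg:
                # shift the submit time into segment i of the workload
                workload.append((t - start + i * seg,
                                 size, si_ratio, os_ratio))
    workload.sort(key=lambda job: job[0])
    return workload
```

Each sampled window is shifted so that its jobs land in consecutive segments of the synthetic workload, preserving the within-window submission patterns.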

The workload synthesizer returns a list of jobs in the same format as the input trace, with each item containing the job submit time, input data size, shuffle/input data ratio, and output/shuffle data ratio. The primary difference is that the synthetic workload has duration $W < L$. 

We satisfy design Goal 1 by the choice of what data to include in the input trace -- the included data contains no cluster-specific information. In Section \ref{sec:evaluate}, we demonstrate that Goal 2 is also satisfied. Intuitively, increasing $W$ yields more samples, hence a more representative workload. Increasing $W/N$ captures more representative job submission sequences, but at the cost of fewer samples within a given $W$. Adjusting $W$ and $N$ thus lets us trade off representativeness in one characteristic against another. 

%Also, if Requirement 2 is not a concern, and we have information about data formats and the computation done, we can readily enhance the workload synthesis algorithm. We simply add for each job the data format at input/shuffle/output, and the computation done at the map and reduce stages. 

%
%
%From our five-number summary statistics for each metric we collect for our jobs, we construct an approximate CDF for that metric by linearly extrapolating between the percentile metrics. We construct our synthetic workload by probabilistically sampling the approximated distributions per-metric and per-job. The sampling algorithm is as follows:
%
%{\tt \scriptsize
%\begin{verbatim}
%    1. Sample the five-number summary-based  
%       CDF of job inter arrival times.
%    2. Sample the CDF of job indices. 
%    3. For the job index obtained in Step 2, 
%       sample the five-number summary for 
%       the CDFs of per-node input data size, 
%       shuffle-input ratio, and output-shuffle 
%       ratio.
%    4. Construct vector 
%       [  inter job arrival time, 
%          job index, 
%          input data size, 
%          shuffle-input ratio, 
%          output-shuffle ratio  ] 
%       and append to the workload.
%    5. Repeat until reaching the desired 
%       number of jobs or the desired workload 
%       time interval length.
%\end{verbatim}
%}
%
%Thus, the synthetic workload would be a list of jobs described by 
%
%{\tt \scriptsize
%\begin{verbatim}
%   [  inter job arrival time, 
%      job index, 
%      input data size, 
%      shuffle-input ratio, 
%      output-shuffle ratio  ] 
%\end{verbatim}
%}




\subsection{Workload Execution}
\label{subsec:replay}

The workload executor translates the job list from the synthetic workload to concrete MapReduce jobs that can be executed on artificially generated data. The workload executor runs a shell script that writes the input test data to the underlying file system (HDFS in our case), launches jobs with specified data sizes and data ratios, and sleeps between successive jobs to account for gaps between job submissions: 

{\tt \scriptsize
\begin{verbatim}
  HDFS randomwrite(max_input_size)
  
  sleep interval[0]
  RatioMapReduce inputFiles[0] output0 \
    shuffleInputRatio[0] outputShuffleRatio[0]
  HDFS -rmr output0 &
    
  sleep interval[1]
  RatioMapReduce inputFiles[1] output1 \
    shuffleInputRatio[1] outputShuffleRatio[1]
  HDFS -rmr output1 &
    
  ...
\end{verbatim}
}
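Translating the synthetic job list into such a script is mechanical. The following is a minimal Python sketch (the job-tuple layout and the \texttt{RatioMapReduce}/\texttt{HDFS} command names follow the pseudocode above; \texttt{make\_script} is a hypothetical helper, not part of our tool):

```python
def make_script(jobs):
    """Translate a synthetic job list into a shell script like the
    one above.  Each job is a (submit_time, input_files,
    shuffle_input_ratio, output_shuffle_ratio) tuple; submit times
    are absolute, so each sleep covers the gap between submissions."""
    lines = []
    prev = 0.0
    for i, (t, infiles, si, osr) in enumerate(jobs):
        lines.append(f"sleep {t - prev:.0f}")
        lines.append(f"RatioMapReduce {infiles} output{i} \\")
        lines.append(f"  {si} {osr}")
        # remove the job output in the background (see below)
        lines.append(f"HDFS -rmr output{i} &")
        prev = t
    return "\n".join(lines)
```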

%Our workload execution tool includes (1) a mechanism to populate the input data, (2) a MapReduce job that can adhere to specified input-shuffle and output-shuffle ratios, and (3) a mechanism to remove the MapReduce job output to prevent storage capacity overload. We discuss each of these three elements in turn.

\noindent \emph{Populate the file system with test data:}

We write the input data to HDFS using the RandomWriter example included with recent Hadoop distributions. This job creates a directory of fixed-size files, each corresponding to the output of one RandomWriter reduce task. We populate the input data only once, writing the maximum per-job input data size in our workload. Jobs in the synthetic workload take as input a random sample of these files, determined by each job's input data size. The input data size therefore has the same granularity as the file size, which we set to 64~MB, the default HDFS block size. We believe this setting is reasonable because the input files are then as granular as the underlying HDFS blocks. We validated that concurrent jobs reading from the same HDFS input incur negligible overhead (Section \ref{sec:evaluate}).
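The per-job input selection can be sketched as follows (a minimal Python sketch; \texttt{pick\_input\_files} is a hypothetical helper, and the 64~MB file size mirrors the HDFS block size choice above):

```python
import math
import random

FILE_SIZE_MB = 64  # matches the default HDFS block size

def pick_input_files(files, input_size_mb):
    """Choose a random sample of fixed-size input files large enough
    to cover a job's input size, rounded up to file granularity."""
    n = math.ceil(input_size_mb / FILE_SIZE_MB)
    return random.sample(files, n)
```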

\vspace{2pt}
\noindent \emph{MapReduce job to preserve data ratios:}

We wrote a MapReduce job that reproduces job-specific shuffle/input and output/shuffle data ratios. This RatioMapReduce job uses a straightforward probabilistic identity filter to enforce the data ratios. We show only the map function below; the reduce function uses an identical algorithm. 

{\tt \scriptsize
\begin{verbatim}
  class RatioMapReduce {

  x = shuffleInputRatio  // may be fractional

  map(K1 key, V1 value, <K2, V2> shuffle) {
    // emit floor(x) shuffle records per input record
    repeat floor(x) times
      shuffle.collect(new K2(randomKey), new V2(randomVal));
    // emit one more with probability frac(x), the fractional
    // part of x, so the expected shuffle/input ratio is x
    if (randomFloat(0,1) < frac(x))
      shuffle.collect(new K2(randomKey), new V2(randomVal));
  }

  reduce(K2 key, <V2> values, <K3, V3> output) {
    ...  // same algorithm, with x = outputShuffleRatio
  }

  } // end class RatioMapReduce
\end{verbatim}
}
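The expected-ratio argument can be checked with a short simulation (a Python sketch of the same probabilistic filter; \texttt{emit\_with\_ratio} is a hypothetical stand-in for the map-side collect loop):

```python
import math
import random

def emit_with_ratio(ratio):
    """Return how many records the filter emits for one input record:
    floor(ratio) always, plus one more with probability equal to the
    fractional part of ratio, so the expectation is exactly ratio."""
    count = math.floor(ratio)
    if random.random() < ratio - count:
        count += 1
    return count
```

Averaged over many input records, the emitted/input record count converges to the target ratio.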


%reduce(K2 key, <V2> values, <K3, V3> output) {
%
%  for each v in values {
%    repeat floor(y) times {
%      output.collect(new K3(randomKey), 
%                     new V3(randomValue));
%    }
%  }
%
%  if (randomFloat(0,1) < decimal(y)) {
%		output.collect(new K3(randomKey), 
%		               new V3(randomValue));
%  }
%} // end reduce()

\noindent \emph{Removing HDFS output from the synthetic workload:}

We need to remove the data generated by the synthetic workload; otherwise, its outputs accumulate and quickly exhaust the cluster's storage capacity. We use a straightforward HDFS remove command, issued as a background process by the main shell script that runs the workload. We also experimentally verified that this mechanism imposes no performance overhead (Section \ref{sec:evaluate}).

