\documentclass[10pt]{article}

\title{Technical Report: Robust Optimization}
\author{Jun HU\footnote{OPS}}
\date{\today}

\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{amssymb}
\usepackage{fullpage}
\usepackage{graphicx} 
\usepackage{lmodern}
%\usepackage{pdfpages}


\renewcommand{\familydefault}{\sfdefault} %set default font Sans 
\renewcommand{\baselinestretch}{1.50}\normalsize %set line space 1.5

\providecommand{\expt}[1]{{\mathbb E}[ #1 ]}

\begin{document}
\maketitle

\begin{abstract}
This paper surveys recent developments in Robust Optimization. Several approaches are presented in the light of our particular case, and a robust approach based on simulation-optimization is proposed to deal with it.
\\{\bf Keywords:} robust, non-convex, convex, stochastic optimization, on-line, recoverable, simulation, metamodel 
\end{abstract}

\tableofcontents


\pagebreak

\section{Introduction}

%% ==================== State of the art ===================

``Robust control refers to the control of unknown plants with unknown dynamics subject to unknown disturbances'' \cite{Chandrasekharan1996robust}. The key issue in robust control systems is uncertainty and how the control system can deal with it. Robust optimization tackles problems affected by uncertainty, providing solutions that are almost insensitive to perturbations in the model parameters \cite{Dellino2009thesis}. 


Stochastic programming is a framework for modeling optimization problems that involve uncertainty. Its most popular formulation is two-stage stochastic programming, in which the optimal decisions must be based on the data available at the time the decisions are made and may not depend on future observations. 

Robust convex optimization \cite{BenTal1998,Beyer2007} addresses uncertainty by optimizing against the worst case over a given uncertainty set. 


On-line robust optimization (on-line anticipatory stochastic optimization) deals with uncertainty as the problem data is revealed over time. 

Robust optimization essentially attempts to produce a solution, or a set of solutions, that is relatively optimal across all the different scenarios. It makes a compromise by sacrificing some performance on any single scenario.  


\subsection*{Static versus dynamic robust optimization}
These scenarios may be predictable. We classify these optimization problems into two categories: 


\begin{itemize}
\item Static robust optimization
	\begin{itemize}
     \item Robust convex optimization 
     \item Stochastic programming
     \item Heuristics based on the statistical solutions evaluation
     \end{itemize}
\item Dynamic robust optimization
	\begin{itemize}
	\item On-line anticipatory stochastic programming
    \item Off-line optimization with model prediction technique
    \end{itemize}
\end{itemize}

Static robust optimization derives from off-line optimization, while dynamic robust optimization derives from on-line optimization. Static robust optimization addresses problems whose scenarios are predictable over a pre-defined future. This pre-defined future is fixed before the optimization procedure, and solutions are evaluated exclusively against criteria on these future scenarios. 

Dynamic robust optimization, in contrast, concentrates on solving the dynamically arriving near-future scenarios.  

In dynamic robust optimization, two elements are essential: 

\begin{itemize}
\item Future scenarios prediction
\item Rapid feedback gathering (closed-loop control)
\end{itemize}
The following figure illustrates the general scheme of an on-line algorithm: 

\begin{figure}[h]
\centering
\includegraphics[width=.7\textwidth]{pic/onlineScheme.pdf}
\caption{General scheme of an on-line algorithm}
\end{figure}


Forecasting of future scenarios plays an important role in the on-line algorithm scheme. In our case, the scenarios are forecast with an ARIMA model (Section \ref{ch:arima}). 

%% ==================== User Case ===================
%% ==================================================
\section{On-line optimization}

In an ideal world, the data of an optimization problem is completely known in advance of the optimization process. \textbf{On-line optimization}, by contrast, is an optimization scheme dedicated to solving optimization problems whose data is revealed step by step. 


In a telecommunication network, the feedback delay is considerably long (possibly over one hour). Hence, for a global optimization strategy, the methods proposed by the robust (automatic) control community, which depend essentially on rapid feedback, may not be suitable for our case. Simulation therefore plays an important role in network optimization, providing simulated feedback of the system; in this context, our problem can be classified as a simulation-optimization problem. The simulation results stand in for the delayed real feedback during optimization. 



\section{Simulation-optimization}

\textbf{Simulation-optimization} aims to identify the setting of the input parameters of a simulated system leading to optimal system performance. In practice, however, some of these parameters cannot be perfectly controlled, due to measurement errors or other implementation issues, and because of the inherent uncertainty caused by fluctuations in the environment (e.g. temperature or pressure in physical and chemical processes, or demand in inventory problems). Consequently, the classic optimal solution derived while ignoring these sources of uncertainty may turn out to be sub-optimal or even infeasible. Robust optimization (RO) offers a way to tackle this class of problems, with the purpose of deriving solutions that are relatively insensitive to perturbations caused by the noise factors. 


In the simulation literature, a metamodel is an explicit model of an underlying simulation model; metamodels are also called response surfaces, surrogates, emulators, etc. Here, we restrict our attention to the {\it Kriging} metamodel (also known as the Gaussian Process model in the machine learning community). 



The traffic in a dynamic telecommunication network exhibits periodic behavior, random fluctuations and occasional bursts \cite{}. One important property is that the network traffic is spatio-temporally correlated. 

Simulation cannot be avoided because of the feedback delay, yet it brings its own errors: first, the simulation inherently includes simulation model error; second, any metamodel built on top of it introduces an additional metamodel error. 

Following the approach of MPC (Model Predictive Control), one solution is to generate the future scenarios and treat the result as an off-line control problem. The scenario construction builds on the traffic forecasts described below. 


The variance measures how far a set of numbers is spread out. The variance of a set of numbers $X$ is its second central moment, the expected value of the squared deviation from the mean: 

\begin{equation}
Var(X)=\expt{(X-\expt{X})^2}
\end{equation}
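As a quick sanity check on this definition, the population variance can be computed directly; this small helper (`variance` is an illustrative name, not part of our toolchain) mirrors the formula above:

```python
def variance(xs):
    """Population variance: the expected squared deviation from the mean."""
    mu = sum(xs) / len(xs)                        # sample estimate of E[X]
    return sum((x - mu) ** 2 for x in xs) / len(xs)

# mean of [2, 4, 4, 4, 5, 5, 7, 9] is 5; squared deviations sum to 32
v = variance([2, 4, 4, 4, 5, 5, 7, 9])            # -> 4.0
```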


\section{Forecasting with ARIMA}\label{ch:arima}
ARIMA (AutoRegressive Integrated Moving Average) integrates the AR and MA models, allowing differencing of the data series so as to predict non-stationary time series. The general non-seasonal model is denoted $ARIMA(p,d,q)$, where $p$ is the order of the autoregressive part, $d$ is the degree of first differencing involved and $q$ is the order of the moving average part. 


Consider the following autoregression equation: 

\begin{equation}
Y_t = b_0 + b_1Y_{t-1} + b_2Y_{t-2} + \dots + b_pY_{t-p} + e_t
\end{equation}

where $Y_t$ is the forecast variable, $Y_{t-1},\dots,Y_{t-p}$ are its previous values (the explanatory variables), and $e_t$ is white noise. 
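As an illustration of the autoregressive idea (not the $ARIMA(2,1,2)$ model used in our experiments), the coefficient of an AR(1) process can be estimated by least squares on a synthetic series; `fit_ar1` is a hypothetical helper assuming a zero intercept:

```python
import random

def fit_ar1(y):
    """Least-squares estimate of b1 in Y_t = b1 * Y_{t-1} + e_t (zero intercept)."""
    num = sum(y[t - 1] * y[t] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

# synthetic AR(1) series with known coefficient 0.7
random.seed(0)
b1_true = 0.7
y = [0.0]
for _ in range(5000):
    y.append(b1_true * y[-1] + random.gauss(0, 1))

b1_hat = fit_ar1(y)                 # close to 0.7
one_step_forecast = b1_hat * y[-1]  # forecast of Y_{t+1}
```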



% List of the methods, or comparing with Kalman filter 

\subsection{Kalman filter} %% comparing to ARIMA model, the article


\subsection{Experiments setting}
In our case, the parameters are set to $ARIMA(2,1,2)$. The measurement methods are presented in the section on forecasting measures. 

Based on the historical data, the ARIMA model estimates the future traffic density per cell. 


\subsection{Experimental results}





\section{Kriging metamodel}
The metamodel, i.e. the model of a model, is a response surface method in simulation. By definition, a metamodel is an approximation of the input/output function defined by the simulation model. The Kriging metamodel is one such response surface method, originally developed in geostatistics. 

Before introducing the kriging metamodel, let us first examine a simpler metamodel, the linear regression model, whose formula can be expressed as: 

\begin{equation}
Y=X\beta+\varepsilon
\end{equation}

where $Y$ denotes the $n$-dimensional vector of regression predictors, and $n$ is the number of simulated experiments. $X$ denotes the $n\times q$ matrix of explanatory regression variables with entries $x_{ij}$, $i=1,\dots,n$, $j=1,\dots,q$. $\beta$ denotes the $q$-dimensional vector of regression parameters. Finally, $\varepsilon$ is the vector of residuals in the $n$ factor combinations. 

The idea behind kriging is to associate the spatial covariance of the variables with their values. The simplest kriging metamodel (Ordinary Kriging) assumes that:  

\begin{equation}\label{equ:kriging}
Y(x)= \mu(x) + \varepsilon(x)
\end{equation}

where $Y(x)$ is the metamodel representing the input/output function, $x\in \mathbb{R}^m$ is a design point in $m$-dimensional space, $\mu(x)$ is a constant mean, and $\varepsilon(x)$ is a stationary Gaussian stochastic process with zero mean and variance $\sigma^2(x)$. The covariance of $Y(x+h)$ and $Y(x)$ depends exclusively on the distance (lag) $|h| = |(x+h)-x|$. 


The kriging predictor at a non-simulated input point $x_0\in\mathbb{R}^m$, denoted $\hat{Y}(x_0)$, is a weighted linear combination of all $n$ simulated outputs (simulation experiments), with weights $\lambda_i$, $i=1,\dots,n$: 

\begin{equation}
\hat{Y}(x_0) = \sum^n_{i=1}\lambda_i\times Y(x_i)
\end{equation}

In the literature, the best linear unbiased estimator (BLUE) is usually applied to estimate the weights $\lambda_i$; it minimizes the mean squared prediction error:

\begin{equation}
MSE(\hat{Y}(x_0))=\expt{(Y(x_0)-\hat{Y}(x_0))^2}
\end{equation}

The above minimization is subject to the condition that the predictor is unbiased: 

\begin{equation}
\expt{Y(x_0)} = \expt{\hat{Y}(x_0)}
\end{equation}

The optimal values of the kriging weights $\lambda_i$ depend on the correlations between the simulation outputs in the kriging model (Equ. \ref{equ:kriging}). A popular correlation function in kriging is the Gaussian correlation function: 


\begin{equation}
corr[Y(x_i),Y(x_j)]=\prod^m_{k=1} \exp(-\theta_k|x_{ik}-x_{jk}|^2)
\end{equation}

where $\theta_k$ denotes the importance of input $k$ (the higher $\theta_k$ is, the less effect input $k$ has). The correlation decreases with the distance. In the simulation case, the type of the correlation function and the parameter values must be estimated; the parameters $(\mu, \sigma^2, \theta)$, where $\theta=(\theta_1,\dots,\theta_m)$, are typically estimated by maximum likelihood. 
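The Gaussian correlation function above can be sketched directly; `gauss_corr` is an illustrative name, and in practice the kriging weights follow from solving the correlation system built with it:

```python
import math

def gauss_corr(xi, xj, theta):
    """Gaussian correlation: prod_k exp(-theta_k * |x_ik - x_jk|^2)."""
    return math.prod(math.exp(-t * (a - b) ** 2)
                     for a, b, t in zip(xi, xj, theta))

# correlation is 1 at zero distance and decays with the lag
r_same = gauss_corr([1.0, 2.0], [1.0, 2.0], [0.5, 0.5])  # -> 1.0
r_far  = gauss_corr([0.0, 0.0], [1.0, 2.0], [0.5, 0.1])  # between 0 and 1
```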




%% ==================================================================
%% Experiment setting and results
%% ==================================================================

\section{Experiments - Forecasting}
The network under test is the mobile network in Besancon, which contains 196 cells. The data consists of the traffic density for each cell per hour over the whole day of 3 June 2013 (00:00--23:00). 

The learning data is from 


The predicted data is the traffic density in the following 3 hours, and the predictions will be measured against real data at the same time stamps. 



\section{Experiments - Kriging metamodel}
In order to simplify the problem, the configuration of each antenna is restricted to a single parameter, the electrical tilt; the other variable of concern is the traffic for each antenna. There are 200 antennae in the examined region. As in DACE, the function accepts the data as an $m\times n$ matrix, where $m$ is the number of variables and $n$ is the number of simulation experiments. The variables consist of two parts: the antenna configurations and the network traffic. 

The experimental setting is given as follows. The interval of antenna electrical tilt is $[-4^{\circ},4^{\circ}]$ around the current position; the interval of network traffic is set between the highest and the lowest values for each cell between 9am and 9pm\footnote{See the Appx 1}.  




\subsection{Cellular selection}


The number of variables (features, in machine learning terms) must be kept small relative to the number of observations. The cells were therefore chosen by the variance of their network traffic between 10am and 7pm: 33 cells were selected this way. Correspondingly, the configurations of the antennae inside this sub-network were considered for Kriging. 




\subsection{Sampling}
LHS (Latin Hypercube Sampling) is a frequently used sampling technique in Kriging. The method consists in choosing combinations of variable values within their intervals such that each variable's interval is evenly stratified. The experimental setting for 
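A minimal LHS sketch in the unit hypercube (`lhs` is a hypothetical helper; scaling the points to the actual tilt and traffic intervals is a separate affine step):

```python
import random

def lhs(n, dims, seed=0):
    """Latin Hypercube Sample: n points in [0,1)^dims, exactly one point
    per equal-width stratum along each dimension."""
    rng = random.Random(seed)
    cols = []
    for _ in range(dims):
        # one random point inside each of the n strata, then shuffle
        col = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(col)
        cols.append(col)
    return [tuple(col[i] for col in cols) for i in range(n)]

pts = lhs(10, 2)  # 10 design points in 2 dimensions
```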




\section{Measure of forecasting}

\begin{itemize}
\item Mean Error (ME)
\item Mean Percentage Error (MPE)
\item Mean Squared Error (MSE)
\item Root Mean Squared Error (RMSE)
\item Mean Absolute Error (MAE)
\item Mean Absolute Scaled Error (MASE)
\end{itemize}

The MASE scales the forecast errors by the in-sample mean absolute error of the one-step naive forecast:

\begin{equation}
MASE = \frac{1}{n}\sum_{t=1}^{n}\left(\frac{|e_t|}{\frac{1}{n-1}\sum_{i=2}^{n}|y_i-y_{i-1}|}\right)
\end{equation}
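The MASE formula can be sketched as follows (`mase` is an illustrative helper; it assumes the naive-forecast errors are computed on the same series):

```python
def mase(y_true, y_pred):
    """Mean Absolute Scaled Error: forecast errors scaled by the mean
    absolute one-step naive error on the same series."""
    n = len(y_true)
    errors = [abs(t - p) for t, p in zip(y_true, y_pred)]
    # denominator: mean absolute difference of consecutive observations
    scale = sum(abs(y_true[i] - y_true[i - 1]) for i in range(1, n)) / (n - 1)
    return sum(e / scale for e in errors) / n

# perfect forecast -> 0; one unit error on a unit-step series -> 1/n
m0 = mase([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0])  # -> 0.0
m1 = mase([1.0, 2.0, 3.0, 4.0], [0.0, 2.0, 3.0, 4.0])  # -> 0.25
```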


\begin{equation}
ME=\frac{1}{n}\sum_{t=1}^{n}e_t
\end{equation}



\pagebreak



% ===================================================================
% ==================================== SPLIT LINE ===================
% THE FOLLOWING TEXT IS ONLY FOR REFERENCE
% ===================================================================


\subsection{Sensitivity analysis}
Sensitivity analysis consists in finding the impact of the variance of the input variables on the function value. 


\section{Introduction}
Telecommunication operators typically use a simulation model as a surrogate for network design in day-to-day operations. However, simulation may be a time-consuming process, and the decision maker may not be able to predict every design scenario in advance. The metamodel (the model of the model) is introduced to reduce this cost. 

In \cite{}, the Kriging metamodel demonstrated its advantage over linear and quadratic metamodels in stochastic dynamic system simulation. Such advantages surface when both intrinsic and extrinsic noises are included. Stochastic kriging takes into account the uncertainty associated with measurement and/or model inaccuracy in stochastic simulation. 

The Kriging metamodel is well suited to capturing nonlinear behavior because its correlation functions can be tuned with the sample data. 

\begin{equation}
Y(x)=f(x)\beta + M(x) 
\end{equation}

where $M(x)$ is the extrinsic noise, a stochastic process with (usually, though not necessarily) zero mean and correlation function given by


\begin{equation}
Corr[M(x_a),M(x_b)] = e^{-\sum\theta_j(x_{aj}-x_{bj})^2}
\end{equation}


Krige's idea was that the difference between the prediction of a linear model and the true mean value of a function at some point can be related to the distance between the point of interest and a set of sampled points in the input space. 


Nowadays, complex simulation models give us access to system analysis without constructing the real system. These models act as surrogates for system design and optimization. However, for certain systems, the complexity of the simulation models leads to long running times and expensive investment costs. It is also important to mention that the uncertainty introduced by unknown physical phenomena and inaccurate system parameters may have a considerable impact when simulating real systems. The metamodel, as the model of the simulation model, is introduced to reduce the simulation cost and to cope with the uncertainty. 


Metamodel (response surface) techniques are frequently introduced when simulation is time-consuming and expensive. Among the different approaches, the Kriging metamodel has shown good promise in various problems involving both deterministic and stochastic simulations. 



\pagebreak

\section{Description}
Stochastic dynamic system, 

Error estimation (error propagation)


Why metamodeling? First, the metamodel predicts the target values without expensive simulation; second, multi-objective optimization benefits from constructing the Pareto front from the metamodel. 

The metamodel is a response surface method in simulation. 


The multivariate Gaussian process regression is achieved by the linearization of  multi-outputs. 

\pagebreak

\section{Introduction}
In the domain of operational research, there exist applications whose objective function is not given explicitly. Instead, the objective function is evaluated by simulation. An optimization problem may thus be solved by a simulation-optimization procedure, in which the objective function $f(x)$ is replaced by the observed values $\tilde{f}(x)$ obtained from the simulation. Simulation involves two classes of models, deterministic models and random (including discrete-event) models. The former represent real-world systems obeying physical laws, while the latter represent social activities in which human behavior creates the noise. For example, in a typical telecommunication network, the physical network is simulated by a deterministic model while the daily call demands are interpolated by a stochastic model. In this paper, we set the random simulation model aside from our case. 




\section{Simulation-Optimization}
Simulating a real-world scenario can be computationally intensive even with the simplifications of the computer model. Metamodel techniques have been widely deployed to improve the efficiency of the simulation-optimization procedure. When metamodel techniques are applied, they may introduce another inevitable error, the metamodel error, which coexists with the simulation model error. 


Due to lacking knowledge of the environment, or to simplifications in the computer model, several types of errors may be introduced into the simulation. There exist at least two types of errors: 

\begin{itemize}
\item Simulation model error: the inaccuracy between the reality and the computer model
\item Metamodel error: the inaccuracy between the simulation model and the metamodel
\end{itemize}

The first error is due to the simplification of the simulation model when facing a complex real-world case, whether for lack of knowledge of the real environment or for computational reasons. 


Stinstra and den Hertog \cite{Stinstra2008experiments} applied Ben-Tal and Nemirovski's robust counterpart theory \cite{BenTel2002} to metamodels to cope with the simulation model error and the metamodel error during optimization. Figure \ref{fig:errorDiagram} shows a diagram of the simulation-optimization procedure and the errors introduced during simulation. 

\begin{figure}[h]
\centering
\includegraphics[width=.35\textwidth]{pic/errorsDiag} 
\caption{Errors in the simulation-optimization procedure}\label{fig:errorDiagram}
\end{figure}



The Kriging metamodel (also Gaussian Process regression) distinguishes the response features of the complex simulation model and tends to perform better than polynomial (linear, quadratic) models when robustness is concerned \cite{Jin2003}, though Beyer and Sendhoff \cite{Beyer2007} argued that evidence on its scalability is lacking. Recently, Kleijnen gave a comprehensive survey of the use of Kriging in simulation \cite{Kleijnen2009}. 



Metamodel can be expressed as an approximation \cite{Stinstra2006thesis}:
\begin{equation}
\hat{f}:\Re^{n}\rightarrow\Re^{m}
\end{equation}

where $\Re^{n}$ is the space of design parameters and $\Re^{m}$ is the space of response parameters (KPIs: key performance indicators). 


\begin{equation}
\begin{split}
& \min_{x,z}  z \\ 
s.t.&\: \hat { f_{ 0 } }(x) \le z, \: \forall \breve { Y } _{ 0 }=(1+{ \varepsilon  }_{ 0 }^{ m })Y_{ 0 }+{ \varepsilon  }_{ 0 }^{ a },\; \forall \varepsilon _{ 0 }^{ m }\in U_{ 0 }^{ m },\forall \varepsilon _{ 0 }^{ a }\in U_{ 0 }^{ a } \\ 
&\: \hat { f_{ i } }(x) \le\gamma_i, \: \forall \breve { Y } _{ i }=(1+{ \varepsilon  }_{ i }^{ m })Y_{ i }+{ \varepsilon  }_{ i }^{ a },\; \forall \varepsilon _{ i }^{ m }\in U_{ i }^{ m },\forall \varepsilon _{ i }^{ a }\in U_{ i }^{ a }
\end{split}
\end{equation}

\section{Strict robust optimization problem}
\subsection{Robust Convex Optimization}

The bounds of the uncertainty set are known, or the uncertainty is given in the form of a set of discrete scenarios. 

A survey is given by Bertsimas et al. \cite{Bertsimas2010app}

 
In \cite{BenTal1998}, Ben-Tal and Nemirovski gave a comprehensive introduction to robust convex optimization; they suggested that even when certain constraints may be slightly violated, constraint satisfaction still needs to be addressed first. Consider an optimization problem of the form: 

\begin{equation}
\begin{split}
& \min  f_0(x,\zeta)\\
s.t. &\:f_i(x,\zeta)\le 0
\end{split}
\end{equation}

where $x\in\mathbb{R}^n$ are the decision variables and $i=1,\dots,m$ indexes the constraints. 

A robust optimal solution can be found by solving the worst-case problem: 

\begin{equation}
\min\left\{ \sup_{\zeta\in\mathcal{U}}f(x,\zeta): F(x,\zeta)\in K, \forall\zeta\in\mathcal{U}\right\}
\end{equation}




\subsection{Stochastic Optimization}

Stochastic optimization minimizes the expected value of the uncertain part of the objective:
\begin{equation}
\min f(x)+\mathbb{E}[g(x,\zeta)]
\end{equation}

where $\mathbb{E}$ is the expected value operator. 

\subsection{Simulation-Optimization}
Recently, Bertsimas raised the issue of robust optimization when the problem is nonconvex \cite{Bertsimas2010}, as arises in operational research applications. The multiple gradient ascent method is applied to search for robust local solutions. 



\section{Dynamic Robust Optimization}

\subsection{Online anticipatory stochastic optimization}
The {\bf REGRETS} algorithm \cite{Bent2004} is a representative approach to online anticipatory stochastic optimization. 




\subsection{Recoverable robust optimization}

Recoverable Robust Optimization\cite{Liebchen2007}

Given a problem and a set of scenarios $\mathcal{S}$, the objective is to find a pair $(x,A)$, where $x\in\mathcal{X}$ is a planning solution and $A\in\mathcal{A}$ is a recovery algorithm which, for every scenario $s\in\mathcal{S}$, can turn $x$ into a feasible solution of $s$. 

\section{Results}\label{results}
In this section we describe the results.





\section{Modeling}


\subsection{Stochastic simulation modeling}


\subsection{Deterministic simulation modeling}
The metamodel is a technique for deterministic simulation models (see the survey in \cite{Simpson1997}). Metamodel techniques include: 

\begin{itemize}
\item Frequency domain 
\item linear and quadratic metamodel
\end{itemize}




\section{ROOT optimization}
Robust Optimization Over Time (ROOT) deals with robustness in dynamic optimization problems, which are deterministic at any time instant but change over time. It takes into account both the uncertainties in the parameter space and the cumulative effect of these uncertainties over time. Given $\vec{x}$ as the decision variable, $\alpha(t)$ as the time-dependent problem parameters, and a sequence of $l=\lceil\frac{T}{\tau}\rceil$ problem instances during the time span $t\in[0,T]$, the problem can be formulated as:

\begin{equation}
<f(\vec{x},\vec{\alpha}_1), f(\vec{x},\vec{\alpha}_2),\dots,f(\vec{x},\vec{\alpha}_l)>
\end{equation}
 
The objective is to find a sequence of solutions $<S_1, \dots, S_k>$, where $1\leq k\leq l$, such that $k$ is minimal.

The measurement of robustness, 

\subsection{EA approach}
Hatzakis and Wallace \cite{Hatzakis2006eampc} divide the population in EA into three pools, the front, the cruft and the prediction set pools. 

The competition between convergence and diversity must be balanced. Past information (previously fit solutions) might become useful again as the problem evolves. 


\subsection{Detecting change in dynamic optimization problem}



\section{Nonlinear MPC control} 

Grosman and Lewin proposed an automated nonlinear MPC using EA \cite{GrosmanAutomatedMPC2002}. 


The Kalman filter is an iterative recursive algorithm which uses only the most recent data for future estimation. 
The Unscented Kalman Filter (UKF) \cite{Julier97ukf} extends it to nonlinear estimation. 
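For intuition, one predict-update step of a scalar Kalman filter with identity state-transition and observation models can be sketched as follows (an illustration only, not our estimator configuration):

```python
def kalman_step(x, p, z, q, r):
    """One predict+update step of a scalar Kalman filter.
    x, p: prior state estimate and its variance
    z:    new measurement
    q, r: process and measurement noise variances
    """
    # predict (identity state transition)
    x_pred, p_pred = x, p + q
    # update
    k = p_pred / (p_pred + r)           # Kalman gain
    x_new = x_pred + k * (z - x_pred)   # blend prediction and measurement
    p_new = (1 - k) * p_pred            # updated uncertainty shrinks
    return x_new, p_new

# equal prior and measurement variance -> estimate moves halfway to z
state = kalman_step(0.0, 1.0, 2.0, 0.0, 1.0)  # -> (1.0, 0.5)
```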



\section{Kriging metamodel (Gaussian Process)\cite{Kleijnen2009}}
The Kriging metamodel is suitable for both deterministic and stochastic simulations, especially, the stochastic kriging dedicates itself to stochastic simulation exclusively. 


Different from the linear model, the Kriging metamodel interpolates the simulated outputs exactly and exploits the spatial correlation between them. 

The Kriging metamodel is proposed to handle both deterministic and stochastic simulation problems. It assumes that the function values follow a Gaussian distribution. van Beers and Kleijnen \cite{vanBeers2008Kriging} replaced the popular LHS (Latin Hypercube Sampling) technique by bootstrapping \cite{} for stochastic simulation problems, and obtained better results with the replacement. Bootstrapping consists in mimicking the data-generation process by sampling randomly from the observed data instead of sampling from the true population. 
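Bootstrapping as described can be sketched for the sample mean (`bootstrap_means` is a hypothetical helper; van Beers and Kleijnen bootstrap simulation outputs rather than this toy data):

```python
import random

def bootstrap_means(data, n_resamples, seed=0):
    """Bootstrap: resample the observed data with replacement and collect
    the statistic of interest (here, the mean) of each resample."""
    rng = random.Random(seed)
    n = len(data)
    return [sum(rng.choice(data) for _ in range(n)) / n
            for _ in range(n_resamples)]

# the spread of these resampled means estimates the variability of the mean
ms = bootstrap_means([3.1, 2.7, 3.4, 2.9, 3.0], 1000)
```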

\subsection{Stochastic Kriging metamodeling in stochastic simulation}
Stochastic Kriging metamodels can be further enhanced with stochastic gradient estimators. 

Kriging is a multivariate interpolation algorithm rooted in multivariate statistical analysis; it is dedicated to predicting values lying between the simulated samples. The Kriging system assumes that the hypothesis of a constant mean is not reasonable; instead, it takes into account 

\begin{itemize}
\item the distance between the estimated point and the sample data points,
\item the distance between prior data points themselves,
\item the structure of the variable through the semivariogram.
\end{itemize}



A GP is completely specified by its mean function and covariance function: 


\begin{align}
m(x) &= \expt{f(x)} \\
k(x,x') & =\expt{(f(x)-m(x))(f(x')-m(x'))}
\end{align}

then GP can be written as: 

\begin{equation}
f(x)\sim\mathcal{GP}(m(x),k(x,x'))
\end{equation}

Though not necessary, a zero mean function is usually assumed. 


\subsection{Multivariate regression}
Multivariate regression is a technique that estimates a single regression model with more than one outcome variable. When there is more than one predictor variable in a multivariate regression model, the model is a multivariate multiple regression. 

The mathematics model of multivariate regression can be expressed as:

\begin{equation}
Y= X\beta + \epsilon
\end{equation}

where $Y$ is the vector of dependent variables, $X$ is the matrix of explanatory variables (the observation matrix), $\epsilon$ is the vector of errors, and $\beta$ is the vector of regression coefficients, estimated by least squares. 
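For the single-predictor case with an intercept, the least-squares estimate of $\beta$ solves the normal equations in closed form; `ols_fit` is an illustrative helper:

```python
def ols_fit(X, y):
    """Ordinary least squares for one predictor plus intercept:
    solves the 2x2 normal equations X^T X beta = X^T y in closed form."""
    n = len(y)
    sx = sum(X)
    sy = sum(y)
    sxx = sum(x * x for x in X)
    sxy = sum(x * v for x, v in zip(X, y))
    det = n * sxx - sx * sx
    b1 = (n * sxy - sx * sy) / det   # slope
    b0 = (sy - b1 * sx) / n          # intercept
    return b0, b1

# data generated exactly by y = 1 + 2x is recovered exactly
coeffs = ols_fit([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])  # -> (1.0, 2.0)
```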

\section{Conclusions}\label{conclusions}


\section*{Remarks}
\begin{itemize}
\item Kriging metamodel, Gaussian Process (GP) 
\item cope with simulation model error and metamodel error during optimization \cite{Stinstra2008experiments}
\item Proposed approach
\begin{enumerate}
\item first construct the network and cope with simulation model error and metamodel error for optimization
\item recoverable robust for uncertainty of call demands
\end{enumerate}
\item Beyer and Sendhoff gave a survey on robust optimization in engineering design \cite{Beyer2007}
\end{itemize}

Beyer and Sendhoff \cite{Beyer2007} list three categories of methods to extract useful information from the raw simulation data as input for optimization:

\begin{itemize}
\item Monte-Carlo strategy 
\item Metamodel approach: serve as an approximation of the objective function, lack the evidence in scalability 
\item Direct approach 
\end{itemize}

In \cite{Benes1963}, Benes proposed a thermodynamic model to formulate the traffic in connecting networks. The two primary notions are: 

\begin{itemize}
\item The entropy
\item The connecting subsystems  
\end{itemize}

The calls are the heat, and the sites are the subsystems. 

Dissipation Inequality \cite{Ebenbauer2009dissipative}



In \cite{Haddad2005thermodynamic}, the authors propose a discrete-time thermodynamic model which introduces the ectropy, a dual of entropy (disorder in the network), acting as a storage function that reduces disorder in the network, in the form:
\begin{equation}
\psi(x) =x^T P_x x
\end{equation}

where $P_x$ is a diagonal matrix with elements $\frac{1}{x^*_i}$, where $x^*_i$ is the capacity of each subsystem $i$. It is accompanied by a supply function:

\begin{equation}
S_\omega=x(t)^T P_x\omega(t)
\end{equation} 


 
\section{Scenarios generation}
One important line of research in stochastic programming and MPC is scenario generation. Scenario reduction plays a heavily weighted role in scenario generation. Under the umbrella of stochastic programming, discrete samples of limited size are generated to approximate the distribution; such an approach is called a scenario tree. 

In \cite{Heitsch2011scen}, the authors present several scenario reduction algorithms for stochastic programming. 


Scenario generation involves reducing, i.e. merging, similar scenarios.

The distance between the scenarios, 


\section{Stochastic kriging}
Stochastic kriging is designed to take into account the effects of both the intrinsic and extrinsic uncertainties, while the normal kriging model only considers the extrinsic uncertainty caused by measurement error. Stochastic kriging is dedicated to constructing metamodels in stochastic simulation. 

Two types of uncertainty: 
\begin{itemize}
\item intrinsic: errors inherent in the simulation model itself, 
\item extrinsic: measurement errors, which represents the uncertainty we have about the response surface at a point where we have not yet run a simulation.
\end{itemize}


\section{Spatio-temporal kriging}


\section{Kriging v.s. CoKriging}
When the primary and secondary variables exist at all data locations (isotopic data) and the direct-variograms and cross-variograms are alike, cokriging is similar to kriging \cite{isaaks1989applied}. 


\section*{Remarks}
The size of the network will be reduced, so the feature dimension is relatively lower than in the original problem. This can be achieved through a PCA analysis of the cells. 

Feature selection by GP suffers from over-fitting the marginal likelihood when tuning the kernel parameters. 

Random forest Feature selection. 

The traffic pattern in the network is bi-modal, as shown by the k-means classification method. 


The design of an automatic control system is strongly tied to feedback. The response time of the feedback, or its absence, has a dramatic impact on the choice of methods. 


\section{Box-Jenkins Schema}
Following the diagram of Box and Jenkins: 

\begin{figure}[h]
\centering
\includegraphics[width=.7\textwidth]{pic/BoxJenkinsMethod.pdf} 
\caption{The Box-Jenkins method}
\end{figure}

\bibliographystyle{abbrv}
\bibliography{robust}


\end{document}
