\documentclass[12pt]{article}
\usepackage{graphicx, epsfig, amsmath, url, subfigure, cite, color, ulem, IEEEtrantools, geometry, float, times}
%\pagestyle{empty}
\geometry{left = 1in, right = 1in, top = 1in, bottom = 1in}
\parskip 0pt         % sets spacing between paragraphs
\parindent 12pt      % sets leading space for paragraphs

\begin{document}
\setcounter{page}{1}
\bstctlcite{all:BSTcontrol}
\rm


\section*{Title:} A Scalable Multi-layer Approach to Internet Core
Network Design and Traffic Engineering

\section*{Abstract:}

See the document abstract.doc

\newpage


\noindent
\Large
\uline{\textbf{1. Impact and Objectives}}


\vspace{0.5 cm}

\normalsize
\noindent
\textbf{(a) Long-term impact}

\vspace{0.5 cm}

\noindent The Internet's layered architecture is based on various
transport technologies (e.g., IP, MPLS, Ethernet, WDM), used at
different layers, that have different attributes. In this project, we
aim to develop a cost-driven methodology to achieve near-optimal use
of these technologies. In particular, we propose a new approach to
the design of the Internet core network (or of individual domains)
that aims to minimize cost (including energy cost) subject to
meeting a Grade of Service (GoS) requirement (a linear combination
of packet and flow loss probabilities), by choosing transport
technologies and routing flows based on their size. Flow size here
refers to the size, in bits, of an application-level flow, e.g., the
size of a page or movie download. We will also provide a new
methodology for performance analysis that enables benchmarking of
the results against the current Internet.

\vspace{0.5 cm}

\noindent This
project will provide a stepping-stone for development of a novel
cost-driven flow-based protocol that will lead to an Internet
traffic engineering evolution beyond GMPLS and MPLS.

\vspace{0.5 cm}

\noindent The Internet has been a remarkable success. Its flexible
architecture has enabled more and more applications to be developed
independently of the transmission medium, and efficiency has been
achieved by sharing resources. So far, IP dominates the desktop:
users have voted with their feet and are satisfied with the service
received at supermarket prices.


\noindent However, the future brings new challenges. Internet
traffic is expected to grow at 34\% per annum \cite{Cisco2010},
causing energy consumption to increase at a much higher rate than in
other industry sectors. The recognition that this trend is
unsustainable is evidenced by the establishment of the
GreenTouch(TM) consortium \cite{Green2010}, committed to increasing
``ICT energy efficiency by a factor of 1000'' within five years.

\vspace{0.5 cm}

\noindent This project is guided by the following principles:
\begin{enumerate}  \item  Internet energy efficiency is important
for sustainable growth of the Internet itself. \item Flow level
monitoring and management (not necessarily end-to-end) are key to
achieving efficiency and provision of Quality of Service (QoS)
\cite{Roberts2003,Roberts2009}. \item Internet traffic is
nonstationary, and flow sizes follow heavy-tailed distributions
\cite{Leland1994,Karagiannis2004,Cao2001,Crovella1998,Arlitt1997,Williams2005,Downey2005}.
\item Routing must be distributed and scalable. \item Evolution is
better than revolution given IP dominance. \item Flows, based on
their size, choose the switching/routing layer/technology to
minimize their cost. \end{enumerate}

\vspace{0.5 cm}

\noindent We consider four causes of Internet energy consumption:
\begin{enumerate} \item Individual packet processing at routers
(including table look-up and repeated buffering).
\item Connection/path set-up and monitoring (on multiple layers);
%(particularly those associated with setting up paths through
%alternative layers);
\item Switching (optical or electronic); and \item Transmission.
\end{enumerate}

\vspace{0.5 cm}

\noindent We initially classify
flows into the following traffic classes \cite{Addie2010a}.
\begin{enumerate} \item Mice: the smallest flows, and the most
numerous. It could be efficient to aggregate them and transport them
through tunnels that may be longer than the shortest path.
\item Elephants: the largest flows. Their numbers are relatively
small, but they account for most of the bytes. It is cheaper to
transport elephants using lightpaths rather than IP
\cite{Weichenberg2009}. \item Kangaroos: the remaining flows. They
are routed as in the current Internet architecture, which is
consistent with our aim for evolution. \end{enumerate} Clearly,
efficiency can be improved if flows, based on their size, use the
right transport technology.

\vspace{0.5 cm}

A key contribution of this project is the consideration of flow
sizes in a cost-driven multi-layer network design. To avoid the need
for centralized optimization, we first assume an initial (nominal)
utilization on all links and solve an unconstrained cost
minimization problem with a heuristic based on shortest-path
routing, using a flow-size dependent link-cost metric for each
layer. Next, we set link capacities using a queueing or loss model
(depending on the relevant layer) according to GoS requirements.
Then, we repeat the flow assignment given the capacities. Iterative
repetition of this process leads to a fixed-point solution for
routing, choice of layer, and capacity assignment. Energy efficiency
is achieved naturally by a cost structure that considers flow sizes
and can include a carbon tax.

\vspace{0.5 cm}

We assume that packets contain flow information. This assumption is
justified as flow identification is already available in
commercial products, e.g. Cisco IOS
NetFlow \cite{Cisco2007}. However, the exact definition of the traffic
classes and their number will be investigated here.

\vspace{0.5 cm}

This project focuses on performance evaluation and benchmarking of
our  approach. This will be a stepping
stone for design, implementation and commercialization.

\vspace{0.5 cm}

This project will lead to a greener Internet and lower costs to
telecommunications providers and to customers. This is especially
important for Hong Kong, where the Internet data rate per capita is
among the highest in the world and improving energy efficiency is
a high priority.

\vspace{0.5 cm}

 \noindent
\textbf{(b) Objectives}

\vspace{0.5 cm}

\begin{enumerate} \item Based on the new approach, develop and
validate scalable and accurate simulation and analytical models, for
Poisson arrivals of heavy-tailed flows, to derive capacity
dimensioning, routing, and choice of transport technology. This will
include computation of the optimal boundaries between the traffic
classes, and of their optimal number.

\item Extend objective 1 to scenarios of nonstationary traffic
(including multi-hour traffic).

\item Demonstrate the robustness of the approach by considering a wide
range of networks and traffic scenarios.

\item Quantitatively evaluate the benefit of the new approach by
benchmarking it against the current Internet architecture.


 \end{enumerate}

\newpage
\noindent
\Large
\uline{\textbf{2. a. Background of research}}

%\vspace{0.3 cm}

\normalsize
We begin this section with a few statements
about the novelty of this proposal; we then discuss related
published work, pointing out specific ideas that to some extent
overlap with the present work, to further clarify the novelty of
this project.


\vspace{0.1 cm}

{\noindent \bf \underline{The novelty of our project}}

%\vspace{0.1 cm}

The ideas of flow-based routing and of mice and elephants are not new.
However, what we propose here is far more than a routing protocol.
We aim for a near-optimal use of Internet technologies across the
various layers. This will enable designers to make cost-effective
decisions about traffic engineering, link dimensioning, choice of
technologies, and network topology (virtual and physical). The need
for such an optimization is pointed out in a recent Ericsson White
Paper \cite{Ericsson2010}.

Our design will be cost effective and energy efficient. The energy
efficiency will be achieved in two ways: (1) a cost structure for
flows based on their size will save energy, e.g., by automatically
sending elephants over lightpaths rather than through IP, and (2)
the designer may amplify the energy cost (e.g., include a carbon
tax) to steer the optimization toward a ``greener'' solution.
Currently, the design of the Internet focuses more on performance
and less on energy efficiency \cite{Green2010}.

Our solution will be based on a distributed algorithm -- a natural
extension of the currently used Open Shortest Path First (OSPF)
protocol to the case where cost metrics and routes vary depending on
flow size.

 A key novel contribution of this project is a methodological
 analysis that provides efficient network design
 under a general scenario involving an arbitrary number of nodes
 and realistic traffic conditions. In particular,  we  rely on well established
Internet traffic models where flow size distributions are heavy tailed
\cite{Crovella1998, Arlitt1997} and traffic is nonstationary
\cite{Karagiannis2004,Cao2001}.

\vspace{0.1 cm}

\noindent \uline{\textbf{2. a. i. Work done by others}}

\vspace{0.1 cm}

\noindent
\textbf{\textit{2. a. i. 1. Architecture}}

%\vspace{0.1 cm}

As Internet flows continuously grow while the maximum size of an IP
packet stays fixed, there is an opportunity to save energy and
efficiently provide QoS by capturing, monitoring and managing entire
flows rather than processing individual packets. One relevant
technology that seizes this opportunity is {\it flow routing}
\cite{Roberts2003,Roberts2009} where flow-state information is
captured on the fly by routers. Flow routing relies on TCP/IP at the
access and interworks with conventional routers. With flow routing,
specific flows can be targeted for discard during congestion.
Routing-table look-up per packet is avoided by using a hash table
that identifies packets as part of a flow. Unnecessary buffering can
be avoided by admitting only traffic that can be forwarded. This saves
power and space. According to \cite{Roberts2009}, flow routing can
save 80\% of the energy consumption, 90\% of the space, and reduce
operating expenses by a factor of 10. This further motivates our
flow-based traffic engineering approach.

Issues related to classifying Internet flows according to their size
(mice and elephants) have been considered in many publications:
fairness \cite{Guo2001}, congestion control \cite{Low2002},
definition and identification \cite{Cisco2007,Papagiannaki2002}, and
measurement and accounting \cite{Estan2003}.

Kist and Harris \cite{Kist2004a} proposed flow-size dependent
routing over preassigned paths. Our approach, however, allows more
flexibility in the choices of routing and technology to minimize
cost. We also propose a novel performance analysis of the
architecture that has not been done before. Recently, there have
been proposals for energy aware routing (see e.g.
\cite{Yetginer2009}) where the optimization based on Integer Linear
Programming (ILP) can demonstrate significant improvement for
relatively small networks. However, ILP is not scalable, and in
practice, for scalability, distributed routing algorithms are
required. Another relevant contribution is the generic graph model
\cite{Zhu2003}, which allows for a multi-layer optimization of WDM
networks. This has some similarities with our approach. However, our
objectives consider technology choices and link capacity
dimensioning as part of the problem, whereas \cite{Zhu2003} assumes
that the link capacities are given and that a specific collection of
grooming technologies is available, so the task to be solved is how
to route traffic through this network. We also provide an analysis
of a statistical model of nonstationary traffic adopting a Pareto
distribution for flows, which is not considered by \cite{Zhu2003}.
Another flow-based architecture is Optical Flow Switching (see
\cite{Weichenberg2009} and references therein) that focuses only on
the elephants and routes them optically. It also demonstrates energy
savings which motivates the present study.

\vspace{0.1 cm}

\noindent \textbf{\textit{2. a. i. 2. Long Range Dependent (LRD) traffic modeling and performance studies}}

%\vspace{0.1 cm}

Traffic measurements show that Internet traffic exhibits LRD and
self-similarity (see e.g. \cite{Leland1994}), mainly because
flow sizes are heavy-tail distributed
\cite{Crovella1998, Arlitt1997,Williams2005,Downey2005}.
Analysis and simulation of systems involving LRD processes and/or
heavy-tailed flows over the full range of parameter values have been
considered difficult. One traffic model that has attracted
significant attention is the so-called Poisson Pareto Burst Process
(PPBP) (a.k.a.\ M/G/$\infty$ for the case where the `G' represents a
Pareto distribution), characterized by a Poisson process of arriving
heavy-tailed Pareto-distributed flows. The Poisson arrivals are
justified by the large number of Internet users that generate them.
Analysis by others of PPBP queues has focused mainly on asymptotic
results (see e.g. \cite{Parulekar1997} and \cite{Duffield1998}).
%By contrast, as we
%discuss below, our work in \cite{Addie2009}
%applies to the full range of situations.
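For concreteness, the PPBP described above can be sketched in a few lines: a Poisson process of flow arrivals, each flow carrying a Pareto-distributed amount of work. The snippet below is an illustrative sketch only (the parameter names and default values are our own, not taken from the cited works); a Pareto shape parameter between 1 and 2 yields a finite mean but infinite variance, i.e.\ a heavy tail.

```python
import random

def pareto_flow_sizes(n, shape=1.5, scale=1.0, rng=None):
    # Inverse-CDF sampling from Pareto(shape, scale):
    # F(x) = 1 - (scale / x)**shape for x >= scale.
    # A shape in (1, 2) yields a finite mean but infinite variance.
    rng = rng or random.Random(0)
    return [scale / (1.0 - rng.random()) ** (1.0 / shape) for _ in range(n)]

def ppbp_arrival_times(rate, horizon, rng=None):
    # Poisson process of flow arrival instants over [0, horizon).
    rng = rng or random.Random(1)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t >= horizon:
            return times
        times.append(t)
```

A PPBP trace is then the superposition of the work of all flows active at each instant.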

Because it is difficult for a simulation that runs for a finite
amount of time to capture accurately the performance effects of a
random variable that has infinite variance, it has been a challenge
to accurately evaluate performance by simulations for queues
involving heavy-tailed service flows. See e.g. \cite{Mora2009} and
references therein. Rojas-Mora {\it et al.} \cite{Mora2009}
demonstrated improvement in both accuracy and simulation time of a
single processor sharing queue with Pareto distributed file sizes
using the Bootstrap method.




\vspace{0.1 cm}

\noindent \textbf{\textit{2. a. i. 3. Internet
traffic engineering and MPLS}}

%\vspace{0.1 cm}

Multiprotocol Label Switching (MPLS) \cite{Li1999} is an IETF
initiative that facilitates Labeled Switch Paths (LSPs) by mapping
individual packet routing and QoS information into MPLS labels. In
principle, MPLS can be used to aggregate mice into an LSP that
bypasses IP routing and is routed to optimize efficiency. An
individual LSP can also be used to handle an individual elephant.
Because we are interested in flow-based traffic engineering, the
existing research on MPLS and Internet traffic engineering (e.g.
\cite{Movsichoff2007} and references therein) is relevant to this
proposal. According to \cite{Roberts2003}, MPLS ``cannot support QoS
for individual small flows, cannot provide delay guarantees, and
cannot reject new flows to protect current flows from packet loss.''
In our model, we aim to aggregate mice together into large pipes
(e.g., LSPs), but loss is avoided for the mice because we give them
priority and monitor the larger flows, so that QoS for mice is
provided by default. In particular, LSPs of individual elephants can
be monitored and managed to provide QoS, either to the elephants
themselves or to protect other flows from them.

\vspace{0.1 cm}

\noindent \uline{\textbf{2. a. ii. Work done by us}}

\vspace{0.1 cm}

\noindent
\textbf{\textit{2. a. ii. 1. PI Zukerman and Co-I Addie collaboration}}

Extensive collaboration between PI Zukerman and Co-I Addie over the last two
decades has led to over 30 joint publications relevant to this proposal (see e.g.
 \cite{Addie1994,Addie1995,Addie1998,Addie2002,Zukerman2003,Addie2003,Addie2009,Wang2010} and references therein).
The work presented in \cite{Addie1998} described, in
general terms, how to model real traffic streams using the {PPBP}.
Then, in \cite{Addie2002}, we presented, for the first time, a method,
which we call the quasi-stationary ({QS}) approximation, to estimate
which we call the quasi-stationary ({QS}) approximation, to estimate
the queue-size distribution of a PPBP queue for any traffic and
congestion condition. This approximation was validated against a
specially tailored simulation in \cite{Addie2002}, and used in
\cite{Zukerman2003} to predict traffic smoothing and efficiency in
the core of a bufferless optical Internet. Further details on the
{QS} approximation and the {PPBP} queue simulation are available in
a PhD thesis by T. D. Neame \cite{Neame2003} jointly supervised  by
the PI and the Co-I. Then
 \cite{Addie2003} analytically showed that for PPBP
queues, large deviation theory is inaccurate for moderate buffer
content. In \cite{Addie2009}, we used the numerical procedure of the
QS approximation for a PPBP queue, and developed numerical methods
to compute performance results for the entire parameter range, showing consistency
 with asymptotic results obtained by
others. See Figure 1. Recently, in
\cite{Wang2010}, the PI and Co-I extended, to a multi-service
environment, an earlier work by PI Zukerman \cite{Potter1991} of a
multi-queue system served by a processor sharing (PS) queue.

{\bf \underline{The most relevant collaboration of the PI and Co-I}}
is the recent work presented by the PI in the ICTON 2010 Conference
\cite{Addie2010a} where an outline of the approach of this project
is provided.
Although this initial effort has not yet addressed the objectives of
this proposal, it provides a degree of confidence in {\bf
\underline{the feasibility of the project}}. In particular, {\bf
\underline{the work done so far includes}} an initial formulation of
an analytical model based on Poisson arrivals of heavy-tailed flows
and a fixed-point approximation to derive optimal dimensioning and
routing \cite{Addie2010a}. Link capacity dimensioning in
\cite{Addie2010a} is based only on a simple formula of mean plus
several standard deviations, ignoring queueing effects. This is only
a first step towards Objective 1. The model has not been validated
yet; in fact, we have not yet started writing the simulations
required to validate it. We also have not yet defined the optimal
boundaries between the traffic classes. The work on Objectives 2--4
has not yet commenced.

\vspace{0.1 cm}

\noindent \textbf{\textit{2. a. ii. 2. Other relevant work by us}}

%\vspace{0.1 cm}

For contributions of PI Zukerman to performance evaluation of hybrid
switching systems, see \cite{Wong2008} and references therein. He
also contributed to analyses of telecommunications networks of
arbitrary topologies using Erlang Fixed-Point Approximation (EFPA)
(e.g. \cite{Rosberg2003}) and to the network design that involves
design for multi-hour traffic (i.e., periodic traffic fluctuation)
and traffic growth over periods of years \cite{Maxemchuk2005}.
In \cite{Zukerman2009} he pointed out
advantages of maintaining state information for certain calls,
connections, or flows. In \cite{Parthiban2009}, he and collaborators
evaluated cost of various optical networks to include operational
expenditure (OPEX) and capital expenditure (CAPEX) cost.

Back in the 1980s, Co-I Addie {\it et al.} \cite{Addie1988}
introduced the concept of ``Virtual Direct Routes'', which is
equivalent to the ``virtual path'' concept in ATM networks and later
contributed to its standardization. This invention also led to an
Australian patent \cite{Addie1989}. At the same time, Co-I Addie
developed the concept of bandwidth switching \cite{Addie1988a},
which together with the ``virtual path'' formed the basic ideas of
MPLS and Internet traffic engineering.

Co-I Addie {\it et al.} \cite{Addie2002a} derived computable
formulae for arbitrarily correlated Gaussian queues. Addie
\cite{Addie1999} provided justification for using Gaussian models of
aggregate traffic. In \cite{Addie2008} and \cite{Addie2009a} Co-I
Addie provided a simulation technique which can give accurate
simulation results for systems that involve Pareto sized flows in a
fraction of the time required by conventional simulations.

Co-I Addie {\it et al.} \cite{Addie2007,Addie2007a} developed
analytical solutions for a queue with PPBP input considering the
Largest Flow Last (LFL) and PS disciplines. The analyses were based
on fluid flow models and solutions of partial differential
equations. While that work primarily considered analyses of single
queues, in this project we consider network-wide routing and
queueing of flows based on flow size. Addie {\it et al.}
\cite{Addie2006a} developed flow-dependent control strategies
applicable to collections of (small) flows which can be
distinguished, rather than identifying and monitoring individual
flows.

%\vspace{0.3 cm}


\begin{Large}

\noindent \uline{\textbf{2. b. Research methodology and
plan}}

\end{Large} This section focuses mainly on the challenge of
developing and evaluating a scalable method for network design.
We begin by describing our cost-driven, flexible framework.
Next, we describe the analytical model, based on a
fixed-point solution and single-link analyses. Then, we explain how
the analysis can be extended to nonstationary traffic.
After that, we discuss simulation challenges, followed by a
description of the work plan; finally, we outline future work
beyond this project.

\vspace{0.1 cm}

\noindent \uline{\textbf{2. b. i. Framework and key idea}}

%\vspace{0.2 cm}

 To achieve our objectives, we propose to develop a unified
framework for an arbitrary network with multiple interworking
transport technologies, under realistic traffic demand assumptions.
In particular, we assume a set of nodes in various locations and a
traffic stream for each origin-destination (OD) pair of nodes,
initially modelled as a stationary PPBP and/or constant bit rate
(CBR), and later extended to a nonstationary process.

Our proposed solution is based on the concept of {\bf layered
transport}. All transport technologies (e.g., IP, GMPLS/MPLS, ATM,
WDM, PHY) are available in all nodes. We associate each layer with a
unique technology, so henceforth we use the terms technology and
layer interchangeably. In cases of competing technologies on the
same layer, or if a new technology (or an alternative design option)
is to be considered on a given layer, our algorithm is rerun for
each alternative so that a cost comparison can be made.

Each layer spans the entire network. Layer 0
is the physical transmission layer. A flow that is transported
end-to-end may use different layers at different links on its path
depending on cost. It may also require services of more than one
layer at a given link. We initially assume a fully meshed network at
each layer but after we run our algorithm, it may transpire that
some links are excluded due to cost considerations.

Normally, the job of routing/transporting a flow will involve
several layers, each of which incurs cost: OPEX (including energy
cost) and amortized CAPEX. These include the cost of packet
processing at the IP layer, switching at the ATM or WDM layers, and
transmission at the PHY layer; the relevant layers will also incur
costs associated with connection/path set-up (depending on the path
end-points). Since a set-up cost is spread over a flow's bits, the
connection cost per bit will be negligible for elephants. This
justifies setting up connections and transporting elephants on a
lower layer to avoid costly individual packet handling at the IP
layer.

Each layer dynamically advertises a cost per packet
for each size of flow. Layers pass on costs so that the cost per
packet at one layer includes the costs incurred in the layers below.
Choices are made, at each layer, according to the standard principle
of cost-driven shortest path routing. This may mean, for example,
that multiple elephants are aggregated together on a lightpath and
bypass IP routers because this is the cheapest way for them to be
transported.
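This cost-driven choice can be illustrated with a toy sketch (the graph, class names, and cost values below are invented for illustration, not taken from any measurement): each link advertises a per-class cost, and a standard Dijkstra run per class yields the cheapest path, so an elephant may take a lightpath detour that a mouse would not.

```python
import heapq
import math

def cheapest_path(graph, src, dst, flow_class):
    # Dijkstra over per-class link costs.
    # graph[u] is a list of (v, {flow_class: cost_per_bit}) edges.
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        for v, costs in graph.get(u, []):
            nd = d + costs[flow_class]
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

# Hypothetical two-layer network: A-B is an IP hop (cheap for mice,
# costly for elephants); A-C-B is a lightpath (cheap for elephants).
graph = {
    "A": [("B", {"mouse": 1.0, "elephant": 10.0}),
          ("C", {"mouse": 5.0, "elephant": 1.0})],
    "C": [("B", {"mouse": 5.0, "elephant": 1.0})],
}
```

With these costs, a mouse takes the direct hop A--B while an elephant is routed A--C--B over the lightpath, mirroring the aggregation behaviour described above.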

If a resource at any layer (e.g. a laser) is lightly utilized, the
cost per bit transmitted will be high. Thus, flows will be directed
by the algorithm to highly utilized resources.

A request for service of transmission of a flow arrives at the
highest layer first. A layer will delegate the ``service'' by
passing a request it receives directly
 to the layer below if this is the most cost effective way to
handle the request. If such delegation occurs, a link at layer
$k+1$ becomes composed of a path through layer $k$. As a bare
minimum, every layer provides, at no extra cost, the service of only
allowing access to the layer below. If for a given layer, at all
nodes, this is the only service it provides after the optimization
algorithm completes its run, then this layer/technology will not be
used.  For example, consider a futuristic scenario when optical
transmission and switching  becomes very cheap relative to IP. It
could then be more cost effective to outsource/delegate all
switching to lower layers, so IP would not be used.

OPEX and CAPEX of all technologies will be combined together by a
careful analysis (using amortization where appropriate) so that each
switching or transmission activity has a well-defined cost per bit,
and each flow or path setup has a well-defined cost per setup. Setup
and monitoring costs for each flow of a given class will be spread
across the bits of a flow, giving therefore a single unified measure
of cost as a cost per bit, which will then be used in routing
decisions.
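The unified measure described above reduces to a simple per-bit accounting identity. A minimal sketch (the numeric values are illustrative assumptions, not measured costs): the per-flow set-up and monitoring charge is amortized over the flow's bits, so the metric collapses toward the pure transport cost as flows grow.

```python
def unified_cost_per_bit(transport_cost_per_bit, setup_cost, flow_size_bits):
    # Spread the per-flow set-up/monitoring cost across the flow's bits,
    # yielding a single per-bit metric usable in routing decisions.
    return transport_cost_per_bit + setup_cost / flow_size_bits

# The same set-up charge has a very different per-bit impact:
mouse = unified_cost_per_bit(1e-6, setup_cost=100.0, flow_size_bits=1e4)
elephant = unified_cost_per_bit(1e-6, setup_cost=100.0, flow_size_bits=1e10)
```

For the elephant, the set-up component is eight orders of magnitude smaller than for the mouse, which is why setting up connections pays off only for large flows.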

This way, we ensure that the cheapest technology is used for
each traffic class. This is equivalent to a decentralized market
economy. Customers (flows) choose the most cost effective service.
 Then we find out which service provider survives and
which doesn't. We comment here that this leads to another important
issue of whether individual cost optimization means social cost
optimization, but this is beyond the scope of this proposal. The
optimization applies to technologies as well as to links. If too
little traffic uses a link, the cost per customer will be too high
and customers will choose other cheaper links. {\bf \underline{This
leads us to the key idea.}} The traffic chooses the cheapest
technologies and the cheapest routes. This tells us what
technologies should be used at what location, which links should
exist and how much capacity is required on them. 
This can be viewed as a heuristic to minimize cost subject to GoS
requirements, where link-capacity and flow-conservation constraints
are maintained, and where the decision variables are: nonnegative
link capacities (where 0 means no link), the technologies at each
node, and the mean and variance of the bit arrival process at each
layer.

\vspace{0.1 cm}

\noindent \uline{\textbf{2. b. ii. Analytical modelling and capacity assignment
based on a fixed-point approximation}}

%\vspace{0.2 cm}
Here we describe how we analytically estimate link capacity under
the assumptions of CBR and PPBP traffic and flow-size based routing.
Note that we extend the notion of the PPBP to allow the flow size to
be modeled by a truncated Pareto distribution. This is a realistic
assumption for kangaroos and mice; for the elephants, we retain the
non-truncated Pareto assumption. We consider aggregation of traffic
into \underline{traffic streams}. The original traffic streams
generated by the users between any OD pair may split into
sub-streams (based on their traffic classes), each using a different
layer/technology and/or route. At every link/layer, traffic streams
merge and/or split. We assume that each traffic stream (or
sub-stream) is either CBR or PPBP (or a combination of both).
Henceforth, we discuss only
the PPBP component of the traffic as the equivalent treatments of
CBR are straightforward. Formulas for the relevant Pareto
distributions (truncated or non-truncated) and the mean bit rate of
any traffic stream (or sub-streams) are readily available (see e.g.
\cite{Addie2010a}).

As discussed, there is a network associated with each layer and all
traffic must obtain service from Layer 0 for transmission.
%A link in
%layer $k+{\rm 1}$ corresponds to traffic streams in layer $k$,
%recursively.
We use a design based on cost-driven shortest-path
routing for each layer which is {\em almost} independent from other
layers. However,
%the link cost in layer $k+{\rm 1}$ depends on
%the cost of the path associated with these links, in layer $k$,
the traffic in layer $k+{\rm 1}$ is a function of the costs at layer
$k$, and this affects the traffic at layer $k$ and thus also 
 the costs at layer $k$. We therefore use the following {\bf
\underline{iterative fixed-point algorithm}} to design all the
layers. We begin with unlimited capacity available on all links at
all layers, assigning a certain nominal (initial) utilization level,
which gives an initial cost metric that determines an initial
routing table. In fact, we solve an unconstrained cost minimization
problem with a heuristic based on cost-driven shortest-path routing.
Having the
statistics of the traffic on each link at any layer, we set link
capacities using a queueing or loss model (depending on the relevant
layer) according to GoS requirements. Given the link capacities and
traffic statistics, we compute the flow-size dependent routing
tables for each router which enables flow assignment.  Accordingly,
at each iteration, we adjust capacities, and continue iterating
until they converge to a fixed-point solution of routing, choice of
layer, and capacity assignment. Identifying convergence conditions
to an optimal solution is beyond the scope of this proposal. Based
on our experiments, this converges quickly although the experiments
are at an early stage.
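The iteration just described can be summarized in a short sketch; the routing and dimensioning steps are abstracted into callables, since their internals (flow-size dependent shortest paths, queueing/loss models) are developed separately in this proposal. This is a sketch under our own simplifying assumptions, not the final algorithm; in particular, the re-pricing rule shown is a placeholder.

```python
def fixed_point_design(route, dimension, demands, max_iter=100, tol=1e-6):
    # route(demands, costs) -> {link: carried load}  (cost-driven routing)
    # dimension(load) -> capacity meeting the GoS requirement on that link
    capacities, costs = {}, None   # costs=None means nominal utilization
    for _ in range(max_iter):
        loads = route(demands, costs)
        new_caps = {link: dimension(load) for link, load in loads.items()}
        if capacities and all(
                abs(new_caps[l] - capacities.get(l, 0.0)) < tol
                for l in new_caps):
            break                   # capacities have reached a fixed point
        capacities = new_caps
        # Re-price links: a placeholder metric tied to assigned capacity.
        costs = {l: 1.0 / max(c, 1e-9) for l, c in capacities.items()}
    return capacities
```

With a single-link toy network and a mean-plus-margin dimensioning rule, the loop converges in two iterations.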

Two approaches to the fixed-point algorithm will be considered. One
is based on the so-called reduced-load fixed-point solutions
\cite{Rosberg2003}, where packet loss on one link affects the
traffic on the next link; the second is a simpler alternative where
this effect is neglected. The latter is applicable in cases of low
loss.

For accurate results, we will need to accurately handle splitting
and merging of traffic streams. A traffic stream generated at its
source, or that arrives at a certain node, may split, choosing a
layer or a route, based on flow size. Thus it creates new
sub-streams that are characterized by upper and lower bounds of
their flow sizes. Consider a traffic stream (or sub-stream) with
mean flow arrival rate $\lambda$ with a range of flow sizes from $L$
to $U$. Then let
 $D$ $(L \leq D \leq U)$ be a threshold based on which the flows
of that stream split again. If $\lambda_L$ is the rate of the
smaller flows $(<D)$, then their stream is modelled by a PPBP with
flow rate $\lambda_L$ and Pareto distributed flows truncated between
$L$ and $D$. The sub-stream with the larger flows will have a rate of
$\lambda-\lambda_L$ and flows sized between $D$ and $U$. Merging of
flows can also be approximated by a PPBP by fitting parameters
\cite{Addie2010a}.
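The splitting rule above has a simple closed form: for a Pareto($\alpha$) distribution truncated to $[L,U]$, the fraction of flows below a threshold $D$ follows from the truncated CDF. The sketch below assumes this standard truncated-Pareto form (our own notation, not a formula quoted from the cited work):

```python
def split_rate(lam, L, U, D, alpha):
    # Fraction of flows with size < D under a Pareto(alpha) distribution
    # truncated to [L, U]:  F(D) = (1 - (L/D)**alpha) / (1 - (L/U)**alpha).
    if not (L <= D <= U):
        raise ValueError("threshold D must lie within [L, U]")
    p_small = (1.0 - (L / D) ** alpha) / (1.0 - (L / U) ** alpha)
    return lam * p_small   # rate lambda_L of the small-flow sub-stream
```

The large-flow sub-stream then has rate `lam - split_rate(lam, L, U, D, alpha)`, matching the $\lambda-\lambda_L$ expression above.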

{\bf \underline{The choices of flow-size boundaries}},
i.e. the $D$ values, will also require
optimization. This can be done by iteratively computing the
performance achieved, or the total cost, for each $D$ value, and
searching for the optimal values. If we consider, for example, three
size classes, only two boundary values need to be optimized.
Optimizing the boundaries may also lead to optimal decisions on the
number of traffic classes, as it can reveal that certain classes
are redundant.
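A minimal search for such boundaries might look as follows (illustrative only; the cost function here is a stand-in for the GoS/cost evaluation produced by the fixed-point model):

```python
import math
from itertools import combinations

def best_boundaries(total_cost, candidates, n_classes=3):
    # Exhaustively search ordered boundary tuples (D1 < D2 < ...):
    # n_classes size classes require n_classes - 1 boundaries.
    best, best_cost = None, math.inf
    for bounds in combinations(sorted(candidates), n_classes - 1):
        c = total_cost(bounds)
        if c < best_cost:
            best, best_cost = bounds, c
    return best, best_cost
```

In practice the candidate grid would be refined iteratively rather than enumerated exhaustively, but the sketch shows why three classes need only two boundary values.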

Several alternatives for {\bf \underline{queueing models}} to
evaluate link capacities at each iteration will be considered. For
layers that do not involve queueing the simplest loss model for link
dimensioning is mean traffic plus several standard deviations as we
used in \cite{Addie2010a}. For layers that involve queueing such as
the IP layer, we will consider PS and LFL. As discussed the
investigators have significant experience in studying such queueing
models. One approach is the use of fluid flow modeling and solution
of partial differential equations \cite{Addie2007a}. As we are
interested in GoS measures related to both flow- and packet-blocking
probability, the consideration of both LFL and PS is important.
While PS provides equal rate to all flows served, LFL will discard
the largest flows which will reduce flow blocking probability.
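The simplest dimensioning rule mentioned above is a one-liner; the safety factor $k$ is a design parameter we choose for illustration, not a value taken from \cite{Addie2010a}, and it trades spare capacity against loss:

```python
import math

def dimension_link(mean_rate, rate_variance, k=4.0):
    # Capacity = mean bit rate + k standard deviations of the aggregate
    # rate; a loss-model shortcut that ignores queueing effects.
    return mean_rate + k * math.sqrt(rate_variance)
```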

{\bf \underline{Challenges:}} One possible difficulty in this
approach is that in certain cases we will need to find a way past
many local minima that are far from the global minimum.
Strategies to overcome this difficulty include: (i) an
``introductory offer discount'': when a virtual link is first
introduced, it is assigned a cheaper price for the first few
iterations, and this discount is scaled back to zero as the iterations
proceed; and (ii) randomly introducing new virtual links at every
iteration. These strategies are similar to simulated annealing, in
the sense that the solution is recurrently adjusted at random. Such
an approach could be further enhanced by adding some genetic
learning. The PI has experience in evolutionary algorithms
\cite{Guo2008}. 
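The ``introductory offer discount'' of strategy (i) might be realized as a price factor that decays linearly with the age of the virtual link; the horizon and discount values below are illustrative assumptions only:

```python
def discounted_cost(base_cost, age, horizon=5, discount=0.5):
    """Sketch of 'introductory offer' pricing for a newly introduced
    virtual link: the price is discounted at introduction (age 0) and
    the discount decays linearly to zero over `horizon` iterations."""
    d = discount * max(0.0, 1.0 - age / horizon)
    return base_cost * (1.0 - d)
```

Once \texttt{age} reaches the horizon, the link competes at its true price, so any route that survives the promotional period must be justified by the undiscounted costs.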

\vspace{0.1 cm}

\noindent \uline{\textbf{2. b. iii. Traffic nonstationarity and
multi-hour optimization}}

%\vspace{0.2 cm}

It is well known that Internet traffic exhibits nonstationarity
\cite{Karagiannis2004,Cao2001}. In this work we plan to consider
scenarios in which the arrival-process parameters evolve slowly.
The proposal includes dynamically routed flows and
virtually-permanent links chosen according to the current costs of the
network, which may themselves slowly evolve. As traffic changes, link
costs will change, and so will the cost of using dynamically routed
links, leading to some paths being introduced and others being withdrawn.

The link capacity assignments will be based on separate iterative
runs for each traffic scenario; the capacity of each link is then set
to the maximum value obtained for that link over all traffic
scenarios. This approach is different from (and much simpler than) what
is known in the literature as multi-hour traffic optimization (see
\cite{Maxemchuk2005,Ouveysi2010} and references therein). We choose
the simpler approach because such optimizations are centralized and
therefore not scalable.
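This per-link maximization over scenarios can be sketched directly; the link names and capacity values are hypothetical:

```python
def capacities_over_scenarios(scenario_capacities):
    """Combine per-scenario designs into one capacity plan.
    `scenario_capacities` is a list of dicts {link: capacity}, one per
    traffic scenario, each produced by a separate iterative run; each
    link's final capacity is its maximum over all scenarios."""
    final = {}
    for caps in scenario_capacities:
        for link, c in caps.items():
            final[link] = max(final.get(link, 0.0), c)
    return final
```

Because each scenario is designed independently, the combination step is trivially parallel and adds no centralized optimization.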


% But how can we expect a good multi-hour network design to emerge
% from a philosophy in which routing is flow-size dependent, and
% follows shortest paths according to flow-size dependent costs, and
% which is not explicitly traffic-dependent?






\vspace{0.1 cm}


\noindent \uline{\textbf{2. b. iv. Simulation}}

%\vspace{0.2 cm}

Simulation is an essential technique for the performance analysis of
communication systems. Simulation of systems with heavy-tailed flows
is fundamentally difficult,
%when the
%degree of heaviness of the tails of flow size distributions becomes
%extreme,
unless special simulation techniques are adopted. Fortunately such a
method has been developed by Co-I Addie \cite{Addie2008,Addie2009a}.
This technique is similar to hybrid simulation but with the
significant difference that the simulation of flows of different
lengths is undertaken over different simulation durations. In
particular, a simulation does not have a unique length. Long flows
see a long simulation, and short flows see a short one. Observations
are made only when the details necessary for them to be accurately
represented are present. We intend to publish a guide to the
simulation technique, which will improve appreciation of these
methods and help newcomers learn to use them.
We will use several networks with different topologies in
our simulations to test the accuracy of the analytical model and to {\bf
\underline{benchmark the results against the current Internet architecture}}.
The particular networks we will use are well-known topologies,
including NSFNET, USNET and Germany 17. We will also use the network of
\cite{Maxemchuk2005} (see Figure 2). The traffic on these networks
will be assumed to follow a PPBP model (and CBR). The
benchmarking will be performed against the current state-of-the-art
that includes IP, MPLS and GMPLS.
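As a toy illustration of the idea that flows of different lengths are simulated over different durations, one might assign each flow-size class its own horizon, proportional to the timescale of its flows. The proportionality factor below is an arbitrary assumption for illustration, not a parameter of the method in the cited works:

```python
def class_durations(class_sizes, link_rate, horizon_factor=1000.0):
    """Sketch of per-class simulation horizons: each flow-size class
    [bits] is simulated over a duration proportional to its natural
    timescale (size / link_rate), rather than over a single common
    simulation length."""
    return {size: horizon_factor * size / link_rate
            for size in class_sizes}
```

Long flows thus ``see'' a long simulation and short flows a short one, so each class is observed over a window long enough for its statistics to be accurately represented.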

The wide variety of topologies together with a wide range of
alternatives for the traffic parameters will be used to test {\bf
\underline{the robustness of our analytical model}} against
simulations. Notice that our algorithm is based on shortest-path
routing, without the alternate (or deflection) routing that is
normally associated with instability. However, we will also simulate
scenarios that involve deflections and reattempts, which are not
included in the analytical model, to further examine the
flow-size-based approach for possible instability.
Another way to evaluate the quality of our approach is to compare its
results with those of centralized optimization for small problems.


\vspace{0.1 cm}

\noindent \uline{\textbf{2. b. v. Plan}}

%\vspace{0.2 cm}

A list of the tasks, with estimates of duration, resourcing, and
pre-requisite tasks, is given in Table 1. These tasks will be
carried out in parallel to the extent allowed by their dependencies,
which are shown in the table. Apart from writing the snapshot
simulation guide and developing a guide for the analysis of
PPBP-like processes, the tasks of writing and publishing
results are implied by most of the tasks and, for brevity,
are not explicitly listed. In addition to the
time spent by the SRA and RA, indicated in the table, the PI and the
Co-I will be heavily involved in the project.

\vspace{0.1 cm}

\noindent \uline{\textbf{2. b. vi. Future research and development}}

%\vspace{0.2 cm}

After developing our methodology and demonstrating its benefits
within the present project,
we plan to develop our flow-size-based protocol and implement it in
a lab-based network using real network equipment. Such a development
requires funding that goes beyond the GRF limit. We believe that success
in this GRF project
will provide a convincing case for further funding from
industry and UGC for this important work.

\newpage

\bibliographystyle{ieeetran} \bibliography{GRF}

\end{document}
