\documentclass[12pt]{article}
\usepackage{graphicx, epsfig, amsmath, url, subfigure, cite, color, ulem, IEEEtrantools, geometry, float, times}
%\pagestyle{empty}
\geometry{left = 1in, right = 1in, top = 1in, bottom = 1in}
\parskip 0pt         % sets spacing between paragraphs
\parindent 12pt      % sets leading space for paragraphs

\begin{document}
\setcounter{page}{1}
\bstctlcite{all:BSTcontrol}
\rm

\noindent
\Large
\uline{\textbf{1. Impact and Objectives}}

\normalsize
\noindent
\textbf{(a) Long-term impact}

In this proposal, we introduce a novel approach to Internet traffic engineering which systematically
differentiates both routing and queueing of packets according to the size of their containing flow.
We propose a method of performance analysis that enables the benefits of this new approach to be
readily benchmarked against the current Internet.

The current ``Netheads'' Internet architecture has shown itself to be remarkably successful and popular.
Its IP layer, based on the Open Shortest Path First (OSPF) protocol and on IP packets that can carry any
type of information (data, voice, video, etc.), is very flexible. More and more applications are
developed independently of the transmission medium, without maintaining connection state information.
The ``Bellheads'', on the other hand, have promoted the view that state information must be maintained
and connection admission control (CAC) must be used for the provision of quality of service (QoS) and
charging.

So far the Netheads have had the upper hand: IP dominates the desktop. Users have voted with their
feet and are satisfied with the service they receive because they get it at supermarket prices. Perhaps this
``simple'' observation has much to teach us. How was this possible? There are two possible explanations:
(1) the efficiency achieved by sharing resources by many users and many service types is sufficient
that a modest level of over provisioning provides satisfactory quality at an acceptable cost to users,
and (2) given the flexibility of the Internet architecture, new services and applications are designed
to adapt successfully to the best effort quality provided by the Internet (e.g. Skype).

However, the future brings new challenges to the Internet because of its growing energy consumption.
According to a recent Cisco White Paper \cite{RefWorks:2193}, ``Annual global IP Traffic will exceed
half a zettabyte in four years'', and ``The sum of all forms of video (TV, VoD, Internet, and P2P)
will account for close to 90\% of consumer traffic by 2012''. Internet traffic, which doubles every
two years \cite{RefWorks:2193}, causes Internet energy consumption to increase at a much higher rate
than in other industry sectors (e.g. manufacturing, transportation and construction) that normally grow
in line with population or GNP. Reducing the growth of energy consumption is clearly important
for the environment and for cost savings; moreover, energy consumption is itself a barrier to the growth
of the Internet \cite{RefWorks:578}.

\noindent
This project is guided by the following principles:
\begin{enumerate}
\item We aim for an evolutionary approach under the assumption that IP is dominant in the access
and part of the core.
\item The currently established Internet traffic modeling based on heavy tailed flows is here
to stay.
\item Efficiency and particularly energy efficiency is important for sustainable growth of the
Internet.
\item Flow level monitoring and management (not necessarily end-to-end) are key to achieving
(3) and the provision of quality of service.
\end{enumerate}

\noindent
We consider three Internet energy consumption causes:
\begin{enumerate}
\item Individual Packet Processing: In today's Internet, packets are processed
(including repeated buffering and routing-table look-up) individually at each
router \cite{RefWorks:3375,RefWorks:3370}, which is energy inefficient, especially
for large flows, and fails to respect our carbon emission budget as we approach
the ``zettabyte era'';
\item Connection set-up (including recording and updating hash tables);
\item Transmission.
\end{enumerate}

\noindent
In line with these three causes, we consider three size-based flow types:
\begin{enumerate}
\item Mice: these are the smallest flows and there are many of them. It is not efficient to
treat them as flows. Their tunnels may be longer than the shortest path (similar to buses that
drop off and pick up passengers along their routes).
\item Elephants: these are the largest flows. Their numbers are relatively small, so they
justify complex flow setup and clear-down, possibly including the setup of an LSP or WDM
routed wavelength. Individual packet processing (cause 1) is avoided. Their aggregation is
achieved by using a pool of wavelength channels per trunk.
\item Kangaroos: they are routed based on the current IP architecture. The existence of
Kangaroos is practical and consistent with our aim for evolution. It may be beneficial to
use shortest-hop-path routing for them because their packets are treated individually at
every router.
\end{enumerate}
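As an illustration of how these three classes could drive a routing decision, the following sketch classifies a flow by (estimated) size and returns the class whose routing table would be used. The byte thresholds and names are our own illustrative assumptions, not values specified in this proposal.

```python
# Illustrative size thresholds (hypothetical, not from this proposal).
MOUSE_MAX_BYTES = 100_000          # flows up to ~100 kB treated as mice
ELEPHANT_MIN_BYTES = 100_000_000   # flows of ~100 MB and above treated as elephants

def classify_flow(size_bytes: int) -> str:
    """Return the size-based class used to select a routing table."""
    if size_bytes <= MOUSE_MAX_BYTES:
        return "mouse"       # aggregated onto permanent tunnels
    if size_bytes >= ELEPHANT_MIN_BYTES:
        return "elephant"    # one-off LSP / WDM-routed wavelength
    return "kangaroo"        # conventional per-packet IP routing
```

Each class then indexes its own routing table, which is how the differentiation below is realized without per-packet decisions.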

The concept of flow-size dependent routing appears to be logical for multiple reasons
(quality of service, energy savings, network efficiency, and management of layered routing),
and therefore warrants study, which we propose to do in this project.


\vspace{24pt}
\noindent
Objectives:
\begin{enumerate}
\item Define rigorously a flow-size dependent network optimization problem with cost and
energy consumption as the objective and performance as the constraint.

\item Provide a first cut benchmarking method against OSPF based on cost.

\item Develop and validate scalable and accurate simulation and analytical models to derive
flow loss and delay statistics for the optimal solution of Objective 1 considering a queueing
network model fed by Poisson arrivals of heavy-tailed flows.

\item Benchmark the results of Objective 3 against OSPF and quantitatively demonstrate
the benefit of the new approach.
\end{enumerate}

\newpage
\noindent
\Large
\uline{\textbf{2. a. Background of research}}
\normalsize

Our starting point in this project is the current Internet experienced by the majority of
users that obtain best effort service - excluding private networks and premium customers.
Although our approach can be implemented in various ways, here we adopt the view of favoring
small flows. That is, if all users are equal (best effort), it is justified, during a
congestion period, to reduce the rate (potentially to zero) of one elephant so that many
users (mice) can, for example, access their email. However, before we reject flows, we aim to meet
demand efficiently. To this aim, our traffic engineering optimization is based on a variant
of shortest path routing in which the path varies depending on flow size. It is a natural
extension of the current Internet routing approach (OSPF), and it is a scalable strategy
for providing quality service at minimal cost.

In our modeling, we focus on architectures that significantly reduce energy consumption
and thus support Internet growth in a scalable way. We also consider well established
Internet traffic models where flow sizes are heavy tailed. The concept of flow size here
refers to the size [bits] of an application level flow; for example, the number of bits
associated with a page download, a P2P movie download, or one IPTV movie.

Our project is related to many fields, such as mathematics, queueing theory, operations
research, Internet traffic engineering, and discrete event simulation. In the following
we attempt to provide the reader with background on the relevant issues. However, given
the page limit, we are not able to discuss and cite all the relevant literature
published by others and by us.

\noindent
\textbf{2. a. i. Work done by others}

\noindent
\textbf{\textit{2. a. i. 1. Architecture}}

As sizes of Internet flows continuously grow while the size of an IP packet stays fixed,
there is an opportunity to save energy and efficiently provide QoS by capturing, monitoring
and managing entire flows rather than processing individual packets. One relevant technology
that seizes this opportunity is flow routing \cite{RefWorks:3375,RefWorks:3370} where
flow-state information is captured on the fly by routers from the first packet of a flow.
Identifying large flows (elephants) on the Internet is in itself an important research
challenge which is met in \cite{RefWorks:3594,RefWorks:3607} using Bloom filters. Flow
routing can lead to an evolutionary rather than revolutionary change. It relies on TCP/IP
at the access, and interworks with conventional routers. If flow-state information is
maintained, specific flows can be targeted for discard during congestion without affecting
other flows. This is similar to CAC, but without communicating with the end-users. It is
similar to circuit-switching in the sense that it maintains state information, but without
the need for synchronization. It is similar to active queue management (AQM) \cite{RefWorks:143}
as it indirectly controls the send-rate of end users, but unlike AQM it does not rely on
random losses because it controls individual flows. Routing-table look-up for each packet
is avoided by identifying packets as belonging to a flow and using a hash table. Unnecessary
buffering is avoided by admitting only the traffic that can be forwarded. This saves power
and space. According to \cite{RefWorks:3370}, flow routing can save 80\% of the energy
consumption, 90\% of the space, and reduce operating expenses by a factor of 10. Related
flow-rate management schemes \cite{RefWorks:3372} can maintain TCP connections at a stable
and fixed rate, avoiding slow start. Considering the potential benefit of flow-based
technologies such as flow routing, we propose to study, in this project, network
architectures similar to the current Internet but incorporating flow-size dependent
routing and queueing.

\noindent
\textbf{\textit{2. a. i. 2. LRD traffic models and analyses of queues with heavy tailed flows}}

Traffic measurements show that Internet traffic exhibits long range dependence (LRD) and
self-similarity. This has been observed for Ethernet traffic \cite{RefWorks:2456},
metropolitan area traffic \cite{RefWorks:199}, and general Internet traffic \cite{RefWorks:2413,RefWorks:3545}.
Observations revealed that the causes are related to flow sizes being heavy tailed
\cite{RefWorks:3545, RefWorks:3542}. Williams et al. \cite{RefWorks:3544} observed this in 2005,
while Downey \cite{RefWorks:3546} concluded in his 2005 study that ``The distribution of burst
sizes for ftp and HTTP transfers appears to be long-tailed.'' These studies justify our Pareto
distributed flow size assumption.
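For concreteness, the Pareto assumption can be stated as follows (the scale $\delta$ and tail index $\gamma$ are our notation):

```latex
\[
\Pr(S > x) = \left(\frac{x}{\delta}\right)^{-\gamma}, \quad x \ge \delta,
\qquad
\mathrm{E}[S] = \frac{\gamma\delta}{\gamma-1} \quad (\gamma > 1),
\]
```

The variance of $S$ is infinite whenever $\gamma \le 2$, so for $1 < \gamma < 2$ the flow size has a finite mean but infinite variance, which is the regime that induces LRD in the aggregate traffic.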

Analyses and simulations of systems involving LRD processes and/or heavy-tail flows for
the full range of parameter values have been considered difficult. One traffic model that
has attracted significant attention is the so-called Poisson Pareto Burst Process (PPBP)
(a.k.a.\ M/G/$\infty$, where the `G' represents a Pareto distribution), characterized by a
Poisson process of arriving heavy-tailed Pareto-distributed flows. The Poisson arrivals
are justified by the large number of Internet users that generate them. Analyses by
others of PPBP queues focused mainly on asymptotic results: (i) where the buffer
threshold tends to infinity while the number of sources, the server rate, and the
offered load, are fixed \cite{RefWorks:2563}; (ii) where the buffer size (or threshold)
and server speed are linear in the number of sources, which tends to infinity
\cite{RefWorks:2654} and (iii) where the buffer size grows in proportion to the square
root of the number of sources \cite{RefWorks:2485} - this last result is associated with
the Central Limit Theorem (CLT), which is also often referred to as a heavy traffic limit.
The work by others that focused on asymptotics may not be applicable to practical conditions.
By contrast, as we discuss below in "work done by us", our work in \cite{RefWorks:1}
applies to the full range of situations.
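A minimal simulation sketch may clarify the PPBP model: flows arrive as a Poisson process and each remains active for a Pareto-distributed duration, so the process counts the flows active at a given time (the M/G/$\infty$ occupancy). All parameter names are ours and the values are illustrative.

```python
import random

def pareto(delta: float, gamma: float, rng: random.Random) -> float:
    """Inverse-CDF Pareto sample: P(D > x) = (x/delta)^(-gamma), x >= delta."""
    return delta * (1.0 - rng.random()) ** (-1.0 / gamma)

def ppbp_active_flows(lam, delta, gamma, t, rng):
    """Number of flows active at time t, with arrivals starting at time 0."""
    active, clock = 0, 0.0
    while True:
        clock += rng.expovariate(lam)          # Poisson arrival instants
        if clock > t:
            return active
        if clock + pareto(delta, gamma, rng) > t:
            active += 1                        # flow still in progress at t

# With lam = 10 and Pareto mean gamma*delta/(gamma-1) = 3, roughly
# lam * 3 = 30 flows are active at a time well after start-up.
n = ppbp_active_flows(lam=10.0, delta=1.0, gamma=1.5, t=100.0, rng=random.Random(1))
```

Note that with $\gamma = 1.5$ the duration has infinite variance, which is exactly what makes the resulting occupancy process LRD.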

\noindent
\textbf{\textit{2. a. i. 3. Simulations of queues with heavy tailed flows}}

Because it is difficult for a simulation that runs a finite amount of time to capture accurately
the performance effects of a random variable that has infinite variance, it has been a challenge
to accurately evaluate performance by simulations for queues involving heavy tailed service flows
\cite{RefWorks:3539,RefWorks:3554,RefWorks:3555}. Gross et al. \cite{RefWorks:3555} demonstrate
that significant error may occur in evaluating mean queue size of an M/P/1 queue (P represents
Pareto) by simulation, and note that in any simulation there is a maximal value for the generated
Pareto random samples, so the samples are in fact drawn from a truncated Pareto
distribution rather than a true Pareto distribution. Rojas-Mora et al. [20] demonstrated
improvement in both accuracy and simulation time for a processor-sharing queue with Pareto file
sizes using the bootstrap method. The bootstrap method is very useful for evaluating the error
of the simulations, and we will use it as well. It is complementary to what we propose here,
namely simulating at more than one time scale simultaneously, which is a more radical way of
addressing the difficulties encountered in simulating these systems.
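The truncation effect noted by Gross et al. can be made concrete with a small computation. Any finite run only sees Pareto samples below some cap $B$, so it effectively estimates the mean of a truncated distribution; with a heavy tail the bias decays very slowly in $B$. The parameter values below are illustrative only.

```python
# Pareto(delta, gamma): P(S > x) = (x/delta)^(-gamma) for x >= delta.
delta, gamma = 1.0, 1.2      # finite mean, infinite variance

def pareto_mean() -> float:
    """True mean: gamma * delta / (gamma - 1)."""
    return gamma * delta / (gamma - 1.0)

def truncated_mean(cap: float) -> float:
    """Exact E[min(S, cap)], i.e. the mean a capped simulation converges to."""
    return (gamma * delta - delta**gamma * cap**(1.0 - gamma)) / (gamma - 1.0)

# Even with a cap of 1e6 the estimated mean is still biased low by about 5%.
bias = 1.0 - truncated_mean(1e6) / pareto_mean()
```

The bias here is not a sampling error that averages out with longer runs; it is systematic, which is why accuracy-aware techniques such as the bootstrap, or multi-time-scale simulation, are needed.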

\noindent
\textbf{\textit{2. a. i. 4. Analyses of hybrid switching and multi-service models}}

Our model can be categorized as a hybrid switching model. Historically, hybrid switching models
(see e.g. \cite{RefWorks:3605}) have assumed a total fixed capacity that is available to several
types of traffic, which could be viewed as flow-length based, e.g., circuits versus packets,
long versus short lived flows. They normally assumed a single node and exponentially distributed
flow sizes, and the aim was to evaluate performance (loss or queueing delay). By contrast, we
consider heavy tailed flows and aim for an analysis of networks of arbitrary topology. An adaptive
controller for flow classification is provided in \cite{RefWorks:3602}; however, it too did not
consider the impact of heavy tailed flows on loss and delay. Existing teletraffic theory for
 multi-service loss systems \cite{RefWorks:2396} provides the overall average blocking probability
 for arbitrarily distributed flow size. However, it does not provide queueing delay statistics,
 and the overall average loss does not allow for significant periods of starvation and loss due to
 heavy tailed flow size distributions.

\noindent
\textbf{\textit{2. a. i. 5. Internet traffic engineering and MPLS}}

Multiprotocol Label Switching (MPLS) \cite{RefWorks:3609} is an IETF initiative that facilitates
Labeled Switch Paths (LSPs) by mapping individual packet routing and QoS information into MPLS
labels. In principle, MPLS can be used to aggregate mice into an LSP that bypasses IP routing
and is routed to optimize efficiency and energy efficiency. An individual LSP can also be used
to handle an individual elephant. According to \cite{RefWorks:3375}, MPLS ``cannot support QoS
for individual small flows, cannot provide delay guarantees, and cannot reject new flows to
protect current flows from packet loss.'' In our model, we do aim to aggregate mice together in
large pipes (e.g. LSPs), but loss is avoided for the mice as we aim to give them priority and
monitor the larger flows, so that QoS for mice will be provided by default. In particular, LSPs
of individual elephants can be monitored and managed to provide QoS, either to themselves, or
to protect other flows from them. Because we are interested in network design where traffic
engineering is optimized, the existing research on MPLS and Internet traffic engineering (e.g.
\cite{RefWorks:3611,RefWorks:1392,RefWorks:3606}) is relevant to this proposal. An advantage
of the routing architecture proposed here is that all the relevant paths are found by basically
the same procedure as in the present Internet, so scalability is not an issue.

\noindent
\textbf{2. a. ii Work done by us}

\noindent
\textbf{\textit{2. a. ii. 1. PI Zukerman and Co-I Addie collaboration}}

Extensive collaboration between PI Zukerman and Co-I Addie over the last two decades has led
to over 30 joint publications relevant to this proposal. In the early 1990s they introduced a
general theory for a queue fed by short range dependent (SRD) stationary traffic \cite{RefWorks:202},
with special application to Gaussian queues, for which they provided a novel closed-form result for the
asymptotic queue-size distribution. Choe and Shroff \cite{RefWorks:3553} wrote: ``The excellent
work by Addie and Zukerman \cite{RefWorks:578}\ldots''.

In \cite{RefWorks:199}, three important results are presented: 1) for the first time, {LRD}
has been observed in metropolitan area networks, 2) the results of \cite{RefWorks:202} have
been generalized to the case of {LRD} input and shown to be consistent with large deviation
theory \cite{RefWorks:2470}, and 3) a method to fit measurable traffic parameters with the
model parameters is presented. The work presented in \cite{RefWorks:185}, described, in
general terms, how to model real traffic streams using {PPBP}, and \cite{RefWorks:230}
presented, for the first time, a method, which we call the quasi-stationary ({QS}) approximation,
to estimate the queue-size distribution of a PPBP queue for any traffic and congestion condition.
This approximation was validated against a specially tailored type of simulation in
\cite{RefWorks:230}. Further details on the {QS} approximation and the {PPBP} queue simulation
are available in a PhD thesis by T. D. Neame \cite{RefWorks:3147} supervised jointly by PI
Zukerman and Co-I Addie. Then \cite{RefWorks:134} used that parameter matching method to
predict traffic smoothing and efficiency in the core of a bufferless optical Internet, and
\cite{RefWorks:1552} (which won the Best paper award in ATNAC 2003) analytically showed that
for PPBP queues, large deviation theory is not applicable and cannot provide accurate queueing
performance in the practical cases where the buffer content is moderate. Recently in \cite{RefWorks:1},
we used the numerical procedure of the QS approximation for a PPBP queue, and developed
numerical methods to compute the results obtained by others described in the previous section.
See Figure 1. We demonstrate that the QS approximation is consistent with the asymptotic results,
although the buffer level at which the large buffer approximation becomes approximately the same
as the QS approximation can be very large. It is clear from the results presented that the large
deviation estimates based on the large buffer threshold limit do not apply to cases of practical
interest where buffer thresholds are of moderate size.

\noindent
\textbf{\textit{2. a. ii. 2. Other relevant work by us}}

For contributions of PI Zukerman to performance evaluation of hybrid switching systems, see
\cite{RefWorks:5,RefWorks:930,RefWorks:218,RefWorks:220} and references therein. He also
contributed to analyses of telecommunications networks of arbitrary topologies
\cite{RefWorks:942,RefWorks:88,RefWorks:135} and to the design of such networks
\cite{RefWorks:106}. In a recent commentary \cite{RefWorks:3859} he pointed out advantages
of maintaining state information for certain calls, connections, or flows.

Back in the 1980s, Co-I Addie et al. \cite{RefWorks:2523} introduced the concept of ``Virtual
Direct Routes'', which is equivalent to the ``Virtual Path'' concept in ATM networks and later
led to its standardization. This invention also led to an Australian patent \cite{RefWorks:3612}.
At the same time, Co-I Addie developed the concept of bandwidth switching \cite{RefWorks:2505},
which together with the ``Virtual Path'' formed the basic ideas of MPLS and Internet traffic engineering.

Co-I Addie et al. \cite{RefWorks:2574} derived readily computable formulae for arbitrarily
correlated Gaussian queues. Addie \cite{RefWorks:2571} provided justification for using
Gaussian models of aggregate traffic. These were used very effectively to analyse an SRD
component of the PPBP traffic analysed in \cite{RefWorks:1}.

In \cite{RefWorks:2446} and \cite{RefWorks:3591} Co-I Addie provided a simulation technique
which can give accurate simulation results for systems that involve Pareto sized flows in a
fraction of the time required by conventional simulations. Although the technique is related
to importance sampling, the key idea is to simulate with varying levels of detail: more
detail for events with a more direct and important impact on the observed statistics.

Co-I Addie et al.~\cite{RefWorks:2442,RefWorks:2676,RefWorks:2443} developed analytical
solutions for a queue with PPBP input considering Short Job First (SJF) and Fair Queueing
(FQ) disciplines. While this work considered primarily analyses of single queues, in this
project we consider network-wide differential routing and queueing of flows based on flow size.
Of special interest is Addie et al. \cite{RefWorks:2439}, which considered flow-dependent control
strategies applicable to collections of (small) flows that can be readily distinguished,
rather than identifying and monitoring individual flows.

\Large
\noindent
\uline{\textbf{2. b. Research methodology and plan}}

\normalsize
\noindent
\textbf{2. b. i. Optimized Traffic Engineering using Flow-size Dependent Routing (Objective 1)}

A key objective in this project is to develop a new approach for Internet traffic engineering
that will be scalable and robust, and will have the potential to yield cost effective operation
 of the Internet. To achieve this objective, flow-size dependent routing is introduced in a
 form suitable for a large network such as the Internet. We choose this flow-size dependent
 routing to optimize cost, including energy cost, subject to simple but realistic performance
 constraints which are mandated in practice by ensuring that links have sufficient capacity.

Such an optimization problem is normally difficult, especially given the size of the Internet
and the number of flows. However, by making pragmatic simplifications, the optimal solutions of
this model become easily characterized: they route flows on shortest paths, with path lengths,
as usual, being the sum of link lengths. The novelty is that links are assigned different lengths
depending on the size of the flow. The first objective of this proposal is to rigorously state
and prove, under precise assumptions, the following result: If link cost is proportional to
mean offered traffic with a coefficient which depends on flow size, then flow-size dependent
shortest path routing is used in any optimal design. It is not claimed that this property
uniquely determines the optimal routing and capacity design: simple examples exist in which
two designs of different cost both satisfy this criterion. However, in the context of
the Internet it seems unlikely that shortest path routing will converge to a design significantly
less efficient than the optimal design. In the proposed routing architecture the processing
time involved in computing routing tables is expected to increase by a factor less than 3.
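Flow-size dependent shortest path routing can reuse exactly the standard shortest path machinery, once each link is given one length per flow class. The sketch below runs textbook Dijkstra per class over a toy topology; the graph, the class names, and the weights are our own illustrative assumptions.

```python
import heapq

def dijkstra(adj, src, cls):
    """Shortest path lengths from src, using the link lengths of class cls."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                          # stale heap entry
        for v, length_by_class in adj.get(u, ()):
            nd = d + length_by_class[cls]
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Same topology, different lengths: setup-heavy links look long to mice
# but short to elephants, so the two classes take different paths A -> D.
adj = {
    "A": [("B", {"mouse": 1.0, "elephant": 5.0}),
          ("C", {"mouse": 4.0, "elephant": 1.0})],
    "B": [("D", {"mouse": 1.0, "elephant": 5.0})],
    "C": [("D", {"mouse": 4.0, "elephant": 1.0})],
}
```

Running the same algorithm once per class is also why the routing-table computation grows only by the (small) number of classes, consistent with the factor-of-three estimate above.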

\noindent
\textbf{\textit{2. b. i. 1. Efficiency and Performance Considerations}}

Because traffic demand fluctuates, providing quality of service requires that link capacity
reflect not only the mean but also the variance and potentially other statistical characteristics
of the traffic (e.g. the Pareto tail slope); for example, link capacity might be set to the mean plus
3 standard deviations of the traffic, measured over, say, one second. Furthermore,
the additional bandwidth required to support an additional flow depends very strikingly
on the current traffic mix on the link. This is why traffic aggregation is so beneficial
to performance provision and cost reduction.

The standard deviation to mean ratio of long flows is much higher than for short flows.
Large flows can potentially contribute 1\% to aggregate mean traffic on the link, but
50\% or more to the aggregate standard deviation. Large flows have this potential for
disproportionate contribution to variance because the variance of the total bytes in
the entire length of the large flows arriving in a given interval is infinite. Moreover,
if a flow is moved from one link to another, its contribution to the variance of the
total traffic on the link changes very significantly. Therefore, individual large flows
should be routed so that they preferentially use large pipes with more traffic, to
reduce their own contribution to the variance - not necessarily the shortest path in
the usual sense. In the optimization, we explicitly account for the contribution to
link cost due to the larger standard deviation of larger flows. Furthermore, when an
identifiable large flow emerges it can be allocated exclusive and sufficient shared
network resources to carry its load without disrupting the remaining traffic. This
guarantees its performance and protects others from its impact.
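The disproportionate contribution of large flows can be illustrated with the mean-plus-$k\sigma$ dimensioning rule above. Assuming independent flows (so means and variances both add), the numbers below, which are illustrative only, reproduce the effect of a tiny share of the mean coming with a dominant share of the standard deviation.

```python
import math

def capacity(flows, k=3.0):
    """Mean + k*sigma rule; flows is a list of (mean_rate, variance) pairs,
    assumed independent so that means and variances both add."""
    mean = sum(m for m, _ in flows)
    var = sum(v for _, v in flows)
    return mean + k * math.sqrt(var)

mice = [(1.0, 0.1)] * 990         # many small flows, tiny variance each
elephant = [(10.0, 2500.0)]       # one large flow with huge variance
link = mice + elephant

share_of_mean = 10.0 / (990.0 + 10.0)                        # 1% of the mean
share_of_std = math.sqrt(2500.0) / math.sqrt(99.0 + 2500.0)  # most of the std
```

Moving the one elephant onto a larger, more aggregated pipe barely changes that pipe's standard deviation while removing almost all of this link's, which is precisely the routing incentive described above.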

\noindent
\textbf{\textit{2. b. i. 2. Energy Considerations}}

The traffic dependent activities which require energy in networks are: (i) flow set-up
and termination, (ii) packet routing and forwarding, (iii) transmission and signal
regeneration. Flow routing (virtual paths or circuits) uses more energy than IP routing
in flow set-up and termination (i), but saves energy in (ii) because the use of labels
or a hash table for packet forwarding is more energy efficient than packet-by-packet
routing using look-up tables at the router. Since for mice the set-up cost per bit (i)
is the most significant relative to the other cost components, they are likely to compromise
on (ii) and (iii) and choose permanent, and perhaps longer, paths than the larger flows.

\noindent
\textbf{\textit{2. b. i. 3. Other Costs}}

In addition to the energy cost, we will also consider other operational expenditure
(OPEX) cost and capital expenditure (CAPEX) cost. (See relevant work done by the PI
in \cite{RefWorks:1540}.)

\noindent
\textbf{\textit{2. b. i. 4. Optimization}}

Assuming that cost is a function of the total investment in switching (in the broadest
sense) and transmission equipment, together with the use of switching and transmission
(in the broadest sense) and that we wish to constrain our design choices to achieve a
loss level of 1\% (for example) on all links, we take cost to be a linear function of
the form $\sum_k \int C_{k,s}\,\mu_{k,s}\,ds$, where $C_{k,s}$ is the cost per flow for flows
of size $s$ through link $k$ and $\mu_{k,s}$ is the intensity of flows of size $s$ on link $k$.
If the cost coefficients are independent of path flows (which is true in a local sense,
i.e. in the vicinity of a certain design), the optimal solution to this problem, i.e. the
optimal routing and link capacity allocation, will choose, for each flow, the path with the
minimum value of $\sum_k C_{k,s}$, i.e. the minimal flow-size dependent path length.
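This argument can be spelled out. Writing $\nu_{p,s}$ (our notation) for the intensity of size-$s$ flows assigned to path $p$, the linear cost decomposes per path:

```latex
\[
\sum_{k}\int C_{k,s}\,\mu_{k,s}\,ds
\;=\;
\int \sum_{p} \nu_{p,s} \left( \sum_{k \in p} C_{k,s} \right) ds ,
\]
```

Since the coefficients $C_{k,s}$ are (locally) fixed, the cost is minimized by assigning each size-$s$ demand to a path $p$ minimizing $\sum_{k \in p} C_{k,s}$, that is, by flow-size dependent shortest path routing.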

Optimal set-up and tear-down of permanent or semi-permanent virtual paths can be set
using known techniques \cite{RefWorks:3613,RefWorks:3615,RefWorks:1885} based on their
traffic load. Links which cannot be used without incurring an additional setup cost
whenever they are used -- because the link is actually a notional LSP, or WDM routed
wavelength, for example -- can be seamlessly included in our model of a network. The
additional computational complexity of including these ``virtual links'' as options for
routing is linear in the total number of links per node, because this is the complexity
of the shortest path algorithm used in routers.

As a consequence of this routing strategy, mice will use existing paths to avoid
significant set up cost (i), and reduce switching cost (ii); elephants on the other
hand will often choose a path set up on a one-off basis because setup cost is not
significant relative to the other costs they incur, and using a specially configured
path will save switching cost. The impact of the variance of elephants will be
minimized by using links from an aggregate of dynamically allocatable bandwidth,
e.g. LSPs or $\lambda$-SPs. The kangaroos that use IP will avoid paths with high
setup cost and will choose the shortest path in terms of hops to reduce their
significant routing and transmission cost. The classification to the various types
will occur naturally by using flow-size dependent routing tables. Identifying the
class of flow to which a packet belongs is a subject with its own extensive literature
and is not considered in this project.

\noindent
\textbf{2. b. ii. First Cut Comparison with existing Internet (Objective 2)}

If traffic is routed on shortest paths, and decentralized management of links ensures
that they have adequate capacity, the emergent design will be in a certain sense
approximately optimal. In this project we improve on this concept in a practical way
by adopting flow-size dependent routing and queueing. Because flow sizes are heavy-tailed,
satisfactory estimates of flow sizes can be obtained in a scalable manner (to the
extent needed). For the same reason, we can dynamically estimate the variance contributed
by each flow, which is why allowing routing to depend on flow size yields a highly
significant improvement in network cost for a given performance level. We will estimate
this improvement by comparing the cost of two networks carrying the same traffic, and
with identical topologies (see Figure 2), where
capacities in both cases are selected using a simple mean plus two (or three) standard
deviations rule. A more complex comparison will also be made - with more practical link
capacity strategies, once the simplistic case has been completed. This is discussed in
the next section.

\noindent
\textbf{2. b. iii. Performance evaluation and benchmarking (Objectives 3 and 4)}

\noindent
\textbf{\textit{2. b. iii. 1. Analytical approach}}

A fixed-point algorithm will be used to model the autonomous process of traffic being
routed on flow-size dependent shortest paths in the Internet, with capacity upgrades
as needed according to measured throughput and congestion on links. The iteration
proceeds by (i) assuming a certain set of flow-size dependent routing tables and then
computing the mean and variance of traffic on each link by adding up the means and
variances of the flows which share this link, and then computing the appropriate
capacity, by the mean + 2 (or 3) sigma rule, in order to achieve the planned loss
rate on each link; then (ii) given the capacity, mean and variance of traffic on each
link, computing the flow-size dependent routing tables for each router. The steps (i)
and (ii) are repeated until the link capacities computed at each iteration are unchanged.
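Steps (i) and (ii) can be sketched as the following fixed-point loop, with routing abstracted as a callback that returns per-link (mean, variance) loads for the current capacities; all names are ours and illustrative.

```python
import math

def fixed_point(route, k=3.0, tol=1e-6, max_iter=100):
    """Iterate (i) capacity dimensioning and (ii) re-routing to a fixed point.

    route(capacities) -> {link: (mean, variance)} traffic loads under the
    flow-size dependent routing tables computed for those capacities.
    """
    caps = {}
    for _ in range(max_iter):
        loads = route(caps)                        # step (ii): re-route
        new_caps = {link: m + k * math.sqrt(v)     # step (i): mean + k*sigma
                    for link, (m, v) in loads.items()}
        if caps and max(abs(new_caps[l] - caps.get(l, 0.0))
                        for l in new_caps) < tol:
            return new_caps                        # capacities unchanged
        caps = new_caps
    return caps
```

With a routing callback that ignores capacities the loop converges in two iterations; in the intended use, `route` recomputes flow-size dependent shortest paths, so several iterations may be needed before the capacities stabilize.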

In order to focus on the essential difference between flow-size dependent routing and
conventional shortest path routing we will, in the first instance, allow link capacities
to be set at arbitrary values (i.e. not constrained to specific capacities) and assume
that they are managed continuously to have precisely the capacity given by the mean plus 2
(or alternatively 3) standard deviations. This assumption will be adopted for both the flow-size dependent
routed case, and the conventional routed network. This allows the two approaches to be
compared on the basis that both networks deliver similar performance, and it is therefore
fair to assess the advantages of the flow-size dependent routing concept by simply comparing
cost.
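Under a Gaussian approximation of the aggregate traffic on a link, the mean + k sigma rule fixes the fraction of time instantaneous demand exceeds capacity; a quick check of the two values used here:

```python
import math

def tail_prob(k):
    """P(X > mu + k*sigma) for Gaussian X: the fraction of time the
    instantaneous demand exceeds a 'mean + k sigma' capacity."""
    return 0.5 * math.erfc(k / math.sqrt(2))

p2 = tail_prob(2)   # about 2.3% of the time for mean + 2 sigma
p3 = tail_prob(3)   # about 0.13% of the time for mean + 3 sigma
```

The Pareto flow sizes considered later make the aggregate heavier-tailed than Gaussian, so these figures are only a first approximation to the planned loss rate.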

We then propose to investigate the following problems: \textit{What level of performance
will result for each separate flow category? How does this vary depending on whether
Fair Queueing (FQ) or Largest Flow Last (LFL) is used as the queueing algorithm at the head
of each link? Is LFL necessary for satisfactory use of flow-size dependent routing? How
significant are the performance advantages gained from flow-size dependent routing? Does
this depend on whether FQ or LFL is used in queueing?}
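To make the contrast between the two disciplines concrete, here is a minimal sketch (the flow names, packet counts, and the use of bytes already sent as the flow-size estimate are illustrative assumptions):

```python
from collections import deque
from copy import deepcopy

# Hypothetical per-flow state at one link: flow -> [bytes_sent, pkts_queued]
BACKLOG = {"mouse1": [100, 2], "mouse2": [250, 2], "elephant": [9000, 3]}

def schedule_fq(flows):
    """Fair Queueing (round-robin approximation): serve one packet per
    backlogged flow per cycle, so all flows progress at equal rates."""
    flows = deepcopy(flows)
    order, rr = [], deque(f for f in flows if flows[f][1] > 0)
    while rr:
        f = rr.popleft()
        order.append(f)
        flows[f][1] -= 1
        if flows[f][1] > 0:
            rr.append(f)
    return order

def schedule_lfl(flows):
    """Largest Flow Last: strict priority to the flow that has sent the
    fewest bytes so far, deferring elephants until shorter flows finish."""
    flows = deepcopy(flows)
    order = []
    while any(q > 0 for _, q in flows.values()):
        f = min((f for f in flows if flows[f][1] > 0),
                key=lambda f: flows[f][0])
        order.append(f)
        flows[f][1] -= 1
        flows[f][0] += 1  # unit-size packets, for simplicity
    return order

fq_order = schedule_fq(BACKLOG)
lfl_order = schedule_lfl(BACKLOG)
```

Under LFL the mice complete before the elephant receives any service, whereas under FQ the elephant delays every cycle; quantifying this difference per flow category is exactly what the questions above ask.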

Building on \cite{RefWorks:2442,RefWorks:2676,RefWorks:2443} and the work in GRF 124709
on single queues, we will extend that work to evaluate end-to-end, network-wide performance.
By formulating and solving partial differential equations for the network state, we propose
to investigate: (i) LFL and FQ as queue disciplines for each link; and, for routing, (ii)
largest flows (elephants) using one-off tunnels (either MPLS or GMPLS tunnels), (iii) shortest
flows (mice) using permanent tunnels, and (iv) flow-size dependent routing in general. If the
flow management protocol ensures moderately better treatment for shorter flows than for
longer ones, it is possible to establish, and solve, equations for the stationary probability
distribution of flows shorter than a certain length in terms of the stationary probability
distribution of flows shorter than another infinitesimally shorter length. On the other hand,
if all flows are treated identically, as in FQ, the impact of short flows on long flows can
be replaced by their mean impact with very little error. In this case as well we can solve for
the stationary distribution of the network state. This method of analysis applies equally well
to both flow-size dependent shortest path routing, and conventional shortest path routing,
since the latter is a special case of the former. We can therefore analyse networks designed
in both ways and make the necessary comparisons to achieve the proposed benchmarking. A
reference manual for techniques needed in analysing Poisson-Pareto-like processes will be
developed as a basic tool for use in undertaking this analysis work.

\noindent
\textbf{2. b. iii. 2. Simulation}

Simulation is an essential technique for analysis of the performance of communication systems.
Simulation of systems with heavy-tailed flows is fundamentally difficult, not to say impossible,
when the degree of heaviness of the tails of flow size distributions becomes extreme, unless a
special simulation technique is adopted. Fortunately such a method has been developed by Co-I
Addie \cite{RefWorks:2446,RefWorks:3591}. This technique is similar to hybrid simulation but
with the significant difference that the simulation of flows of different lengths is undertaken
over different simulation durations. In particular, a simulation does not have a unique length.
Long flows see a long simulation, and short flows see a short one. Observations are made only
when the details necessary for them to be accurately represented are present. This simulation
method is at an early stage of development and consequently each new application requires
innovative software development. We intend to publish a guide to the simulation technique,
which will improve appreciation of these methods and help newcomers learn to use them.

The network of \cite{RefWorks:106} (see Figure 2) will be used as an example network.
The traffic on this network will be assumed to follow a Poisson-Pareto burst model. A mixture
of flows will be included: some seeking to effect their desired transfer of bytes as quickly
as the network will allow, and others not seeking to exceed a preset rate.
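A minimal sketch of sampling such a Poisson-Pareto burst process (the rate and Pareto parameters below are illustrative, not values fitted to the example network):

```python
import random

random.seed(1)

def poisson_pareto_bursts(rate, alpha, x_min, horizon):
    """Bursts (flows) arrive as a Poisson process of the given rate; each
    carries a Pareto(alpha, x_min) amount of work. Returns a time-ordered
    list of (arrival_time, burst_size) pairs."""
    bursts, t = [], 0.0
    while True:
        t += random.expovariate(rate)              # exponential inter-arrivals
        if t > horizon:
            return bursts
        size = x_min * random.random() ** (-1.0 / alpha)  # Pareto by inversion
        bursts.append((t, size))

bursts = poisson_pareto_bursts(rate=5.0, alpha=1.5, x_min=1.0, horizon=1000.0)
mean_size = sum(s for _, s in bursts) / len(bursts)
# The theoretical mean burst size is alpha * x_min / (alpha - 1) = 3.0, but
# for alpha <= 2 the variance is infinite, so sample means converge very
# slowly -- the difficulty that motivates the special simulation technique.
```

The slow convergence of `mean_size` toward its theoretical value is a simple demonstration of why naive simulation of heavy-tailed traffic is so expensive.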

\noindent
\textbf{2. b. iv. Plan}

A list of the tasks, with estimates of duration, resourcing, and prerequisite tasks, is given
in Table 1. These tasks will be carried out in parallel to the extent allowed by their
dependencies, which are shown in the table. Apart from the snapshot simulation guide and the
guide to the analysis of Poisson-Pareto-like processes, the writing and publication of
results is implicit in most tasks and, for brevity, is not listed separately. In addition to
the time spent by the SRA and RA indicated in the table, the PI and the Co-I will be heavily
involved in the project, as described in Part II.8.

\newpage

\bibliographystyle{IEEEtran}
\bibliography{all}

\end{document}
