\begin{savequote}[8cm]
Here we are now, entertain us! 
\qauthor{Kurt Cobain}
\end{savequote}

\chapter{Improving content delivery networks using geosocial measures}
\chaptermark{Improving content delivery networks}
\label{ch:caching}
The amount of Internet traffic generated every day by online multimedia
streaming providers has reached unprecedented levels.  For instance, there are
more than 4 billion videos viewed every day on YouTube, which has more than 70\%
of its traffic coming from outside the USA\footnote{YouTube official statistics
are available at \url{http://www.youtube.com/t/press_statistics}}.  These
providers often rely on Content Delivery Networks (CDNs) to distribute their
content from storage servers to multiple locations over the planet. CDN
servers exchange content in a cooperative way to maximise the overall
efficiency.

Nowadays content diffusion is fostered by weblinks shared on online social
networks, which may often generate floods of requests to the provider through
cascading across a user's social links.  This type of  ``word-of-mouth
spreading'' occurring in these services is already driving many of
the daily requests to content providers.  In fact, the proportion of traffic
generated by social spreading is high and increasing. Broxton et
al.~\cite{BIV10:youtube} discussed how about 25\% of YouTube views are
generated via person-to-person sharing, with a much higher  fraction during the
first days after a video is uploaded. A more recent study by Brodersen et
al.~\cite{BSW12:youtube} reports that, on average, about 37\% of an individual
video's views are socially generated.  Thus, social sharing
represents a crucial source of traffic for content providers. Given the
increasing size of online social platforms, with hundreds of millions of users,
they generate millions of accesses to YouTube, accounting for a considerable
fraction of the total number of daily requests.
          % The
          % resulting load on a CDN  is exacerbated by all those users who see
          % the links and who might request the content, even if not reposting
          % it. 

%The type of load service providers experience is both in terms of bandwidth and caching: bandwidth is required
%to move content among servers, while servers act as content caches for future
%requests. While storage caches can be large, they are often insufficient to store the
%large selection of material made available by providers: as an example, YouTube
%hosts hundreds of millions of videos and every day hundreds of thousands of new
%ones are uploaded. Such a large catalogue of items can be hard to store in every
%server location.  In addition, it is clear that
%the trend for streaming of bulkier and longer content is just at its dawn.

In this chapter we show that geographic information extracted from social
cascades taking place over online social platforms can be exploited to improve
the design of large-scale systems, such as CDNs. We rely on this novel finding:
\textit{social cascades are likely to spread over geographically local distances.}
Users tend to share content over short-distance social connections, despite the
presence of several long-range links: we have found that about 40\% of the steps
in social cascades involve users who are, on average, less than 1,000 km away
from each other.

Our key idea is that \textit{content should be kept on servers
that are close to interested users, minimising the impact on
network traffic}.  In other words, since content servers act as caches of items,
we aim to exploit the social and spatial characteristics of the users that are
sharing the content in the design of large-scale systems such as CDNs, so that we can
serve more requests immediately from the closest server rather than
waiting for the content to be transferred to the server from somewhere
else.

In order to validate our approach, we analyse a dataset from Twitter containing
geographic location, follower lists and tweets. We have tracked the spreading of
about one million  YouTube videos over this social network, analysing a corpus
with more than 334 million messages and extracting about 3 million individual
messages containing a video link.  Finally, we have designed a proof-of-concept model
of a planetary content delivery network using the geographic properties of the
commercial platform once used by YouTube. We show that new cache replacement
policies, driven by one of the geosocial measures  presented in
Section~\ref{sec:metrics}, improve the overall system performance.

\paragraph{Chapter outline}
In Section~\ref{sec:cdn} we describe content delivery networks and
their current problems, discussing the issues that motivate our study.
Section~\ref{sec:geocascades} presents an analysis of social cascades of YouTube
links over Twitter; by taking into account user geographic information we are
able to investigate the extent of these social cascades and to characterise them
over space and time.

In Section~\ref{sec:cdn_model} we describe a model of a content delivery network
that exploits the spatial properties of social cascades to characterise
individual spreading items, prioritising their presence across different
distributed caches, while Section~\ref{sec:cdn_evaluation} reports the
results of our evaluation, driven by our cascade dataset.
Section~\ref{sec:disc6} discusses
the implications of our results  and 
Section~\ref{sec:related6} offers a review of related work. 
Section~\ref{sec:concl6} closes the chapter.

\section{Content delivery networks}
\label{sec:cdn}
%In this section we describe content delivery networks and we outline the factors
%that influence them, debating ideas about how to improve their performance.
A Content Delivery Network (CDN) is a system of networked servers holding copies
of data items, placed at different geographic locations. The aim of a CDN is to
deliver content efficiently to clients; each request is served by a
geographically close server, while content is moved between servers to optimise
the quality of service perceived by users. Modern commercial CDNs deploy
numerous servers all over the world, often over multiple backbones and ISPs,
and offer their services to other companies that want to deliver
content to users on a planetary scale, such as dynamic Web pages,
software updates, multimedia content, live streams and so
on.

\subsection{Factors impacting performance}
CDNs have become progressively more important; the number of
users with broadband Internet connections is constantly increasing and along with
faster connectivity come greater expectations for better content delivery.
Even with the additional resources provided by CDNs,
this demand puts considerable pressure on the entire Internet.
This issue becomes even more important if we consider future trends:
as the size of distributed
content keeps growing, the distance between server and client becomes more
critical to the overall performance, since longer distances increase the
likelihood of network congestion and packet loss, which result in longer transfer
times~\cite{Lei09:cdn}. 

In addition, the geography of the requests influences the performance of CDNs;
it would be extremely useful to understand whether an item becomes popular on a
planetary scale or just in a particular geographic area. Recent research on
YouTube videos confirms that the majority of individual videos tend to exhibit
highly localised geographic patterns of popularity, with views mainly arising
from a small set of regional areas~\cite{BSW12:youtube}. This has crucial
consequences for CDN performance. A globally popular content item should be
replicated at every location, since it experiences many requests from all around
the world.  On the other hand, when content is only locally popular,  it should
be cached only in the locations that will experience most requests.  The key to
such a strategy is being able to predict quickly whether a piece of content 
will become locally popular, in order to optimise its placement over the CDN before
it undergoes the popularity surge.

\subsection{Improving performance through social cascades}
The popularity of content over the Web can be driven by public media coverage or
through word-of-mouth spreading~\cite{CMG09:flickr}. The former takes
place when content is advertised by large information sources, such as search
engines, news, and social aggregator websites (e.g., SlashDot, Reddit, etc.). This
type of phenomenon often results in globally popular items, which should be widely
replicated throughout a CDN, since they are likely to experience requests from
all over the world. On the other hand, content may become popular because people
share it and talk about it, leading to some sort of viral spreading along social
connections. These connections may be real-life contacts or interactions on
online social networks, with the latter becoming increasingly common.  
       
As a result, content may easily spread from a small set of users to a vast
audience through \textit{social cascades}.  The number
of content requests generated by these social cascades is hard to estimate.
However, recent findings confirm that about 30\% of Twitter messages contain URL
links, and that YouTube is one of the most popular services present in those
URLs~\cite{RBC11:urls}. This suggests that a potentially large number of content requests might be
generated by a cascade.  The combined effect of the popularity of several online
services may cause millions of such requests.

Social cascades can be tracked and analysed; online social services can provide all the information,
including user location, to track items shared by users and to understand
the properties of a social cascade while it evolves over time.  
To exploit these aspects to improve CDN performance, we need to
understand the key geographic properties of social networks.  For
instance, we need to study and characterise how social cascades
unfold over space and analyse whether geography affects the spreading
process.  In particular, \textit{is it possible to estimate whether
cascades will spread globally or locally just from the geosocial
properties of the users participating in the cascade?}
        
In the rest of this chapter  we will answer this question, characterising how
social cascades evolve over space, and then we will 
exploit our findings to improve CDN performance.

\section{Geographic online social cascades}
\label{sec:geocascades}
In this section our aim is to extract and study social cascades over
a geographic social network.  Since online social services represent a popular
way of sharing information, a piece of information can quickly spread from
one user to another, like a virus in an epidemic: somebody shares some new content
with their friends, who might share it again, and so on.
%This phenomenon is often referred to as a \textit{social cascade}~\cite{CMG09:flickr}.  
Here we define
two measures that quantify the spatial spreading of a social cascade and the
extent of its propagation; we then use them to analyse information spreading
over Twitter, focussing on traceable pieces of information: Web links to YouTube
videos contained in tweets.

\subsection{Extracting geographic social cascades}
A cascade over a social network begins when the first user shares some
content and becomes the initiator of the cascade.  After this event,
some of their contacts may share the same content again, so that
the cascade recursively spreads over the social links.

In order to estimate the influence of the
social network on the information dissemination process, we combine the information
about social connections with the temporal information about the posts of each user.

More formally, we say that user \emt{B} was reached by a social cascade about content \emt{c}
if and only if:

\begin{itemize}
\item{ there is another user \emt{A} who posted content \emt{c} \textit{and}}
\item{ user \emt{A} posted content \emt{c} before \emt{B} posted it \textit{and}}
\item{ there was a social connection from user \emt{A} to user \emt{B} when \emt{A} posted \emt{c}.}
\end{itemize}
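As an illustration, the three conditions above can be expressed as a simple
predicate. The following Python sketch is our own, not part of the original
analysis pipeline; the data structures (a dictionary of posting times per
user--content pair and a static set of follower links) are simplifying
assumptions, in particular the requirement that the link existed when \emt{A}
posted is reduced here to a static follower set.

```python
def reached_by_cascade(posts, follows, b, c):
    """Check whether user b was reached by a social cascade about content c.

    posts:   dict mapping (user, content) -> sorted list of posting times
    follows: set of (a, b) pairs, meaning a social link from a to b exists
    """
    if (b, c) not in posts:
        return False                      # b never posted c, so no cascade step
    t_b = posts[(b, c)][0]                # first time b posted c
    for (a, item), times in posts.items():
        if item != c or a == b:
            continue
        # a must have posted c no later than b did, over a link from a to b
        if (a, b) in follows and any(t <= t_b for t in times):
            return True
    return False
```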

While this does not guarantee that the later post depends on the earlier
one, we conjecture that in most cases the two
events are correlated.  If more than one user among the social
connections of \emt{B} posted \emt{c}, we say that \emt{B} was reached by the cascade only
through the user who posted it last. Therefore, we always have only one
previous user in the cascade process. This arbitrary choice will only affect the
shape of the cascade, not its size or its overall geographic properties. 

In order to describe these social cascades we exploit the spatial social
network model defined in Section~\ref{sec:socio_spatial}. This cascade can be represented
as a tree over the  spatial network, with the initiator node as the
root of the tree.  For the same item there might be more than a single cascade.
Moreover, the same user may publish the same content at different times. In
order to take into account these details, we need to annotate the cascade links
with temporal information.  Each social cascade is represented as a tree, where
a link from user \emt{A} to user \emt{B} indicates that user \emt{B} has received some
content as a result of a social cascade from user \emt{A}. A link between \emt{A} and
\emt{B} is annotated with the time instants \emt{t_1} and \emt{t_2}: \emt{t_2} is the time
instant when \emt{B} posted content \emt{c} for the first time and \emt{t_1} is the time
instant of the last time user \emt{A} posted content \emt{c} before \emt{B} did, so that
\emt{t_2 \geq t_1}. We define such a cascade step by using a time threshold:
consecutive steps in a cascade must be within 48 hours of each other.
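The construction above (keep only the most recent previous poster, annotate
each link with \emt{t_1} and \emt{t_2}, and discard steps longer than the
threshold) can be sketched as follows. This is a hypothetical
re-implementation in Python, assuming each content item's posts are given as
(user, time) pairs and the follower graph as a static set of directed links.

```python
from collections import defaultdict

THRESHOLD = 48 * 3600  # consecutive cascade steps must be within 48 hours

def cascade_edges(posts, follows):
    """Return annotated cascade links (a, b, t1, t2) for one content item.

    posts:   iterable of (user, time) pairs for this content item
    follows: set of (a, b) pairs, meaning a social link from a to b exists
    """
    first_post = {}                      # first time each user posted the item
    by_user = defaultdict(list)          # all posting times per user
    for user, t in posts:
        by_user[user].append(t)
        first_post[user] = min(first_post.get(user, t), t)

    edges = []
    for b, t2 in first_post.items():
        best = None
        for a, times in by_user.items():
            if a == b or (a, b) not in follows:
                continue
            prior = [t for t in times if t <= t2]
            if prior:
                t1 = max(prior)          # last time a posted before b did
                if best is None or t1 > best[1]:
                    best = (a, t1)       # keep only the most recent poster
        if best is not None:
            a, t1 = best
            if t2 - t1 <= THRESHOLD:     # enforce the 48-hour step threshold
                edges.append((a, b, t1, t2))
    return edges
```

Because each user keeps at most one predecessor, the resulting edges form a
tree (or forest) rooted at the cascade initiators, as in the definition above.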

In order to investigate the geographic properties of a social cascade we
define two measures:
\begin{itemize}
\item{the \textbf{geodiversity} of a cascade is the geometric mean of the
    geographic distances between all the pairs of users in the cascade tree;
}
\item{the \textbf{georange} of a cascade is the geometric mean of the geographic
    distances between the users and the root user of the cascade tree.}
\end{itemize}

We adopt the geometric mean since geographic distances span several orders of
magnitude in our dataset.  For a given social cascade these two quantities are
correlated; however, they emphasise different properties of the cascade.
The geodiversity is computed over all the pairs of users in a cascade, regardless
of whether they are connected, whereas the georange is measured only with
respect to the cascade initiator and thus captures how close the initiator is
to the other people involved in the cascade.
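For illustration, both measures can be computed with a great-circle distance
and a geometric mean. The Python sketch below is our own (using a standard
haversine formula) and assumes each cascade is given as a list of (lat, lon)
positions with the root user first; co-located users (zero distance) would
need special handling before taking logarithms.

```python
import math
from itertools import combinations

def haversine_km(p, q):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def geometric_mean(values):
    # the geometric mean copes with distances spanning orders of magnitude;
    # assumes strictly positive values
    return math.exp(sum(math.log(v) for v in values) / len(values))

def geodiversity(positions):
    """Geometric mean of distances between all pairs of users in the cascade."""
    return geometric_mean(
        [haversine_km(p, q) for p, q in combinations(positions, 2)])

def georange(positions):
    """Geometric mean of distances between each user and the root (first) user."""
    root, rest = positions[0], positions[1:]
    return geometric_mean([haversine_km(root, p) for p in rest])
```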

\begin{figure}[t]
  \centering
  \subfigure[Tweets]{
        \label{fig:video_tweet_popularity}
      \includegraphics[width=\doubleplotscale]{images/popularity_tweets.pdf}}
  \subfigure[Users]{
        \label{fig:video_user_popularity}
      \includegraphics[width=\doubleplotscale]{images/popularity_users.pdf}}
    \caption{Number of tweets containing a given video link (a) and number of users tweeting
            a given video link (b).}
        \label{fig:video_popularity}
\end{figure}

\subsection{Cascades of YouTube links on Twitter}
In this chapter we use the social network  extracted from the Twitter dataset
already described in Section~\ref{sec:metrics}.  We extract a directed graph
from our traces  where each node represents a user with a geographic location
and a link from user \emt{A} to user \emt{B} means that user \emt{A} follows
user \emt{B}. We recall that this graph has \emt{N=409,093} nodes and
\emt{K=182,986,353} directed links.

Through the Twitter API used to collect this dataset we also had access to the
3,200 most recent tweets of each
user in our geographic Twitter graph. We downloaded these tweets for all the
users in the graph, obtaining 334,407,185 tweets. The duration of the data
crawling was 12 days, from February 1 to February 11, 2010. For each tweet we
have crawled the author, the time when it was sent and the actual content of the
message. From these tweets we have isolated 570,617 messages containing a
direct link to a YouTube video.  We have also extracted the messages
containing URLs shortened by URL-shortening services and resolved them,
obtaining an additional 2,332,390 messages with a YouTube link.

\begin{figure}[t]
  \centering
\includegraphics[width=\plotscale]{images/cascade_size_cum.pdf}
    \caption{Complementary Cumulative Distribution Function (CCDF) of the size of
      social cascades.}
        \label{fig:virality}
\end{figure}
Thus, after removing invalid YouTube links, we extract a total of 2,903,007
tweets containing a valid direct link to a YouTube video. These links point to
1,111,586 different YouTube videos, hence some videos are contained in more than
a single tweet. The average number of tweets per YouTube video is 2.61.  In
Figure~\ref{fig:video_popularity} we present the popularity distribution for the video
links we have extracted: both the distribution of the number of tweets
containing a given video link and the number of different users tweeting a given
video link have heavy tails. Thus, only a very small number of videos
are tweeted more than 4,000 times or by thousands of different users, while the
majority are tweeted only once or twice by a few users. Such a popularity
distribution can greatly affect content delivery, since popular
items can easily dominate in terms of number of requests. Furthermore, every
tweet potentially spawns many more actual video requests, since all the author's
followers can see the link and follow it. While difficult to estimate, this
additional traffic might constitute a large fraction of
social-driven Web traffic.

We then apply the cascade definition presented earlier
to the tweets and extract 84,337 social cascades for
63,798 different videos. Each cascade involves the initiator and at least
one other user.  Unfortunately, we have no information about when a user started
to follow another, so we assume that all the social relationships in our
social graph were already in place when each tweet was sent.

\begin{figure}[t]
  \centering
\includegraphics[width=\plotscale]{images/dist_delay.pdf}
    \caption{Cumulative Distribution Function (CDF) of time delay between two consecutive tweets
        and total cascade duration (from the first to the last tweet). Cascade
            duration is shown only for cascades with at least two users in
            addition to the initiator.}
        \label{fig:cascade_delay}
\end{figure}


 

\subsection{Analysis of social cascades}
We define the \textit{size} of the cascade as the number of users involved in
it, including the initiator.  In Figure~\ref{fig:virality} we report the
distribution of the cascade size: we notice again a long tail, with
more than 60,000 cascades involving only two nodes and a few cascades reaching
up to hundreds of users.  This distribution shows that large cascades are
rare, but when they do take place they can become
extremely large. Again, it is worth noting that social cascades include only
users who have tweeted a certain video link; however, each tweet can be viewed by all the
followers of the author, thus the potential audience that a YouTube video may
reach by means of a social cascade is much larger, even if only a few users are
involved.

In Figure~\ref{fig:cascade_delay} we illustrate the distribution of the
\textit{time delay} between two consecutive tweets in a cascade. About 40\% of
the tweets in cascades have a delay of about 15 minutes from the previous
message, with around 10\% having a delay of around 2 minutes.  This result shows
that YouTube links can spread on Twitter on a time scale of minutes, although
further spreading can still occur after several hours. This indicates that
links to videos can quickly spread over the social network, potentially leading
to many views in a short period of time.  In Figure~\ref{fig:cascade_delay} we
show the distribution of cascade duration from the first tweet to the last
tweet for each cascade with at least 2 users in addition to the initiator:  about 80\%
of the cascades end within 24 hours, with 40\% ending in under 3 hours.
\begin{figure}[t]
  \centering
\includegraphics[width=\plotscale]{images/cascade_edge.pdf}
    \caption{Cumulative Distribution Function (CDF) of cascade step distance and of
        social connection distance: social cascades take place over short-range
            social connections.
          Logarithmic binning has been adopted to estimate the
    number of samples in each range of values.}
        \label{fig:cascade_step}
\end{figure}

In Figure~\ref{fig:cascade_step} we show the distribution of the \textit{geographic
distance} between authors of two consecutive tweets in a social cascade.  Around
10\% of cascade steps are shorter than 1 km, with 20\% of them
shorter than 100 km and more than 30\% shorter than 1,000 km.  This result is in
slight contrast with the distribution of link lengths of the Twitter network,
already presented in Figure~\ref{fig:four_edge_length_dist} and again shown in
Figure~\ref{fig:cascade_step} to aid comparison: even if
fewer than 5\% of the social connections are shorter than 100 km, within 
cascade steps this fraction increases up to 20\%. Content spreading through social
cascades is more likely than expected to travel over geographically short-range
social connections rather than over the more numerous long-distance links. 
\begin{figure}[t]
  \centering
\includegraphics[width=\plotscale]{images/geo_dist_3.pdf}
    \caption{Cumulative Distribution Function (CDF) of geodiversity and georange 
        for social cascades with at least 2 users
            after the initiator.
          Logarithmic binning has been adopted to estimate the
    number of samples in each range of values.}
        \label{fig:cascade_geo}
\end{figure}

In Figure~\ref{fig:cascade_geo} we show the distribution of
\textit{geodiversity}
and \textit{georange} for all the social cascades that involve at least two
users in addition to 
the initiator. About 40\% of these cascades have geodiversity lower than
1,000 km, with around 20\% of geodiversity values lower than 300 km. Thus, even
though many cascades reach a broad audience, some of them remain geographically
limited. On the other hand, about 90\% of georange values are smaller than 1,000
km, with about 30\% of values smaller than 100 km. This is an indication that a
cascade may take place in a broad region but with each user still close to the
initiator. 

\begin{figure}[t]
  \centering
  \subfigure[]
{\includegraphics[width=\doubleplotscale]{images/geodiv_1.pdf}}
  \subfigure[]
{\includegraphics[width=\doubleplotscale]{images/geodiv_2.pdf}}
    \caption{Average geodiversity of a social cascade as a function of the
        average locality of the first nodes in the cascade: locality of the
            first node (a) and of the first two nodes (b).
            Error bars show standard deviation around the average.}
        \label{fig:avg_geodiv}
\end{figure}

\subsection{Geosocial measures and social cascades}
Finally, we are interested in properties of a social cascade that may help
us predict its geographic spreading from the very first messages that
are tracked. Towards this aim, we use one of the two geosocial measures
introduced in Section~\ref{sec:metrics}: node locality. Recall that this
measure indicates whether a user has social connections mainly over short-range
distances, with a node locality close to 1, or over longer distances, with a
value closer to 0. The node locality of a user thus offers an indication of
the potential geographic spread of the information passing through that user.

We investigate whether the node locality of the first
users who participate in a social cascade is related to the final
geodiversity and georange values.
We report in Figure~\ref{fig:avg_geodiv} the average cascade geodiversity as a
function of the average locality of the first users involved in the
cascade. We observe that even the locality of the first user alone is
correlated with the geographic spread of the cascade, and higher locality
values are also associated with lower variance in this spread. Moreover, by
including the locality of the second user we obtain a stronger
relationship.
%Lower locality values become noisy when we add the third user, while higher locality
%values are unaffected.  
A similar result can be seen in
Figure~\ref{fig:avg_georange} for the georange: in this case 
the correlation is clearer, with less variance especially for high locality
values. Both the correlation with the average value and the reduction in
variance matter, since together they make the estimate more reliable
for higher values of node locality.


\begin{figure}[]
  \centering
  \subfigure[]
{\includegraphics[width=\doubleplotscale]{images/georange_1.pdf}}
  \subfigure[]
{\includegraphics[width=\doubleplotscale]{images/georange_2.pdf}}
    \caption{Average georange of a social cascade as a function of the
        average locality of the first nodes in the cascade: locality of the
            first node (a) and of the first two nodes (b).
            Error bars show standard deviation around the average.}
        \label{fig:avg_georange}
\end{figure}

Thus, the final properties of a cascade can be estimated from the users
involved in its initial stages. The geographic and social properties of the
initiator alone are sufficient to understand
whether a cascade will spread locally or globally, and by taking into account a few
more steps we can give a more accurate estimate of the final
outcome.

Given the importance of social cascades and their geographic properties, being
able to predict the geographic range a cascade will reach from its early
properties makes it possible to exploit these findings to improve the design
of cache replacement strategies for CDNs.

\section{Distribution of content using geosocial measures}
\label{sec:cdn_model}
We have described the geographic properties of social cascades. In
this section we exploit these findings in the design of a proof-of-concept
CDN that adopts geosocial measures to improve caching performance.

\subsection{Assumptions and model}
We envisage a single entity that can both access information about content shared by
users on social networks and control the CDN which delivers the content that
users are sharing. This can be mapped to reality in various ways: i) CDNs may
have access to information from online social services about cascade dynamics,
which is reasonable since these services provide the content sharing platform, or
ii) online social networks and content providers may plausibly merge into
single entities or cooperate in the future (e.g., companies like
Facebook and Google already offer social networking features and serve online
content).

We model our system as a collection of server clusters placed around
the planet.  Each cluster contains a certain number of servers: we
assume that all servers within the CDN have identical properties.
The only difference between clusters is the number of
deployed servers.  We assume that there is a central catalogue of content
items: clients from all over the world request content items from the CDN
and they are redirected to the geographically closest server. If the
server already contains the requested item, it is immediately served.
Otherwise, the item is retrieved from another portion of the CDN and
served.  

We assume that, as observed in real systems~\cite{Lei09:cdn}, different clusters
are interconnected by a dedicated network. Then, we assume that it is faster to
move content between servers to bring an item as close as possible to the client,
than to redirect the request to another server further away that
already holds a copy. This seems plausible, even if geographic distance may
not always be the only factor influencing performance. 

Server clusters act as caches: they keep copies of already requested items for
future requests, but they have finite storage.  A cache replacement strategy is
used to remove an item from the cache when it is full.  We also assume that
the servers within a cluster coordinate to act as a single large cache.
Therefore, every server can host up to \emt{k} items and if there are \emt{N} servers in
a cluster, that cluster is equivalent to a single cache able to host \emt{kN}
different items. This simplifies the definition of the model but still captures
the heterogeneity of cluster sizes around the planet.   We do not model file
size: we assume that the size of a file does not vary much across the items, as
we have observed in our specific dataset of YouTube videos.  
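A minimal sketch of this model: clients are redirected to the geographically
closest cluster, and a cluster of \emt{N} identical servers, each holding up
to \emt{k} items, behaves as a single cache of capacity \emt{kN}. This is our
own illustrative Python, not the simulator's code; the coordinates in the
usage example below are placeholders, the eviction on a miss is a deliberately
naive stand-in for the caching policies defined later, and the haversine
distance is a standard formula.

```python
import math

def haversine_km(p, q):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

class Cluster:
    def __init__(self, location, n_servers, k):
        self.location = location
        self.capacity = n_servers * k    # N servers acting as one cache of kN items
        self.cache = set()

    def request(self, item):
        """Serve a request; return True on a cache hit, False on a miss."""
        if item in self.cache:
            return True
        # miss: fetch the item from elsewhere in the CDN and cache it locally
        if len(self.cache) >= self.capacity:
            self.cache.pop()             # placeholder eviction (arbitrary item)
        self.cache.add(item)
        return False

def closest_cluster(clusters, client_location):
    """Redirect a client to the geographically closest server cluster."""
    return min(clusters, key=lambda c: haversine_km(c.location, client_location))
```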

\begin{table}[t]
\centering
    \begin{tabular}{|c|c|c||c|c|c|}
    \hline
        Location & Country & Servers & Location & Country & Servers \\
        \hline                        
    \hline
        Washington & USA & 552 & Frankfurt & Germany & 314 \\
        \hline                       
        Los Angeles & USA & 523 &  London & UK & 300 \\
        \hline                
        New York & USA & 438 &  Amsterdam & Netherlands & 199\\ 
        \hline                 
        Chicago & USA & 374 &   Tokyo & Japan & 126 \\
        \hline               
        San Jose & USA & 372 &  Toronto & Canada & 121 \\
        \hline              
        Dallas & USA & 195 &     Paris & France & 120 \\
        \hline             
        Seattle & USA & 151 &    Hong Kong & Hong Kong & 83\\
        \hline            
        Atlanta & USA & 111 &   Changi & Singapore & 53 \\
        \hline           
        Miami & USA & 111  &     Sydney & Australia & 1 \\
        \hline                   
        Phoenix & USA & 3 & & & \\          
    \hline                        
\end{tabular}
    \hfill
\caption{Geographic distribution of the server clusters in the Limelight
    network.}
\label{tab:cdn_servers}
\end{table}

\subsection{Model parameters}
In order to ground our model in reality we have parametrised it with
the real properties of Limelight, the commercial CDN once used by YouTube to
deliver content to users, as measured by Huang et
al.~\cite{HWL08:limelight}\footnote{This paper has been withdrawn by
  Microsoft due to some criticisms about the system performance results
    presented. However, we only use information about server locations from this
    work.}. 
Limelight has clusters of servers deployed at 19 different locations around the
world and each cluster has a different number of servers. 
In Table~\ref{tab:cdn_servers} we report the details of each server cluster: 
Limelight deploys 2,830 out of its 4,147 servers in the United States, where there are 10
clusters.  Europe and Asia are served only by seven clusters
in total and Australia only by one, while the rest of the world does not contain any
cluster. 

In our model, cache size should be interpreted with respect to the total number
of items present in the system and not as an absolute number, since we do not
have access to the whole YouTube item catalogue. Hence, we will also express
cache size as a percentage of the total data catalogue.  As an example, since we
have about 1 million videos in our dataset, a cache size of 100 items is
comparable to a cache that can host about 0.01\% of a real catalogue; in the
case of YouTube, with hundreds of thousands of videos added every day, there are
more than 100 million videos, hence this would represent a cache size with more
than 10,000 different videos. 
\subsection{Content caching policies}
We now define the caching policies adopted by our model to store and replace
content within the servers.  A server cluster adopts a \textit{cache replacement
strategy} to remove an item when the cache is full and a new request
arrives. Each strategy assigns priorities to the items in memory and, when a
deletion is needed because the cache is full, the item with the lowest
priority is removed. The priority of an item might be updated whenever a
request for that item is issued.

Our approach is to use standard caching policies and then augment them with
geosocial information. Each policy assigns a priority \emt{P(v)} to a video \emt{v}
and, when a video has to be removed, that with the lowest priority is chosen for
deletion. A random choice is made when more than one video has the lowest priority.  We
adopt three different caching policies: \textit{Least-Recently-Used
    (LRU)}, \textit{Least-Frequently-Used (LFU)} and \textit{Mixed}.  
    
In LRU the priority of a video \emt{v} is given by \emt{P(v) = clock}, where
\emt{clock} is an internal counter incremented by one whenever a new item is
requested. This policy provides a simple ageing effect: when an item is not
requested for a long time, it is eventually removed. However, it does not take
item popularity into account. In LFU the priority of a video \emt{v} is given
by \emt{P(v) = Freq(v)}, where \emt{Freq(v)} is the number of times video \emt{v}
has been requested since it was last stored in the cache. LFU favours popular
content: an item that receives a large number of requests stays in the cache
for a long time. However, LFU is less flexible: an item that was popular in the
past tends to stay in the cache even if it is no longer requested. The Mixed
policy combines LRU and LFU features: the priority of video \emt{v} is given by
\emt{P(v) = clock + Freq(v)}, balancing temporal and popularity
effects~\cite{Che98:cache}. In this case \emt{clock} starts at 0 and is updated
at each replacement with the priority value of the removed file. Thus a video
increases its priority when it is requested many times but, if requests stop,
it is eventually removed from the cache.
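The three replacement strategies can be sketched in a few lines of Python. This is an illustrative reconstruction from the definitions above, not the simulator's actual code; the class name \texttt{Cache} and its interface are our own.

```python
class Cache:
    """Sketch of a fixed-capacity cache with LRU, LFU or Mixed replacement."""

    def __init__(self, capacity, policy="LRU"):
        self.capacity = capacity
        self.policy = policy      # "LRU", "LFU" or "Mixed"
        self.priority = {}        # video id -> P(v)
        self.freq = {}            # video id -> Freq(v)
        self.clock = 0

    def request(self, video):
        hit = video in self.priority
        if self.policy == "LRU":
            self.clock += 1       # LRU: clock advances on every request
        if not hit and len(self.priority) >= self.capacity:
            # evict the item with the lowest priority (ties broken arbitrarily)
            victim = min(self.priority, key=self.priority.get)
            if self.policy == "Mixed":
                # Mixed: clock is updated with the evicted item's priority
                self.clock = self.priority[victim]
            del self.priority[victim]
            del self.freq[victim]
        # Freq(v) counts requests since the video was last stored
        self.freq[video] = self.freq.get(video, 0) + 1 if hit else 1
        if self.policy == "LRU":
            self.priority[video] = self.clock
        elif self.policy == "LFU":
            self.priority[video] = self.freq[video]
        else:  # Mixed: P(v) = clock + Freq(v)
            self.priority[video] = self.clock + self.freq[video]
        return hit
```

For instance, with LRU and capacity 2, requesting \texttt{a, b, a, c} evicts \texttt{b}, the least recently used item; under LFU the same sequence would also evict \texttt{b}, the least frequently used.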

We then define two \textit{priority weights} for each video \emt{v}, based on the
geosocial characteristics of the users participating in the social cascades
involving that video, measured using node locality:
\begin{itemize}
\item{\textbf{Geosocial}: the weight of video \emt{v} is given by the
    sum of the node locality values of all the users who have posted a message
        about it, even if they are not involved in a social cascade;}
\item{\textbf{Geocascade}: the weight of video \emt{v} is given by the
    sum of the node locality values of all the users participating in the
        item's social cascade (or cascades, if the item appears in
                more than one cascade).}
\end{itemize}

These weights capture the idea that if a video is tweeted many times by users
with high node locality values, it is likely spreading within a local region,
so future requests will hit the same content server. While the first weight
takes into account all the messages regarding a particular content item, the
second uses only the messages generated by a social cascade. By using two
different weights based on geosocial information, we can investigate the
contribution of social cascades beyond the purely geographic information of
social ties.

The weight of a video is updated for every tweet linking to it, according to
whether or not that tweet belongs to a cascade. For every request, content
servers also retrieve the video's weight and multiply it by the priority
assigned by the underlying cache replacement policy. Hence, we have three
versions of every cache replacement policy: with no weight, with the Geosocial
weight and with the Geocascade weight.
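A minimal sketch of the two weights and of how a weight scales the base priority follows. The function names and data layouts are our own illustration (tweets as \texttt{(user, video)} pairs, cascades as lists of participating users); whether repeated posts by the same user count more than once is a detail the sketch glosses over.

```python
def geosocial_weight(video, tweets, node_locality):
    """Sum of node locality over all users who posted a message about the
    video, whether or not the post belongs to a cascade."""
    return sum(node_locality[u] for (u, v) in tweets if v == video)

def geocascade_weight(video, cascades, node_locality):
    """Sum of node locality over users participating in the video's
    cascade(s); `cascades` maps a video to a list of cascades, each a
    list of participating users."""
    return sum(node_locality[u]
               for cascade in cascades.get(video, [])
               for u in cascade)

def weighted_priority(base_priority, weight):
    # the server multiplies the video's weight by the priority assigned
    # by the underlying replacement policy (LRU, LFU or Mixed)
    return base_priority * weight
```

With no weight the multiplier is simply 1, recovering the unweighted policies.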

\begin{figure}[t]
  \centering
\includegraphics[width=\plotscale]{images/server_hits.pdf}
    \caption{Fraction of video requests handled by each cluster and fraction of
        servers contained in each cluster. Different workloads do not
            significantly change the distributions of requests.}
        \label{fig:server_req_stats}
\end{figure}
\section{Evaluation}
\label{sec:cdn_evaluation}
In this section we test our idea that information extracted from geographic social cascades
can effectively be exploited to improve the performance of CDNs.  We have
investigated through simulation how different cache replacement policies impact
the performance of the system.  Our results show that global system performance
can be improved with respect to standard policies, which means
potentially avoiding millions of video file transfers per day.  

\subsection{Simulation strategy}
In our simulation we generate a sequence of content requests to the CDN directly from the
Twitter messages in our dataset. We assume that every video contained in a
Twitter message is requested by each follower of the author with a certain
probability \emt{p} and with a random temporal delay drawn from the same
distribution as the delay between cascade steps. This assumption is simple
and may be far from reality, as the real load is likely to depend on the
particular user and the particular content item; however, we do not have
precise information about the real traffic requests spawned by Twitter
messages, and our simulation results show performance improvements for
every value of \emt{p} we adopted. We generate 5 different workloads, corresponding to
the values \emt{p=0.001, 0.002, \ldots, 0.005}, and we run every workload 20
times, averaging the results.
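The workload generation can be sketched as follows. This is a hypothetical reconstruction: the real delay distribution is empirical, stubbed here as an exponential with an assumed mean, and all names are ours.

```python
import random

def generate_requests(tweets, followers, p, mean_delay=3600.0, seed=42):
    """Turn tweets into a time-ordered request trace.

    tweets: iterable of (author, video, timestamp) triples
    followers: dict mapping author -> list of follower ids
    p: probability that a follower requests the linked video
    """
    rng = random.Random(seed)   # seeded so each workload is reproducible
    requests = []
    for (author, video, t) in tweets:
        for f in followers.get(author, []):
            if rng.random() < p:
                # stand-in for the observed cascade-step delay distribution
                delay = rng.expovariate(1.0 / mean_delay)
                requests.append((f, video, t + delay))
    requests.sort(key=lambda r: r[2])  # replay requests in temporal order
    return requests
```

Running this once per value of \emt{p}, with fresh seeds, yields the randomly generated workloads averaged in the figures.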




We always route a request to the server cluster closest to the user.
The geographic distribution of the requests does not change with \emt{p},
since it is only influenced by the geographic distribution of Twitter users,
which is the same across workloads. As shown in
Fig.~\ref{fig:server_req_stats}, some servers receive much more traffic than
others: for example, the cluster in Dallas accounts for more than 11\% of
global requests. Additionally, some locations receive a large fraction of
traffic even though they contain only a small number of servers. These
properties may affect the performance of the cache replacement strategies
at different locations.
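Routing to the closest cluster reduces to a nearest-neighbour lookup over the 19 cluster locations, for example using the haversine great-circle distance. The coordinates below are illustrative, not the exact Limelight locations.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # 6371 km = mean Earth radius

def closest_cluster(user_lat, user_lon, clusters):
    """clusters: dict mapping cluster name -> (lat, lon)."""
    return min(clusters,
               key=lambda c: haversine_km(user_lat, user_lon, *clusters[c]))
```

For example, a user in New York would be routed to a Dallas cluster rather than a London one.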


\begin{figure}[t]
  \centering
  \subfigure[No weight]
{\includegraphics[width=\tripleplotscale]{images/cache_hits_0_0.pdf}}
  \subfigure[Geosocial]
{\includegraphics[width=\tripleplotscale]{images/cache_hits_0_1.pdf}}
  \subfigure[Geocascade]
{\includegraphics[width=\tripleplotscale]{images/cache_hits_0_2.pdf}}
    \caption{Percentage of total hits with respect to the infinite cache case as
      a function of cache size for the \textbf{LRU} cache policy and different weights:
        no weight (a), Geosocial weight (b) and Geocascade weight (c).  
        Cache size is expressed as a fraction of the
        entire data catalogue.  Every simulation is run 20 times with randomly
        generated workloads and the average is presented (standard deviation is
            negligible and not shown).}
        \label{fig:results_cache_lru}
\end{figure}

\begin{figure}[t]
  \centering
  \subfigure[No weight]
{\includegraphics[width=\tripleplotscale]{images/cache_hits_1_0.pdf}}
  \subfigure[Geosocial]
{\includegraphics[width=\tripleplotscale]{images/cache_hits_1_1.pdf}}
  \subfigure[Geocascade]
{\includegraphics[width=\tripleplotscale]{images/cache_hits_1_2.pdf}}
    \caption{Percentage of total hits with respect to the infinite cache case as
      a function of cache size for the \textbf{LFU} cache policy and different weights:
        no weight (a), Geosocial weight (b) and Geocascade weight (c).  
        Cache size is expressed as a fraction of the
        entire data catalogue.  Every simulation is run 20 times with randomly
        generated workloads and the average is presented (standard deviation is
            negligible and not shown).}
        \label{fig:results_cache_lfu}
\end{figure}

\begin{figure}[t]
  \centering
  \subfigure[No weight]
{\includegraphics[width=\tripleplotscale]{images/cache_hits_2_0.pdf}}
  \subfigure[Geosocial]
{\includegraphics[width=\tripleplotscale]{images/cache_hits_2_1.pdf}}
  \subfigure[Geocascade]
{\includegraphics[width=\tripleplotscale]{images/cache_hits_2_2.pdf}}
    \caption{Percentage of total hits with respect to the infinite cache case as
      a function of cache size for the \textbf{Mixed} cache policy and different weights:
        no weight (a), Geosocial weight (b) and Geocascade weight (c).  
        Cache size is expressed as a fraction of the
        entire data catalogue.  Every simulation is run 20 times with randomly
        generated workloads and the average is presented (standard deviation is
            negligible and not shown).}
        \label{fig:results_cache_mixed}
\end{figure}


\subsection{Global performance}
First we investigate the performance of different policies with respect
to the case of infinite cache size, i.e., in conditions where no item is ever
removed from the cache.  The number of hits in this case is the maximum
achievable, both on each cluster and globally.

As a global performance metric for our system we consider all the hits on all the
clusters; every request is directed to the closest server and there it may
result in a hit or a miss. For each cache replacement strategy and for each different
workload, we compute the total number of hits obtained and we take the ratio
between this value and the performance with infinite cache.  This metric shows
how different policies react when some parameters of the system are changed, but
it does not emphasise differences in their performance. 
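The infinite-cache benchmark can be computed directly from the routed request trace: with unbounded storage, only the first request for a video at each cluster is a miss. A sketch, with our own names and a trace of \texttt{(cluster, video)} pairs:

```python
def infinite_cache_hits(requests):
    """Maximum achievable hits: with an infinite cache, every repeated
    request for a video at the same cluster is a hit."""
    seen = set()
    hits = 0
    for cluster, video in requests:
        if (cluster, video) in seen:
            hits += 1
        seen.add((cluster, video))
    return hits

def relative_performance(total_hits, requests):
    """Ratio between a policy's total hits and the infinite-cache hits."""
    best = infinite_cache_hits(requests)
    return total_hits / best if best else 0.0
```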

In Figures~\ref{fig:results_cache_lru}-\ref{fig:results_cache_mixed} we show
the change in system performance as a function of cache size and for different
workloads; when the size increases, every policy steadily improves its
performance. 
%For instance, with
%cache size equal to 0.02\% of , performance with the workload for \emt{p=0.01} are
%between 50\% and 70\% for all the cache strategies.  
Larger workloads have worse performance, but differences between them disappear at
larger cache sizes.  Moreover, as the cache size grows larger, all workloads
reach a plateau, since increasing the cache size beyond a certain limit yields
diminishing returns. This is because a portion of the content is requested
only a few times, and for such content caching
policies can hardly offer advantages.  In addition, we observe that while using
no weights results  in the lowest hit ratio, by adopting instead the Geosocial
and Geocascade weights we achieve noticeable improvements, because the servers
are now able to identify geographically popular items and keep them in memory for
future local requests. However, we need a direct comparison to appreciate the
difference in performance achieved by using these weights. 

\begin{figure}[ht]
  \centering
  \subfigure[Geosocial]
{\includegraphics[width=\doubleplotscale]{images/cache_gain_0_0_0_1.pdf}}
  \subfigure[Geocascade]
{\includegraphics[width=\doubleplotscale]{images/cache_gain_0_0_0_2.pdf}}
    \caption{Increment (\%) of average performance with respect to the case without weight
        as a function of the cache size and for different workloads when the
          \textbf{LRU}
            strategy is used with Geosocial weight (a) and with Geocascade
            weight (b).} \label{fig:results_gain_1}
\end{figure}

\begin{figure}[ht]
  \centering
  \subfigure[Geosocial]
{\includegraphics[width=\doubleplotscale]{images/cache_gain_1_0_1_1.pdf}}
  \subfigure[Geocascade]
{\includegraphics[width=\doubleplotscale]{images/cache_gain_1_0_1_2.pdf}}
    \caption{Increment (\%) of average performance with respect to the case without weight
        as a function of the cache size and for
        different workloads when the \textbf{LFU} strategy is used with Geosocial weight (a) and with
            Geocascade weight (b).}
        \label{fig:results_gain_2}
\end{figure}

\begin{figure}[ht]
  \centering
  \subfigure[Geosocial]
{\includegraphics[width=\doubleplotscale]{images/cache_gain_2_0_2_1.pdf}}
  \subfigure[Geocascade]
{\includegraphics[width=\doubleplotscale]{images/cache_gain_2_0_2_2.pdf}}
    \caption{Increment (\%) of average performance with respect to the case without weight
        as a function of the cache size and for
        different workloads when the \textbf{Mixed} strategy is used with Geosocial weight (a) and with
            Geocascade weight (b).}
        \label{fig:results_gain_3}
\end{figure}

\subsection{Policy comparison}
In order to understand which policy provides better results, we 
evaluate the relative performance improvements between the weighted policies and
the other strategies. 

We illustrate in Fig.~\ref{fig:results_gain_1} the performance increment when we
augment the LRU strategy with geosocial information.  Geosocial-LRU reaches a
maximum 55\% performance increment, while increasing the cache size results in a smaller
increment. Instead, Geocascade-LRU achieves more than a 70\% increment over LRU for
smaller cache sizes, while the benefit decreases as cache size increases.
In Fig.~\ref{fig:results_gain_2} we investigate how the use of priority weights
improves LFU. Geosocial-LFU achieves a top increment of about 50\%
over LFU with small cache sizes, with the increment decreasing as the
size increases.  However, the improvement is larger in the case of the Geocascade
weight, with a maximum increment of 70\% and a smaller decrease with cache size.
Finally, in Fig.~\ref{fig:results_gain_3} we investigate the difference between
the Geocascade and the Geosocial weight for the Mixed cache policy. Again, 
the Geosocial weight gives a maximum improvement of 50\%, while
the Geocascade one improves the baseline performance by up to 65\%.

Both weights improve cache performance, since they recognise content that is
more likely to become popular only locally and to result in many requests to the
same local servers. Indeed, items that are popular on a global scale may be
requested from different servers around the planet and may not trigger cache
prioritisation in single CDN clusters. Furthermore, including information about
the spreading of social cascades appears to be  a better predictor of local
popularity, since the Geocascade weight gives higher performance than 
Geosocial.
It is also important to note that the performance improvement shrinks as
the cache becomes larger. Indeed, with a cache so large that it can host 0.1\% of
the entire data corpus, it becomes easier to accommodate more items and
performance quickly reaches a saturation point, as seen in
Figures~\ref{fig:results_cache_lru}-\ref{fig:results_cache_mixed}.
For a given cache size, larger workloads show a larger relative improvement,
since their absolute performance is lower.

%\begin{figure}[h]
%  \centering
%    \includegraphics[width=0.7\columnwidth]{images/world_stats.pdf}
%    \caption{Percentage of hits with respect to the
%        number of hits with infinite cache size in different locations for the Geocascade-LRU,
%               Geosocial-LFU and Geocascade-Mixed policies. Workload for \emt{p=0.001} and cache
%                   size equal to 100.}
%        \label{fig:results_clusters}
%\end{figure}
%
%\subsection{Geographic Aspects}
%We finally consider the performance of server clusters around the world. Since clusters in
%different places experience different workloads, we investigate whether 
%the performance improvement that we obtain is homogeneous over the different
%locations.
%
%We adopt again the infinite cache case as benchmark and we compute the
%proportion of hits achieved in each cluster in that case. In the case of LFU,
%Fig.~\ref{fig:results_clusters} shows that the use of Geosocial and
%Geocascade weights outperforms the case without any weight in every
%location. Similar results are obtained for LRU and Mixed cache
%strategies and are not shown here.  This is an indication that the tracking of social
%cascades brings a performance improvement across different geographic
%locations, regardless of cluster size, number of requests and
%distance between users and clusters.  As an example, in the USA there
%is a high density of both clusters and requests, whereas in Europe
%and Asia there are less requests and less clusters. At the same time,
%users in South America have to rely on distant content servers
%located in the USA.  Nonetheless, our framework based on tracking the
%geosocial properties an evolving cascade is always able to discern
%what type of content should be prioritized.
%
%We also observe that the cluster in Tokyo
%always holds the best result, with Washington and Los Angeles just behind it.
%This is consistent with our previous traffic analysis, reported in
%Fig.~\ref{fig:server_req_stats}: these three clusters are over-provisioned since
%they serve a fraction of all the requests which is smaller than the fraction of
%servers they include.  Moreover, Tokyo may obtain better performance because
%local users are more interested in local content, due to cultural and language
%differences with the rest of the world.  On the other hand, clusters such as
%Sydney and Phoenix score less hits: again, these clusters experience a large
%number of requests compared to their resources (only 1 and 3 servers
%        respectively).

\section{Discussion and implications}
\label{sec:disc6}
The main result presented in this chapter is that locality information from
social cascades can be extracted and used to improve large-scale system design.
We see a great potential in exploiting geographic properties of online user
communication. Geographic locality of online interactions can be exploited to do
pre-fetching of Web content, caching of normal HTTP traffic, datacentre design
and placement and even to devise security mechanisms~\cite{WPD10:locality,
  BSM10:findme,THT12:tailgate}.

In addition, our approach can be generalised to be used on a number of different
social platforms. The information needed can be efficiently
exposed by an anonymised API, which could provide only the aggregated geosocial
measures corresponding to a given cascade of a certain shared item. Moreover,
information coming through public Twitter feeds, private Facebook posts
and emails can be anonymised and exposed in order to classify items
according to their geographic popularity and feed this information into
CDNs.  As with any other anonymisation procedure, this approach would
nonetheless carry associated risks.

%The main result of this work is that locality information from
%social cascades can be extracted in order to improve caching performance in CDNs. 
In the specific example we have discussed, improvement largely depends on cache
size: when it is possible to cache a considerable portion of the whole item
catalogue, cache policies matter less and the improvement obtained by social
information is smaller. However, if cache size is not sufficient to store that
portion, because it is too small with respect to item size or because the
catalogue contains too many items, geosocial properties can make a difference.
Moreover, if in the future social cascades can be tracked on a larger scale, the
advantage given by geosocial measures may impact not only CDN caching policies
but other large-scale systems in general.

As already mentioned, our results are obtained using a sample from a single
service.
Although it is generally unknown which portion of the traffic directed to CDNs is
coming from online social networks, it is not inconceivable that this traffic may become
considerable; the fraction of messages containing content in our
dataset is already appreciable and the number of users of social services is still
increasing.

An improvement in the number of cache hits for requests coming from these
services, as observed in our simulation, would mean that millions of daily
video requests could be served locally instead of being transferred over the
network. In addition, videos are getting larger, as users demand higher
quality, meaning bulkier files. Caches need to grow ever larger to cope with
this trend or, alternatively, to cache fewer items. This increasingly affects
the running costs of modern CDNs.
   For instance, Limelight runs a global private fibre-optic network that avoids
   sending files over costly public Internet connections. As a result,
   any reduction in the number of files sent across the network would reduce the
   investments needed in network infrastructure, which account for a considerable
   part of the total expenditure of a CDN~\cite{QWB09:bill}.

\section{Related work}
\label{sec:related6}
Two research areas are related to this discussion: the analysis
of online social cascades and the design of large-scale CDNs.

\paragraph{Social Cascades}
Social cascades have been studied in sociology, economics and marketing for more
than 60 years; an eminent example is the threshold model proposed by
Granovetter~\cite{Gra87:threshold}.  Recently, thanks to the availability of
large datasets, many other studies have been presented. 
%~\cite{MMG07:measurement,SFK09:understanding}.  
In~\cite{AA05:blogspace} Adar
and Adamic analyse the diffusion of information in blogs by applying epidemic
models of information spreading.  Similarly, a characterisation of cascades
using data from Flickr, a photo-sharing website, is illustrated
in~\cite{CMG09:flickr}.

Finding ways of harnessing the potential of information constantly generated by
users is a key and promising research area for the networking community and it
is still largely unexplored. An initial proposal was presented by Sastry et
al.~\cite{SYC09:buzztraq}: their suggestion is to place replicas of
items already posted by a user closer to the location of friends, anticipating future
requests.  Our proposal is a different example that uses information extracted
from social cascades to effectively improve the performance of large-scale
networked systems, and, more specifically, of CDNs. In addition, we present a
large-scale study of geographic social cascades that supports our claims. 

\paragraph{Content Distribution Networks}
Given the success and economic importance of
CDNs, many solutions to improve the performance of this class of systems have
been proposed  with respect to the location-aware selection of servers.  Key
examples of experimental systems in this area are Meridian~\cite{WSS05:meridian}, a
node selection mechanism based on network locality, and OASIS~\cite{FLM06:oasis}, an
overlay anycast service infrastructure. 
WhyHigh is a system that redirects queries based on latency measurements of
Google's CDN~\cite{KMS09:whyhigh}.  It relies not only
on geographic proximity but also on measurements of client latencies
across all CDN nodes, in order to identify prefixes with
inflated latencies.

While these systems have used some knowledge of the geographic properties of
traffic load to improve performance, we have also taken advantage of information
from online user interaction to enhance the content placement decision process. 

\section{Summary}
\label{sec:concl6}
Taking into account how
online social services are affected by spatial distance could improve
system design, as we argued in Section~\ref{subsec:system_design}.
Already in Chapter~\ref{ch:prediction} we have demonstrated that
the additional layer of spatial information about user behaviour can greatly
benefit applications based on data mining. Furthermore, spatial properties of
online platforms become important when services are deployed on distributed
architectures that span and serve the entire planet. Since content storage and
content delivery must happen on a global scale, because online platforms
serve hundreds of millions of users all around the world, spatial constraints
affecting  user interactions are of vital importance to improve resource
usage.

In this chapter we have shown how geosocial properties of users
participating in online social cascades can be exploited to improve the
efficiency of caching in CDNs. We have studied cascades on Twitter, finding
that users preferentially share content over short-range links, despite the
significant presence of long-distance connections. Using one new geosocial
measure introduced in Section~\ref{sec:metrics}, we have taken advantage of
these findings to design content caching policies that prioritise content that
experiences geographically local popularity, validating our design through model
simulation.  While our study is limited in scope by the choice of online
social network and dataset, our results are more generally applicable and the
impact of the approach could be high for large-scale systems whose
traffic is driven by online social services.

