\documentclass[10pt,journal,letterpaper,compsoc]{IEEEtran}


\usepackage{amsmath, amsthm, amssymb}
\usepackage{algpseudocode}
\usepackage{algorithm}
\usepackage{multirow}
\usepackage{bigstrut}
\usepackage{array}


\ifCLASSINFOpdf
  \usepackage[pdftex]{graphicx}
  \graphicspath{{../pdf/}{../jpeg/}{./eps/}}
  \DeclareGraphicsExtensions{.eps,.pdf,.jpeg,.png}
\else
  \usepackage[dvips]{graphicx}
  \graphicspath{{./figures/}{./eps/}}
  \DeclareGraphicsExtensions{.eps}
\fi

\IEEEoverridecommandlockouts

\newcommand{\setS}{\mathbf{G}}
\newcommand{\comp}{C}
\newcommand{\TranscodeIndicator}{P}
\newcommand{\RequestingSeg}{\mathbf{K}}
\newcommand{\TranscodeSet}{\mathbf{E}}
\newcommand{\IdleComp}{I}
\newcommand{\SegRepCost}{F}
\newcommand{\Assign}{A}
\newcommand{\ReqFromRegion}{J}
\newcommand{\CDNRegions}{\mathbf{R}}
\newcommand{\Users}{\mathbf{U}}
\newcommand{\Redirect}{D}
\newcommand{\RegionRepCost}{Z}
\newcommand{\bandwidth}{W}
\newcommand{\Version}{L}
\newcommand{\USPref}{H}
\newcommand{\ReqOfSeg}{Q}
\newcommand{\SegQuality}{Y}
\newcommand{\Bitrate}{B}

\newtheorem{theorem}{Theorem}

\usepackage{color}
\newcommand{\BeginRevision}{\color{blue}}%revise of the text
\newcommand{\EndRevision}{\color{black}}%comment of the text



\begin{document}






\title{A Joint Online Transcoding and Delivery Approach for Dynamic Adaptive Streaming}


\author{
			Zhi Wang,~\IEEEmembership{Member,~IEEE,}
			Lifeng Sun,~\IEEEmembership{Member,~IEEE,}
			Chuan Wu,~\IEEEmembership{Member,~IEEE,}
			Wenwu Zhu,~\IEEEmembership{Fellow,~IEEE,}
			Qidong Zhuang,~\IEEEmembership{Student Member,~IEEE,}
			and Shiqiang Yang,~\IEEEmembership{Senior Member,~IEEE} 
			
		\IEEEauthorblockN{Tsinghua University, University of Hong Kong, The Hong Kong University of Science and Technology}
			
		\thanks{Z.~Wang is with the Graduate School at Shenzhen, Tsinghua University, Shenzhen, China (email: wangzhi@sz.tsinghua.edu.cn). L.~Sun, W.~Zhu and S.~Yang are with the Department of Computer Science and Technology, Tsinghua University, Beijing, China (e-mail: \{sunlf, wwzhu, yangshq\}@tsinghua.edu.cn). C.~Wu is with the Department of Computer Science, the University of Hong Kong, Hong Kong (e-mail: cwu@cs.hku.hk). Q.~Zhuang was with Tencent when this work was done, and is now with The Hong Kong University of Science and Technology, Hong Kong (e-mail: qzhuang@ust.hk).}
	}

 \maketitle

\begin{abstract}

Dynamic adaptive streaming has emerged as a popular approach for video services in today's Internet. To date, its two key components, video transcoding, which generates the adaptive bitrate versions of a video, and video delivery, which streams the videos to users, have been studied separately, resulting in a huge waste of computation and storage resources, since video versions are produced and cached regardless of actual demand. We conduct extensive measurement studies of video sharing systems, including an IPTV service which streams regular, professionally-made videos and an instant video clip sharing service which provides extremely short user-generated videos, as well as of the availability of computation resources in conventional Content Delivery Networks (CDNs). Based on the measurement insights, we propose an online joint transcoding and delivery approach for adaptive video streaming. We formulate optimization problems to achieve high streaming quality for the users, and low computation and replication costs for the system. In particular, our strategy connects video transcoding and video delivery based on users' preferences of CDN regions and regional preferences of video versions. We analyze the hardness of these problems and design distributed solutions. Extensive trace-driven experiments further demonstrate the superiority of our design.

\end{abstract}

\section{Introduction} \label{sec:intro}

Dynamic adaptive streaming over HTTP (DASH) has emerged as a popular video streaming approach \cite{mpegdash2010}, widely implemented and supported by the industry, including Apple HTTP Live Streaming, Microsoft Live Smooth Streaming, and Adobe Adaptive Streaming. It allows users with heterogeneous and dynamically changing network conditions to receive an adaptive bitrate, achieving the best video streaming experience in different contexts \cite{stockhammer2011dynamic}.

In adaptive streaming, video service providers have to not only deliver the video {\em segments} (data blocks in a video that can be downloaded over HTTP and played independently), but also transcode the videos into different \emph{versions} (\emph{i.e.}, videos with different bitrates of the same content) for the users. In this paper, we refer to \emph{transcoding} as converting a video into different bitrates, which may consume substantial computation resources \cite{li2012cloud}. To date, existing approaches perform video transcoding and video delivery separately: being unaware of which segments users will request, they transcode every published video into a fixed set of versions, and replicate segments of different versions using the same strategy.

The problems of the traditional approaches for adaptive streaming are as follows: (1) A fixed set of pre-transcoded versions only allows users to choose from a small set of candidate bitrates, which cannot effectively adapt to changing network conditions. (2) To address this problem, video providers increase the number of adaptive versions --- as both the number of uploaded videos and the number of versions grow, a huge amount of computation resource is required to transcode all the videos into all the versions \cite{lao2012parallelizing}. As the distribution of video segment popularity is becoming significantly heavy-tailed, {\em i.e.}, a substantial fraction of video segments are not requested at all, pre-transcoding them could be a huge waste of valuable computation resources. The situation is exacerbated by today's user-generated-content (UGC)-based video sharing services \cite{cha2009analyzing}. (3) In addition, traditional approaches are oblivious to users' \emph{preferences} of different \emph{peering servers} (\emph{i.e.}, servers directly uploading the video segments to users) when streaming video segments to users, leading to a mismatch between the download speed and the required segment bitrate; \emph{e.g.}, a user able to receive a high-bitrate segment might be redirected to a peering server with a slow connection to that user \cite{adhikari2012unreeling}.

To address these problems, we propose to jointly schedule segment transcoding and delivery in an online manner, using geo-distributed computation and bandwidth resources. This design philosophy allows us to jointly optimize the streaming quality for users and minimize the computation and bandwidth costs for transcoding and replicating the video segments. To motivate our study, we measure the user request patterns in two representative video services in China: BesTV \cite{bestv}, an IPTV system serving over $16$ million users, and Weishi \cite{weishi}, an instant video clip sharing service serving over $18$ million users. We have made the following observation: (\textbf{i}) due to the skewness of the popularity distributions of videos, segments and versions, the online transcoding paradigm promises to significantly reduce the demand for computation resources.

To exploit the practical feasibility of implementing joint transcoding and delivery using widely deployed CDNs, we further measure the availability of computation and bandwidth resources in Tencent CDN \cite{tencent}, which serves over $70\%$ of the traffic from one of the largest content providers in China. We have further made the following observations: (\textbf{ii}) A substantial amount of idle computation resource is available on the \emph{backend servers} (\emph{i.e.}, servers supporting the peering servers), and the amount is relatively stable over time, indicating that online transcoding can be effectively performed by these backend servers. (\textbf{iii}) Peering servers are deployed at different geographic locations and connect to different Internet Service Providers (ISPs), such that users' download speeds differ when streaming from servers in different CDN regions. Therefore, users have different preferences of CDN servers in different regions from which to receive segments, while servers in different regions have different preferences of versions of video segments to transcode.

Based on our previous work \cite{zhi-infocom2014}, we substantially extend the measurement study, the analysis and evaluation in this paper. Our contributions are summarized as follows:

$\rhd$ We conduct large-scale measurement studies not only on traditional adaptive video streaming services, but also on the recent instant social video sharing services, to motivate our proposal, to derive guidelines for our design, and to investigate the feasibility of its practical implementation.

\BeginRevision
$\rhd$ To achieve good streaming quality, low computation resource consumption, and low segment replication cost, we connect video transcoding and video delivery based on users' region preferences and regional version preferences, \emph{i.e.}, we use users' preferences of regions to redirect them to their ideal peering servers, and use the regional version preferences to schedule the transcoding tasks.
In particular, segments are transcoded into a set of pre-defined versions according to our scheduling strategy, in an on-demand fashion. We formulate optimization problems for video transcoding/delivery decisions, analyze the NP-hardness of the problems, and design practical algorithms to compute the solutions.
\EndRevision

$\rhd$ We conduct extensive trace-driven experiments to evaluate the effectiveness of our algorithms, and show that both users' experience and system's computation resource utilization are improved by our proposal.

The rest of the paper is organized as follows. We discuss related work in Sec.~\ref{sec:relatedwork}. We present the measurement insights that motivate our design in Sec.~\ref{sec:measure}, present our detailed design in Sec.~\ref{sec:design}, and verify its effectiveness by experiments in Sec.~\ref{sec:evaluation}. Finally, we conclude the paper in Sec.~\ref{sec:conclusion}.

\section{Related Work} \label{sec:relatedwork}

We survey related literature on today's HTTP-based adaptive streaming, quality of experience in adaptive streaming, CDN-based streaming architectures, and conventional video transcoding schemes.

\subsection{HTTP-based Adaptive Streaming}

The rapid growth of HTTP streaming is partly due to the extensive support from content distribution networks (CDNs) \cite{peng2004cdn}. HTTP video streaming works by breaking the overall video stream into a sequence of small HTTP-based file downloads. Due to the best-effort nature of streaming videos over the Internet, Dynamic Adaptive Streaming over HTTP (DASH) has been proposed to adapt the streaming rates from Web servers. DASH was developed in 2010 \cite{mpegdash2010} and became a standard in 2011 \cite{stockhammer2011dynamic}, enabling high-quality streaming of media content over the Internet from conventional HTTP Web servers. The conventional DASH framework, however, does not address how segments should be efficiently transcoded and delivered.


\subsection{Quality of Experience in Adaptive Streaming}

The quality of experience (QoE) in video streaming is a critical factor that reflects the effectiveness of the streaming strategies. For conventional video streaming, metrics including bitrate, packet loss and delay are generally used \cite{klaue2003evalvid,wang2002image}. For adaptive video streaming, since the bitrate changes during a session, QoE inference has to take this new characteristic into consideration. Mok et al.~\cite{mok2012qdash} studied QoE inference in adaptive video streaming such as DASH, and observed that both the streaming bitrate and user activities affect the quality of experience. In our design and evaluation, we take these QoE metrics into account.


\subsection{Streaming Architecture based on CDN}

Many architectures have been proposed to implement large-scale video streaming services, including the CDN-based architectures \cite{peng2004cdn}. CDNs can significantly assist HTTP-based streaming with servers deployed in multiple geographical locations across multiple ISPs \cite{vakali2003content}. Since today's CDNs are mainly designed and optimized to serve web contents \cite{pallis2006insight}, HTTP video streaming can be regarded as downloading video segments progressively from web servers via the HTTP protocol. Users experience higher-quality streaming by receiving streams at more reliable bandwidth from the CDN servers. Recently, Adhikari {\em et al.}~\cite{adhikari2012unreeling} proposed a multi-CDN scheme for real-world video systems to further improve the streaming quality. Traditional studies on video streaming have focused on improving the connectivity between streaming servers and users from the network perspective.


\subsection{Video Transcoding Schemes}

Compared with the traditional video streaming paradigm, DASH enables a much larger number of quality versions, requiring a huge amount of computation resources to transcode videos into these versions. Dedicated transcoders have been developed to speed up video transcoding \cite{wu2009streaming}. There have also been works on using the computation resources in a cloud cluster for video transcoding. Lao {\em et al.}~\cite{lao2012parallelizing} designed a MapReduce-based video transcoding scheme for distributing transcoding tasks. Huang \emph{et al.}~\cite{huang2011cloudstream} proposed CloudStream, which schedules the video transcoding tasks inside a cluster according to properties of the videos. Traditional studies on video transcoding have exploited dedicated devices or cluster computation resources, leading to the decoupling of segment transcoding and delivery.





Most related works on adaptive streaming have investigated video delivery and video transcoding separately, \emph{i.e.}, videos are pre-transcoded centrally, and then replicated to CDN servers for delivery using the same strategy, \emph{e.g.}, a full replication scheme. In this paper, we explore the design space of joint transcoding and delivery using geo-distributed computation and network resources.

\section{Measurements and Observations} \label{sec:measure}

We conduct measurement studies to motivate our design, and summarize design principles learnt.

\subsection{Measurement Setup}

To demonstrate the benefits and feasibility of our proposal, we use large-scale measurement studies based on valuable traces collected from BesTV and Tencent CDN.


\subsubsection{Traces of Users' Video Viewing Patterns}

To study the potential of using an online transcoding scheme to save computation resources, we have collected real-world traces on video access patterns from two representative video services: (1) BesTV, an Internet Protocol Television (IPTV) system, and (2) Weishi, an instant social video sharing system, both based in China. Detailed traces on user behaviors in both systems are collected as follows.

$\rhd$ In BesTV, videos are published into $17$ categories, and pre-transcoded into $4$ versions ($700$ Kbps, $1300$ Kbps, $2300$ Kbps and $4000$ Kbps). We collected viewing activities of users in Heilongjiang province in November 2012, about how $190$K videos were watched by users from over $3$ million IP addresses. For each of the videos, the traces record which segments were downloaded by which users, including the time stamp when a segment was downloaded, the user ID, the video ID, the size and version of the segment, and the time spent on downloading the segment. Using these data, we can show the great potential of our joint transcoding and streaming paradigm in Sec.~\ref{sec:measure:potential}.

$\rhd$ In Weishi, extremely short videos (within $10$ seconds) are generated by individuals and shared with their ``followers''. Each video in Weishi is transcoded into the following versions: a) 480x480, $2000$ Kbps; b) 480x480, $1050$ Kbps; c) 480x480, $500$ Kbps; d) 480x480, $300$ Kbps. In the Weishi traces, each item records when a video of a particular version was downloaded, by which IP, and from which CDN server. Using these traces, we are able to demonstrate the effectiveness of online transcoding for instant video sharing services.


\subsubsection{Traces of CDN Characteristics}

To study the feasibility of online transcoding in a CDN system, which has already been widely used for adaptive streaming \cite{cahill2004efficient}, we collected traces of the backend and peering servers from Tencent CDN, as follows. (1) \emph{CPU load patterns}: To study the computation resource availability for segment transcoding, we collected the CPU load traces from the backend servers. In particular, the CPU load of $5,441$ servers was recorded every $5$ minutes, for the whole month of March 2013. Each CPU load trace item contains the timestamp and the CPU load, recorded as the average number of processes waiting on each CPU core, \emph{i.e.}, a CPU load greater than $1$ indicates that the server is fully loaded. (2) \emph{Bandwidth patterns}: To study users' preferences of CDN regions, and the regional preferences of versions to transcode, we have collected traces covering $3.39$ billion TCP connections from peering servers located in $55$ regions in May 2013. These TCP connections were established to download contents with sizes varying from tens of bytes to $4.8$ GB. Each trace item contains the following information: the timestamp indicating when a TCP connection was established, the client IP, the number of downloaded bytes and the duration of the connection. In Sec.~\ref{sec:possibility}, we use these data to study the feasibility and provide guidelines for our design.




\subsection{Users' Watching Behavior}
\label{sec:measure:potential}

\subsubsection{Popularity of Professional Videos Published in the IPTV Service} 

Based on the video viewing records in BesTV, Fig.~\ref{fig:bestv-popularity}(a) illustrates the distribution of video popularity. Each sample represents the number of user requests for a video in one month versus the rank of the video. We observe that over $53\%$ of these videos had no viewers in a month. This can be explained by the fact that in today's video services, the time users spend watching videos grows much more slowly than the number of videos; such skewness of the popularity distribution is also prevalent in other UGC-based video sharing systems, such as YouTube \cite{cha2009analyzing}.

In Fig.~\ref{fig:bestv-popularity}(a), only $13\%$ of the videos have a monthly view count larger than $500$. We further investigate how segments of different versions inside such a relatively popular video (with about $1,000$ segments) were requested by users. Fig.~\ref{fig:bestv-popularity}(b) illustrates the distribution of the requests for segments of one of the most popular videos. Each curve represents the number of segment requests versus the index of the segment. We observe that (1) only a small range of segments were requested by many users, {\em e.g.}, the first tens of segments; (2) different versions received different numbers of requests, {\em e.g.}, the $4000$ Kbps version was requested by many more users; and (3) for some versions, a large fraction of segments were requested by no one, {\em e.g.}, the last segments of the $700$ Kbps and $1300$ Kbps versions.

\begin{figure}[t]
	\begin{minipage}[t]{.48\linewidth}
		\centering
			\includegraphics[width=\linewidth]{video-freq.eps}
			\centerline{\parbox[t]{\linewidth}{\scriptsize (a) Number of user views versus the rank of video.}}
	\end{minipage}
	\hfill
	\begin{minipage}[t]{.48\linewidth}
		\centering
			\includegraphics[width=\linewidth]{sy-req-vs-seg.eps}
		\centerline{\parbox[t]{\linewidth}{\scriptsize (b) Number of segment requests versus segment index for different versions.}}
	\end{minipage}
	\caption{Popularity of videos, segments and versions in BesTV (Heilongjiang, November 2012).}
	\label{fig:bestv-popularity}
\end{figure}


\subsubsection{Viewing Patterns in Instant Social Video Sharing Services}

We first study the distribution of the popularity of instant video clips generated by users in Weishi. As illustrated in Fig.~\ref{fig:weishi-popularity}, a sample represents the number of views of a video versus the rank of the video in one day. We have sampled over $10,000$ videos from over $1.6$ million videos viewed on February 27, 2014. We observe that the popularity distribution can be fit well by a Zipf distribution (with shape parameter $s=1.209$), indicating that viewers in the instant video sharing service also concentrate on the most popular videos in a highly skewed manner. Uniformly transcoding every video is thus a waste of computation resources.
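To illustrate how heavily such a Zipf popularity distribution concentrates demand, the following sketch (our illustration, not the measurement code) computes the fraction of all views attracted by the most popular videos, using the measured exponent $s=1.209$:

```python
# Minimal sketch: under a Zipf popularity distribution, the expected
# number of views of the rank-k video is proportional to k^(-s).
# We estimate the share of total views captured by the top videos.

def zipf_view_share(num_videos, exponent, top_fraction):
    """Fraction of total views received by the top `top_fraction` of videos."""
    weights = [rank ** (-exponent) for rank in range(1, num_videos + 1)]
    total = sum(weights)
    top_count = max(1, int(num_videos * top_fraction))
    return sum(weights[:top_count]) / total

# With s = 1.209 (the measured shape parameter) and 10,000 videos,
# the top 10% of videos capture the large majority of views, so
# uniformly pre-transcoding every video wastes computation on the tail.
share = zipf_view_share(num_videos=10000, exponent=1.209, top_fraction=0.1)
print(f"view share of top 10% of videos: {share:.2f}")
```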

Next, we investigate how videos are generated by users in the instant video sharing service. In Fig.~\ref{fig:weishi-request-upload-overtime}, the two curves represent the number of video uploads and the number of video requests by users in each hour. Our observations are: (1) The number of requests is only about $25$ times the number of uploads, indicating that over time many more videos are generated than users can watch. (2) The peak hour of video requests lags that of video uploads by several hours: this time difference allows us to schedule the transcoding tasks, as detailed in Sec.~\ref{sec:design}.

\begin{figure}[t]
	\centering
		\includegraphics[width=0.7\linewidth]{video-popularity.eps}
	\caption{Number of views versus the rank of video in Weishi (February 27, 2014).}
	\label{fig:weishi-popularity}
\end{figure}

\begin{figure}[t]
	\centering
		\includegraphics[width=0.7\linewidth]{request-upload-overtime.eps}
	\caption{Number of video requests and video uploads over time in Weishi (February 27, 2014).}
	\label{fig:weishi-request-upload-overtime}
\end{figure}

We then study how the user-uploaded videos were requested in the three days after they were generated. In Fig.~\ref{fig:view-in-3-days}, the red reference line denotes the number of videos uploaded by users on Day 1 (February 26, 2014), and each bar represents the number of videos among them that were requested for the first time on the next $x$th day ($x = 1,2,3$). We observe that around $70\%$ of the videos were requested on the same day they were uploaded, but much fewer attracted viewers after that. Around $30\%$ of user-generated videos were not viewed at all by others after they were published.

Finally, we study when the videos (that were ever requested) were requested by their first viewers. We plot the CDF of the elapsed time between the upload of a video and the first request for the video in Fig.~\ref{fig:updowntimediff}, over $300,000$ videos. We observe that over $40\%$ (resp. $55\%$ and $85\%$) of the videos were first requested within $1$ hour (resp. $8$ hours and $24$ hours) after they were uploaded. These observations show that (1) pre-transcoding videos that are not needed in the near future wastes computation resources, and (2) the upload-to-first-request elapse varies across videos, resulting in different ``urgency levels'' for them to be transcoded.
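The CDF statistics above can be sketched as follows (with hypothetical elapse samples, not the Weishi trace): for each deadline, we count the fraction of videos whose first request arrived within it.

```python
# Minimal sketch: given the elapsed time between each video's upload
# and its first request, compute the fraction of videos first requested
# within a set of deadlines (the CDF evaluated at those deadlines).

def fraction_within(elapses_hours, deadline_hours):
    """CDF value: fraction of videos first requested within the deadline."""
    return sum(1 for e in elapses_hours if e <= deadline_hours) / len(elapses_hours)

# Hypothetical upload-to-first-request elapses, in hours.
samples = [0.2, 0.5, 3.0, 6.0, 12.0, 20.0, 30.0, 50.0]
for deadline in (1, 8, 24):
    print(f"requested within {deadline} h: {fraction_within(samples, deadline):.2f}")
```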

\begin{figure}[t]
	\centering
		\includegraphics[width=0.7\linewidth]{view-in-3-days.eps}
	\caption{Number of videos requested by users in the first three days after they were generated.}
	\label{fig:view-in-3-days}
\end{figure}

\begin{figure}[t]
	\centering
		\includegraphics[width=0.7\linewidth]{updowntimediff.eps}
	\caption{CDF of the elapsed time between upload and the first request.}
	\label{fig:updowntimediff}
\end{figure}


The above observations indicate that as more and more videos are published, by both the professional content providers and individuals, pre-transcoding every segment of all videos into an increasing number of versions can be a huge waste of computation resource, motivating our online transcoding solution.



\subsection{Availability of Idle Computation Resource in CDNs}
\label{sec:possibility}


Realtime monitoring of Akamai's servers \cite{cohen2010keeping} has revealed that the CPU load on their CDN servers can be efficiently measured, and that it varies across servers. We further measure the idle computation resource on Tencent CDN servers, to explore the feasibility of performing video transcoding using such idle resource.

Fig.~\ref{fig:cpu-load}(a) plots the CPU load --- the average number of processes waiting on each CPU core of the backend servers --- in a representative time slot of $15$ minutes. We observe that in this time slot, the CPU load of the $5,441$ backend servers varied from around $0$ to $8.6$, and as many as $72.4\%$ (\emph{resp.} $55.9\%$) of the backend servers had a CPU load smaller than $1.0$ (\emph{resp.} $0.5$), showing that a substantial amount of the computation resource in the CDN is available and can be exploited. The reason for such high availability is that many backend servers are only assigned with simple I/O tasks, {\em e.g.}, loading data from the distributed storage system for the peering servers.

We further study the availability of computation resource in a region of the CDN. We use a {\em city-ISP} pair to identify a region. Fig.~\ref{fig:cpu-load}(b) illustrates the CPU load in the four largest regions that Tencent CDN covers (Xian-China Telecom, Tianjin-China Telecom, Chengdu-China Telecom and Beijing-China Telecom), {\em i.e.}, the average CPU load of all the backend servers in each region. We again observe the existence of available computation resource at the region level, as well as the variation of the CPU load across regions, \emph{e.g.}, the CPU load of Xian-China Telecom is much lower than that of Beijing-China Telecom.
  
\begin{figure}[t]
	\begin{minipage}[t]{.48\linewidth}
		\centering
			\includegraphics[width=\linewidth]{cpuload_vs_server.eps}
		\centerline{\parbox[t]{\linewidth}{\scriptsize (a) Average CPU load in $15$ minutes versus the rank of the server (8PM, March 5, 2013).}}
		\label{fig:cpu-load-vs-server}
	\end{minipage}
	\hfill
	\begin{minipage}[t]{.48\linewidth}
		\centering
			\includegraphics[width=\linewidth]{idc_cpuload_over_time.eps}
			\centerline{\parbox[t]{\linewidth}{\scriptsize (b) Average CPU load on servers of different regions (March 5, 2013).}}
	\end{minipage}
	\caption{Average CPU load of backend servers.}
	\label{fig:cpu-load}
\end{figure}


Since these backend servers can be scheduled to run different tasks, their idle computation resources may vary over time. To use these resources for segment transcoding, the relative stability of the idle resources, which reflects the level of churn in server CPU load over time, is key. Fig.~\ref{fig:server-cpu-load-overtime}(a) plots the CPU load of $3$ servers, sampled once every $5$ minutes. We calculate an average coefficient of variation ($CV$) to evaluate the daily churn of the CPU load on each server, as follows: $CV = \frac{1}{24} \sum_{h=0}^{23} {\sqrt{E[(X_h - \bar{X}_h)^2]}}/{\bar{X}_h}$, where the random variable $X_h$ denotes the CPU load in hour $h$, $\bar{X}_h$ is its mean, and the expectation is computed over the $12$ samples in each hour. A larger $CV$ implies more churn of the CPU load over time. We observe that a server with a relatively stable CPU load achieves $CV=0.07$, while a server with a significantly varying CPU load reaches $CV=0.69$.

Fig.~\ref{fig:server-cpu-load-overtime}(b) further plots the distribution of $CV$s of all the backend servers. We observe that over $70\%$ of the servers achieve a $CV$ smaller than $0.5$, indicating that the CPU load of many backend servers is relatively stable, such that their capacity for performing video transcoding in the near future can be guaranteed. We seek to exploit such resource availability in our transcoding task scheduling.
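The $CV$ metric above can be computed directly from the $5$-minute samples; the following sketch (with hypothetical load data, not the Tencent traces) implements the formula:

```python
# Minimal sketch of the daily coefficient-of-variation metric:
# CV = (1/24) * sum over hours h of std(X_h) / mean(X_h), where X_h holds
# the twelve 5-minute CPU-load samples of hour h.
import statistics

def daily_cv(samples):
    """samples: 288 CPU-load readings (12 per hour over 24 hours)."""
    assert len(samples) == 288
    hourly_cvs = []
    for h in range(24):
        hour = samples[12 * h : 12 * (h + 1)]
        mean = statistics.mean(hour)
        # Population std, matching sqrt(E[(X_h - mean)^2]) in the formula.
        std = statistics.pstdev(hour)
        hourly_cvs.append(std / mean)
    return sum(hourly_cvs) / 24

# A perfectly flat load yields CV = 0; a churning load yields a larger CV.
flat = [0.5] * 288
print(daily_cv(flat))  # 0.0
```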

\begin{figure}[t]
	\begin{minipage}[t]{.48\linewidth}
		\centering
			\includegraphics[width=\linewidth]{server_cpu_load_overtime.eps}
		\centerline{\parbox[t]{\linewidth}{\scriptsize (a) CPU load on three representative back-end servers (March 5, 2013).}}
	\end{minipage}
	\hfill
	\begin{minipage}[t]{.48\linewidth}
		\centering
			\includegraphics[width=\linewidth]{server_cv_cdf.eps}
		\centerline{\parbox[t]{\linewidth}{\scriptsize (b) CDF of the coefficient of variation of all the servers (March 5, 2013).}}
	\end{minipage}
	\caption{Variation of server CPU load over time.}
	\label{fig:server-cpu-load-overtime}
\end{figure}





\subsection{Users' Preferences of Streaming from Different CDN Regions}

In the context of online transcoding, we have the freedom to transcode segments in different CDN regions. Next, we explore the guidelines for such transcoding scheduling.

We first show the diversity in download speed across regional CDN servers. Which version of a video segment is requested by a user depends on the user's download speed. Based on the TCP traces of the peering servers, we compare the download speeds of about $150$ users who downloaded from different peering servers during the same $10$ minutes on May 4, 2013. In Fig.~\ref{fig:avg-user}(a), each sample is the average download speed of a user downloading from a regional CDN server deployed in Shanghai, versus the average download speed of the same user downloading from another server deployed in Shenzhen, both with the same ISP.
We further plot the CDF of the ratio of the two speeds in Fig.~\ref{fig:avg-user}(b), and observe that for over $79\%$ of the users, their download speeds differ by more than $2\times$ when they download from different servers (marked red in this figure), indicating that redirecting users to their ideal peering servers can help a majority of users receive a better streaming quality.
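The per-user comparison above reduces to a simple ratio statistic; the sketch below (with hypothetical speed pairs, not the CDN trace) computes the fraction of users whose speeds from the two servers differ by more than a threshold:

```python
# Minimal sketch: for each user, compare the average download speeds
# from two regional servers and report the fraction of users whose
# speeds differ by more than `threshold` times.

def fraction_over_ratio(speed_pairs, threshold=2.0):
    """speed_pairs: (speed_from_server_A, speed_from_server_B) per user."""
    ratios = [max(a, b) / min(a, b) for a, b in speed_pairs]
    return sum(1 for r in ratios if r > threshold) / len(ratios)

# Hypothetical (Shanghai, Shenzhen) average speeds per user, in Kbps.
pairs = [(900, 300), (400, 350), (1200, 500), (250, 800)]
print(fraction_over_ratio(pairs))  # 0.75
```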

\begin{figure*}[t]
	\begin{minipage}[t]{.64\linewidth}

			\begin{minipage}[t]{.48\linewidth}
				\centering
				\includegraphics[width=\linewidth]{user-server-speed.eps}
				\centerline{\parbox[t]{\linewidth}{\scriptsize (a) Comparison of average download speeds of users downloading from different peering servers (May 4, 2013).}}
			\end{minipage}
			\hfill
			\begin{minipage}[t]{.48\linewidth}
				\centering
				\includegraphics[width=\linewidth]{user-server-speed-ratio.eps}
				\centerline{\parbox[t]{\linewidth}{\scriptsize (b) CDF of ratio of two speeds.}}
			\end{minipage}
			\caption{Download speeds of users downloading from different peering servers.}
			\label{fig:avg-user}

	\end{minipage}
	\hfill
	\begin{minipage}[t]{.32\linewidth}
		\centering	
			\includegraphics[width=\linewidth]{tencent_cdn_server_downloadspeed_20130504.eps}
		\caption{Average download speed of users from different peering servers on May 4, 2013.}
		\label{fig:avg-server}
	\end{minipage}
\end{figure*}


\subsection{Regions' Preferences of Different Versions to Transcode} 

On the other hand, to study the regional preference of versions to transcode, we calculate the average download speeds of users downloading from the peering servers. In Fig.~\ref{fig:avg-server}, each sample represents the average download speed in one day, of all users served by a server versus the rank of the server. We observe that the average download speed varies quite significantly across these peering servers, from $170$ Kbps to $1.1$ Mbps.

We further study the download speeds from servers in different regions to users. In Fig.~\ref{fig:region-speed}, each bar represents the minimal, average and maximal download speeds from the peering servers in a region. We observe that the average download speeds across different regions vary from $180$ Kbps (region BJT, \emph{i.e.}, Beijing-China Telecom) to $512$ Kbps (region ZJM, \emph{i.e.}, Zhejiang-China Mobile). %Different regions ``cover'' users with quite different speeds, \emph{e.g.}, BJT serves most of the low-bitrate users while ZJM serves hit-bitrate users.

These observations tell us the following: Peering servers are physically deployed at different locations and with different ISPs, so their Internet connectivity and average bandwidth capacity differ. % (2) Peering servers at different regions are generally scheduled to serve different numbers of user requests, leading to the different server load.
 Given the different connectivities, servers in different regions are suited to transcoding segments into different versions, \emph{e.g.}, a region with a low server-to-user download speed should produce low-bitrate segments. Honoring such regional preferences when scheduling transcoding tasks can reduce the \emph{cost} of replicating transcoded segments, since segments are transcoded where they will be requested.


\begin{figure}[t]
	\centering
		\includegraphics[width=0.7\linewidth]{region-speed-distribution.eps}
	\caption{Average user download speed in different CDN regions on May 4, 2013.}
	\label{fig:region-speed}
\end{figure}





\section{Joint Online Transcoding\\and Geo-Distributed Delivery}
\label{sec:design}

In this section, we present our design of the joint online transcoding and geo-distributed delivery strategy. Before presenting the details, we state the assumptions/non-goals in our study: first, we assume transcoding is carried out from high-bitrate versions to low-bitrate versions; second, we assume segments of the highest version are already replicated to the geo-distributed CDN regions; third, we focus on improving users' streaming quality and reducing computation resource consumption, while not particularly considering the allocation of storage resource.

\subsection{Framework}
\label{sec:framework}

Fig.~\ref{fig:geo-online} illustrates our design, where segments in different versions of videos are transcoded upon users' requests. In this example, $s1,s2, s3, s4$ represent segments of different versions, requested by a user during her streaming session. $R1$, $R2$ and $R3$ are CDN \emph{regions} (each is represented by a pair of a geographical location and an ISP) where backend servers and peering servers are deployed. Segments can be transcoded at selected regions (\emph{e.g.}, $s2$ is transcoded in region $R1$), replicated between regions (\emph{e.g.}, $s1$ is replicated from region $R3$ to $R1$), and delivered to users, all in an online manner.

\begin{figure}[t]
	\centering
		\includegraphics[width=\linewidth]{geo-on-demand.eps}
	\caption{Joint online transcoding and geo-distributed streaming: an illustration}
	\label{fig:geo-online}
\end{figure}

Fig.~\ref{fig:framework} further illustrates the framework of our online transcoding and delivery scheme, which schedules segment transcoding and replication periodically: based on the recent information collected in time slot $T-1$, we perform transcoding and replication of segments that are likely to be requested in time slot $T$.
\BeginRevision
In practice, the length of a time slot should be similar to the playback duration of a segment. In our experiments, we set the length of a time slot to $10$ seconds, the same as the playback duration of a segment.
\EndRevision
The collected information includes: (1) Users' preferences for different regions from which to receive segments. In our design, we allow users to use a bandwidth estimation approach (\emph{e.g.}, abget \cite{antoniades2006available}, which measures the end-to-end bandwidth with little overhead) to rank a set of candidate peering servers in descending order of the estimated download speed. (2) The number of requests for a particular segment, which can be predicted from users' segment requests in previous time slots: 
{\em e.g.}, we assume most users play videos consecutively, issuing very few seeks during playback \cite{cheng2007supporting}, and accordingly predict the number of users requesting a segment as the number of users downloading the previous segment in the previous time slot. 
Based on the CDN-to-user bandwidth information in the previous time slot, we further estimate which particular versions of the segments will be requested by users in the next time slot. (3) The idle computation resource. As many backend servers have stable CPU load over time according to our measurement study, we use the level of computation resource in the current time slot as the available computation resource in the next time slot. Other regression models (\emph{e.g.}, ARIMA \cite{zhang2003time}) can also be explored to achieve better prediction accuracy, which we will investigate in the future.
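The simple predictors described above can be sketched in a few lines; the dictionary shape `(segment_index, version) -> user count` is an assumption for illustration, not the actual data structure of the system:

```python
def predict_requests(prev_downloads):
    """Sequential-playback predictor: the expected number of requests
    for segment (s, v) in slot T is taken to be the number of users
    who downloaded segment (s-1, v) in slot T-1."""
    return {(s + 1, v): n for (s, v), n in prev_downloads.items()}

def predict_idle_cpu(current_idle):
    """Idle computation resource is assumed stable over time, so the
    current level serves as the estimate for the next slot."""
    return current_idle
```

A regression model such as ARIMA could replace either predictor without changing the rest of the pipeline.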

Using such information, we perform the following: (1) \emph{User redirection}. To enable high-quality streaming, our design redirects users to their ideal regions so that they can receive segments at high bitrates. We redirect users at a region level, \emph{i.e.},  a region with the highest CDN-to-user bandwidth will be selected to serve a user's request, and peering servers in the same region will serve user requests in a round-robin manner. (2) \emph{Transcoding segment selection}. Backend servers with idle CPU resource perform video transcoding, by slicing a video into multiple segments, containing closed groups of pictures (GoPs) that can be transcoded independently \cite{huang2011cloudstream}. 
To allow a smooth playback, when a segment of a particular version is not transcoded timely, we send the requesting user the segment of an alternative version, whose bitrate is lower but closest to that of the requested version. 
When computation resource is insufficient, we prioritize transcoding the requested versions of segments that are least desirable to replace with alternative versions, due to the large bitrate difference between them. (3) \emph{Transcoding task assignment}. A transcoded segment is cached by the backend servers and replicated to other regions, according to our replication strategy. Transcoding is performed at strategically selected regions, so that the cost of replicating the transcoded segments to other regions can be minimized.
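The fallback rule in step (2) --- serve the version whose bitrate is lower than, but closest to, the requested one --- can be sketched as follows (`pick_alternative` is a hypothetical helper; bitrates in Kbps are assumed):

```python
def pick_alternative(requested_bitrate, available_bitrates):
    """Return the bitrate of the best replacement version: the highest
    available bitrate still below the requested one, or None if no
    lower version has been transcoded yet."""
    lower = [b for b in available_bitrates if b < requested_bitrate]
    return max(lower) if lower else None
```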


\begin{figure}[t]
	\centering
		\includegraphics[width=\linewidth]{framework.eps}
	\caption{Framework of our joint online transcoding and delivery mechanism.}
	\label{fig:framework}
\end{figure}


Table \ref{tab:notations} summarizes important notation used in this paper.


\subsection{Quality-Driven Redirection}

In our design, a user is redirected to her ideal region where segments are generated on the fly. This design principle allows users to choose a CDN region with the largest download speed to receive the segments, without considering the segment availability.

We formulate the region-level user redirection problem as an optimization problem. Let $\Users^{(T)}$ denote the predicted set of users requesting different segments in the system in time slot $T$, and $\CDNRegions$ be the set of CDN regions. We use $\Redirect^{(T)}$ to denote a redirection strategy, where the binary variable $\Redirect^{(T)}_{u,r} = 1$ (\emph{resp.} $0$) indicates that user $u$ will (\emph{resp.} will not) be downloading from region $r$ in the next time slot $T$.

In the context of adaptive video streaming, we assume that users expect to receive a large bitrate for good streaming quality whenever possible. Thus, we use $\USPref_{u,r}$ to denote user $u$'s preference to download from region $r$. $\USPref_{u,r}$ can be defined as a concave increasing function of the estimated download speed achieved when user $u$ downloads from peering servers in region $r$. The optimization problem is as follows:
\begin{equation}
	\max_{\Redirect^{(T)}} \sum_{u \in \Users^{(T)}, r \in \CDNRegions} \USPref_{u,r} \Redirect^{(T)}_{u,r},
	\label{eq:redirect}
\end{equation}
subject to:
\[
\begin{split}
	\sum_{r \in \CDNRegions} \Redirect^{(T)}_{u,r} & \le 1, \forall u \in \Users^{(T)},\\	
	\sum_{u \in \Users^{(T)}} \Redirect^{(T)}_{u,r} \Bitrate_{\Version_{u,r}} & \le \bandwidth_{r}, \forall r \in \CDNRegions,\\
	\Redirect^{(T)}_{u,r} & \in \{0,1\}, \forall u \in \Users^{(T)}, r \in \CDNRegions,
\end{split}
\]
where $\Version_{u,r}$ is the version with the highest bitrate that $u$ can receive when she downloads from region $r$, $\Bitrate_{v}$ is the bitrate of version $v$, and $\bandwidth_r$ is the bandwidth capacity of peering servers in region $r$. The rationale of the optimization is to maximize the streaming quality for users by the redirection.


\begin{theorem}
The problem of redirecting users to CDN regions such that their preferences can be maximally satisfied, as formulated in (\ref{eq:redirect}), is NP-hard.
\end{theorem}

\begin{proof}
We reduce a conventional 0/1 knapsack problem, which is NP-hard, %\footnote{http://en.wikipedia.org/wiki/Karp\%27s\_21\_NP-complete\_problems}, 
 to this problem. The 0/1 knapsack problem has the following structure:
$$
\max \sum_{i=1}^{n}v_i x_i,
$$
subject to
$$
	\sum_{i=1}^{n} \alpha_i x_i \le \beta,
$$
$$
	x_i \in \{0,1\},
$$
where $x_i,i=1,2,\ldots,n$ are the optimization variables. For any 0/1 knapsack problem as above, we reduce it to our redirection problem as follows: 
\begin{enumerate}
	\item Let $\Users^{(T)} = \{1,2,\ldots,n\}$, $\CDNRegions=\{1\}$;
	\item Let $\USPref_{i,1} = v_i, i = 1,2,\dots,n$;
	\item Let $\Bitrate_{\Version_{i,1}} = \alpha_i, i=1,2,\ldots,n$;
	\item Let $\bandwidth_1 = \beta$.
\end{enumerate}
The reduction above takes linear time to complete, and an optimal solution to the constructed redirection problem yields an optimal solution to the 0/1 knapsack problem via $x_i = \Redirect^{(T)}_{i,1}, i=1,2,\dots,n$. Therefore, our problem is NP-hard. 
\end{proof}

We design an algorithm to heuristically solve the optimal user redirection problem in a distributed manner: (1) When a user starts to watch a video, the system assigns her a list of candidate peering servers from regions with the lowest load. (2) The user ranks these servers in descending order of the estimated download speeds as discussed in Sec.~\ref{sec:framework}, and sends connection requests to these servers. (3) On the other hand, a peering server may receive connection requests from many users, and can only accept a limited number of users according to its available bandwidth $\bandwidth_r$. The request from user $u$ is prioritized to be accepted if she has a larger $\USPref_{u,r}/\Bitrate_{\Version_{u,r}}$ with the CDN region $r$ --- this value reflects a marginal ``gain'' in streaming quality by a unit of bandwidth allocated. (4) The user selects the best peering server from the ones accepting her request according to the ranked list. In a real system, this algorithm can be effectively implemented and executed in a distributed manner.
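A minimal sketch of the server-side acceptance step (3), under assumed data shapes (`requests` as `(user, H, B)` triples and `capacity` as the remaining bandwidth $\bandwidth_r$):

```python
def accept_requests(requests, capacity):
    """Accept connection requests in descending order of the marginal
    gain H/B, consuming the region's bandwidth until it is exhausted."""
    accepted = []
    for user, pref, bitrate in sorted(requests,
                                      key=lambda x: x[1] / x[2],
                                      reverse=True):
        if bitrate <= capacity:
            capacity -= bitrate
            accepted.append(user)
    return accepted
```

Each server runs this locally on the requests it receives, which is what makes the heuristic distributed.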

\begin{table}[!t]
	\caption{Important notations.}
	\label{tab:notations}
	\renewcommand{\arraystretch}{1.3}
	\centering
	\begin{tabular}{|p{0.15\linewidth}||p{0.75\linewidth}|}
		\hline 
			Symbol & Definition \\ 
		\hline
		\emph{$\Users^{(T)}$} 		& Set of users requesting segments in time slot $T$ \\
		\emph{$\Redirect^{(T)}_{u,r}$} 		& Binary variable indicating whether user $u$ will download from region $r$ in time slot $T$\\
		\emph{$\USPref_{u,r}$} 		& Preference level for user $u$ to receive video stream from region $r$ \\ 
		\emph{$\bandwidth_r$} 		& Bandwidth capacity of  region $r$ \\ 
		\emph{$\RequestingSeg^{(T)}$} 		& The set of segments being requested in time slot $T$ \\ 
		\emph{$\TranscodeIndicator^{(T)}_{(s,v)}$} 		& Indicator that determines if segment $(s,v)$ will be transcoded \\ 
			\emph{$e_{(s,v)}^{(T)}$} 		& Importance level of a particular segment $(s,v)$ in time slot $T$  \\
			\emph{$\ReqOfSeg^{(T)}_{(s,v)}$} 		& Number of requests of segment $(s,v)$  from all regions in time slot $T$ \\
			\emph{$\SegQuality_{(s,v)}^{(T)}$} 		& Quality gain if segment $(s,v)$ is transcoded in time slot $T$\\
			\emph{$\Bitrate_v$} 		&  Bitrate of a particular version $v$ \\
			\emph{$\setS^{(T)}_{s}$} 		& The set of transcoded versions of segment $s$  \\
			\emph{$\CDNRegions$} 		& Set of CDN regions  \\
			\emph{$\TranscodeSet^{(T)}$} 		& Set of segments to be transcoded in time slot $T$  \\
			\emph{$\Version_{u,r}$} 		&  Highest version that $u$ can receive when she downloads from region $r$ \\
			\emph{$\comp_{(s,v)}$} 		& Computation resource required to perform the transcoding task to generate a segment $s$ of version $v$ \\
			\emph{$\IdleComp^{(T)}_r$} 		& Available computation resource that can be allocated for video transcoding from region $r$ in time slot $T$ \\
			\emph{$\SegRepCost_{(s,v),r}$} 		& Overall replication cost when segment $(s,v)$ is transcoded in region $r$ \\
			\emph{$\Assign^{(T)}_{(s,v),r}$} 		& Indicator determining whether segment $(s,v)$ is transcoded in region $r$ in time slot $T$ \\
		\hline 
	\end{tabular} 
\end{table}


\subsection{Region-Preference-Aware Transcoding Schedule}

After users are redirected to the CDN regions, they send requests for video segments of different versions. Based on the segment request prediction, %according to their requests in the previous time slot,
 we perform transcoding task scheduling, which works in two steps: (1) we prioritize the segment transcoding tasks such that important segments are transcoded more urgently; and (2) we distribute the transcoding tasks to CDN regions, such that segments are transcoded where they are more likely to be requested.

\subsubsection{Prioritizing Segment Transcoding Tasks}
\label{sec:prioritizing}

We prioritize the segment transcoding tasks according to the importance of these segments. We denote $e_{(s,v)}^{(T)}$ as the importance level of segment $s$ of version $v$ in time slot $T$, which depends on the following factors: (1) the estimated number of user requests for the segment, discussed in Sec.~\ref{sec:framework}; and (2) the quality-wise importance of the segment, which depends on the existing versions of the same segment. In particular, $e_{(s,v)}^{(T)}$ can be calculated as follows: 
$$e_{(s,v)}^{(T)} = \ReqOfSeg^{(T)}_{(s,v)} \SegQuality_{(s,v)}^{(T)},$$
where $\ReqOfSeg^{(T)}_{(s,v)}$ denotes the predicted number of requests of the particular segment $(s,v)$ in the next time slot $T$, and $\SegQuality_{(s,v)}^{(T)}$ is the streaming quality gain if the segment is transcoded to version $v$. $\SegQuality_{(s,v)}^{(T)}$ is calculated as the ``mismatch'' level of the bitrate if $v$ is not transcoded as follows:
$$
	\SegQuality_{(s,v)}^{(T)} = \begin{cases}
		\min_w(\Bitrate_{v} - \Bitrate_{w})/{\Bitrate_{v}}, & \exists w \in \setS^{(T)}_s, w < v \\
		1, & \text{otherwise}
	\end{cases},
$$
where $\setS^{(T)}_s$ is the set of all the versions of the segment existing in the system. 
\BeginRevision
$\SegQuality_{(s,v)}^{(T)}$ is in the range $(0,1]$. When there exist replacement versions with lower bitrates, the one with the closest bitrate will be served to users, yielding $\min_w(\Bitrate_{v} - \Bitrate_{w})/{\Bitrate_{v}}$, where $\Bitrate_{v}$ is the original bitrate and $\Bitrate_{w}$ is the closest bitrate; when there is no replacement version, $\SegQuality_{(s,v)}^{(T)}$ is set to $1$. A larger $\SegQuality_{(s,v)}^{(T)}$ thus indicates that users will receive a highly mismatched bitrate if version $v$ is not transcoded, and version $v$ is important to segment $s$ in terms of the streaming quality of its receivers. 
\EndRevision
Fig.~\ref{fig:seg-importance} gives an example of the importance of segments: a solid block represents a segment transcoded, and a dashed block represents one that is not transcoded yet. When segments $(s1,v3)$ and $(s2, v3)$  are both requested by the same number of users, $(s1,v3)$ is prioritized to be transcoded over $(s2,v3)$, since users requesting $(s2,v3)$ can be served by an alternative version $(s2,v2)$, which has a close bitrate to the original requested one, and no version of segment $s1$ exists in the system.
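The importance computation above can be sketched as follows (bitrate units and data shapes are assumed for illustration):

```python
def quality_gain(bitrate_v, existing_bitrates):
    """Y_{(s,v)}: normalized bitrate mismatch if version v is not
    transcoded. If lower-bitrate versions exist, the one closest to
    bitrate_v would be served as the replacement; otherwise no
    replacement exists and the gain is 1."""
    lower = [b for b in existing_bitrates if b < bitrate_v]
    if lower:
        return (bitrate_v - max(lower)) / bitrate_v
    return 1.0

def importance(predicted_requests, bitrate_v, existing_bitrates):
    """e_{(s,v)} = Q_{(s,v)} * Y_{(s,v)}."""
    return predicted_requests * quality_gain(bitrate_v, existing_bitrates)
```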

\begin{figure}[t]
	\centering
		\includegraphics[width=0.6\linewidth]{segment-importance.eps}
	\caption{Segment importance: an illustration.}
	\label{fig:seg-importance}
\end{figure}

Based on the definition of the importance level of segments, we determine which segments among all the requested ones in $\RequestingSeg^{(T)}$, to be transcoded by formulating it as an optimization problem, as follows:
\begin{equation}
	\max_{\TranscodeIndicator^{(T)}} \sum_{(s,v) \in \RequestingSeg^{(T)}} \TranscodeIndicator^{(T)}_{(s,v)} e_{(s,v)}^{(T)}, 
\end{equation}
subject to:
\[
\begin{split}
	\sum_{(s,v) \in \RequestingSeg^{(T)}} \TranscodeIndicator^{(T)}_{(s,v)} \comp_{(s,v)} & \le \sum_r \IdleComp^{(T)}_r, \\
	\TranscodeIndicator^{(T)}_{(s,v)} & \in \{0,1\}, \forall (s,v) \in \RequestingSeg^{(T)}, \\
\end{split}
\]
where $\TranscodeIndicator^{(T)}_{(s,v)}=1$ (\emph{resp.} $0$) indicates that segment $(s,v)$ will (\emph{resp.} will not) be transcoded in the next time slot $T$,
$\comp_{(s,v)}$ is the amount of computation resource required to transcode a segment $(s,v)$, and $\IdleComp^{(T)}_r$ is the aggregated idle computation resource from the CDN region $r$.  According to \cite{huang2011cloudstream}, it takes different CPU times to generate different segments in the same video. In our design, we use the average CPU time spent on generating historical segments of a particular version and size, to estimate the computation resource required to transcode any segment with that version and size. 


The optimization, which is a 0-1 knapsack problem, is to select a set of segments that are most important in the next time slot $T$. We design the following algorithm to solve this problem: (1) We collect the information for prediction in a centralized manner, \emph{e.g.}, users (\emph{resp.} backend servers) report which segments they are downloading (\emph{resp.} the CPU load information) to a centralized server, which will carry out the prediction; (2) Based on the prediction, we rank the requested segments in descending order of $e_{(s,v)}^{(T)}/\comp_{(s,v)}$; (3) We iteratively select segments from the ranked list to transcode, and update computation resource consumption, until the available idle computation resource is used up.
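Steps (2)--(3) of this greedy procedure can be sketched as below, where `segments` maps each `(s, v)` pair to its `(importance, cpu_cost)` (an assumed structure):

```python
def select_segments(segments, total_idle_cpu):
    """Greedy 0-1 knapsack heuristic: rank requested segments by
    importance per unit of CPU, e/C, and pick greedily until the
    aggregate idle computation resource is used up."""
    chosen = []
    ranked = sorted(segments.items(),
                    key=lambda kv: kv[1][0] / kv[1][1],
                    reverse=True)
    for seg, (e, c) in ranked:
        if c <= total_idle_cpu:
            total_idle_cpu -= c
            chosen.append(seg)
    return chosen
```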



\subsubsection{Scheduling Transcoding Tasks across Regions}

After the tasks are selected, they are to be scheduled to different regions where backend servers can provide the computation resource. Without loss of generality, we let $\Assign^{(T)}_{(s,v),r}=1$ (resp. $0$) if region $r$ will (resp. will not) transcode segment $(s,v)$ --- the segment will be replicated from this region, which originally stores the transcoded version, to other regions where users request it.

According to our measurement studies in Sec.~\ref{sec:measure}, heterogeneous preferences of video versions exist at different regions, due to the different download speeds from the servers at different regions. As a result, it is promising to strategically assign transcoding tasks of different segments to backend servers at different CDN regions for a minimized replication cost.

We use $\SegRepCost_{(s,v),r}$ to denote the overall replication cost if segment $(s,v)$ is transcoded in region $r$. It can be calculated as follows:
$$
	\SegRepCost_{(s,v),r} = \sum_{r' \ne r, \ReqFromRegion^{(T)}_{(s,v),r'} > \beta} \RegionRepCost_{r,r'}(s,v),
$$
where $\ReqFromRegion^{(T)}_{(s,v),r'}$ is the number of requests for segment $(s,v)$ to be served by a region $r'$, and $\RegionRepCost_{r,r'}(s,v)$ represents the replication cost when segment $(s,v)$ is replicated from region $r$ to region $r'$, which depends on the size of the segment and the bandwidth between CDN regions $r$ and $r'$. 
\BeginRevision
Important factors that affect the replication cost are as follows: 
(1) The pricing strategy for data transmission between different peering points, \emph{e.g.}, Internet service providers may charge differently for traffic across different regions;
(2) The load of the peering servers, \emph{e.g.}, a video service provider may want to reserve bandwidth on the heavy-loaded servers to directly serve users instead of replicating segments. 
\EndRevision
$\beta$ is a threshold of the number of requesting users from a region to trigger a replication. 
$\ReqFromRegion^{(T)}_{(s,v),r'}$ can be derived from the optimization in (\ref{eq:redirect}), which determines the redirection of users.
The rationale of this definition is that, in our design, a transcoded segment can be replicated from where it is transcoded to other regions where it is substantially requested ({\em i.e.}, $\ReqFromRegion^{(T)}_{(s,v),r'} > \beta$) --- a large $\SegRepCost_{(s,v),r}$ indicates a large replication cost between CDN regions if segment $(s,v)$ is transcoded by region $r$. 
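The cost computation can be sketched under assumed data shapes (`requests` maps regions to predicted request counts $\ReqFromRegion$, and `z` maps region pairs to costs $\RegionRepCost$):

```python
def replication_cost(src, regions, requests, z, beta):
    """F_{(s,v),src}: sum the inter-region costs Z_{src,r'} over every
    other region r' whose predicted request count exceeds the
    replication threshold beta."""
    return sum(z[(src, r2)] for r2 in regions
               if r2 != src and requests.get(r2, 0) > beta)
```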

Let $\TranscodeSet^{(T)}$ denote the set of segments to be transcoded in time slot $T$, calculated according to the segment selection procedure above. The task assignment problem is then formulated as follows:
\begin{equation}
	\min_{\Assign^{(T)}} \sum_{(s,v) \in \TranscodeSet^{(T)}} \sum_{r \in \CDNRegions} \Assign^{(T)}_{(s,v),r} \SegRepCost_{(s,v), r},
\end{equation}
subject to:
\[
\begin{split}
	\Assign^{(T)}_{(s,v),r} & \in \{0,1\}, \forall (s,v) \in \TranscodeSet^{(T)}, r \in \CDNRegions \\
	\sum_{r \in \CDNRegions} \Assign^{(T)}_{(s,v),r} &= 1, \forall (s,v) \in \TranscodeSet^{(T)}\\
	\sum_{(s,v) \in \TranscodeSet^{(T)}} \Assign^{(T)}_{(s,v),r} \comp_{(s,v)} & \le \IdleComp^{(T)}_r, \forall r \in \CDNRegions.
\end{split}
\]
The rationale of the optimization is to schedule the segment transcoding tasks to different CDN regions, so that the overall replication cost can be minimized. In our implementation, we solve the problem with the following algorithm: (1) We first rank all the pairs of CDN regions and segments ({\em i.e.}, $\left|\CDNRegions\right| \times \left|\TranscodeSet^{(T)}\right|$ elements) in ascending order of $\SegRepCost_{(s,v), r}$; (2) we pick the region-segment pair, $r-(s,v)$, with the smallest $\SegRepCost_{(s,v), r}$ and assign the transcoding task of segment $(s,v)$ to region $r$; (3) we update the available computation resource of the selected region, and iteratively perform (2) until the computation resource in all the regions is used up. This algorithm can be implemented in a centralized manner, where a central server is deployed to collect the request information from streaming servers and make the decisions. 
The time complexity of the algorithm is dominated by sorting the pairs of CDN regions and segments, {\em i.e.}, $O(N \log N)$, where $N = |\CDNRegions| |\TranscodeSet^{(T)}|$. Since we run the algorithm in each time slot, the number of requested segments is limited, and the number of CDN regions is a constant.
Such implementation has been well applied in peer-assisted on-demand streaming systems \cite{huang2008challenges}, where a central server tracks the storage status of peers to help them find each other.

\begin{algorithm}[t]
	\caption{Transcoding task schedule.}\label{alg:transcoding-task-schedule}
	\begin{algorithmic}[1]
		\Procedure{Transcoding Task Schedule}{} 
			\State Let $M_{r} = \IdleComp^{(T)}_r, r \in \CDNRegions$
			\State Let $\Assign^{(T)}_{(s,v),r} = 0, \forall (s,v) \in \TranscodeSet^{(T)}, r \in \CDNRegions$
			\State Rank CDN region and segment pairs ($r-(s,v)$) in ascending order of $\SegRepCost_{(s,v),r}$
			\For{$\forall r-(s,v)$ in the ranked list}
				\If{$\comp_{(s,v)} \le M_r$}
					\State Let $M_r = M_r - \comp_{(s,v)}$
					\State Let $\Assign^{(T)}_{(s,v),r} = 1$
					\State Remove pairs with $(s,v)$ from the ranked list
				\EndIf
			\EndFor
		\EndProcedure
	\end{algorithmic}
\end{algorithm}
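Algorithm~\ref{alg:transcoding-task-schedule} can be rendered as a runnable sketch under assumed data shapes (`rep_cost` maps `(region, segment)` pairs to $\SegRepCost$, `cpu_cost` maps segments to $\comp$, and `idle_cpu` maps regions to $\IdleComp$):

```python
def schedule_tasks(tasks, idle_cpu, rep_cost, cpu_cost):
    """Assign each segment in E^(T) to the feasible region with the
    smallest replication cost: walk the (region, segment) pairs in
    ascending cost order, tracking each region's remaining CPU."""
    remaining = dict(idle_cpu)
    assignment = {}
    for (r, seg), _ in sorted(rep_cost.items(), key=lambda kv: kv[1]):
        if seg in assignment or seg not in tasks:
            continue  # already scheduled, or not selected this slot
        if cpu_cost[seg] <= remaining[r]:
            remaining[r] -= cpu_cost[seg]
            assignment[seg] = r
    return assignment
```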

In our design, a region where the number of requests for a segment exceeds $\beta$ serves a replica of the segment; regions with fewer requests instead redirect their users, according to the users' preferences, to other regions where the segment has been transcoded or replicated.
\section{Performance Evaluation} \label{sec:evaluation}


\subsection{Experiment Setup}

We develop an event-driven simulation platform which takes users' viewing activities, and the transcoding and redirection decisions as events to drive the experiments. We set up our experiments following traces we have collected, and compare our design with a pre-transcoding baseline scheme. Details are as follows.

$\rhd$ \emph{Users}. According to models summarized from user viewing traces in BesTV, we simulate $10,000$ users, each of whom repeatedly joins different video sessions. A user is randomly associated with a region among $30$ regions, and has different download speeds from different regional CDN servers, based on the region-to-region speeds from our collected TCP traces, to be detailed later. After a user joins the system, she selects a video to watch according to the video popularity distribution, and the probability for a video to be selected is proportional to its popularity, counted based on the number of views in our traces. The indices of the first segments users start to view also follow a Zipf distribution, with a shape parameter $1.29$. When playing a video, a user plays (downloads) sequentially the segments, and may jump to a random segment ahead with a probability of $0.05$. The rationale is that in a video session, how users request segments follows a pattern that users generally play forward and issue a few seeks, most of which are forward seeks \cite{zheng2005distributed}. Before leaving a video session, the number of segments a user downloads follows a Zipf distribution with a shape parameter $1.12$.

\BeginRevision
$\rhd$ \emph{Video Provider}. 
New videos are published every $10,000$ time slots, as professional videos (e.g., movies and TV shows in BesTV) are generally published regularly on a daily basis. 
The popularity of videos follows a Zipf distribution with a shape parameter $1.76$. In our experiments, the default number of segments in each video is $200$, and the default number of versions is $4$, if not specified otherwise. The bitrates of the versions are uniformly distributed between the lowest and the highest user download speeds. Each segment renders a $10$-second playback, and the computation resource required to transcode a segment is randomly distributed within $[5,10]$ CPU seconds \cite{liang2012cloud}.
\EndRevision

$\rhd$ \emph{CDN Regions}. We simulate $30$ regions. We set a region-to-user average download speed according to the download speeds of $10,000$ IP prefixes randomly selected from the CDN traces, \emph{i.e.}, the download speed of an IP prefix is the average download speed of users with the same prefix over a one-week time span, varying from $70$ Kbps to $2.2$ Mbps. In our experiments, the aggregated CDN bandwidth is sufficient for all the users to stream at their ideal bitrates, and we randomly divide the bandwidth allocation across the regions. We assign the replication cost between each pair of regions randomly within $[0,1]$, and set the replication parameter $\beta=10$. A region's idle computation resource varies over time with a coefficient of variation ($CV$) randomly selected in $[0, 0.5]$, and the average amount of computation resource will be specified in the experiments.

\emph{Baseline Algorithm.} We compare our design with a general pre-transcoding and load-based redirection strategy: (1) For segment transcoding, all versions of the videos are transcoded before publication, and each transcoded segment is replicated to $3$ initial regions randomly selected ({\em i.e.}, the pre-transcoding scheme); (2) For user redirection, when requesting a segment, a user is redirected to a region which currently has the highest available upload bandwidth (\emph{i.e.}, the load-based redirection scheme).



\subsection{Experiment Results}

\subsubsection{Saving of Computation Resource}


In this experiment, we assume that the CDN can provide unlimited computation resource when transcoding is performed, such that we can satisfy all the segment requests of users. In Fig.~\ref{fig:comp-saved-overtime-vs-ver}, the curves represent the computation resource saved by our design under different numbers of versions, compared with the pre-transcoding scheme. In particular, each sample is the fraction of computation resource saved, over the computation resource required to transcode videos to all the versions, up to a given simulation round. We observe that as the number of versions increases, the computation resource saved by online transcoding increases, \emph{e.g.}, over $90\%$ of the computation resource can be saved when the number of versions is over $8$. The reason is that transcoding segments with no viewer into many versions costs a large amount of computation resource. We also observe that the amount of computation resource saving decreases in the first several rounds when new videos are published, and becomes stable afterwards. The reason is that in our design, computation resource is mainly used to transcode the most popular segments after the videos are published, and users who watch videos later largely request segments that have already been transcoded.

Then, we investigate the impact of the number of videos published each time and the number of segments in each video. In this experiment, we fix the number of versions to $4$. In Fig.~\ref{fig:comp-saved-overtime-vs-vseg}, each bar represents the computation resource saving when a particular number of videos are published each time. We observe that publishing a large number of videos per time slot generally leads to larger computation resource saving. The reason is that the popularity distribution of the videos is heavy-tailed, and more videos with no viewer cause more waste of computation resource with the pre-transcoding scheme.

\begin{figure}[t]
	\begin{minipage}[t]{.48\linewidth}
		\centering	\includegraphics[width=\linewidth]{comp-saved-overtime-vs-ver-iter.eps}
		\caption{Computation resource saved by online transcoding under different numbers of versions.}
		\label{fig:comp-saved-overtime-vs-ver}
	\end{minipage}
	\hfill
	\begin{minipage}[t]{.48\linewidth}
		\centering
			\includegraphics[width=\linewidth]{comp-saved-overtime-vs-vseg.eps}
		\caption{Computation resource saved by online transcoding under different numbers of videos and segments.}
		\label{fig:comp-saved-overtime-vs-vseg}
	\end{minipage}
\end{figure}


\subsubsection{Improvement of Quality of Experience}

Taking advantage of online transcoding, users can be redirected to their ideal regions to achieve improved download speeds, and receive segments with bitrates that best fit their download speeds. Next, we evaluate the improvement of the overall quality of experience.

Online transcoding allows users to be redirected to servers regardless of which segments those servers currently store, and improves the download speeds of users. We compare our redirection strategy with the load-based redirection scheme. In Fig.~\ref{fig:redirection-cmp}, the curves plot the user download speeds achieved by different strategies and the difference between them, versus the rank of users. We observe that our strategy can effectively schedule users to their ideal regions, with an average $181$ Kbps improvement of download bandwidth, as compared to the load-based redirection scheme. The reason is that the load-based redirection scheme only considers segment replication and available bandwidth of the regions, while our strategy allows users to choose their ideal regions.

With the improvement of download speeds, the startup delay for users to watch videos can be reduced. In this experiment, we assume users have to fill a buffer of $256$KB before starting playing a video. In Fig.~\ref{fig:startup-delay-cmp}, we plot the CDFs of startup delays achieved by our design and the load-based redirection strategy. Our design reduces the startup delay by over $2.5$ seconds on average against the load-based strategy. In particular, over $15\%$ more users in the load-based strategy have to experience a startup delay larger than $10$ seconds, than that in our design. 

The quality of experience in adaptive video streaming also depends on the bitrates of the segments users receive, which determine the image quality of the videos. We compare the best versions users receive under the different redirection strategies. Again, we fix the number of versions at $4$. As illustrated in Fig.~\ref{fig:redirection-ver-cmp}, each curve represents the version downloaded versus the user rank. We observe that as many as $44.8\%$ of the users receive a higher-bitrate version with our strategy than with the load-based redirection scheme. In particular, over $4.5\times$ as many users receive the highest-bitrate version with our redirection strategy as with the load-based redirection scheme. %These observations indicate that user redirection powered by online transcoding can improve the streaming quality for users.

\begin{figure*}[t]
	\begin{minipage}[t]{.32\linewidth}
		\centering
			\includegraphics[width=\linewidth]{redirection-cmp.eps}
		\caption{Comparison of download speeds achieved at users under different redirection strategies.}
		\label{fig:redirection-cmp}
	\end{minipage}
	\hfill
	\begin{minipage}[t]{.32\linewidth}
		\centering
			\includegraphics[width=\linewidth]{startup-delay.eps}
		\caption{Comparison of startup delays achieved by different redirection strategies.}
		\label{fig:startup-delay-cmp}
	\end{minipage}
	\hfill
	\begin{minipage}[t]{.32\linewidth}
		\centering
			\includegraphics[width=\linewidth]{redirection-ver-cmp.eps}
		\caption{Comparison of best versions received at users under different redirection strategies.}
		\label{fig:redirection-ver-cmp}
	\end{minipage}
\end{figure*}



\subsubsection{Fitness of the Transcoded Segments}

In the following experiment, we evaluate the effectiveness of our transcoding task scheduling, i.e., how well the transcoded segments match the users' requests. We compare our transcoding scheduling scheme with a FIFO-based scheme, in which the transcoding tasks are performed in the order of request arrivals. For a fair comparison, we assume that in both schemes users have already been redirected to regions according to our redirection strategy. By varying the average computation resource in the regions, we evaluate the fitness of the transcoded segments. In Fig.~\ref{fig:trans-bit-cmp}, each sample represents the average difference between the bitrates of the requested and received versions over all users, versus the average computation resource of a region, measured as the average number of segments the region can generate. Note that the real computation resource may differ across the regions, as transcoding different versions takes different amounts of computation resource. A larger difference indicates a larger streaming quality degradation, as users have to receive a replacement segment with a much smaller bitrate. We observe that the average bitrate difference is much smaller with our design. In particular, our strategy reduces the number of users who have to receive a segment of a mismatched version by over $42.2\%$.
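For clarity, the average bitrate difference plotted in Fig.~\ref{fig:trans-bit-cmp} can be written as follows, with $q(u)$ and $v(u)$ introduced here only as shorthand for the versions requested and actually received by user $u$:
\begin{equation*}
	\Delta = \frac{1}{|\Users|} \sum_{u \in \Users} \left( \Bitrate_{q(u)} - \Bitrate_{v(u)} \right),
\end{equation*}
where $\Bitrate_{\ell}$ is the bitrate of version $\ell$. $\Delta = 0$ when every user receives exactly the requested version, and $\Delta$ grows as more users are served replacement segments of lower bitrates.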


\subsubsection{Version Mismatch at Users over Time}

To avoid users waiting for a segment that cannot be transcoded by the system on time, our design allows a lower-version replacement segment to be sent to a user. We study the number of mismatched responses served to users after the publication of a set of $50$ videos. The two curves in Fig.~\ref{fig:mismatch-overtime} represent the number of mismatched responses served to users over time. With our design, the number of mismatched versions served when the requested segments cannot be transcoded on time is $40\%$ smaller. Moreover, the number of mismatched segments decreases over time, as the transcoded segments are cached in our design.

\begin{figure}[t]
	\centering
		\includegraphics[width=0.9\linewidth]{mismatch-overtime.eps}
	\caption{Mismatch of versions of segments at users over time.}
	\label{fig:mismatch-overtime}
\end{figure}





\subsubsection{Replication Cost}


Our design utilizes the regional preferences of versions when assigning transcoding tasks. Next, we evaluate the replication cost under different numbers of video versions. In Fig.~\ref{fig:trans-repcost-cmp}, each curve represents the replication cost versus the number of versions, for a particular number of segments in each video. We observe that a larger number of versions leads to a smaller replication cost. The reason is that when more versions are available, our design can effectively let regions transcode the heterogeneous versions that best meet their users' demand. We also observe that the number of segments has little impact on the replication cost, implying that adaptive scheduling can be performed over short time slots without incurring an increased replication cost. As more and more versions are used in today's adaptive streaming systems, our design reduces not only the waste of computation resource for transcoding, but also the replication cost of the transcoded segments.

\begin{figure}[t]
	\begin{minipage}[t]{.48\linewidth}
		\centering
			\includegraphics[width=\linewidth]{transcode-cmp-mismatchbit.eps}
		\caption{Comparison of bitrate mismatch under different transcoding schedules.}
		\label{fig:trans-bit-cmp}
	\end{minipage}
	\hfill
	\begin{minipage}[t]{.48\linewidth}
		\centering
			\includegraphics[width=\linewidth]{replication-cost-vs-version.eps}
		\caption{Average replication cost per time slot versus the number of versions.}
		\label{fig:trans-repcost-cmp}
	\end{minipage}
\end{figure}


\subsubsection{Performance Difference Between the Optimal Solution and Our Algorithm}

Our heuristic user-redirection algorithm is based on the strategy that users greedily choose, in a distributed manner, the best regions that can serve them, and that servers serve users in a best-effort way. We study how this strategy performs compared with the optimal solution. As illustrated in Fig.~\ref{fig:npproblem}, each bar represents the performance difference, i.e., the difference between $\sum_{u \in \Users^{(T)}, r \in \CDNRegions} \USPref_{u,r} \Redirect^{(T)}_{u,r}$ achieved by our design and that achieved by the optimal solution obtained via brute-force search. The average difference is $15.2\%$, indicating that our distributed algorithm finds the best regions for most users to receive their segments.
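Specifically, the performance difference for each bar can be interpreted as the relative gap between the two objectives, with $\Phi$ and $\Phi^{\star}$ introduced here only as shorthand:
\begin{equation*}
	\delta = \frac{\Phi^{\star} - \Phi}{\Phi^{\star}}, \qquad \Phi = \sum_{u \in \Users^{(T)}, r \in \CDNRegions} \USPref_{u,r} \Redirect^{(T)}_{u,r},
\end{equation*}
where $\Phi$ is the objective value achieved by our distributed algorithm and $\Phi^{\star}$ is that of the brute-force optimal solution; the average of $\delta$ over the runs is $15.2\%$.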

\begin{figure}[t]
	\centering
		\includegraphics[width=0.7\linewidth]{npproblem.eps}
	\caption{Redirection performance difference between the optimal solution and our algorithm.}
	\label{fig:npproblem}
\end{figure}





\section{Concluding Remarks} \label{sec:conclusion}

The rapid move to HTTP-based adaptive streaming calls for dedicated strategies to improve the streaming quality for users and to reduce the operation costs for content providers. Transcoding and delivery have been studied separately for adaptive video streaming, resulting in a significant waste of computation resource on transcoding useless segments, and in suboptimal streaming quality due to the homogeneous replication of segments of different versions. Motivated by extensive measurement studies on both professional and social video services, we propose a joint online transcoding and geo-distributed delivery strategy, which opens a new design space for adaptive video streaming. We connect video transcoding and video delivery based on users' preferences of CDN regions and the regional preferences of versions to transcode. Aware of users' preferences of CDN regions, our design strategically performs user redirection so that videos can be streamed to users at high bitrates. Taking into consideration the heterogeneous importance of segments and the regional preferences of versions to transcode, our design carefully schedules the transcoding tasks so that segments are transcoded to satisfy users' demands in each region, with little need for cross-region replication. We formulate the corresponding optimization problems, analyze their hardness, and design heuristic and distributed algorithms to solve them. Our trace-driven experiments demonstrate that our design significantly lowers the computation resource consumption for segment transcoding, improves the streaming quality for users, and reduces the replication cost for video delivery.

\section*{Acknowledgment}


We thank the CDN Team and SNG Team at Tencent, and the Data Analysis Team at BesTV for providing the valuable traces used in this paper.

	\bibliographystyle{IEEEtran}
	%\bibliography{mylib}
	\input{mylib.bbl}
\end{document}


