\section{PRAN Planes}
\label{sec:design}
In PRAN, the radio plane enables radio resource sharing among operators;
the control plane of L1/L2 processing tasks is decoupled from the data plane
to achieve flexible control;
and the management plane assigns computational resources
to base station processing so as to provide performance guarantees.
\subsection{Radio Plane}
Our radio plane ensures isolation of radio resources among different operators 
by applying the technique in RadioVisor~\cite{RadioVisor14}, and it enables 
each operator to flexibly program its data plane. In particular, each operator 
controls its own match-action table at the radio slicing tier. The radio slicing 
tier performs iFFT and demaps the I and Q samples of resource blocks to their 
relevant operators. An operator can match on an RRH and one or more resource blocks, 
and direct the I and Q samples to a specific server for further processing.
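As an illustration, the per-operator match-action table at the radio slicing tier can be sketched as follows. The class and method names (\texttt{RadioSlicingTier}, \texttt{install\_rule}, \texttt{demap}) are hypothetical, not PRAN's actual interface; only the match-on-(RRH, resource block), direct-to-server behavior follows the text.

```python
# Hypothetical sketch of the radio-slicing tier: each operator owns a
# match-action table keyed by (RRH id, resource-block index), whose action
# names the server that should receive the corresponding I/Q samples.

class RadioSlicingTier:
    def __init__(self):
        # one match-action table per operator
        self.tables = {}  # operator -> {(rrh, rb): server}

    def install_rule(self, operator, rrh, rb, server):
        self.tables.setdefault(operator, {})[(rrh, rb)] = server

    def demap(self, operator, rrh, rb, iq_samples):
        """Return (server, iq_samples) for the matching rule, or None."""
        server = self.tables.get(operator, {}).get((rrh, rb))
        return (server, iq_samples) if server is not None else None
```

An operator would install one rule per resource block it wants steered, and the slicing tier consults only that operator's table, preserving isolation.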


\subsection{Data Plane}
\label{sec:dataplane}

\begin{figure}[htb]
\centering
\includegraphics[width=0.5\textwidth]{./figures/data_path.pdf}
\vspace*{-0.25in}
\caption{Data Plane and Control Plane (PHY)}
\label{fig:datap-downlink}
\end{figure}
The LTE link layer has three sublayers: packet data convergence protocol (PDCP), 
radio link control (RLC) and media access control (MAC).
When a data stream traverses L1/L2, it is transformed multiple times. The data
streams are segmented into fixed-size data blocks, called transport blocks in
the MAC and code blocks in the PHY.
In the downlink, 
an IP packet undergoes header compression in PDCP, segmentation in RLC, and coding, scrambling and
modulation in the PHY; in the uplink, the transformations are reversed. 
PRAN attaches metadata to each block. 
The metadata records properties of the block (e.g. UE ID and subframe
number), and these fields are used to select a specific data path. 
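A minimal sketch of this metadata tagging is shown below; the field names and the \texttt{DataBlock}/\texttt{tag} helpers are illustrative assumptions, not PRAN's actual data structures.

```python
# Illustrative sketch (names are assumptions, not PRAN's API) of the metadata
# PRAN attaches to each data block; downstream decision blocks match on these
# fields to pick a data path.

from dataclasses import dataclass, field

@dataclass
class DataBlock:
    payload: bytes
    meta: dict = field(default_factory=dict)  # e.g. UE ID, subframe number

def tag(block, ue_id, subframe):
    # properties attached at ingress; used later for data-path selection
    block.meta["ue_id"] = ue_id
    block.meta["subframe"] = subframe
    return block
```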

A base station's data plane is abstracted as a directed graph
composed of \textbf{decision blocks} and \textbf{processing blocks}. Note that,
unlike OpenRadio~\cite{openradio}, the graph can be cyclic. For example, if
successive interference cancellation is used, there will be a cycle in the
graph. Suppose the received I and Q samples contain two transmissions. Once
one stream is decoded, its effect is subtracted from the original I and Q
samples, and the resulting I and Q samples (together with the decoded bits) are
routed back to the start of the decoding pipeline. The cycle traversal terminates
when all the component data streams are processed. 
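The cyclic traversal above can be rendered as a bounded loop. In this sketch, \texttt{decode\_strongest} is a stand-in for the real decoding pipeline, and all names are illustrative.

```python
# Toy rendition of the successive-interference-cancellation cycle: each pass
# decodes one component stream and subtracts its contribution from the
# received samples; the loop ends when all streams are processed.

def sic_decode(samples, num_streams, decode_strongest):
    """decode_strongest(samples) -> (decoded_bits, contribution)."""
    decoded = []
    residual = samples
    for _ in range(num_streams):          # bounded cycle traversal
        bits, contribution = decode_strongest(residual)
        decoded.append(bits)
        # subtract the decoded stream's effect, then loop back to the
        # start of the decoding pipeline
        residual = [r - c for r, c in zip(residual, contribution)]
    return decoded, residual
```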

\textbf{Processing blocks:} A processing block is usually single-in-single-out
with data streams traversing it, and it implements a simple functionality in
L1/L2. A processing block can also attach properties to the data blocks it
processes. A processing block may need configuration from the control plane,
and it can also write to the base station information base (BIB) in the control
plane (more on the BIB in the control plane section). 

\textbf{Decision blocks:} A decision block usually has one or more inputs and
multiple outputs to different processing blocks. 
A decision block has a table of match-action rules. The match field of a rule is
a set of properties of the input data blocks, and the action field is one or more
of the output ports. Some ports connect to the upper or lower layer, some connect
to processing blocks, and some are NULL ports (data blocks routed there are
dropped, e.g. on a missed deadline). The match field can use the metadata attached
to data blocks. The decision block runs a pipeline of reading metadata, matching
metadata, and taking the action.
A decision block is dumb: it only accepts configurations from the scheduler in the
control plane. 
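The read-match-act pipeline of a decision block can be sketched as follows. \texttt{NULL\_PORT}, \texttt{configure} and the rule representation are assumptions made for illustration; only the match-action semantics follow the text.

```python
# Minimal sketch of a decision block: rules are installed only by the
# control-plane scheduler; an unmatched block falls through to the NULL port
# and is dropped (e.g. missed deadline).

NULL_PORT = None

class DecisionBlock:
    def __init__(self):
        self.rules = []  # list of (match_fields, port), scheduler-installed

    def configure(self, match_fields, port):
        # match_fields: dict of metadata key -> required value
        self.rules.append((match_fields, port))

    def process(self, block_meta):
        # pipeline: read metadata, match it against the rules, take the action
        for match_fields, port in self.rules:
            if all(block_meta.get(k) == v for k, v in match_fields.items()):
                return port          # forward to this output port
        return NULL_PORT             # no rule matched: drop
```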

\textbf{State caching and invalidation:} Processing and decision blocks cache
the state information read from the control plane. The state information is
tagged with its subframe number. When a processing or decision block processes a
data block with the next subframe number, the old state is invalidated and
new state is requested from, or configured by, the control plane. This simple
mechanism ensures consistent state. 
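The subframe-tagged cache can be sketched as below; \texttt{fetch\_state} stands in for the control-plane read, and the class name is an assumption.

```python
# Sketch of subframe-tagged state caching: a block keeps its last
# configuration together with the subframe number it belongs to, and
# refetches from the control plane when a data block from a newer subframe
# arrives, invalidating the old state.

class CachedState:
    def __init__(self, fetch_state):
        self.fetch_state = fetch_state   # control-plane read (placeholder)
        self.subframe = None
        self.state = None

    def get(self, block_subframe):
        if block_subframe != self.subframe:      # old state invalidated
            self.state = self.fetch_state(block_subframe)
            self.subframe = block_subframe
        return self.state
```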


Figure~\ref{fig:datap-downlink} shows an example of the control plane and the PHY part of the data plane.
The Turbo and Reed-Muller coding/decoding blocks are processing blocks that require no
configuration; the modulation/demodulation block needs 
the current modulation and coding scheme (MCS) (i.e. the
modulation and coding rate) from the control plane; and the channel estimation block
writes its result to the control plane. A transport block's metadata contains
the UE ID. The control plane decides the MCS that a UE should use and writes
the rules (UE ID as match and output port as action), so that the transport
block is switched to its current coding/decoding processing block.


\textbf{Reconfiguring the data plane:} The operator can also reconfigure the
directed graph of the data plane. For example, it can dynamically insert  
a processing block, add the associated rules in the decision block, and activate
specific ports of the connected blocks.  

\subsection{Control Plane}
The control plane of a base station consists of a scheduler and a base station
information base (BIB). 

\textbf{Information base:} The BIB is used to share information between the
scheduler and the processing blocks in the data plane. 
The BIB holds three types of information: (1) cell-specific information (each base
station has multiple cells), such as the cell ID, the cell-specific scrambling
sequence and the number of antennas; (2) UE-specific information, which contains
static information such as the UE capability and UE ID, and dynamic information
such as the flow ID, MCS, application type and transmission mode (MIMO or not);
and (3) network-wide configuration information such as the frame length, the size
of the control channel, etc. 
The union of the BIBs forms the RAN information base (RIB). 

\textbf{Lock-free shared access to the BIB:} There are two writers: the scheduler
and the channel estimation block. The scheduler writes BIB entries that are read by
multiple processing or decision blocks, and it does not initiate a
subframe's processing pipeline before it finishes writing the BIB; therefore,
no locks are needed for shared access to those entries. The channel estimation
block also writes to the BIB, but the scheduler reads those entries only after it
is notified that new channel estimation information is ready. 
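Both single-writer disciplines can be sketched with simple readiness flags. The use of \texttt{threading.Event} and all names here are our own illustration, not PRAN's implementation.

```python
# Sketch of the two lock-free disciplines: the scheduler finishes all its BIB
# writes before releasing the subframe pipeline, and the scheduler reads the
# channel-estimation entry only after being notified it is ready.

import threading

class BIB:
    def __init__(self):
        self.entries = {}
        self.subframe_ready = threading.Event()  # scheduler -> data plane
        self.ce_ready = threading.Event()        # channel est. -> scheduler

    # scheduler side
    def publish_subframe(self, configs):
        self.entries.update(configs)   # all writes complete first...
        self.subframe_ready.set()      # ...then the pipeline is released

    # channel-estimation block side
    def write_channel_estimate(self, ue_id, estimate):
        self.entries[("ce", ue_id)] = estimate
        self.ce_ready.set()            # notify the scheduler
```

Because each entry has exactly one writer and readers are gated on the corresponding notification, no reader ever observes a partially written subframe configuration.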

Each base station scheduler or the cooperative RAN scheduler has the logic to
determine a UE flow's processing pipeline and configures the blocks in the data
plane. 
As shown in Figure~\ref{fig:datap-downlink}, the scheduler decides and
configures what modulation and coding rate to use per subframe 
(1 ms) based on channel state information reported by UEs. 

In an LTE control plane, each base station scheduler basically implements the following logic:
\begin{itemize}[leftmargin=*]
\itemsep0pt \parskip0pt \parsep0pt
\item Set up the per-flow data path.
\item Receive channel condition estimates from UEs and determine their MCSs and
  transmission modes (e.g. MIMO or not). 
\item The RAN scheduler may make partial decisions on resource block (time and
frequency) allocation in order to reduce interference.  
\item The base station scheduler assigns resource blocks to UEs within the
  constraints made by the RAN scheduler.  
\item Schedule retransmissions of failed data blocks (missing acknowledgement or
  validation failure). 
\end{itemize}
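The steps above can be condensed into a per-subframe sketch. Every helper here (\texttt{estimate\_mcs}, \texttt{allocate\_rbs}) and the data layout are placeholders we invented; only the ordering of the steps follows the list.

```python
# Sketch of one subframe of scheduler logic: read channel reports, pick an
# MCS per UE, then assign resource blocks within the RAN scheduler's
# constraints. The policies are trivial placeholders.

def estimate_mcs(cqi):
    # placeholder policy: better channel -> higher-order modulation
    return "64qam" if cqi >= 10 else "qpsk"

def allocate_rbs(ues, allowed_rbs):
    # placeholder: hand out RAN-scheduler-approved resource blocks round-robin
    allowed = list(allowed_rbs)
    return {ue: allowed[i % len(allowed)] for i, ue in enumerate(ues)}

def schedule_subframe(ues, bib, allowed_rbs):
    decisions = {}
    for ue in ues:
        cqi = bib.get(("cqi", ue), 0)              # channel report from UE
        decisions[ue] = {"mcs": estimate_mcs(cqi)}  # MCS / transmission mode
    rb_map = allocate_rbs(ues, allowed_rbs)         # within RAN constraints
    for ue in ues:
        decisions[ue]["rbs"] = rb_map[ue]
    return decisions
```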

\subsection{Management Plane}
\label{sec:managementplane}
\begin{figure}
\small
\centering
{\setlength{\tabcolsep}{0.1em}
\begin{tabular}{cc}
\includegraphics[width=0.24\textwidth]{figures/handover_1.pdf} &
\includegraphics[width=0.24\textwidth]{figures/handover_2.pdf} \\
(a) Deadline Satisfied & (b) Deadline Missed \\
\includegraphics[width=0.24\textwidth]{figures/handover_3.pdf} &
\includegraphics[width=0.24\textwidth]{figures/handover_4.pdf} \\
(c) Offload to a New Core & (d) Offload to a New Server
\end{tabular}
}
\vspace*{-0.15in}
\caption{Offload Processing}
\label{fig:handover}
\end{figure}

A base station's control plane and data plane are packaged as one base-station
task. The management plane 
is in charge of allocating computational resources to these tasks. PRAN
dedicates computational 
resources to each base-station task adaptively and periodically, and it also reserves
a shared resource pool. When a task receives bursty traffic whose processing requirement exceeds its current
resource allocation, the task offloads computation onto the shared pool of
currently idle resources. 

\textbf{Predicting the resources needed per subframe:} L1/L2 processing has the following 
properties, which enable precise resource allocation.
\begin{itemize}[leftmargin=*]
\itemsep0pt \parskip0pt \parsep0pt
\item The data processing in L1/L2 is CPU bound: the number of CPU cycles that a task gets 
is the critical factor determining its processing time. So we only consider CPU core allocation.
\item The data processing has fixed computation steps (e.g. FFT, turbo coding), so 
the processing time of a data block is fixed and can be profiled.
\item Radio resource blocks are scheduled and assigned to UEs. Therefore, the scheduler 
knows the resource requirement of a subframe before the data plane processing of that subframe starts.
\end{itemize}

We can predict the resource utilization of a base station in a subframe in terms of 
CPU cores. Assume a subframe lasts time $T$; if a data path takes time $t$ to process a 
data block in one subframe, that data path's resource requirement can be profiled as 
$c=\frac{t}{T}$ CPU cores. In one subframe, if UE $i$ of a base station requires $c_i$ 
cores, then the base-station task requires at least $C=\sum_{i=1}^{n}c_i$ cores. To 
obtain the actual number of cores needed, we use a simple greedy bin packing algorithm 
that leaves sufficient slack time at each core (to absorb variations in processing time). 
For example, in Figure~\ref{fig:handover}(a), the total processing time of 5 UEs' data 
blocks is less than one subframe's length, so this base-station task can be served by one 
dedicated core; if a 6th UE joins (Figure~\ref{fig:handover}(b)) and the total processing time 
exceeds the subframe length, some UE must miss its deadline, so the 
base-station task needs at least 2 cores. 
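A first-fit sketch of this greedy packing is given below: each UE's data path costs $c_i = t_i/T$ cores, and a core is considered full once its load reaches $1 - \mathit{slack}$. The exact algorithm in PRAN may differ; this only illustrates the arithmetic.

```python
# Greedy first-fit bin packing sketch: pack per-UE processing times t_i
# (fractions of the subframe length T) onto cores, leaving `slack` headroom
# on each core to absorb processing-time variation.

def cores_needed(ue_times, T, slack=0.1):
    loads = []                           # per-core load, in CPU fractions
    for t in sorted(ue_times, reverse=True):
        c = t / T                        # this data path's core requirement
        for i, load in enumerate(loads):
            if load + c <= 1.0 - slack:  # first core with room (after slack)
                loads[i] += c
                break
        else:
            loads.append(c)              # no core fits: open a new one
    return len(loads)
```

With the slack value assumed here, five UEs each needing 0.16 of a subframe fit on one core, while a sixth pushes the task onto a second core, mirroring the transition from Figure~\ref{fig:handover}(a) to (b).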

Although the resource requirement of each base-station task is predictable, it is not 
feasible to dynamically reallocate resources for every subframe: when a base-station 
task is reallocated to another server, the state migration consumes a significant 
fraction of the subframe duration, leaving little time for data processing. 
We therefore propose to use historical data to 
reallocate resources periodically. For example, a base-station task should be given more 
resources during its daily peak hours; and if, in the past few periods, a base-station task 
has consistently exceeded its dedicated resources, it should be assigned more dedicated resources 
in the next period.

\textbf{Dynamic resource pooling:} In the period between two resource allocation adjustments, 
a base-station task has a fixed amount of dedicated resources, so traffic bursts may 
still cause missed deadlines.
We propose to reserve a shared resource pool for bursty traffic. The resource pool 
can be an idle core in each server, or a few idle servers. When a scheduler predicts 
that it would miss a deadline with its current dedicated resources, it offloads 
part of its workload to the shared resource pool. For example, the scheduler can 
start a new thread that uses the idle core in its server (Figure~\ref{fig:handover}(c)), 
or it can move data to an idle server and have that server process it (Figure~\ref{fig:handover}(d)). 
In the latter case, the data movement overhead must also be considered. Based on 
the traffic pattern and the data movement overhead, we allocate the amount of 
dedicated resources and the size of the reserved shared pool so that the total resources 
required are minimized. The detailed algorithm is omitted due to space limitations.
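The offload decision can be sketched as follows. The greedy keep-local-first policy and the additive data-movement cost are simplifications we invented for illustration; they are not the omitted algorithm.

```python
# Sketch of the offload decision: if predicted per-subframe work exceeds the
# dedicated capacity, the overflow is shipped to the shared pool. Offloading
# to a remote server (Figure (d)) pays a data-movement overhead; using an
# idle local core (Figure (c)) does not.

def plan_offload(ue_times, capacity, local_idle, move_overhead):
    """Return (kept, offloaded) processing costs in subframe fractions."""
    kept, offloaded, used = [], [], 0.0
    for t in sorted(ue_times):            # keep cheap work locally first
        if used + t <= capacity:
            kept.append(t)
            used += t
        else:
            cost = t if local_idle else t + move_overhead
            offloaded.append(cost)
    return kept, offloaded
```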



\begin{figure*}[htb]
\centering
        \begin{minipage}[htb]{0.22\textwidth}
        \centering
        \hspace{-0.045\textwidth}
        \includegraphics[width=0.99\textwidth]{./figures/act_dist.pdf}
        \myminicapii{Active Base Stations (\%) CDF}{fig:act_dist}
        \end{minipage}
        \begin{minipage}[htb]{0.29\textwidth}
        \centering
        \hspace{-0.045\textwidth}
        \includegraphics[width=0.99\textwidth]{./figures/res_pool.pdf}
        \myminicapii{Resource Sharing vs. Dedicating}{fig:res_pool}
        \end{minipage}
        \begin{minipage}[htb]{0.22\textwidth}
        \centering
        \hspace{-0.045\textwidth}
        \includegraphics[width=0.99\textwidth]{./figures/setup.pdf}
        \myminicapii{Scheduler Control}{fig:configure}
        \end{minipage}
        \begin{minipage}[htb]{0.22\textwidth}
        \centering
        \hspace{-0.045\textwidth}
        \includegraphics[width=0.99\textwidth]{./figures/res_alloc.pdf}
        \myminicapii{Resource Allocation}{fig:res_alloc}
        \end{minipage}
\end{figure*}
%\vspace{-0.2inch}

\subsection{Language and Interfaces}
We envision a set of programming tools and interfaces designed to help operators 
utilize our design~\cite{ziria, feldspar}. These tools address the concerns
listed below: 

%\begin{itemize}
%\item 
\textbf{Compiling processing blocks:} For programming processing blocks, we envision 
a compiler which can (a) target multiple backends, including processors, DSP chips, etc., and (b) 
create a performance profile for each processing block. The performance profile is then 
used by the scheduler when deciding how to schedule pieces of the data path.

%\item 
\textbf{Data path linking:} Given a set of processing blocks and decision blocks, we provide a 
linker that combines the code and estimates the resources needed to run the entire data path.

%\item 
\textbf{Common functions:} Many processing blocks are shared across multiple kinds of 
data paths, e.g., encoding and decoding functionality; these are provided by a shared 
library and are optimized for a variety of hardware platforms.

%\item 
\textbf{Domain specific languages (DSL):} 
Finally, to aid operators, we plan to provide a DSL that simplifies the creation of processing 
and decision blocks, as well as the control plane and management interfaces to controllers. 
Developers write code that derives from the processing and decision block base classes, and 
the compiler automatically generates the relevant code for control and configuration. For 
example, developers can write 
{\small $Viterbi.insert(beforeBlockID, afterBlockID)$} to insert the Viterbi processing 
block without worrying about the details of how the block is inserted.  
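A toy model of what such generated insertion code might do is sketched below; the graph representation, the \texttt{Graph} class and the argument order are all assumptions made for illustration.

```python
# Toy model of the generated insert(before, after) helper: rewire the block
# graph so the new block sits on the edge between the two named blocks.

class Graph:
    def __init__(self):
        self.edges = {}  # block id -> list of successor block ids

    def connect(self, a, b):
        self.edges.setdefault(a, []).append(b)

    def insert(self, new_block, before_id, after_id):
        # replace the edge before_id -> after_id with a path via new_block
        succ = self.edges[before_id]
        succ[succ.index(after_id)] = new_block
        self.connect(new_block, after_id)
```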


%\end{itemize}
\eat{
\begin{figure}[htb]
%\footnotesize
\setlength{\tabcolsep}{0.1em}
\tiny
\begin{tabular}{c}

\begin{tabular}{cc}
\begin{tabular}{l}
\textbf{typedef struct} TransportBlock \{ \\
\hspace{1em} Bytes[] payload; \\
\hspace{1em} Mapping meta; \\
\} TB;\\
\\
\textbf{class} ProcessingBlock \textbf{extends} Block\\
\hspace{1em} Mapping params;\\
\hspace{1em} \textbf{virtual} TB Process(TB input);\\
\hspace{1em} void WriteRIB(Obj key, Obj value);\\
\hspace{1em} Obj ReadRIB(Obj key);\\
\\
\textbf{class} DecisionBlock \textbf{extends} Block \\
\hspace{1em} Mapping rules;\\
\hspace{1em} \textbf{virtual} int Processing(TB input);\\
\\
\textbf{class} Scheduler\\
\hspace{1em} List<Block> blocks;\\
\hspace{1em} Obj ReadRIB(Obj key);\\
\hspace{1em} void WriteRIB(Obj key, Obj value);\\
\hspace{1em} void WriteBlock(Block b, Obj key, \\
\hspace{8em} Obj value);\\
\hspace{1em} void Scheduling()\\
\hspace{2em} while(true)\\
\hspace{3em} ReadRIB();\\
\hspace{3em} SchedulingAlgorithm();\\
\hspace{3em} WriteRIB();\\
\hspace{3em} WriteBlocks();\\
\hspace{3em} PushDataToDatapath();
\end{tabular} 
&
\begin{tabular}{l}
\textbf{class} CRC \textbf{extends} ProcessingBlock\\
\hspace{1em} override TB Process (TB input)\\
\hspace{2em} CRC=GenCRC(input);\\
\hspace{2em} return input.append(CRC);\\
\\
\\
\textbf{class} ChannelEstimation \textbf{extends}\\
\hspace{6em} ProcessingBlock (TB input)\\
\hspace{1em} \textbf{override} TB Process (TB input)\\
\hspace{2em} UEID = input.meta["UEID"];\\
\hspace{2em} ChannelID = ReadRIB (UEID);\\
\hspace{2em} result = CEAlgorithm (input);\\
\hspace{2em} WriteRIB (ChannelID, result)\\
\hspace{2em} return input;\\
\\
\\
\textbf{class} Modulation \textbf{extends} ProcessingBlock\\
\hspace{1em} \textbf{override} TB Process (TB input)\\
\hspace{2em} mcs = ReadRIB("MCS");\\
\hspace{2em} output = Modulate(mcs, input);\\
\hspace{2em} return output;\\
\\
\\
\textbf{class} ModDecisionBlock \\
\hspace{8em} \textbf{extends} DecisionBlock\\
\hspace{1em} \textbf{override} int Processing(TB input)\\
\hspace{2em} UEID = input.meta["UEID"];\\
\hspace{2em} return rules[UEID];\\
\end{tabular} \\
{\small (a) Abstractions} & {\small (b) Block Examples}
\end{tabular} 
\\
\begin{tabular}{l}
\textbf{def} PHYDownlink: 1ms\\
FromMAC() -> mdb:ModDecisionBlock(); \\
mdb[0] -> tb:TurboCoding() -> scr:Scrambling() -> mod:Modulation()\\
\hspace{2em} -> fft:FFT() -> toRRH(); \\
mdb[1] -> rm:ReedMullerCoding() -> scr;
\end{tabular}
\\
{\small (c) PHY Downlink Data path}

\end{tabular}
\vspace*{-0.15in}
\caption{PRAN Program}
\label{fig:dsl}
\end{figure}


We design a domain specific synchronous data flow language. This language  
(1) supports flexible programmability, (2) takes time constraints into consideration
and (3) can be integrated with accelerators (e.g. DSP chips). Figure~\ref{fig:dsl}(a)
shows the basic elements of the abstractions in the data plane (Section~\ref{sec:dataplane}).
The data in the RIB, the metadata in transport blocks and the parameters in processing blocks
are simplified as key-value stores. The interfaces for processing blocks and the scheduler
map to read/write operations on the key-value system. The scheduler repeatedly reads the RIB,
schedules resource blocks and UE data paths, and configures the data plane blocks.  

The processing block and the decision block each have a virtual function named ``process'',
which should be overridden by their subclasses so that the subclasses can implement specific functions.
A data path is constructed from these blocks. Specifically, in PRAN, a data path
can have a time constraint (1 ms in Figure~\ref{fig:dsl}).  

In Figure~\ref{fig:dsl}(b), we give examples of
data plane blocks implemented using the basic elements. A CRC block takes in a transport block,
computes a CRC, appends it to the end of the data and outputs the result; a channel estimation
block processes the incoming data, estimates the channel conditions and writes the result to the
RIB; a modulation block reads the current MCS from the RIB as the modulation parameter, then
processes and outputs the UE data. The modulation decision block reads the UE ID
of a transport block, matches it against the match-action rules and switches the block to the
corresponding successor processing block. Figure~\ref{fig:dsl}(c) describes the downlink
data path in Figure~\ref{fig:datap-downlink}.

The PRAN compiler should pass the time constraint to the scheduler, so that the scheduler can compute the resource
requirement (Section~\ref{sec:managementplane}). The PRAN program should also
allow a data path to be run in multi-threaded mode. 
}
