\section{Evaluation}
We modify the OpenAirInterface~\cite{oai} LTE implementation to restructure the
downlink processing into our data plane blocks. Using base station traces from a
national cellular operator, we estimate the resource requirements of the sharing
and dedicating modes. We also demonstrate the feasibility of data plane
configuration and dynamic resource allocation.
Our testbed consists of Dell T5500 servers connected via Ethernet, each with 8
cores and 16\,GB of memory.

\textbf{Necessity of resource pooling:}
We collect real-world base station bearer traces, including the MCS, UE ID, and
base station ID, at a granularity of 1\,ms. The traces cover more than 3300
base stations and contain over 20 million records per minute.
A base station is active in a subframe if it transmits downlink data to UEs or
receives uplink data from UEs.
Figure~\ref{fig:act_dist} shows the distribution of the number of active base
stations in each 1\,ms subframe.
We profile the resource utilization of a data path in terms of CPU cores.
In the experiment, we guarantee that a fraction $P$ of each base station's
subframes can be processed by its dedicated resources, with the remaining
subframes processed by the shared resource pool.
We take the maximum demand on the resource pool over the whole trace as the
shared resource requirement. Varying $P$, we plot the shared and dedicated
resource utilization in Figure~\ref{fig:res_pool}. When $P=0$, the shared
resource requirement is 210 cores: the processing of all base stations can in
fact be handled by 210 cores. When $P=1$, the dedicated resource requirement is
5972 cores: if each base station is provisioned for its peak-time workload,
5972 cores are needed in total. In between, the more resources are dedicated,
the fewer shared resources are needed; but dedicated resources are used far
less efficiently than shared ones.
%There is a potential efficiency improvement of about 30X on resource utilization.
Pooling thus offers a potential 30$\times$ reduction in resources compared with
peak allocation. As shown in Figure~\ref{fig:act_dist}, in our traces only
12\% (400) of the base stations have data to transmit in more than 99\% of
subframes. Therefore, $P=0$ is optimal in terms of total resources used.
%This may not be the case for other workloads.  
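To make the computation concrete, the following sketch estimates the dedicated and shared core requirements from a per-subframe demand matrix. It uses synthetic Poisson demand in place of the operator traces, and the quantile-based split is our reading of the fraction-$P$ guarantee; dimensions and the demand model are illustrative assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-subframe core demand for a handful of base stations
# (the real traces span >3300 stations; these shapes are illustrative).
n_bs, n_subframes = 50, 10_000
demand = rng.poisson(lam=0.5, size=(n_bs, n_subframes)).astype(float)

def resource_split(demand, p):
    """Dedicated cores cover the p-quantile of each station's own demand;
    the shared pool must absorb the worst-case total overflow."""
    dedicated = np.quantile(demand, p, axis=1)           # per-BS dedicated cores
    overflow = np.clip(demand - dedicated[:, None], 0, None)
    shared = overflow.sum(axis=0).max()                  # peak pooled demand
    return dedicated.sum(), shared

for p in (0.0, 0.5, 0.9, 1.0):
    d, s = resource_split(demand, p)
    print(f"P={p:.1f}: dedicated={d:.0f} cores, shared={s:.0f} cores")
```

At $P=1$ the pool is empty but the dedicated total is the sum of per-station peaks; at $P=0$ the pool only needs the peak of the *aggregate* demand, which is why pooling wins when stations are rarely active simultaneously.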

\textbf{Control speed:} We run a downlink processing pipeline consisting of
turbo encoding, scrambling, and OFDM modulation. The scheduler dynamically
changes the modulation scheme (e.g., QPSK, 64QAM), and we measure the speed of
this state change. We place the scheduler at different locations and show the
results in Figure~\ref{fig:configure}. If the scheduler and the processing
pipeline run in different threads of the same process, the state change
overhead is negligible, about 0.01\,$\mu$s. If they run on the same server but
in different processes, the change goes through inter-process communication,
which takes about 5\,$\mu$s. If they run on different servers, network delay is
introduced, raising the cost to about 23\,$\mu$s.
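The first two placements can be reproduced with a minimal Python sketch: an in-process shared-state write versus a round trip over an OS pipe. This models only the same-process and same-server cases (the cross-server case requires a network path), and absolute numbers depend on the host; the benchmark structure is ours, not the paper's harness.

```python
import time
import multiprocessing as mp

def bench_thread(n=1000):
    """Scheduler and pipeline share an address space:
    a state change is just a memory write."""
    state = {"mcs": "QPSK"}
    t0 = time.perf_counter()
    for _ in range(n):
        state["mcs"] = "64QAM"          # in-process state change
    return (time.perf_counter() - t0) / n

def _pipeline(conn):
    # Stand-in for the processing pipeline: acknowledge each state change.
    while True:
        msg = conn.recv()
        if msg == "stop":
            break
        conn.send("ack")

def bench_process(n=1000):
    """Scheduler and pipeline in separate processes:
    the state change crosses an IPC boundary."""
    parent, child = mp.Pipe()
    proc = mp.Process(target=_pipeline, args=(child,))
    proc.start()
    t0 = time.perf_counter()
    for _ in range(n):
        parent.send("64QAM")
        parent.recv()                   # wait for acknowledgement
    dt = (time.perf_counter() - t0) / n
    parent.send("stop")
    proc.join()
    return dt

if __name__ == "__main__":
    print(f"same-process state change:  {bench_thread() * 1e6:.3f} us")
    print(f"inter-process state change: {bench_process() * 1e6:.3f} us")
```

The ordering matches the paper's measurements: the shared-memory write is orders of magnitude cheaper than the pipe round trip.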

\textbf{Dynamic resource allocation validation:}
%We then demonstrate the feasibility of dynamic resource allocation. 
We again run the downlink processing of a base station
(Figure~\ref{fig:res_alloc}). Initially, the base station sends data to 4 UEs;
in each subframe (1\,ms), the total processing time for all 4 UEs' data is
about 800\,$\mu$s, so the deadline is met. After 1 frame (10\,ms), another 2
UEs join the RAN; the total processing time for the 6 UEs' data in one subframe
is about 1200\,$\mu$s, which misses the deadline. The scheduler then allocates
a second core, dedicating 3 UEs' processing to each core. Since the two cores
run in parallel, the total completion time drops to about 700\,$\mu$s.
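The scheduler's reaction can be sketched as a simple deadline check over a per-core cost model. The 200\,$\mu$s per-UE cost below is our assumption, back-derived from 4 UEs taking about 800\,$\mu$s; the real per-UE cost varies with MCS and payload size.

```python
# Deadline-driven core allocation sketch (assumed per-UE cost of 200 us,
# inferred from 4 UEs taking ~800 us in a 1 ms subframe).
SUBFRAME_DEADLINE_US = 1000
PER_UE_COST_US = 200

def subframe_time(n_ues, n_cores):
    """UEs are split evenly across cores; completion time is the busiest core."""
    per_core = -(-n_ues // n_cores)      # ceiling division
    return per_core * PER_UE_COST_US

def cores_needed(n_ues):
    """Grow the allocation one core at a time until the deadline is met."""
    cores = 1
    while subframe_time(n_ues, cores) > SUBFRAME_DEADLINE_US:
        cores += 1
    return cores

print(cores_needed(4))   # → 1: 4 UEs take 800 us on one core, deadline met
print(cores_needed(6))   # → 2: 1200 us on one core misses, so a 2nd core is added
```

With 6 UEs on 2 cores the model gives 3 UEs (600\,$\mu$s) per core; the measured 700\,$\mu$s is slightly higher, consistent with per-UE costs not being perfectly uniform.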



