%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%2345678901234567890123456789012345678901234567890123456789012345678901234567890
%        1         2         3         4         5         6         7         8
\documentclass[letterpaper, 10 pt, onecolumn]{article}  % Comment this line out
                                                          % if you need a4paper
%\documentclass[a4paper, 10pt, conference]{ieeeconf}      % Use this line for a4
                                                          % paper
%\documentclass[article]{IEEEtran}
%\IEEEoverridecommandlockouts                              % This command is only
                                                          % needed if you want to
                                                          % use the \thanks command
%\overrideIEEEmargins
% See the \addtolength command later in the file to balance the column lengths
% on the last page of the document



% The following packages can be found on http://www.ctan.org
\usepackage{graphics} % for pdf, bitmapped graphics files
\usepackage{epstopdf}
\usepackage{graphicx}
\usepackage{subfigure}
\usepackage{stfloats}
\usepackage{fixltx2e}
\usepackage{wrapfig}
%\usepackage{epsfig} % for postscript graphics files
%\usepackage{mathptmx} % assumes new font selection scheme installed
%\usepackage{times} % assumes new font selection scheme installed
%\usepackage{amsmath} % assumes amsmath package installed
%\usepackage{amssymb}  % assumes amsmath package installed
\usepackage{algorithm}
\usepackage{algorithmic}

\title{\LARGE \bf
Power Cell
}

\author{Edward Wertz, Gan Quan, and Arisoa Randrianasolo \\
Department of Computer Science\\
Texas Tech University\\
Lubbock, TX, USA\\
}% <-this % stops a space


\begin{document}



\maketitle
\date{}
\thispagestyle{empty}
\pagestyle{empty}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{abstract}
The {\em PowerXCell 8i} processor is the second generation of the {\em Cell} processor line co-designed by Sony Computer Entertainment, Toshiba Corporation, and IBM (STI).  The first {\em Cell} processor was designed for multimedia computing, with the initial application being the primary processor of Sony's PlayStation 3 entertainment system.  The one drawback of the initial {\em Cell} processor design was that double-precision floating point (DPFP) operations were not optimized in hardware.  To address this, the second-generation {\em PowerXCell 8i} processor was created with hardware optimization for DPFP calculations, cementing the processor as a high-performance and scientific computing behemoth.  Both processors share the same architecture except for the advancement in DPFP. In this article we provide an overview of the {\em Cell/PowerXCell 8i} architecture, its programming paradigm, and real-world applications.
\end{abstract}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Introduction}


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


 
\section{The Cell Architecture}
The Cell processor is a heterogeneous multicore processor designed for scalability and versatility.  A block-level diagram of the high-level architecture can be seen in Figure~\ref{fig:1} and the on-die layout in Figure~\ref{fig:2}. We present the analysis of the Cell architecture in the following way.  First we explore the versatility of the processor by describing the architecture of the two types of cores on the die.  Then we explore the scalability of the architecture by examining the bus connecting all on-die elements and the I/O controllers for interconnecting and networking the processor.  The typical processor configuration is a single {\em Power Processing Element} (PPE) for running the operating system and six to eight {\em Synergistic Processing Elements} (SPEs) as specialized SIMD co-processors. The {\em Element Interconnect Bus} (EIB) connects the processors to the {\em Memory Interface Controller} (MIC) and the I/O interface controller, called FlexIO \cite{Kah}.


\begin{figure}[h]
  \begin{center}
    \includegraphics[width=.75\textwidth]{cell_block.jpg}
  \end{center}
  \caption{Block diagram of the Cell processor from \cite{Kah}}
  \label{fig:1}
\end{figure}
\begin{wrapfigure}{r}{0.6\textwidth}
  \begin{center}
    \includegraphics[width=.4\textwidth]{chip.jpg}
  \end{center}
  \caption{Cell die layout from \cite{Kah}}
  \label{fig:2}
\end{wrapfigure}
%   The analysis of the architecture The primary processor, called the Power Processor Element (PPE) is composed of of the following main units: power processor element(PPE), synergistic processor element(SPE), element interconnect bus(EIB), memory interface controller(MIC) and I/O interface. Figure~\ref{fig:1} represents the overview of the Cell processor.
\subsection{Power Processing Element}
The PPE is a standard 64-bit RISC PowerPC processor that can run legacy 32-bit and 64-bit PowerPC programs at a clock frequency of 3.2 GHz.  Both the PowerPC instruction set and the Vector/SIMD Multimedia Extensions are supported.  The primary task of the PPE is to run the operating system and the non-computationally-intensive components of applications; the computationally intensive tasks of Cell programs are sent to the SPEs. This division between cores dedicated to administrative code and cores dedicated to computationally intensive code helps reduce the power requirement of the overall Cell processor\cite{CACourse2}.

\begin{figure}[h]
  \begin{center}
    \includegraphics[width=1\textwidth]{ppe_block.jpg}
  \end{center}
  \caption{PPE block diagram from \cite{Kah}}
  \label{fig:3}
\end{figure}
\begin{figure}[h]
  \begin{center}
    \includegraphics[width=1\textwidth]{ppe_pipeline.jpg}
  \end{center}
  \caption{PPE pipeline from \cite{Kah}}
  \label{fig:4}
\end{figure}
The PPE contains 32 general registers, 64 KB of L1 cache divided evenly between instructions and data\cite{CACourse2}, and 512 KB of L2 cache\cite{Kah}. As depicted in the block diagram of Figure~\ref{fig:3}, two independent threads are supported through alternating instruction fetching and execution, simulating a dual-core processor.  The 23-stage pipeline of the PPE is depicted in Figure~\ref{fig:4}.  Both the L2 cache and the address translation cache can be manually controlled by software and the programmer, facilitating hand-tuned orchestration for real-time processing.


There are three core components of the PPE.  The instruction unit (IU) is responsible for instruction management: fetching, decoding, branching, and issuing to the other components for execution.  The fixed-point execution unit (XU) is responsible for executing load/store instructions, integer arithmetic, and other fixed-point operations.  The vector scalar unit (VSU) is responsible for all vector and floating point instructions, with some restrictions\cite{Kah}. The IU is able to fetch 4 instructions per cycle per thread and issues up to 2 instructions per cycle, in order.  To aid in branch prediction, a 4 KB by 2-bit table containing branch history is kept, along with 6 bits of global history per thread. The VSU has a 32-entry 128-bit register file per thread, and all instructions are 128-bit SIMD allowing the following combinations: 2 64-bit, 4 32-bit, 8 16-bit, and 128 1-bit operations. There are four VSU subunits dedicated to performing simple, complex, permute, and single-precision floating point operations.

 

\subsection{Synergistic Processing Element}
The 6--8 SPEs are the workhorses of the Cell processor, designed both for high data flow and for intensive computation.  Computations within an SPE are handled by the synergistic processing unit (SPU), also a RISC processor, while data input and output are handled by the memory flow controller (MFC). The MFC includes a direct memory access (DMA) controller, a memory management unit (MMU), a bus interface unit, and an atomic unit for synchronization with other SPUs and the PPE \cite{Kis}. The block diagram of an SPE is depicted in Figure~\ref{fig:5}.

\begin{figure}[h]
  \begin{center}
    \includegraphics[width=1\textwidth]{spe_block.jpg}
  \end{center}
  \caption{SPE block diagram from \cite{Kah}}
  \label{fig:5}
\end{figure}

\begin{figure}[h]
  \begin{center}
    \includegraphics[width=1\textwidth]{spe_pipeline.jpg}
  \end{center}
  \caption{SPE pipeline from \cite{Kah}}
  \label{fig:6}
\end{figure}

The SPU includes 256 KB of local store memory (LS), which contains both instructions and data with no hardware distinction. The LS is not an automated cache: data must be manually transferred to and from the SPE's LS through DMA reads and writes, handled by the MFC.  DMA transfers may occur between SPE and PPE, SPE and main memory, and SPE and SPE. Each SPE can have up to 16 outstanding DMAs, and each DMA can be up to 16 KB (1 page) in size\cite{CACourse2}, enabling the ability to continuously stream the entire 256 KB LS in and out. The SPU can transfer data to and from the LS at a rate of 16 B per cycle, and a DMA to/from the LS via the MFC and the EIB can occur at a rate of 128 B per cycle.  Through the versatility of the DMA architecture, SPEs are able to implement streamed processing by storing specialized instructions for each SPE in part of its LS while using DMA transfers between SPEs to construct a sequential pipeline, allowing data to be streamed through each SPE in 16 KB chunks in the remaining space of the LS.

Each SPU has 128 general registers that are 128-bit SIMD capable, which break down as follows: 2 64-bit integer or double-precision floating point operations, 4 32-bit integer or single-precision floating point operations, 8 16-bit integers, 16 8-bit integers, and 128 1-bit operations.  The large register count allows for deep unrolling of loops to fill execution pipelines\cite{CACourse2} and allows the compiler to reorder a large number of instructions to keep the SPU working during instruction latencies\cite{Kah}. The register file has 6 read ports and 2 write ports.  Instructions are fetched from the LS in groups of 32 4-byte instructions, aligned to 64 B boundaries for bandwidth purposes.  The SPE can issue up to two in-order instructions per cycle to seven execution units organized in two execution pipelines. One pipeline handles fixed and floating point operations and the other provides loads/stores, byte permutations, and branch operations.  The SPE pipeline is depicted in Figure~\ref{fig:6}. Fixed-point operations take 2 cycles; single-precision floating point and load instructions take 6 cycles.

The in-order execution of instructions, combined with the large register file, non-volatile LS, and DMA commands, allows the programmer and compiler to fine-tune SPE execution for the controlled timing needed by real-time processing.
 
\subsection{Element Interconnect Bus}
\begin{figure}[h]
  \begin{center}
    \includegraphics[width=1\textwidth]{eib_topology.jpg}
  \end{center}
  \caption{EIB topology from \cite{CACourse2}}
  \label{fig:7}
\end{figure}
\begin{figure}[h]
  \begin{center}
    \includegraphics[width=1\textwidth]{eib_8example.jpg}
  \end{center}
  \caption{EIB example of concurrency from \cite{CACourse2}}
  \label{fig:8}
\end{figure}

The Element Interconnect Bus connects all the major components of a Cell processor together through four unidirectional rings, two clockwise and two counterclockwise.  Each ring is capable of moving 16 bytes of data every two clock cycles because the EIB operates at half the frequency of the processor.  Each ring is able to support up to 3 simultaneous transfers, provided there are no collisions or overlap in communication.  At the center of the ring, connected to the latch of each component on the ring, is an arbiter that helps orchestrate and negotiate the data flow.

The theoretical peak throughput of the EIB rings is $4 \times 3 \times 16 / 2 = 96$ B per CPU cycle; at 3.2 GHz this is 307 GB/s \cite{CACourse2}. The maximum bandwidth, however, is limited by the rate at which the arbiter is able to interact with the components, which is 1 component per bus cycle. Each component may transmit up to 128 bytes in 16-byte increments, and components may send and receive data simultaneously.  At complete utilization, with the arbiter starting a 128-byte transfer every bus clock cycle, the maximum throughput is $128\,\mathrm{B} \times 1.6\,\mathrm{GHz} = 204.8$ GB/s\cite{CADevworks}. Between going clockwise and counterclockwise, the arbiter chooses the shorter path so that data transfers never travel more than half the length of the ring.   This ring topology was chosen over a cross-bar interconnect to save space on the die while maintaining a large bus bandwidth\cite{EIBInterview}.

\subsection{Communicating To The PPE using Interrupts, Mailboxes, and DMAs}
\begin{wrapfigure}{R}{0.5\textwidth}
  \begin{center}
    \includegraphics[width=0.5\textwidth]{single_cell.jpg}
  \end{center}
  \caption{Single Cell Processor\cite{Kah}}
  \label{fig:9}
\end{wrapfigure}
There are three ways of communicating from the SPEs to the PPE.  All SPEs have the ability to send a 32-bit hardware interrupt to the PPE, which then context switches to process the interrupt with an OS-level thread.  This method of communication is not efficient, as all SPEs share the same interrupt channel and the PPE must clear the last interrupt before a new interrupt can be sent by any other SPE\cite{NBBA}.  The second form of interprocess communication is the outbound 32-bit mailbox.  While each SPE has its own outbound mailbox, there are still inefficiencies to contend with.  Each mailbox resides on its sending SPE, and because no interrupt is generated, the PPE must periodically poll the SPEs for messages in the outbox.  Performing this polling requires a context switch to OS code and uses the EIB to perform the query\cite{NBBA}.  The third way of communicating with the PPE is to set aside specific regions in memory for each SPE to DMA messages into.  This reduces the need for a context switch to OS code and for an outbound query on the EIB from the PPE to the SPEs.


\subsection{Memory Interface Controller and Rambus XDR Memory}

The MIC is one of the components directly connected to the EIB, allowing DMA transfers to and from main memory.  There are two channels, each connecting to up to 8 (a total of 16) Rambus XDR DRAM memory modules, for a possible total of 512 MB of dedicated memory per Cell processor\cite{CADevworks}.  Each channel is clocked at 3.2 GHz and is 32 bits wide, capable of delivering 12.8 GB/s, for a total bandwidth of 25.6 GB/s\cite{Kah}.  Memory may be expanded by networking Cell processors together or adding networked storage through the I/O interfaces.  Rambus XDR is a RAM interface emphasizing per-pin bandwidth, reducing the cost of the PCB while maintaining high performance\cite{Als}.


\subsection{Networking with FlexIO and The Broadband Interface }

\begin{wrapfigure}{R}{0.6\textwidth}
  \begin{center}
    \includegraphics[width=0.55\textwidth]{cell_paired.jpg}
  \end{center}
  \caption{Dual Cell Configuration\cite{Kah}}
  \label{fig:10}
\end{wrapfigure}
The two I/O modules, called IOIF1 and IOIF0, support the Rambus FlexIO interface.  The IOIF0 module further provides the Broadband Interface (BIF) protocol, a coherent connection seamlessly extending the Element Interconnect Bus to another device, usually another Cell processor.  The IOIF0/BIF module is capable of up to 6 bytes outbound and 5 bytes inbound per cycle, or 30 GB/s out and 25 GB/s in, scalable in 5 GB/s increments. The IOIF1 module scales from 0 to 2 bytes in either direction, up to 10 GB/s in or out.  The BIF coherent connection uses 4 outgoing and 4 incoming channels to extend the EIB.  The network protocol is layered into four abstractions: the physical layer, the data link layer (packet transmission), the transport layer (packet generation and parsing), and, at the highest level, the logical layer, compatible with the EIB\cite{CBEIMI}.

\begin{wrapfigure}{R}{0.7\textwidth}
  \begin{center}
    \includegraphics[width=0.7\textwidth]{cell_quad.jpg}
  \end{center}
  \caption{Quad Cell Configuration\cite{Kah}}
  \label{fig:11}
\end{wrapfigure}
The Cell processor has three primary configurations.  The standalone processor in Figure~\ref{fig:9} uses the IOIF or FlexIO protocol with both I/O modules.  Figure~\ref{fig:10} depicts two Cell processors directly connected to each other through the BIF protocol on IOIF0, leaving IOIF1 exposed for I/O to the external network.  The maximum clustering configuration is four Cells networked through a switch connecting the four EIB rings via the BIF protocol, as shown in Figure~\ref{fig:11}.  This configuration is designed for blades of a high-performance computing cluster; in this scenario, IOIF1 is used to connect to shared memory, additional blades, and other devices\cite{Kah}\cite{CBEIMI}.

\section{Programming}
The PPE supports the following compilers: gcc, g++, gfortran, xlc, xlc++, and xlf, with OpenMP support; these compilers come in 32- and 64-bit variants. The SPE supports gcc, g++, xlc, and xlc++, available only in 32-bit variants. The Cell can therefore support standard programming languages such as C, C++, Fortran, and Ada.

Programming can be done in three ways: native programming, assisted programming, and development-tool programming. Native programming allows the user to program the hardware directly in native assembly. This option requires the most coding of the three and demands a high level of knowledge about the Cell's native components. Assisted programming manipulates the native hardware components through libraries and frameworks. The built-in libraries and frameworks available to programmers are the Accelerated Library Framework (ALF), Data Communication and Synchronization (DaCS), Basic Linear Algebra Subroutines (BLAS), and standardized SIMD math libraries. This option does not require as much coding as native programming. The last option is development-tool programming: user tool-driven programming with hardware abstraction, in which the programmer is equipped with tools designed specifically for one type of application. The performance achieved with this option may not equal that of native programming.
\subsection{Exploiting the SPE}
There are four programming strategies that can be used to exploit the SPEs.
\subsubsection{Simple Function Offload}
This strategy is similar in style to a remote procedure call. The PPE puts all the data and instructions needed into the local store of the SPE. The SPE performs the needed computation and then uses DMA to return the results to the PPE. This strategy should be used only when the SPE working set fits in the local store.
\subsubsection{Typical Function Offload}
This strategy is used when the SPE working set cannot fit in the local store. The PPE streams the initial data and the code to be applied to it into the SPE's local store. The SPE carries out the computation and streams the results back to the PPE via DMA. Streaming between the PPE and the SPE's local store continues until all the desired computations are complete.
\subsubsection{Pipelining for Complex Functions}
In this strategy each SPE is configured to act like a stage in a pipeline. The PPE loads data into the first SPE in the pipeline, and each SPE is pre-configured with the specific instructions it is to perform. The last SPE in the pipeline reports the result of the computation back to the PPE. In this strategy, neighboring SPEs can communicate local store to local store via DMA.
\subsubsection{Parallel Stages for Compute-Intensive Functions}
In this strategy, the SPEs are arranged to perform tasks in parallel. The PPE is tasked with dividing each workload into smaller parts and assigning the parts to the SPEs. The SPEs execute the computations in parallel and report their results to the PPE. The PPE combines the results and redistributes any further computation needed.
\bibliographystyle{IEEEtran}
\bibliography{IEEEabrv,refer}

\end{document}
