\section{Current Internet Architecture}
\label{sec:currentinternetarchitecture}
In this section we describe the current Internet architecture, its operation, its content delivery models, and its limitations with respect to how the Internet is used today, where content in the form of multimedia applications and file sharing is distributed on a huge scale across the network. \bigskip

\noindent The current Internet architecture focuses mainly on connecting two computers so that they can exchange data, that is, on WHO is exchanging information. WHAT information the data carries is left aside.\bigskip

\noindent Content providers need to distribute their content to clients or users, and thus they need a system to perform this task. Three models for delivering content over the Internet are currently deployed and used by content providers and content users, namely (1) the Client/Server model, (2) Peer-to-Peer (P2P), and (3) the Content Distribution Network (CDN). \bigskip

\noindent We need to understand how these different content delivery models work and what their basic structures are. In this section we define the content delivery models mentioned above and describe their architectures. We also cover some performance and efficiency issues, as well as the problems and limitations of these current architectures. Lastly, we show how the Information Centric Networking (ICN) architecture can be a possible solution to the problems that have been identified. For detailed implementation and other benefits of ICN, please refer to the Information Centric Networking (ICN) Architecture section of this report.

\subsection{Client/Server Model}

\subsubsection{Basic Structure}
The most basic architecture for content delivery over the Internet is based on the client/server model. The process takes place when the client requests content from the server via the Internet; the server in turn responds by sending the content back to the client. The client/server model involves a client computer, which belongs to the user, while the content that we see online is stored on a server or host computer.

\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.5]{./Pictures/Figure1.png}
\end{center}
\caption{An example of Internet connectivity from servers to clients: different websites and Internet Service Providers create a network of networks.}
\label{fig:1}
\end{figure}

\noindent The user or client computer gets Internet access from a local Internet Service Provider (ISP). \textit{``Internet service provider (ISP), company that provides Internet connections and services to individuals and organizations. In addition to providing access to the Internet, ISPs may also provide software packages (such as browsers), e-mail accounts, and a personal Web site or home page. ISPs can host Web sites for businesses and can also build the Web sites themselves. ISPs are all connected to each other through network access points, public network facilities on the Internet backbone.''}\footnote{\url{http://global.britannica.com/EBchecked/topic/746032/Internet-service-provider-ISP}} \bigskip

\noindent Users, operators, and others publish their content or services on the Internet and inform search engines, while the content is cached on content servers. A user requests a content item, and if the user does not know where to find it, the search engine servers help the user with a number of choices. When the user selects one choice, the Internet connects the user to the server where the content exists and delivers the content to him/her using a host-to-host connection \citep{FIA}. \bigskip

\noindent Every computer that is connected to the Internet is assigned a unique address in the form of an Internet Protocol (IP) address. IP addresses are binary numbers that are converted to and displayed in decimal notation, since computers speak only one language, namely binary. To make IP addresses identifiable and easy for users to remember, the Domain Name System (DNS) was introduced to map them to human-readable names. In the figure above, the IP address 74.125.143.147 is assigned to www.google.com, 31.13.81.81 to www.facebook.com, and 194.182.232.110 to www.youtube.com.
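The relation between the dotted-decimal notation users see and the binary form computers work with can be sketched with Python's standard \texttt{ipaddress} module, using the example address assigned to www.google.com above:

```python
import ipaddress

# A minimal sketch: the dotted-decimal IP address users see is just a
# human-friendly rendering of a single 32-bit binary number.
addr = ipaddress.IPv4Address("74.125.143.147")   # example from the text
as_int = int(addr)                   # the underlying 32-bit integer
as_bits = format(as_int, "032b")     # the binary form computers work with

print(as_int)    # 1249742739
print(as_bits)   # 01001010011111011000111110010011
```

DNS performs the reverse convenience step at a higher level: it maps a human-readable name such as www.google.com to such an address.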

\subsection{Web Clients and Servers Connection}

A common instance of the client/server model is a web server delivering content to a web client. Web communication between clients and servers happens through a web browser, into which the user enters a URL (Uniform Resource Locator), the address of the content the user wants to locate. For a server and a client to communicate with each other, a common language is needed. The commonly used language of communication is HTTP, the Hypertext Transfer Protocol. \textit{``HTTP is the protocol that web browsers and web servers use to communicate with each other over the Internet. It is an application level protocol because it sits on top of the TCP layer in the protocol stack and is used by specific applications to talk to one another.''}\footnote{\url{http://www.theshulers.com/whitepapers/internet_whitepaper/index.html\#net_infra}} \marginpar{\scriptsize Figure \ref{fig:2}: When you browse to a page, such as \url{http://www.oreilly.com/index.html}, your browser sends an HTTP request to the server \url{www.oreilly.com}. The server tries to find the desired object (in this case, \url{/index.html}) and, if successful, sends the object to the client in an HTTP response, along with the type of the object, the length of the object, and other information}

\begin{figure}[h]
\begin{center}
\includegraphics[scale=1.0]{./Pictures/Figure2.png}
\end{center}
\caption{Client/Server \citep[p. 4]{GOURLEY}}
\label{fig:2}
\end{figure}
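The request side of the exchange in figure \ref{fig:2} can be sketched by constructing the text an HTTP/1.1 client would send for \url{http://www.oreilly.com/index.html}; the request is only built here, not actually sent:

```python
# A minimal sketch of the HTTP/1.1 request a browser would send for
# http://www.oreilly.com/index.html (built as text, not transmitted).
host = "www.oreilly.com"
path = "/index.html"
request = (
    f"GET {path} HTTP/1.1\r\n"   # request line: method, path, version
    f"Host: {host}\r\n"          # mandatory header in HTTP/1.1
    "Connection: close\r\n"
    "\r\n"                       # blank line ends the header section
)
print(request)
```

The server's HTTP response has the same shape: a status line (e.g. \texttt{HTTP/1.1 200 OK}), headers describing the object's type and length, a blank line, and then the object itself.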

\noindent TCP/IP, the Transmission Control Protocol/Internet Protocol, is responsible for moving the content in the form of packets and delivering them to another computer in the network. ``The Internet itself is based on TCP/IP, a popular layered set of packet-switched network protocols spoken by computers and network devices around the world. TCP/IP hides the peculiarities and foibles of individual networks and hardware, letting computers and networks of any type talk together reliably'' \citep[p. 4]{GOURLEY}. To understand clearly how the content is identified and transported in the network, please refer to the figure below.

\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.4]{./Pictures/Figure3.png}
\end{center}
\caption{The hourglass model of Internet protocol \citep[p. 4]{HOFMANN}}
\label{fig:3}
\end{figure}
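The host-to-host delivery that TCP/IP provides can be sketched on a single machine with Python's standard \texttt{socket} module: one thread plays the server, the main thread plays the client, and TCP carries the bytes between them over the loopback interface.

```python
import socket
import threading

# A minimal sketch of host-to-host TCP delivery on the loopback interface.
def serve(srv):
    conn, _ = srv.accept()
    conn.sendall(b"hello from the server")   # the "content"
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
srv.listen(1)
threading.Thread(target=serve, args=(srv,), daemon=True).start()

cli = socket.create_connection(srv.getsockname())
data = cli.recv(1024)            # TCP delivers the bytes reliably, in order
cli.close()
srv.close()
print(data.decode())             # hello from the server
```

HTTP, described above, runs exactly this kind of TCP connection underneath, with the request and response text as the payload.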

\noindent The hourglass model of the Internet protocol stack, shown in figure \ref{fig:3}, is closely related to the Open Systems Interconnection (OSI) reference model. The hierarchy of layers begins with the physical layer at the bottom and ends with the application layer at the top. The (1) physical layer includes electrical, mechanical, and optical media such as copper wires, fiber optics, Network Interface Cards (NICs), wireless system components, and more; it is responsible for transforming bits and bytes into signals, which the medium then transmits. The (2) link layer responds to the service requests of the network layer above it and issues requests to the physical layer below; its main responsibility is to encode and decode bits into frames and transmit them to the destination. The (3) network layer is where the Internet Protocol (IP) resides, which forwards data packets through the network. The (4) transport layer is responsible for coordinating the data exchange between the endpoints. The (5) application layer provides the user with application interfaces such as the web browser and email; example protocols include HTTP, Telnet, and the Domain Name System (DNS) \citep{HOFMANN}.
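The layering just described can be illustrated with a toy encapsulation sketch; all header fields and addresses below are made up for illustration. Each layer wraps the payload handed down by the layer above with its own header, and the receiver unwraps them in reverse order.

```python
# Toy encapsulation sketch; all field values are illustrative only.
message = "GET /index.html"                      # (5) application layer (HTTP)
segment = {"src_port": 49152, "dst_port": 80,    # (4) transport layer (TCP)
           "payload": message}
packet  = {"src_ip": "192.0.2.1",                # (3) network layer (IP)
           "dst_ip": "74.125.143.147",
           "payload": segment}
frame   = {"dst_mac": "aa:bb:cc:dd:ee:ff",       # (2) link layer
           "payload": packet}
# (1) The physical layer finally turns the frame into signals on the medium.

# Unwrapping at the receiving host reverses the process:
print(frame["payload"]["payload"]["payload"])    # GET /index.html
```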

\subsubsection{Problems and Limitations}
The current Internet architecture follows an end-to-end approach. It focuses on the identification and location of end systems. This is managed by including the IP address of the destination host in packet headers, which are forwarded hop-by-hop \citep[p. 2]{BRITO}. The architecture has limitations with respect to data security, scalability, quality of service, and many other new challenges. The Internet is about 50 years old; the main purpose of creating it at the time was to let two computers talk to each other and share resources, where one needed to use a resource and the other provided access to it \citep{FIA}. Resources were only text files. Each machine has its own address that is used to communicate with the others using the Transmission Control Protocol (TCP). \bigskip

\noindent Nowadays Internet users are more interested in retrieving and distributing data in all kinds of forms, such as video, audio, photos, and documents, regardless of where it comes from. When millions of devices and users become connected to the Internet and share this data (also called content \citep{PLAGEMANN}), the delay factor grows, which is not acceptable, especially for real-time streaming or gaming. \bigskip

\noindent With Web 2.0, users are able to publish `any' content on the Internet, which makes competition widespread, while a lack of trust and security threats such as malware, spam, and phishing have become more common. Controlling this issue with the current Internet architecture is hard because the architecture is not content aware, which also means it cannot assure content persistence or availability. The Hypertext Transfer Protocol (HTTP) and the Domain Name System (DNS) are used to minimize these problems, but they are not sufficient to solve them \citep[p. 1]{KOPONEN}. \bigskip

\noindent Heavy Internet traffic and congestion are issues in the client/server model. \textit{``When a client requests processing time and data from the server, it transmits the request on the network. The request travels to the server and waits in a queue until the server is able to process it. The performance characteristics of this type of architecture degrade exponentially as the number of requests increase.''}\footnote{\url{http://global.britannica.com/EBchecked/topic/746032/Internet-service-provider-ISP}} The more popular the content, the more requests come from users, and therefore the heavier the load of Internet traffic the server must handle. This performance issue affects the efficiency of content delivery to the user, though the user knows little about it. \bigskip

\noindent Scalability is another issue: in the client/server model there is a limited pool of server resources, so the model does not scale to a larger audience or set of target users. This of course depends on the economies of scale of the company in question, because of the cost advantage of large-scale operation. For persistence of the data, patches are applied at the application level. A single point-to-point communication channel is established between client and server; handling multiple users' requests requires multiple point-to-point channels to be established, with the same content sent over each channel. The same holds for providing content authentication and secure communication: patches and other key mechanisms are applied, and establishing multiple connections to different users is not scalable. Scalability is also highly affected by the increased number of users and the amount of content within the traditional Internet distribution mechanism \citep[pp. 2-4]{BRITO}. \bigskip

\noindent Security threats in a client/server model are inevitable. Computer viruses, malware, phishing, packet sniffing, and eavesdropping are only a few examples of threats that can destroy or harm the integrity of content on the network. Current solutions to these problems, such as anti-virus software, firewalls, encryption, software updates, and patches, are applied at the application level. These solutions can help keep content secure, but they are not perfect and therefore have limitations. The following scenario is a good example of such a security problem:

\begin{quote}
\textit{``Exploiting a flaw in Washington University's FTP server, the intruder had cracked the server's security and set up shop. Hall's system - in this case, Red Hat 6.2 - shipped with the software that contained the hole. While a patch for the vulnerability was readily available on Red Hat's Web site, like many other system administrators, Hall just didn't get around to installing it.
The scenario, repeated daily at sites across the Internet, exposes a common security problem largely unknown to the general public. Although software makers routinely release "fixes" designed to plug holes and reassure worried customers, these antidotes are often ignored by administrators in charge of the affected systems - if they are aware of the problem at all.''}\footnote{\url{http://news.cnet.com/2009-1017-251407.html}}
\end{quote}

\subsubsection{ICN Solution}
The performance of delivering content to the user improves significantly when the content is moved close to the user. As described for the ICN caching system, the content can be retrieved from the nearest node of the ICN network; therefore the heavy traffic and congestion handled by the server can be reduced. Improving the performance of content delivery also increases a server's capability to cater to a large audience, so the scalability problem can be addressed as well. \bigskip
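The effect of in-network caching can be sketched with a toy model (the node names and content item are hypothetical): a request is answered by the nearest node that holds a copy, and a copy is left at each closer node on the way back, so repeated requests no longer reach the origin server.

```python
# Toy sketch of ICN-style in-network caching; node names are hypothetical.
caches = {"edge": {}, "regional": {}, "origin": {"/videos/demo": b"bytes"}}
path = ["edge", "regional", "origin"]            # nearest node first

def fetch(name):
    for hop, node in enumerate(path):
        if name in caches[node]:
            data = caches[node][name]
            for closer in path[:hop]:            # cache on the way back
                caches[closer][name] = data
            return node, data
    raise KeyError(name)

print(fetch("/videos/demo")[0])   # origin (first request travels all the way)
print(fetch("/videos/demo")[0])   # edge   (second request is served nearby)
```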

\noindent Security in ICN is handled at the network level of the Internet. Content Centric Networking (CCN), one of the implementations of ICN, proposes that \textit{``CCN is built on the notion of content-based security: protection and trust travel with the content itself, rather than being a property of the connections over which it travels. In CCN, all content is authenticated with digital signatures, and private content is protected with encryption''} \citep[p. 6]{VAN}. \bigskip
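The idea that protection travels with the content itself can be sketched with a toy stand-in. Real CCN uses public-key digital signatures; here a keyed HMAC from the Python standard library (with a hypothetical publisher key and content name) is used only to illustrate binding the name and data together so that tampering is detectable wherever the content is fetched from.

```python
import hashlib
import hmac

# Toy stand-in for CCN-style content-based security. Real CCN uses
# public-key digital signatures; a keyed HMAC is used here purely for
# illustration. The key and the content name are hypothetical.
KEY = b"publisher-secret"

def publish(name, data):
    sig = hmac.new(KEY, name.encode() + data, hashlib.sha256).hexdigest()
    return {"name": name, "data": data, "sig": sig}

def verify(obj):
    expected = hmac.new(KEY, obj["name"].encode() + obj["data"],
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, obj["sig"])

obj = publish("/videos/demo", b"some content bytes")
print(verify(obj))               # True: content is authentic
obj["data"] = b"tampered bytes"
print(verify(obj))               # False: tampering is detected
```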

\noindent By solving these problems and limitations regarding performance, scalability, and security, companies small and big can benefit by reducing the cost of building and maintaining large and expensive servers.

\subsection{Peer-to-Peer Network}
\subsubsection{Basic Structure}

Content distribution in a Peer-to-Peer (P2P) network is done by running peer-to-peer file sharing programs such as BitTorrent, Napster, LimeWire, etc. Unlike in the client/server model, where the content is accessed directly from the server, the content can be accessed through peers. The content in peer-to-peer networking is stored on each computer in the peer network, and every user in the network can act as a server or a client. To get a better picture of how the system works, please refer to the figures below.

\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.5]{./Pictures/Figure4.png}
\end{center}
\caption{Simple reference architectures for both the client/server model and the peer-to-peer model.}
\label{fig:4}
\end{figure}

\subsubsection{Problems and Limitations}
An enormous number of files and content items downloaded over peer-to-peer networks causes heavy traffic and network congestion. The more peers use the system, the more downloads and uploads take place in the network. \textit{``Extensive use of P2P file exchange causes network congestion and performance deterioration, which can ultimately lead to customer dissatisfaction and turnover.''}\footnote{\url{http://www.cisco.com/c/en/us/products/collateral/service-exchange/service-control-application-broadband/prod_white_paper0900aecd8023500d.html}} \bigskip

\noindent According to Dirk Trossen, \textit{``P2P is indeed entirely on the top of IP, therefore it has less possibilities in terms of optimizations.''} P2P therefore contributes little to the performance optimization of the network.\bigskip

\noindent Content security and trust issues are among the main problems in peer-to-peer networking. Anyone running a P2P program can be a client as well as a host server, so you are basically dealing with, and retrieving content from, anonymous users. This gives anyone the opportunity to send and receive files containing malicious malware and viruses.

\subsubsection{ICN Solution}
ICN proposes a solution to the congestion problem by implementing congestion control mechanisms: \textit{``The content is usually divided into `chunks' that can be individually requested, sent back to the requester, and cached in intermediate nodes. The sending rate of content requests can be adjusted in order to perform congestion control, implementing a receiver driven transport protocol.''}\footnote{\url{http://tools.ietf.org/html/draft-salsano-ictp-02\#section-5}} This allows a fair sharing of resources in the network. \bigskip
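The receiver-driven transport idea quoted above can be sketched as follows; the chunk store, the congestion signal, and the window policy are all illustrative. The receiver requests chunks in batches and adjusts the batch size additively on success and multiplicatively when congestion is detected, in the style of AIMD congestion control.

```python
# Toy sketch of receiver-driven chunk retrieval with AIMD-style control;
# the chunk store and the congestion signal are illustrative.
chunks = {i: f"chunk-{i}" for i in range(8)}     # content split into chunks

def retrieve(total, congested=frozenset({4})):
    window, received, history = 1, [], []
    nxt = 0
    while nxt < total:
        batch = range(nxt, min(nxt + window, total))
        received += [chunks[i] for i in batch]   # requested chunks arrive
        if any(i in congested for i in batch):
            window = max(1, window // 2)         # multiplicative decrease
        else:
            window += 1                          # additive increase
        history.append(window)
        nxt = batch[-1] + 1
    return received, history

data, windows = retrieve(8)
print(len(data), windows)        # 8 [2, 3, 1, 2, 3]
```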

\noindent The same ICN notion of content-based security can also be applied to protect the integrity and credibility of the content: \textit{``all content is authenticated with digital signatures, and private content is protected with encryption''} \citep[p. 6]{VAN}.

\subsection{Content Distribution Network}
\subsubsection{Basic Structure}
Users' demand for faster and more efficient access to content on the Internet keeps increasing, especially in the multimedia sector: online video streaming, online gaming, program and file downloads, and much more. To cope with this demand, big companies and organizations offer their services through the deployment of Content Distribution Networks (CDNs), and moreover build up their own CDNs.\bigskip

\noindent A Content Distribution Network (CDN) is an interconnected system of servers on the Internet, used mostly by ISPs and other companies, whose main goal is to place content and services near the client. The system is composed mainly of two kinds of servers: the origin or main server and the replica servers. Some of the most popular CDN providers are CoDeeN, Limelight, and Akamai. In a CDN the content is copied onto multiple replica servers; when a client requests a specific content item from the origin server, the origin server redirects the client to the nearest replica server where the content can be accessed. \marginpar{\scriptsize Figure \ref{fig:5}: A simple example of how CDN work: (1) client A sends a content request to the origin server that (2) redirects this request to the replica server closest to A. Then (3) X sends the content to A.}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.5]{./Pictures/Figure5.png}
\end{center}
\caption{Model of CDNs \citep[pp. 2-4]{BRITO}}
\label{fig:5}
\end{figure}
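The redirection step in figure \ref{fig:5} can be sketched as follows; the replica names and the client-to-replica "distances" are made up for illustration. The origin server simply answers each request with the replica closest to the requesting client, which then delivers the content.

```python
# Toy sketch of CDN request routing; replica names and the distance
# values (e.g. network latency to client A) are illustrative.
replicas = {"replica-eu": 12, "replica-us": 85, "replica-asia": 160}

def redirect(client_distances):
    # The origin server picks the closest replica for this client.
    return min(client_distances, key=client_distances.get)

print(redirect(replicas))        # replica-eu
```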

\noindent \textit{``A CDN is a privately owned overlay network that aims to optimize delivery of content from (Content Service Providers) CSPs to End Users. CDNs are independent of each other. Optimization is in terms of performance, availability and cost.''}\footnote{\url{http://tools.ietf.org/id/draft-fmn-cdni-advanced-use-cases-00.txt}}  The CDN provider collaborates with NSPs on surrogate placement and network resource reservation. The deployment, extension and (part of) the operation of CDNs are centrally managed. \bigskip

\subsubsection{Problems and Limitations}
A CDN is an overlay on the current IP-based Internet architecture; therefore it inherits some of the issues and limitations of the client/server model, such as content persistence. \textit{``The main problem of these two techniques, however, is to guarantee content persistence. If the owner, domain or any other property of a given content changes, users may not be able to retrieve this content by using the same URL already known''} \citep[p. 8]{BRITO}. \bigskip

\noindent In order for a CDN to cater to a large audience and live up to its users' demand for fast and efficient delivery, its operator needs to install and maintain replicated servers. This is not only costly; only large companies and businesses can afford it.

\subsubsection{ICN Solution}
Both CDN and ICN share the same goal: to improve the efficiency of delivering content over the Internet by bringing the content closer to the user. A CDN can run on an ICN-based architecture and also gain its benefits in terms of performance and security, as described in the possible solutions for the client/server model's limitations.
